diff --git a/CMU Advanced NLP 2024 (1) Introduction to NLP/CMU Advanced NLP 2024 (1) Introduction to NLP.mp4 b/CMU Advanced NLP 2024 (1) Introduction to NLP/CMU Advanced NLP 2024 (1) Introduction to NLP.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ba649c3c6b207b0d5e92c88fec39241a8852d30a --- /dev/null +++ b/CMU Advanced NLP 2024 (1) Introduction to NLP/CMU Advanced NLP 2024 (1) Introduction to NLP.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28be5d1d73b923cf7a91a66e6d77b5862dbf89020af51b887091f9aeedfd7b94 +size 66391760 diff --git a/CMU Advanced NLP 2024 (1) Introduction to NLP/metadata.json b/CMU Advanced NLP 2024 (1) Introduction to NLP/metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..ff0010d1c7b6f6048dd62d7bcc320979e010f716 --- /dev/null +++ b/CMU Advanced NLP 2024 (1) Introduction to NLP/metadata.json @@ -0,0 +1,4 @@ +{ + "url": "https://www.youtube.com/watch?v=6NeTO61qc4M", + "title": "CMU Advanced NLP 2024 (1) Introduction to NLP" +} \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (1) Introduction to NLP/transcript.srt b/CMU Advanced NLP 2024 (1) Introduction to NLP/transcript.srt new file mode 100644 index 0000000000000000000000000000000000000000..d754c404c527248897a3f994fa2afe1a4d3a290a --- /dev/null +++ b/CMU Advanced NLP 2024 (1) Introduction to NLP/transcript.srt @@ -0,0 +1,6071 @@ +1 +00:00:01,280 --> 00:00:06,759 +so the class today is uh introduction to + +2 +00:00:04,680 --> 00:00:09,480 +natural language processing and I'll be + +3 +00:00:06,759 --> 00:00:11,200 +talking a little bit about you know what + +4 +00:00:09,480 --> 00:00:14,719 +is natural language processing why we're + +5 +00:00:11,200 --> 00:00:16,720 +motivated to do it and also some of the + +6 +00:00:14,719 --> 00:00:18,039 +difficulties that we encounter and I'll + +7 +00:00:16,720 --> 00:00:19,880 +at the end I'll also be talking about + +8 +00:00:18,039 --> 00:00:22,519 +class Logistics so you can ask any + +9 +00:00:19,880 --> 00:00:25,439 +Logistics questions at that + +10 +00:00:22,519 --> 00:00:27,720 +time so if we talk about what is NLP + +11 +00:00:25,439 --> 00:00:29,320 +anyway uh does anyone have any opinions + +12 +00:00:27,720 --> 00:00:31,439 +about the definition of what natural + +13 +00:00:29,320 --> 00:00:33,239 +language process would be oh one other + +14 +00:00:31,439 --> 00:00:35,680 +thing I should mention is I am recording + +15 +00:00:33,239 --> 00:00:38,600 +the class uh I put the class on YouTube + +16 +00:00:35,680 --> 00:00:40,520 +uh afterwards I will not take pictures + +17 +00:00:38,600 --> 00:00:41,920 +or video of any of you uh but if you + +18 +00:00:40,520 --> 00:00:44,719 +talk your voice might come in the + +19 +00:00:41,920 --> 00:00:47,440 +background so just uh be aware of that + +20 +00:00:44,719 --> 00:00:49,000 +um usually not it's a directional mic so + +21 +00:00:47,440 --> 00:00:51,559 +I try to repeat the questions after + +22 +00:00:49,000 --> 00:00:54,079 +everybody um but uh for the people who + +23 +00:00:51,559 --> 00:00:57,680 +are recordings uh listening to the + +24 +00:00:54,079 --> 00:00:59,320 +recordings um so anyway what is NLP + +25 +00:00:57,680 --> 00:01:03,120 +anyway does anybody have any ideas about + +26 +00:00:59,320 --> 00:01:03,120 +the definition of what NLP might + +27 +00:01:06,119 --> 00:01:09,119 +be + +28 +00:01:15,439 --> 00:01:21,759 +yes okay um it so the answer was it + +29 +00:01:19,240 --> 00:01:25,759 +helps machines 
understand language + +30 +00:01:21,759 --> 00:01:27,920 +better uh so to facilitate human human + +31 +00:01:25,759 --> 00:01:31,159 +and human machine interactions I think + +32 +00:01:27,920 --> 00:01:32,759 +that's very good um it's + +33 +00:01:31,159 --> 00:01:36,520 +uh similar to what I have written on my + +34 +00:01:32,759 --> 00:01:38,040 +slide here uh but natur in addition to + +35 +00:01:36,520 --> 00:01:41,280 +natural language understanding there's + +36 +00:01:38,040 --> 00:01:46,000 +one major other segment of NLP uh does + +37 +00:01:41,280 --> 00:01:46,000 +anyone uh have an idea what that might + +38 +00:01:48,719 --> 00:01:53,079 +be we often have a dichotomy between two + +39 +00:01:51,399 --> 00:01:55,240 +major segments natural language + +40 +00:01:53,079 --> 00:01:57,520 +understanding and natural language + +41 +00:01:55,240 --> 00:01:59,439 +generation yeah exactly so I I would say + +42 +00:01:57,520 --> 00:02:03,119 +that's almost perfect if you had said + +43 +00:01:59,439 --> 00:02:06,640 +understand and generate so very good um + +44 +00:02:03,119 --> 00:02:08,560 +so I I say natural technology to handle + +45 +00:02:06,640 --> 00:02:11,400 +human language usually text using + +46 +00:02:08,560 --> 00:02:13,200 +computers uh to Aid human machine + +47 +00:02:11,400 --> 00:02:15,480 +communication and this can include + +48 +00:02:13,200 --> 00:02:17,879 +things like question answering dialogue + +49 +00:02:15,480 --> 00:02:20,840 +or generation of code that can be + +50 +00:02:17,879 --> 00:02:23,239 +executed with uh + +51 +00:02:20,840 --> 00:02:25,080 +computers it can also Aid human human + +52 +00:02:23,239 --> 00:02:27,440 +communication and this can include + +53 +00:02:25,080 --> 00:02:30,440 +things like machine translation or spell + +54 +00:02:27,440 --> 00:02:32,640 +checking or assisted writing + +55 +00:02:30,440 --> 00:02:34,560 +and then a final uh segment that people + +56 +00:02:32,640 --> 00:02:37,400 +might think about a little bit less is + +57 +00:02:34,560 --> 00:02:39,400 +analyzing and understanding a language + +58 +00:02:37,400 --> 00:02:42,400 +and this includes things like syntactic + +59 +00:02:39,400 --> 00:02:44,959 +analysis text classification entity + +60 +00:02:42,400 --> 00:02:47,400 +recognition and linking and these can be + +61 +00:02:44,959 --> 00:02:49,159 +used for uh various reasons not + +62 +00:02:47,400 --> 00:02:51,000 +necessarily for direct human machine + +63 +00:02:49,159 --> 00:02:52,720 +communication but also for like + +64 +00:02:51,000 --> 00:02:54,400 +aggregating information across large + +65 +00:02:52,720 --> 00:02:55,760 +things for scientific studies and other + +66 +00:02:54,400 --> 00:02:57,519 +things like that I'll give a few + +67 +00:02:55,760 --> 00:03:00,920 +examples of + +68 +00:02:57,519 --> 00:03:04,040 +this um we now use an many times a day + +69 +00:03:00,920 --> 00:03:06,480 +sometimes without even knowing it so uh + +70 +00:03:04,040 --> 00:03:09,400 +whenever you're typing a doc in Google + +71 +00:03:06,480 --> 00:03:11,599 +Docs there's you know spell checking and + +72 +00:03:09,400 --> 00:03:13,959 +grammar checking going on behind it's + +73 +00:03:11,599 --> 00:03:15,920 +gotten frighten frighteningly good + +74 +00:03:13,959 --> 00:03:18,280 +recently that where it checks like most + +75 +00:03:15,920 --> 00:03:20,720 +of my mistakes and rarely Flags things + +76 +00:03:18,280 --> 00:03:22,799 +that are not mistakes so obviously they + +77 +00:03:20,720 --> 00:03:25,080 
+have powerful models running behind that
+
+78
+00:03:22,799 --> 00:03:25,080
+uh
+
+79
+00:03:25,640 --> 00:03:33,080
+so and it can do things like answer
+
+80
+00:03:28,720 --> 00:03:34,599
+questions uh so I asked ChatGPT who is
+
+81
+00:03:33,080 --> 00:03:37,000
+the current president of Carnegie Mellon
+
+82
+00:03:34,599 --> 00:03:38,920
+University and ChatGPT said I did a
+
+83
+00:03:37,000 --> 00:03:40,920
+quick search for more information here
+
+84
+00:03:38,920 --> 00:03:43,439
+is what I found uh the current president
+
+85
+00:03:40,920 --> 00:03:47,120
+of Carnegie Mellon University is Farnam Jahanian he
+
+86
+00:03:43,439 --> 00:03:50,040
+has been serving since July 1 etc etc so
+
+87
+00:03:47,120 --> 00:03:50,040
+as far as I can tell that's
+
+88
+00:03:50,400 --> 00:03:56,319
+correct um at the same time I asked how
+
+89
+00:03:53,799 --> 00:04:00,280
+many layers are included in the GPT 3.5
+
+90
+00:03:56,319 --> 00:04:02,360
+turbo architecture and it said to me
+
+91
+00:04:00,280 --> 00:04:05,400
+GPT 3.5 turbo which is an optimized
+
+92
+00:04:02,360 --> 00:04:07,239
+version of GPT 3.5 for faster responses
+
+93
+00:04:05,400 --> 00:04:08,959
+doesn't have a specific layer
+
+94
+00:04:07,239 --> 00:04:11,720
+structure like the traditional GPT-3
+
+95
+00:04:08,959 --> 00:04:13,560
+models um and I don't know if this is
+
+96
+00:04:11,720 --> 00:04:16,600
+true or not but I'm pretty sure it's not
+
+97
+00:04:13,560 --> 00:04:18,840
+true I'm pretty sure that you know GPT
+
+98
+00:04:16,600 --> 00:04:20,560
+is a model that's much like other models
+
+99
+00:04:18,840 --> 00:04:21,560
+uh so it basically just made up the spec
+
+100
+00:04:20,560 --> 00:04:22,880
+because it didn't have any information
+
+101
+00:04:21,560 --> 00:04:26,000
+on the Internet or couldn't talk about
+
+102
+00:04:22,880 --> 00:04:26,000
+it so
+
+103
+00:04:26,120 --> 00:04:33,479
+um another thing is uh NLP can translate
+
+104
+00:04:29,639 --> 00:04:37,759
+text pretty well so I ran um Google
+
+105
+00:04:33,479 --> 00:04:39,560
+Translate uh on Japanese uh this example
+
+106
+00:04:37,759 --> 00:04:41,639
+is a little bit old it's from uh you
+
+107
+00:04:39,560 --> 00:04:44,639
+know a few years ago about COVID but I I
+
+108
+00:04:41,639 --> 00:04:46,240
+retranslated it a few days ago and it
+
+109
+00:04:44,639 --> 00:04:47,680
+comes up pretty good uh you can
+
+110
+00:04:46,240 --> 00:04:49,639
+basically understand what's going on
+
+111
+00:04:47,680 --> 00:04:53,520
+here it's not perfect but you can
+
+112
+00:04:49,639 --> 00:04:56,400
+understand the uh the general uh
+
+113
+00:04:53,520 --> 00:04:58,560
+gist at the same time uh if I put in a
+
+114
+00:04:56,400 --> 00:05:02,280
+relatively low resource language this is
+
+115
+00:04:58,560 --> 00:05:05,759
+Kurdish um it has a number of problems
+
+116
+00:05:02,280 --> 00:05:08,199
+when you try to understand it and just
+
+117
+00:05:05,759 --> 00:05:12,400
+to give an example this is talking about
+
+118
+00:05:08,199 --> 00:05:14,320
+uh some uh paleontology discovery it
+
+119
+00:05:12,400 --> 00:05:15,800
+called this person a fossil scientist
+
+120
+00:05:14,320 --> 00:05:17,440
+instead of the kind of obvious English
+
+121
+00:05:15,800 --> 00:05:20,120
+term
+
+122
+00:05:17,440 --> 00:05:23,520
+paleontologist um and it's talking about
+
+123
+00:05:20,120 --> 00:05:25,240
+three different uh T-Rex species uh how
+
+124
+00:05:23,520 --> 00:05:27,039
+T-Rex should actually be split into
+
+125
+00:05:25,240 --> 00:05:29,639
+three species where T-Rex means king of
+
+126
+00:05:27,039 --> 00:05:31,560
+ferocious lizards T. imperator means emperor
+
+127
+00:05:29,639 --> 00:05:33,720
+of savage lizards and then T. regina
+
+128
+00:05:31,560 --> 00:05:35,120
+means queen of ferocious snail I'm
+
+129
+00:05:33,720 --> 00:05:37,240
+pretty sure that's not snail I'm pretty
+
+130
+00:05:35,120 --> 00:05:41,080
+sure that's lizard so uh you can see
+
+131
+00:05:37,240 --> 00:05:41,080
+that this is not uh this is not perfect
+
+132
+00:05:41,280 --> 00:05:46,680
+either some people might be thinking why
+
+133
+00:05:43,960 --> 00:05:48,400
+Google Translate and why not GPT well it
+
+134
+00:05:46,680 --> 00:05:49,960
+turns out um according to one of the
+
+135
+00:05:48,400 --> 00:05:51,759
+recent studies we've done GPT is even
+
+136
+00:05:49,960 --> 00:05:55,479
+worse at these low resource languages
+
+137
+00:05:51,759 --> 00:05:58,120
+so I use the best thing that's out
+
+138
+00:05:55,479 --> 00:06:00,440
+there um another thing is language
+
+139
+00:05:58,120 --> 00:06:02,039
+analysis can aid scientific inquiry
+
+140
+00:06:00,440 --> 00:06:03,600
+so this is an example that I've been
+
+141
+00:06:02,039 --> 00:06:06,120
+using for a long time it's actually from
+
+142
+00:06:03,600 --> 00:06:09,160
+Maarten Sap another faculty member here
+
+143
+00:06:06,120 --> 00:06:12,440
+uh but I have been using it since uh
+
+144
+00:06:09,160 --> 00:06:14,160
+like before he joined and it uh this is
+
+145
+00:06:12,440 --> 00:06:16,039
+an example from computational social
+
+146
+00:06:14,160 --> 00:06:18,599
+science uh answering questions about
+
+147
+00:06:16,039 --> 00:06:20,240
+society given observational data and
+
+148
+00:06:18,599 --> 00:06:22,280
+their question was do movie scripts
+
+149
+00:06:20,240 --> 00:06:24,599
+portray female or male characters with
+
+150
+00:06:22,280 --> 00:06:27,520
+more power or agency in movie
+
+151
+00:06:24,599 --> 00:06:30,120
+scripts so it's asking kind of a
+
+152
+00:06:27,520 --> 00:06:32,160
+societal question by using NLP
+
+153
+00:06:30,120 --> 00:06:35,360
+technology and the way they did it is
+
+154
+00:06:32,160 --> 00:06:36,880
+they basically analyzed text trying to
+
+155
+00:06:35,360 --> 00:06:43,080
+find
+
+156
+00:06:36,880 --> 00:06:45,280
+uh the uh agents and patients in a
+
+157
+00:06:43,080 --> 00:06:46,479
+particular text which are the the things
+
+158
+00:06:45,280 --> 00:06:49,280
+that are doing things and the things
+
+159
+00:06:46,479 --> 00:06:52,639
+that things are being done to and you
+
+160
+00:06:49,280 --> 00:06:54,440
+can see that essentially male characters
+
+161
+00:06:52,639 --> 00:06:56,560
+in these movie scripts were given more
+
+162
+00:06:54,440 --> 00:06:58,080
+power and agency and female characters
+
+163
+00:06:56,560 --> 00:06:59,960
+were given less power and agency and they
+
+164
+00:06:58,080 --> 00:07:02,680
+were able to do this because they had
+
+165
+00:06:59,960 --> 00:07:04,840
+NLP technology that analyzed and
+
+166
+00:07:02,680 --> 00:07:08,960
+extracted useful data and turned it
+
+167
+00:07:04,840 --> 00:07:11,520
+into a very easy form to do kind of
+
+168
+00:07:08,960 --> 00:07:15,840
+analysis of the variety that they want
+
+169
+00:07:11,520 --> 00:07:17,400
+so um I think that's a major use case of
+
+170
+00:07:15,840 --> 00:07:19,400
+NLP technology that does language
+
+171
+00:07:17,400 --> 00:07:20,919
+analysis
nowadays turn it into a form
+
+172
+00:07:19,400 --> 00:07:23,960
+that allows you to very quickly do
+
+173
+00:07:20,919 --> 00:07:27,440
+aggregate queries and other things like
+
+174
+00:07:23,960 --> 00:07:30,479
+this um but at the same time uh language
+
+175
+00:07:27,440 --> 00:07:33,520
+analysis tools fail at very basic tasks
+
+176
+00:07:30,479 --> 00:07:36,000
+so these are
+
+177
+00:07:33,520 --> 00:07:38,199
+some things that I ran through a named
+
+178
+00:07:36,000 --> 00:07:41,080
+entity recognizer and these were kind of
+
+179
+00:07:38,199 --> 00:07:43,160
+very nice named entity recognizers uh
+
+180
+00:07:41,080 --> 00:07:46,240
+that a lot of people were using for
+
+181
+00:07:43,160 --> 00:07:48,039
+example Stanford CoreNLP and spaCy and
+
+182
+00:07:46,240 --> 00:07:50,319
+both of them I just threw in the first
+
+183
+00:07:48,039 --> 00:07:53,120
+thing that I found on the New York Times
+
+184
+00:07:50,319 --> 00:07:55,199
+at the time and it basically made at
+
+185
+00:07:53,120 --> 00:07:58,319
+least one mistake in the first sentence
+
+186
+00:07:55,199 --> 00:08:00,840
+and here it recognizes Baton Rouge as an
+
+187
+00:07:58,319 --> 00:08:04,720
+organization and here it recognized
+
+188
+00:08:00,840 --> 00:08:07,000
+Hurricane Ida as an organization so um
+
+189
+00:08:04,720 --> 00:08:08,879
+like even uh these things that we expect
+
+190
+00:08:07,000 --> 00:08:10,360
+should work pretty well make pretty
+
+191
+00:08:08,879 --> 00:08:13,360
+silly
+
+192
+00:08:10,360 --> 00:08:16,199
+mistakes so in the class uh basically
+
+193
+00:08:13,360 --> 00:08:18,479
+what I want to cover is uh what goes
+
+194
+00:08:16,199 --> 00:08:20,360
+into building uh state-of-the-art NLP
+
+195
+00:08:18,479 --> 00:08:24,000
+systems that work really well on a wide
+
+196
+00:08:20,360 --> 00:08:26,240
+variety of tasks um where do current
+
+197
+00:08:24,000 --> 00:08:28,840
+systems
+
+198
+00:08:26,240 --> 00:08:30,479
+fail and how can we make appropriate
+
+199
+00:08:28,840 --> 00:08:35,000
+improvements and achieve whatever we
+
+200
+00:08:30,479 --> 00:08:37,719
+want to do with NLP and this set of
+
+201
+00:08:35,000 --> 00:08:39,360
+questions that I'm asking here is
+
+202
+00:08:37,719 --> 00:08:40,919
+exactly the same as the set of questions
+
+203
+00:08:39,360 --> 00:08:43,519
+that I was asking two years ago before
+
+204
+00:08:40,919 --> 00:08:45,480
+ChatGPT uh I still think they're
+
+205
+00:08:43,519 --> 00:08:46,920
+important questions but I think the
+
+206
+00:08:45,480 --> 00:08:48,399
+answers to these questions are very
+
+207
+00:08:46,920 --> 00:08:50,040
+different and because of that we're
+
+208
+00:08:48,399 --> 00:08:52,120
+updating the class materials to try to
+
+209
+00:08:50,040 --> 00:08:54,399
+cover you know the answers to these
+
+210
+00:08:52,120 --> 00:08:56,000
+questions and uh in kind of the era of
+
+211
+00:08:54,399 --> 00:08:58,200
+large language models and other things
+
+212
+00:08:56,000 --> 00:08:59,720
+like
+
+213
+00:08:58,200 --> 00:09:02,079
+that
+
+214
+00:08:59,720 --> 00:09:03,360
+so that's all I have for the intro maybe
+
+215
+00:09:02,079 --> 00:09:06,640
+maybe pretty straightforward are there
+
+216
+00:09:03,360 --> 00:09:08,480
+any questions or comments so far if not
+
+217
+00:09:06,640 --> 00:09:14,399
+I'll I'll just go
+
+218
+00:09:08,480 --> 00:09:17,160
+on okay great so I want to uh first go
+
+219
+00:09:14,399 --> 00:09:19,480
+into a very high-level overview
of NLP + +220 +00:09:17,160 --> 00:09:20,839 +system building and most of the stuff + +221 +00:09:19,480 --> 00:09:22,399 +that I want to do today is to set the + +222 +00:09:20,839 --> 00:09:24,320 +stage for what I'm going to be talking + +223 +00:09:22,399 --> 00:09:25,040 +about in more detail uh over the rest of + +224 +00:09:24,320 --> 00:09:29,200 +the + +225 +00:09:25,040 --> 00:09:31,720 +class and we could think of NLP syst + +226 +00:09:29,200 --> 00:09:34,040 +systems through this kind of General + +227 +00:09:31,720 --> 00:09:36,560 +framework where we want to create a + +228 +00:09:34,040 --> 00:09:40,600 +function to map an input X into an + +229 +00:09:36,560 --> 00:09:44,440 +output y uh where X and or Y involve + +230 +00:09:40,600 --> 00:09:47,000 +language and uh do some people have + +231 +00:09:44,440 --> 00:09:50,120 +favorite NLP tasks or NLP tasks that you + +232 +00:09:47,000 --> 00:09:52,399 +want to uh want to be handling in some + +233 +00:09:50,120 --> 00:09:57,000 +way or maybe what what do you think are + +234 +00:09:52,399 --> 00:09:57,000 +the most popular and important NLP tasks + +235 +00:09:58,120 --> 00:10:03,200 +nowadays + +236 +00:10:00,800 --> 00:10:06,120 +okay so translation is maybe easy what's + +237 +00:10:03,200 --> 00:10:06,120 +the input and output of + +238 +00:10:11,440 --> 00:10:15,720 +translation okay yeah so uh in + +239 +00:10:13,800 --> 00:10:17,959 +Translation inputs text in one language + +240 +00:10:15,720 --> 00:10:21,760 +output is text in another language and + +241 +00:10:17,959 --> 00:10:21,760 +then what what is a good + +242 +00:10:27,680 --> 00:10:32,160 +translation yeah corre or or the same is + +243 +00:10:30,320 --> 00:10:35,839 +the input basically yes um it also + +244 +00:10:32,160 --> 00:10:37,760 +should be fluent but I agree any other + +245 +00:10:35,839 --> 00:10:39,839 +things generation the reason why I said + +246 +00:10:37,760 --> 00:10:41,519 +it's tough is it's pretty broad um and + +247 +00:10:39,839 --> 00:10:43,360 +it's not like we could be doing + +248 +00:10:41,519 --> 00:10:46,360 +generation with lots of different inputs + +249 +00:10:43,360 --> 00:10:51,440 +but um yeah any any other things maybe a + +250 +00:10:46,360 --> 00:10:51,440 +little bit different yeah like + +251 +00:10:51,480 --> 00:10:55,959 +scenario a scenario and a multiple + +252 +00:10:54,000 --> 00:10:58,200 +choice question about the scenario and + +253 +00:10:55,959 --> 00:10:59,680 +so what would the scenario in the + +254 +00:10:58,200 --> 00:11:01,760 +multiple choice question are probably + +255 +00:10:59,680 --> 00:11:04,040 +the input and then the output + +256 +00:11:01,760 --> 00:11:06,480 +is an answer to the multiple choice + +257 +00:11:04,040 --> 00:11:07,920 +question um and then there it's kind of + +258 +00:11:06,480 --> 00:11:12,279 +obvious like what is good it's the + +259 +00:11:07,920 --> 00:11:14,880 +correct answer sure um interestingly I + +260 +00:11:12,279 --> 00:11:17,440 +think a lot of llm evaluation is done on + +261 +00:11:14,880 --> 00:11:21,160 +these multiple choice questions but I'm + +262 +00:11:17,440 --> 00:11:22,320 +yet to encounter an actual application + +263 +00:11:21,160 --> 00:11:24,880 +that cares about multiple choice + +264 +00:11:22,320 --> 00:11:26,880 +question answering so uh there's kind of + +265 +00:11:24,880 --> 00:11:30,959 +a funny disconnect there but uh yeah I + +266 +00:11:26,880 --> 00:11:33,519 +saw hand that think about V search comp + +267 +00:11:30,959 --> 
00:11:36,360 +yeah Vector search uh that's very good + +268 +00:11:33,519 --> 00:11:36,360 +so the input + +269 +00:11:37,120 --> 00:11:45,000 +is can con it into or understanding and + +270 +00:11:42,560 --> 00:11:45,000 +it to + +271 +00:11:47,360 --> 00:11:53,760 +another okay yeah so I'd say the input + +272 +00:11:49,880 --> 00:11:56,160 +there is a query and a document base um + +273 +00:11:53,760 --> 00:11:57,959 +and then the output is maybe an index + +274 +00:11:56,160 --> 00:11:59,800 +into the document or or something else + +275 +00:11:57,959 --> 00:12:01,279 +like that sure um and then something + +276 +00:11:59,800 --> 00:12:05,040 +that's good here here's a good question + +277 +00:12:01,279 --> 00:12:05,040 +what what's a good result from + +278 +00:12:06,560 --> 00:12:10,200 +that what's a good + +279 +00:12:10,839 --> 00:12:19,279 +output be sort of simar the major + +280 +00:12:15,560 --> 00:12:21,680 +problem there I see is how you def SAR + +281 +00:12:19,279 --> 00:12:26,199 +and how you + +282 +00:12:21,680 --> 00:12:29,760 +a always like you understand + +283 +00:12:26,199 --> 00:12:33,000 +whether is actually + +284 +00:12:29,760 --> 00:12:35,079 +yeah exactly so that um just to repeat + +285 +00:12:33,000 --> 00:12:36,880 +it's like uh we need to have a + +286 +00:12:35,079 --> 00:12:38,399 +similarity a good similarity metric we + +287 +00:12:36,880 --> 00:12:40,120 +need to have a good threshold where we + +288 +00:12:38,399 --> 00:12:41,760 +get like the ones we want and we don't + +289 +00:12:40,120 --> 00:12:43,240 +get the ones we don't want we're going + +290 +00:12:41,760 --> 00:12:44,959 +to talk more about that in the retrieval + +291 +00:12:43,240 --> 00:12:48,440 +lecture exactly how we evaluate and + +292 +00:12:44,959 --> 00:12:49,920 +stuff but um yeah good so this is a good + +293 +00:12:48,440 --> 00:12:53,279 +uh here are some good examples I have + +294 +00:12:49,920 --> 00:12:55,519 +some examples of my own um the first one + +295 +00:12:53,279 --> 00:12:58,360 +is uh kind of the very generic one maybe + +296 +00:12:55,519 --> 00:13:00,800 +kind of like generation here but text in + +297 +00:12:58,360 --> 00:13:02,959 +continuing text uh so this is language + +298 +00:13:00,800 --> 00:13:04,160 +modeling so you have a text and then you + +299 +00:13:02,959 --> 00:13:05,440 +have the continuation you want to + +300 +00:13:04,160 --> 00:13:07,680 +predict the + +301 +00:13:05,440 --> 00:13:10,480 +continuation um text and text in another + +302 +00:13:07,680 --> 00:13:13,040 +language is translation uh text in a + +303 +00:13:10,480 --> 00:13:15,800 +label could be text classification uh + +304 +00:13:13,040 --> 00:13:17,760 +text in linguistic structure or uh some + +305 +00:13:15,800 --> 00:13:21,360 +s kind of entities or something like + +306 +00:13:17,760 --> 00:13:22,680 +that could be uh language analysis or um + +307 +00:13:21,360 --> 00:13:24,839 +information + +308 +00:13:22,680 --> 00:13:29,440 +extraction uh we could also have image + +309 +00:13:24,839 --> 00:13:31,320 +and text uh which is image captioning um + +310 +00:13:29,440 --> 00:13:33,560 +or speech and text which is speech + +311 +00:13:31,320 --> 00:13:35,240 +recognition and I take the very broad + +312 +00:13:33,560 --> 00:13:38,000 +view of natural language processing + +313 +00:13:35,240 --> 00:13:39,519 +which is if it's any variety of language + +314 +00:13:38,000 --> 00:13:41,519 +uh if you're handling language in some + +315 +00:13:39,519 --> 00:13:42,800 +way 
it's natural language processing it
+
+316
+00:13:41,519 --> 00:13:45,880
+doesn't necessarily have to be text
+
+317
+00:13:42,800 --> 00:13:47,480
+input text output um so that's relevant
+
+318
+00:13:45,880 --> 00:13:50,199
+for the projects that you're thinking
+
+319
+00:13:47,480 --> 00:13:52,160
+about too at the end of this course so
+
+320
+00:13:50,199 --> 00:13:55,519
+the the most common FAQ for this course
+
+321
+00:13:52,160 --> 00:13:57,839
+is does my project count and if you're
+
+322
+00:13:55,519 --> 00:13:59,360
+uncertain you should ask but usually
+
+323
+00:13:57,839 --> 00:14:01,040
+like if it has some sort of language
+
+324
+00:13:59,360 --> 00:14:05,079
+involved then I'll usually say yes it
+
+325
+00:14:01,040 --> 00:14:07,920
+does kind of so um if it's like uh code to
+
+326
+00:14:05,079 --> 00:14:09,680
+code there that's not code is not
+
+327
+00:14:07,920 --> 00:14:11,480
+natural language it is language but it's
+
+328
+00:14:09,680 --> 00:14:13,000
+not natural language so that might be
+
+329
+00:14:11,480 --> 00:14:15,320
+borderline we might have to discuss
+
+330
+00:14:13,000 --> 00:14:15,320
+about
+
+331
+00:14:15,759 --> 00:14:21,800
+that cool um so next I'd like to talk
+
+332
+00:14:18,880 --> 00:14:25,240
+about methods for creating NLP systems
+
+333
+00:14:21,800 --> 00:14:27,839
+um and there's a lot of different ways
+
+334
+00:14:25,240 --> 00:14:29,720
+to create NLP systems all of these are
+
+335
+00:14:27,839 --> 00:14:32,880
+alive and well in
+
+336
+00:14:29,720 --> 00:14:35,759
+2024 uh the first one is rule uh
+
+337
+00:14:32,880 --> 00:14:37,959
+rule-based system creation and so the
+
+338
+00:14:35,759 --> 00:14:40,399
+way this works is like let's say you
+
+339
+00:14:37,959 --> 00:14:42,480
+want to build a text classifier you just
+
+340
+00:14:40,399 --> 00:14:46,560
+write the simple Python function that
+
+341
+00:14:42,480 --> 00:14:48,639
+classifies things into uh sports or
+
+342
+00:14:46,560 --> 00:14:50,240
+other and the way it classifies it into
+
+343
+00:14:48,639 --> 00:14:52,959
+sports or other is it checks whether
+
+344
+00:14:50,240 --> 00:14:55,160
+baseball soccer football and tennis are
+
+345
+00:14:52,959 --> 00:14:59,399
+included in the document and classifies
+
+346
+00:14:55,160 --> 00:15:01,959
+it into uh sports if so uh other if not
+
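A minimal sketch of the rule-based sports classifier just described; the keyword list follows the lecture, while the function name and example sentences are illustrative, not the lecture's actual code:

```python
# A minimal sketch of the keyword-based sports classifier described above.
def classify(document: str) -> str:
    sports_words = ["baseball", "soccer", "football", "tennis"]
    # Classify into "sports" if any keyword is included, "other" if not
    if any(word in document.lower() for word in sports_words):
        return "sports"
    return "other"

print(classify("I love to play baseball"))      # -> sports
print(classify("The stock price is going up"))  # -> other
```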
+347
+00:14:59,399 --> 00:15:05,279
+so has anyone written something like
+
+348
+00:15:01,959 --> 00:15:09,720
+this maybe not a text classifier but um
+
+349
+00:15:05,279 --> 00:15:11,880
+you know to identify entities or uh
+
+350
+00:15:09,720 --> 00:15:14,279
+split words
+
+351
+00:15:11,880 --> 00:15:16,680
+or something like
+
+352
+00:15:14,279 --> 00:15:18,399
+that has anybody not ever written
+
+353
+00:15:16,680 --> 00:15:22,800
+anything like
+
+354
+00:15:18,399 --> 00:15:24,639
+this yeah that's what I thought so um
+
+355
+00:15:22,800 --> 00:15:26,079
+rule-based systems are very convenient
+
+356
+00:15:24,639 --> 00:15:28,920
+when you don't really care about how
+
+357
+00:15:26,079 --> 00:15:30,759
+good your system is um or you're doing
+
+358
+00:15:28,920 --> 00:15:32,360
+something that's really really simple and like
+
+359
+00:15:30,759 --> 00:15:35,600
+it'll be perfect even if you do the very
+
+360
+00:15:32,360 --> 00:15:37,079
+simple thing and so I I think it's worth
+
+361
+00:15:35,600 --> 00:15:39,959
+talking a little bit about them and I'll
+
+362
+00:15:37,079 --> 00:15:43,319
+talk a little bit about that uh this
+
+363
+00:15:39,959 --> 00:15:45,680
+time the second thing which like very
+
+364
+00:15:43,319 --> 00:15:47,680
+rapidly over the course of maybe three
+
+365
+00:15:45,680 --> 00:15:50,279
+years or so has become actually maybe
+
+366
+00:15:47,680 --> 00:15:52,720
+the dominant Paradigm in NLP is
+
+367
+00:15:50,279 --> 00:15:56,360
+prompting uh in prompting a language
+
+368
+00:15:52,720 --> 00:15:58,560
+model and the way this works is uh you
+
+369
+00:15:56,360 --> 00:16:00,720
+ask a language model if the following
+
+370
+00:15:58,560 --> 00:16:03,079
+sentence is about sports reply Sports
+
+371
+00:16:00,720 --> 00:16:06,120
+otherwise reply other and you feed it to
+
+372
+00:16:03,079 --> 00:16:08,480
+your favorite LM uh usually that's GPT
+
+373
+00:16:06,120 --> 00:16:11,399
+something or other uh sometimes it's an
+
+374
+00:16:08,480 --> 00:16:14,440
+open source model of some variety and
+
+375
+00:16:11,399 --> 00:16:17,759
+then uh it will give you the
+
+376
+00:16:14,440 --> 00:16:20,639
+answer and then finally uh fine-tuning
+
+377
+00:16:17,759 --> 00:16:22,240
+uh so you take some paired data and you
+
+378
+00:16:20,639 --> 00:16:23,600
+do machine learning from paired data
+
+379
+00:16:22,240 --> 00:16:25,680
+where you have something like I love to
+
+380
+00:16:23,600 --> 00:16:27,440
+play baseball uh the stock price is
+
+381
+00:16:25,680 --> 00:16:29,519
+going up he got a hat trick yesterday he
+
+382
+00:16:27,440 --> 00:16:32,759
+is wearing tennis shoes and you assign
+
+383
+00:16:29,519 --> 00:16:35,319
+all these uh labels to them training a
+
+384
+00:16:32,759 --> 00:16:38,160
+model and you can even start out with a
+
+385
+00:16:35,319 --> 00:16:41,480
+prompting based model and fine-tune a a
+
+386
+00:16:38,160 --> 00:16:41,480
+language model
+
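A minimal sketch of the prompting approach just described; `call_llm` is a hypothetical stand-in for whichever LLM API or open-source model you use, and the prompt wording loosely follows the one above:

```python
# A minimal sketch of prompting-based classification. The call_llm
# function is a hypothetical placeholder, not a real library call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your favorite LLM here")

def classify_by_prompting(text: str) -> str:
    prompt = (
        "If the following sentence is about sports, reply 'sports'. "
        f"Otherwise reply 'other'.\n\nSentence: {text}"
    )
    reply = call_llm(prompt).strip().lower()
    # Guard against the model replying with extra words
    return "sports" if "sports" in reply else "other"
```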
+387
+00:16:42,920 --> 00:16:49,399
+also so one major consideration when
+
+388
+00:16:47,519 --> 00:16:52,000
+you're Building Systems like this is the
+
+389
+00:16:49,399 --> 00:16:56,440
+data requirements for building such a
+
+390
+00:16:52,000 --> 00:16:59,319
+system and for rules or prompting where
+
+391
+00:16:56,440 --> 00:17:02,240
+it's just based on intuition really no
+
+392
+00:16:59,319 --> 00:17:04,640
+data is needed whatsoever you don't
+
+393
+00:17:02,240 --> 00:17:08,240
+need a single example and you can start
+
+394
+00:17:04,640 --> 00:17:11,000
+writing rules or like just just to give
+
+395
+00:17:08,240 --> 00:17:12,640
+an example the rules and prompts I wrote
+
+396
+00:17:11,000 --> 00:17:14,679
+here I didn't look at any examples and I
+
+397
+00:17:12,640 --> 00:17:17,240
+just wrote them uh so this is something
+
+398
+00:17:14,679 --> 00:17:20,000
+that you could start out
+
+399
+00:17:17,240 --> 00:17:21,559
+with uh the problem is you also have no
+
+400
+00:17:20,000 --> 00:17:24,720
+idea how well it works if you don't have
+
+401
+00:17:21,559 --> 00:17:26,760
+any data whatsoever right so um you'll
+
+402
+00:17:24,720 --> 00:17:30,400
+you might be in trouble if you think
+
+403
+00:17:26,760 --> 00:17:30,400
+something should be working
+
+404
+00:17:30,919 --> 00:17:34,440
+so normally the next thing that people
+
+405
+00:17:32,919 --> 00:17:36,880
+move to nowadays when they're building
+
+406
+00:17:34,440 --> 00:17:39,559
+practical systems is rules or prompting
+
+407
+00:17:36,880 --> 00:17:41,240
+based on spot checks so that basically
+
+408
+00:17:39,559 --> 00:17:42,919
+means that you start out with a
+
+409
+00:17:41,240 --> 00:17:45,840
+rule-based system or a prompting based
+
+410
+00:17:42,919 --> 00:17:47,240
+system and then you go in and you run it
+
+411
+00:17:45,840 --> 00:17:48,720
+on some data that you're interested in
+
+412
+00:17:47,240 --> 00:17:50,799
+you just kind of qualitatively look at
+
+413
+00:17:48,720 --> 00:17:52,160
+the data and say oh it's messing up here
+
+414
+00:17:50,799 --> 00:17:53,440
+then you go in and fix your prompt a
+
+415
+00:17:52,160 --> 00:17:54,919
+little bit or you go in and fix your
+
+416
+00:17:53,440 --> 00:17:57,320
+rules a little bit or something like
+
+417
+00:17:54,919 --> 00:18:00,400
+that so uh this is kind of the second
+
+418
+00:17:57,320 --> 00:18:00,400
+level of difficulty
+
+419
+00:18:01,400 --> 00:18:04,640
+so the third level of difficulty would
+
+420
+00:18:03,159 --> 00:18:07,400
+be something like rules or prompting
+
+421
+00:18:04,640 --> 00:18:09,039
+with rigorous evaluation and so here you
+
+422
+00:18:07,400 --> 00:18:12,840
+would create a development set with
+
+423
+00:18:09,039 --> 00:18:14,840
+inputs and outputs uh so you uh create
+
+424
+00:18:12,840 --> 00:18:17,039
+maybe 200 to 2,000
+
+425
+00:18:14,840 --> 00:18:20,080
+examples um
+
+426
+00:18:17,039 --> 00:18:21,720
+and then evaluate your actual accuracy
+
+427
+00:18:20,080 --> 00:18:23,880
+so you need an evaluation metric you
+
+428
+00:18:21,720 --> 00:18:26,120
+need other things like this this is the
+
+429
+00:18:23,880 --> 00:18:28,400
+next level of difficulty but if you're
+
+430
+00:18:26,120 --> 00:18:30,240
+going to be a serious you know NLP
+
+431
+00:18:28,400 --> 00:18:33,000
+engineer or something like this you
+
+432
+00:18:30,240 --> 00:18:34,720
+definitely will be doing this a lot I
+
+433
+00:18:33,000 --> 00:18:37,760
+feel and
+
+434
+00:18:34,720 --> 00:18:40,360
+then so that here now you start needing
+
+435
+00:18:37,760 --> 00:18:41,960
+a dev set and a test set and then
+
+436
+00:18:40,360 --> 00:18:46,280
+finally fine-tuning you need an
+
+437
+00:18:41,960 --> 00:18:48,480
+additional training set um and uh this
+
+438
+00:18:46,280 --> 00:18:52,240
+will generally be a lot bigger than 200
+
+439
+00:18:48,480 --> 00:18:56,080
+to 2,000 examples and generally the rule
+
+440
+00:18:52,240 --> 00:18:56,080
+is that every time you
+
+441
+00:18:57,320 --> 00:19:01,080
+double
+
+442
+00:18:59,520 --> 00:19:02,400
+every time you double your training set
+
+443
+00:19:01,080 --> 00:19:07,480
+size you get about a constant
+
+444
+00:19:02,400 --> 00:19:07,480
+improvement so if you start
+
+445
+00:19:07,799 --> 00:19:15,080
+out if you start out down here with
+
+446
+00:19:12,240 --> 00:19:17,039
+um zero shot accuracy with a language
+
+447
+00:19:15,080 --> 00:19:21,559
+model you you create a small training
+
+448
+00:19:17,039 --> 00:19:21,559
+set and you get you know a pretty big
+
+449
+00:19:22,000 --> 00:19:29,120
+increase and then every time you double
+
+450
+00:19:26,320 --> 00:19:30,799
+it it increases by a constant factor it's
+
+451
+00:19:29,120 --> 00:19:32,480
+kind of like just in general in machine
+
+452
+00:19:30,799 --> 00:19:37,360
+learning this is a trend that we tend to
+
+453
+00:19:32,480 --> 00:19:40,679
+see so um so based on this
+
+454
+00:19:37,360 --> 00:19:41,880
+uh there's kind of like you get a big
+
+455
+00:19:40,679 --> 00:19:44,200
+gain from having a little bit of
+
+456
+00:19:41,880 --> 00:19:45,760
+training data but the gains very quickly
+
+457
+00:19:44,200 --> 00:19:48,919
+drop off and you start spending a lot of
+
+458
+00:19:45,760 --> 00:19:48,919
+time annotating
+
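As a rough illustration of the doubling rule of thumb just mentioned, accuracy can be modeled as growing linearly in the log of the training set size; the numbers below are invented purely for illustration:

```python
import math

# Illustrative only: a hypothetical task where zero-shot prompting
# starts at 70% accuracy and each doubling of the training set adds
# roughly two points, i.e. accuracy ~= a + b * log2(n).
def expected_accuracy(n_examples: int) -> float:
    zero_shot, gain_per_doubling = 0.70, 0.02  # invented numbers
    if n_examples == 0:
        return zero_shot
    return zero_shot + gain_per_doubling * math.log2(n_examples)

for n in [0, 250, 500, 1000, 2000, 4000]:
    print(f"{n:5d} examples -> {expected_accuracy(n):.3f}")
```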
+459
+00:19:51,000 --> 00:19:55,880
+and so um yeah this is the the general
+
+460
+00:19:54,760 --> 00:19:58,280
+overview of the different types of
+
+461
+00:19:55,880 --> 00:20:00,000
+system building uh any any question
+
+462
+00:19:58,280 --> 00:20:01,559
+questions about this or comments or
+
+463
+00:20:00,000 --> 00:20:04,000
+things like
+
+464
+00:20:01,559 --> 00:20:05,840
+this I think one thing that's changed
+
+465
+00:20:04,000 --> 00:20:08,159
+really drastically from the last time I
+
+466
+00:20:05,840 --> 00:20:09,600
+taught this class is the fact that
+
+467
+00:20:08,159 --> 00:20:11,000
+number one and number two are the things
+
+468
+00:20:09,600 --> 00:20:13,799
+that people are actually doing in
+
+469
+00:20:11,000 --> 00:20:15,360
+practice uh which was you know people
+
+470
+00:20:13,799 --> 00:20:16,679
+who actually care about systems are
+
+471
+00:20:15,360 --> 00:20:18,880
+doing number one and number two is the
+
+472
+00:20:16,679 --> 00:20:20,440
+main thing it used to be that if you
+
+473
+00:20:18,880 --> 00:20:22,679
+were actually serious about building a
+
+474
+00:20:20,440 --> 00:20:24,320
+system uh you really needed to do the
+
+475
+00:20:22,679 --> 00:20:27,080
+fine-tuning and now it's kind of like more
+
+476
+00:20:24,320 --> 00:20:27,080
+optional
+
+477
+00:20:27,159 --> 00:20:30,159
+so
+
+478
+00:20:44,039 --> 00:20:50,960
+yeah
+
+479
+00:20:46,320 --> 00:20:53,960
+so it's it's definitely an empirical
+
+480
+00:20:50,960 --> 00:20:53,960
+observation
+
+481
+00:20:54,720 --> 00:21:01,080
+um in terms of the theoretical
+
+482
+00:20:57,640 --> 00:21:03,120
+background I am not I can't immediately
+
+483
+00:21:01,080 --> 00:21:05,840
+point to a
+
+484
+00:21:03,120 --> 00:21:10,039
+particular paper that does that but I
+
+485
+00:21:05,840 --> 00:21:12,720
+think if you think about
+
+486
+00:21:10,039 --> 00:21:14,720
+the I I think I have seen that they do
+
+487
+00:21:12,720 --> 00:21:17,039
+exist in the past but I I can't think of
+
+488
+00:21:14,720 --> 00:21:19,000
+it right now I can try to uh try to come
+
+489
+00:21:17,039 --> 00:21:23,720
+up with an example of
+
+490
+00:21:19,000 --> 00:21:23,720
+that so yeah I I should take
+
+491
+00:21:26,799 --> 00:21:31,960
+notes or someone wants to share one on
+
+492
+00:21:29,360 --> 00:21:33,360
+Piazza uh if you have any ideas and want
+
+493
+00:21:31,960 --> 00:21:34,520
+to share on Piazza I'm sure that would be
+
+494
+00:21:33,360 --> 00:21:35,640
+great it'd be great to have a discussion
+
+495
+00:21:34,520 --> 00:21:39,320
+on
+
+496
+00:21:35,640 --> 00:21:44,960
+Piazza um Pi
+
+497
+00:21:39,320 --> 00:21:46,880
+one cool okay so next I want to try to
+
+498
+00:21:44,960 --> 00:21:48,200
+make a rule-based system and I'm going
+
+499
+00:21:46,880 --> 00:21:49,360
+to make a rule-based system for
+
+500
+00:21:48,200 --> 00:21:51,799
+sentiment
+
+501
+00:21:49,360 --> 00:21:53,480
+analysis uh and this is a bad idea I
+
+502
+00:21:51,799 --> 00:21:55,400
+would not encourage you to ever do this
+
+503
+00:21:53,480 --> 00:21:57,440
+in real life but I want to do it here to
+
+504
+00:21:55,400 --> 00:21:59,640
+show you why it's a bad idea and like
+
+505
+00:21:57,440 --> 00:22:01,200
+what are some of the hard problems that
+
+506
+00:21:59,640 --> 00:22:03,960
+you encounter when trying to create a
+
+507
+00:22:01,200 --> 00:22:06,600
+system based on rules
+
+508
+00:22:03,960 --> 00:22:08,080 +and then we'll move into building a + +509 +00:22:06,600 --> 00:22:12,360 +machine learning base system after we + +510 +00:22:08,080 --> 00:22:15,400 +finish this so if we look at the example + +511 +00:22:12,360 --> 00:22:18,559 +test this is review sentiment analysis + +512 +00:22:15,400 --> 00:22:21,799 +it's one of the most valuable uh tasks + +513 +00:22:18,559 --> 00:22:24,039 +uh that people do in NLP nowadays + +514 +00:22:21,799 --> 00:22:26,400 +because it allows people to know how + +515 +00:22:24,039 --> 00:22:29,200 +customers are thinking about products uh + +516 +00:22:26,400 --> 00:22:30,799 +improve their you know their product + +517 +00:22:29,200 --> 00:22:32,919 +development and other things like that + +518 +00:22:30,799 --> 00:22:34,799 +may monitor people's you know + +519 +00:22:32,919 --> 00:22:36,760 +satisfaction with their social media + +520 +00:22:34,799 --> 00:22:39,200 +service other things like this so + +521 +00:22:36,760 --> 00:22:42,720 +basically the way it works is um you + +522 +00:22:39,200 --> 00:22:44,400 +have uh outputs or you have sentences + +523 +00:22:42,720 --> 00:22:46,720 +inputs like I hate this movie I love + +524 +00:22:44,400 --> 00:22:48,520 +this movie I saw this movie and this + +525 +00:22:46,720 --> 00:22:50,600 +gets mapped into positive neutral or + +526 +00:22:48,520 --> 00:22:53,120 +negative so I hate this movie would be + +527 +00:22:50,600 --> 00:22:55,480 +negative I love this movie positive and + +528 +00:22:53,120 --> 00:22:59,039 +I saw this movie is + +529 +00:22:55,480 --> 00:23:01,200 +neutral so um + +530 +00:22:59,039 --> 00:23:05,200 +that that's the task input tax output + +531 +00:23:01,200 --> 00:23:08,880 +labels uh Kary uh sentence + +532 +00:23:05,200 --> 00:23:11,679 +label and in order to do this uh we + +533 +00:23:08,880 --> 00:23:13,120 +would like to build a model um and we're + +534 +00:23:11,679 --> 00:23:16,159 +going to build the model in a rule based + +535 +00:23:13,120 --> 00:23:19,000 +way but it we'll still call it a model + +536 +00:23:16,159 --> 00:23:21,600 +and the way it works is we do feature + +537 +00:23:19,000 --> 00:23:23,159 +extraction um so we extract the Salient + +538 +00:23:21,600 --> 00:23:25,279 +features for making the decision about + +539 +00:23:23,159 --> 00:23:27,320 +what to Output next we do score + +540 +00:23:25,279 --> 00:23:29,880 +calculation calculate a score for one or + +541 +00:23:27,320 --> 00:23:32,320 +more possib ities and we have a decision + +542 +00:23:29,880 --> 00:23:33,520 +function so we choose one of those + +543 +00:23:32,320 --> 00:23:37,679 +several + +544 +00:23:33,520 --> 00:23:40,120 +possibilities and so for feature + +545 +00:23:37,679 --> 00:23:42,200 +extraction uh formally what this looks + +546 +00:23:40,120 --> 00:23:44,240 +like is we have some function and it + +547 +00:23:42,200 --> 00:23:48,039 +extracts a feature + +548 +00:23:44,240 --> 00:23:51,159 +Vector for score calculation um we + +549 +00:23:48,039 --> 00:23:54,240 +calculate the scores based on either a + +550 +00:23:51,159 --> 00:23:56,279 +binary classification uh where we have a + +551 +00:23:54,240 --> 00:23:58,279 +a weight vector and we take the dot + +552 +00:23:56,279 --> 00:24:00,120 +product with our feature vector or we + +553 +00:23:58,279 --> 00:24:02,480 +have multi class classification where we + +554 +00:24:00,120 --> 00:24:04,520 +have a weight Matrix and we take the + +555 +00:24:02,480 --> 00:24:08,640 +product with uh 
the vector and that
+
+556
+00:24:04,520 --> 00:24:08,640
+gives us you know scores over multiple
+
+557
+00:24:08,919 --> 00:24:14,840
+classes and then we have a decision uh
+
+558
+00:24:11,600 --> 00:24:17,520
+rule so this decision rule tells us what
+
+559
+00:24:14,840 --> 00:24:20,080
+the output is going to be um does anyone
+
+560
+00:24:17,520 --> 00:24:22,200
+know what a typical decision rule is
+
+561
+00:24:20,080 --> 00:24:24,520
+maybe maybe so obvious that you don't
+
+562
+00:24:22,200 --> 00:24:28,760
+think about it often
+
+563
+00:24:24,520 --> 00:24:31,000
+but uh a threshold um so like would
+
+564
+00:24:28,760 --> 00:24:34,440
+that be for binary a single binary
+
+565
+00:24:31,000 --> 00:24:37,000
+scalar score or a multiple
+
+566
+00:24:34,440 --> 00:24:38,520
+class binary yeah so and then you would
+
+567
+00:24:37,000 --> 00:24:39,960
+pick a threshold and if it's over the
+
+568
+00:24:38,520 --> 00:24:42,919
+threshold
+
+569
+00:24:39,960 --> 00:24:45,760
+you say yes and if it's under the
+
+570
+00:24:42,919 --> 00:24:50,279
+threshold you say no um another option
+
+571
+00:24:45,760 --> 00:24:51,679
+would be um you have a threshold and you
+
+572
+00:24:50,279 --> 00:24:56,080
+say
+
+573
+00:24:51,679 --> 00:24:56,080
+yes no
+
+574
+00:24:56,200 --> 00:25:00,559
+or abstain so you know you don't give an
+
+575
+00:24:58,360 --> 00:25:02,520
+answer and depending on how you're
+
+576
+00:25:00,559 --> 00:25:03,720
+evaluated what what is a good classifier
+
+577
+00:25:02,520 --> 00:25:07,799
+you might want to abstain some of the
+
+578
+00:25:03,720 --> 00:25:10,960
+time also um for multiclass what what's
+
+579
+00:25:07,799 --> 00:25:10,960
+a standard decision rule for
+
+580
+00:25:11,120 --> 00:25:16,720
+multiclass argmax yeah exactly so um
+
+581
+00:25:14,279 --> 00:25:19,520
+basically you you find the index that
+
+582
+00:25:16,720 --> 00:25:22,000
+has the highest score and you output
+
+583
+00:25:19,520 --> 00:25:24,480
+it we're going to be talking about other
+
+584
+00:25:22,000 --> 00:25:26,559
+decision rules also um like
+
+585
+00:25:24,480 --> 00:25:29,480
+self-consistency and minimum Bayes risk
+
+586
+00:25:26,559 --> 00:25:30,760
+later uh for text generation so you can
+
+587
+00:25:29,480 --> 00:25:33,000
+just keep that in mind and then we'll
+
+588
+00:25:30,760 --> 00:25:36,279
+forget about it for like several
+
+589
+00:25:33,000 --> 00:25:39,559
+classes um so for sentiment
+
+590
+00:25:36,279 --> 00:25:42,159
+classification um I have a code
+
+591
+00:25:39,559 --> 00:25:45,159
+walk
+
+592
+00:25:42,159 --> 00:25:45,159
+here
+
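A minimal sketch of the decision rules just discussed, a threshold with an optional abstain zone for binary scores and argmax for multiclass; the threshold and margin values are illustrative:

```python
import numpy as np

# Minimal sketches of the decision rules described above.
def binary_decision(score: float, threshold: float = 0.0,
                    abstain_margin: float = 0.5) -> str:
    # Optionally abstain when the score is too close to the threshold
    if abs(score - threshold) < abstain_margin:
        return "abstain"
    return "yes" if score > threshold else "no"

def multiclass_decision(scores: np.ndarray) -> int:
    # Argmax: output the index of the class with the highest score
    return int(np.argmax(scores))

print(binary_decision(1.2))   # yes
print(binary_decision(0.1))   # abstain
print(multiclass_decision(np.array([0.2, 1.5, -0.3])))  # 1
```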
+593
+00:25:46,240 --> 00:25:54,320
+and this is pretty simple um but if
+
+594
+00:25:50,320 --> 00:25:58,559
+you're bored uh of the class and would
+
+595
+00:25:54,320 --> 00:26:01,000
+like to um try out yourself you can
+
+596
+00:25:58,559 --> 00:26:04,480
+Challenge and try to get a better score
+
+597
+00:26:01,000 --> 00:26:06,120
+than I do um over the next few minutes
+
+598
+00:26:04,480 --> 00:26:06,880
+but we have this rule based classifier
+
+599
+00:26:06,120 --> 00:26:10,240
+in
+
+600
+00:26:06,880 --> 00:26:12,640
+here and I will open it up in my VS
+
+601
+00:26:10,240 --> 00:26:15,360
+Code
+
+602
+00:26:12,640 --> 00:26:18,360
+to try to create a rule-based classifier
+
+603
+00:26:15,360 --> 00:26:18,360
+and basically the way this
+
+604
+00:26:22,799 --> 00:26:29,960
+works is
+
+605
+00:26:25,159 --> 00:26:29,960
+that we have a feature
+
+606
+00:26:31,720 --> 00:26:37,720
+extraction we have feature extraction we
+
+607
+00:26:34,120 --> 00:26:40,679
+have scoring and we have um a decision
+
+608
+00:26:37,720 --> 00:26:43,480
+rule so here for our feature extraction I
+
+609
+00:26:40,679 --> 00:26:44,720
+have created a list of good words and a
+
+610
+00:26:43,480 --> 00:26:46,720
+list of bad
+
+611
+00:26:44,720 --> 00:26:48,960
+words
+
+612
+00:26:46,720 --> 00:26:51,320
+and what we do is we just count the
+
+613
+00:26:48,960 --> 00:26:53,000
+number of good words that appeared and
+
+614
+00:26:51,320 --> 00:26:55,320
+count the number of bad words that
+
+615
+00:26:53,000 --> 00:26:57,880
+appeared then we also have a bias
+
+616
+00:26:55,320 --> 00:27:01,159
+feature so the bias feature is a feature
+
+617
+00:26:57,880 --> 00:27:03,679
+that's always one and so what that
+
+618
+00:27:01,159 --> 00:27:06,799
+results in is we have a dimension three
+
+619
+00:27:03,679 --> 00:27:08,880
+feature vector um where this is like the
+
+620
+00:27:06,799 --> 00:27:11,320
+number of good words this is the number
+
+621
+00:27:08,880 --> 00:27:15,320
+of bad words and then you have the
+
+622
+00:27:11,320 --> 00:27:17,760
+bias and then I also define the feature
+
+623
+00:27:15,320 --> 00:27:20,039
+weights so that for every good word we
+
+624
+00:27:17,760 --> 00:27:22,200
+add one to our score for every bad word
+
+625
+00:27:20,039 --> 00:27:25,559
+we add uh we subtract one from our score
+
+626
+00:27:22,200 --> 00:27:29,399
+and for the bias we add minus 0.5 and so we then
+
+627
+00:27:25,559 --> 00:27:30,480
+take the dot product between
+
+628
+00:27:29,399 --> 00:27:34,360
+these
+
+629
+00:27:30,480 --> 00:27:36,919
+two and we get minus
+
+630
+00:27:34,360 --> 00:27:37,640
+0.5 and that gives us uh that gives us
+
+631
+00:27:36,919 --> 00:27:41,000
+the
+
+632
+00:27:37,640 --> 00:27:46,000
+score so let's run
+
+633
+00:27:41,000 --> 00:27:50,320
+that um and I read in some
+
+634
+00:27:46,000 --> 00:27:52,600
+data and what this data looks like is
+
+635
+00:27:50,320 --> 00:27:55,000
+basically we have a
+
+636
+00:27:52,600 --> 00:27:57,559
+review um which says The Rock is
+
+637
+00:27:55,000 --> 00:27:59,480
+destined to be the 21st Century's new
+
+638
+00:27:57,559 --> 00:28:01,240
+Conan and that he's going to make a
+
+639
+00:27:59,480 --> 00:28:03,600
+splash even greater than Arnold
+
+640
+00:28:01,240 --> 00:28:07,000
+Schwarzenegger Jean-Claude Van Damme or
+
+641
+00:28:03,600 --> 00:28:09,519
+Steven Seagal um so this seems pretty
+
+642
+00:28:07,000 --> 00:28:10,840
+positive right I like that's a pretty
+
+643
+00:28:09,519 --> 00:28:13,200
+high order to be better than Arnold
+
+644
+00:28:10,840 --> 00:28:16,080
+Schwarzenegger or Jean-Claude Van Damme uh
+
+645
+00:28:13,200 --> 00:28:19,519
+if you're familiar with action movies um
+
+646
+00:28:16,080 --> 00:28:22,840
+and so of course this gets a positive
+
+647
+00:28:19,519 --> 00:28:24,120
+label and so uh we have run classifier
+
+648
+00:28:22,840 --> 00:28:25,240
+actually maybe I should call this
+
+649
+00:28:24,120 --> 00:28:27,600
+decision rule because this is
+
+650
+00:28:25,240 --> 00:28:29,120
+essentially our decision rule and here
+
+651
+00:28:27,600 --> 00:28:32,600
+basically do the thing that I mentioned
+
+652
+00:28:29,120 --> 00:28:35,440
+here the yes no abstain or in this case
+
+653
+00:28:32,600 --> 00:28:38,360
+positive negative neutral so if the
+
+654
+00:28:35,440 --> 00:28:40,159
+score is greater than zero we uh return
+
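A minimal sketch of the feature extraction, scoring, and decision rule being walked through here; the word lists are illustrative, while the weights (+1 per good word, -1 per bad word, -0.5 for the bias) follow the lecture:

```python
# A minimal sketch of the rule-based sentiment classifier being walked
# through here. The word lists are illustrative placeholders.
good_words = ["love", "good", "great", "enjoyable", "fun"]
bad_words = ["hate", "bad", "awful", "boring", "crass"]
feature_weights = {"good": 1.0, "bad": -1.0, "bias": -0.5}

def extract_features(text: str) -> dict[str, float]:
    words = text.lower().split()
    return {
        "good": float(sum(words.count(w) for w in good_words)),
        "bad": float(sum(words.count(w) for w in bad_words)),
        "bias": 1.0,  # the always-one bias feature
    }

def run_classifier(text: str) -> int:
    # Dot product of the feature vector with the weight vector
    score = sum(feature_weights[k] * v
                for k, v in extract_features(text).items())
    # Decision rule: positive (1), negative (-1), or neutral (0)
    if score > 0:
        return 1
    if score < 0:
        return -1
    return 0

print(run_classifier("I love this movie"))  # -> 1
print(run_classifier("I hate this movie"))  # -> -1
```

+655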
+00:28:38,360 --> 00:28:42,480 +one if the score is less than zero we + +656 +00:28:40,159 --> 00:28:44,679 +return negative one which is negative + +657 +00:28:42,480 --> 00:28:47,240 +and otherwise we returns + +658 +00:28:44,679 --> 00:28:48,760 +zero um we have an accuracy calculation + +659 +00:28:47,240 --> 00:28:51,519 +function just calculating the outputs + +660 +00:28:48,760 --> 00:28:55,840 +are good and + +661 +00:28:51,519 --> 00:28:57,440 +um this is uh the overall label count in + +662 +00:28:55,840 --> 00:28:59,919 +the in the output so we can see there + +663 +00:28:57,440 --> 00:29:03,120 +slightly more positives than there are + +664 +00:28:59,919 --> 00:29:06,080 +negatives and then we can run this and + +665 +00:29:03,120 --> 00:29:10,200 +we get a a score of + +666 +00:29:06,080 --> 00:29:14,760 +43 and so one one thing that I have + +667 +00:29:10,200 --> 00:29:19,279 +found um is I I do a lot of kind + +668 +00:29:14,760 --> 00:29:21,240 +of research on how to make NLP systems + +669 +00:29:19,279 --> 00:29:23,600 +better and one of the things I found + +670 +00:29:21,240 --> 00:29:26,679 +really invaluable + +671 +00:29:23,600 --> 00:29:27,840 +is if you're in a situation where you + +672 +00:29:26,679 --> 00:29:29,720 +have a + +673 +00:29:27,840 --> 00:29:31,760 +set task and you just want to make the + +674 +00:29:29,720 --> 00:29:33,760 +system better on the set task doing + +675 +00:29:31,760 --> 00:29:35,159 +comprehensive error analysis and + +676 +00:29:33,760 --> 00:29:37,320 +understanding where your system is + +677 +00:29:35,159 --> 00:29:39,880 +failing is one of the best ways to do + +678 +00:29:37,320 --> 00:29:42,200 +that and I would like to do a very + +679 +00:29:39,880 --> 00:29:43,640 +rudimentary version of this here and + +680 +00:29:42,200 --> 00:29:46,519 +what I'm doing essentially is I'm just + +681 +00:29:43,640 --> 00:29:47,480 +randomly picking uh several examples + +682 +00:29:46,519 --> 00:29:49,320 +that were + +683 +00:29:47,480 --> 00:29:52,000 +correct + +684 +00:29:49,320 --> 00:29:54,840 +um and so like let let's look at the + +685 +00:29:52,000 --> 00:29:58,200 +examples here um here the true label is + +686 +00:29:54,840 --> 00:30:00,760 +zero um in this predicted one um it may + +687 +00:29:58,200 --> 00:30:03,440 +not be as cutting as Woody or as true as + +688 +00:30:00,760 --> 00:30:05,039 +back in the Glory Days of uh weekend and + +689 +00:30:03,440 --> 00:30:07,440 +two or three things that I know about + +690 +00:30:05,039 --> 00:30:09,640 +her but who else engaged in film Mak + +691 +00:30:07,440 --> 00:30:12,679 +today is so cognizant of the cultural + +692 +00:30:09,640 --> 00:30:14,480 +and moral issues involved in the process + +693 +00:30:12,679 --> 00:30:17,600 +so what words in here are a good + +694 +00:30:14,480 --> 00:30:20,840 +indication that this is a neutral + +695 +00:30:17,600 --> 00:30:20,840 +sentence any + +696 +00:30:23,760 --> 00:30:28,399 +ideas little bit tough + +697 +00:30:26,240 --> 00:30:30,919 +huh starting to think maybe we should be + +698 +00:30:28,399 --> 00:30:30,919 +using machine + +699 +00:30:31,480 --> 00:30:37,440 +learning + +700 +00:30:34,080 --> 00:30:40,320 +um even by the intentionally low + +701 +00:30:37,440 --> 00:30:41,559 +standards of fratboy humor sority boys + +702 +00:30:40,320 --> 00:30:43,840 +is a + +703 +00:30:41,559 --> 00:30:46,080 +Bowser I think frat boy is maybe + +704 +00:30:43,840 --> 00:30:47,360 +negative sentiment if you're familiar + +705 
+00:30:46,080 --> 00:30:50,360 +with + +706 +00:30:47,360 --> 00:30:51,960 +us us I don't have any negative + +707 +00:30:50,360 --> 00:30:54,519 +sentiment but the people who say it that + +708 +00:30:51,960 --> 00:30:55,960 +way have negative senent maybe so if we + +709 +00:30:54,519 --> 00:31:01,080 +wanted to go in and do that we could + +710 +00:30:55,960 --> 00:31:01,080 +maybe I won't save this but + +711 +00:31:01,519 --> 00:31:08,919 +uh + +712 +00:31:04,240 --> 00:31:11,840 +um oh whoops I'll go back and fix it uh + +713 +00:31:08,919 --> 00:31:14,840 +crass crass is pretty obviously negative + +714 +00:31:11,840 --> 00:31:14,840 +right so I can add + +715 +00:31:17,039 --> 00:31:21,080 +crass actually let me just add + +716 +00:31:21,760 --> 00:31:29,159 +CR and then um I'll go back and have our + +717 +00:31:26,559 --> 00:31:29,159 +train accurate + +718 +00:31:32,159 --> 00:31:36,240 +wa maybe maybe I need to run the whole + +719 +00:31:33,960 --> 00:31:36,240 +thing + +720 +00:31:36,960 --> 00:31:39,960 +again + +721 +00:31:40,960 --> 00:31:45,880 +and that budg the training accuracy a + +722 +00:31:43,679 --> 00:31:50,360 +little um the dev test accuracy not very + +723 +00:31:45,880 --> 00:31:53,919 +much so I could go through and do this + +724 +00:31:50,360 --> 00:31:53,919 +um let me add + +725 +00:31:54,000 --> 00:31:58,320 +unengaging so I could go through and do + +726 +00:31:56,000 --> 00:32:01,720 +this all day and you probably be very + +727 +00:31:58,320 --> 00:32:01,720 +bored on + +728 +00:32:04,240 --> 00:32:08,360 +engage but I won't do that uh because we + +729 +00:32:06,919 --> 00:32:10,679 +have much more important things to be + +730 +00:32:08,360 --> 00:32:14,679 +doing + +731 +00:32:10,679 --> 00:32:16,440 +um and uh so anyway we um we could go + +732 +00:32:14,679 --> 00:32:18,919 +through and design all the features here + +733 +00:32:16,440 --> 00:32:21,279 +but like why is this complicated like + +734 +00:32:18,919 --> 00:32:22,600 +the the reason why it was complicated + +735 +00:32:21,279 --> 00:32:25,840 +became pretty + +736 +00:32:22,600 --> 00:32:27,840 +clear from the uh from the very + +737 +00:32:25,840 --> 00:32:29,639 +beginning uh the very first example I + +738 +00:32:27,840 --> 00:32:32,200 +showed you which was that was a really + +739 +00:32:29,639 --> 00:32:34,720 +complicated sentence like all of us + +740 +00:32:32,200 --> 00:32:36,240 +could see that it wasn't like really + +741 +00:32:34,720 --> 00:32:38,679 +strongly positive it wasn't really + +742 +00:32:36,240 --> 00:32:40,519 +strongly negative it was kind of like in + +743 +00:32:38,679 --> 00:32:42,919 +the middle but it was in the middle and + +744 +00:32:40,519 --> 00:32:44,600 +it said it in a very long way uh you + +745 +00:32:42,919 --> 00:32:46,120 +know not using any clearly positive + +746 +00:32:44,600 --> 00:32:47,639 +sentiment words not using any clearly + +747 +00:32:46,120 --> 00:32:49,760 +negative sentiment + +748 +00:32:47,639 --> 00:32:53,760 +words + +749 +00:32:49,760 --> 00:32:56,519 +um so yeah basically I I + +750 +00:32:53,760 --> 00:33:00,559 +improved um but what are the difficult + +751 +00:32:56,519 --> 00:33:03,720 +cases uh that we saw here so the first + +752 +00:33:00,559 --> 00:33:07,639 +one is low frequency + +753 +00:33:03,720 --> 00:33:09,760 +words so um here's an example the action + +754 +00:33:07,639 --> 00:33:11,519 +switches between past and present but + +755 +00:33:09,760 --> 00:33:13,120 +the material link is too tenuous to + 
+756
+00:33:11,519 --> 00:33:16,840
+anchor the emotional connections that
+
+757
+00:33:13,120 --> 00:33:19,519
+purport to span a 125 year divide so
+
+758
+00:33:16,840 --> 00:33:21,080
+this is negative um tenuous is kind of a
+
+759
+00:33:19,519 --> 00:33:22,799
+negative word purport is kind of a
+
+760
+00:33:21,080 --> 00:33:24,760
+negative word but it doesn't appear very
+
+761
+00:33:22,799 --> 00:33:26,159
+frequently so I would need to spend all
+
+762
+00:33:24,760 --> 00:33:29,720
+my time looking for these words and
+
+763
+00:33:26,159 --> 00:33:32,480
+trying to add them in um here's yet another
+
+764
+00:33:29,720 --> 00:33:34,240
+horror franchise mucking up its storyline
+
+765
+00:33:32,480 --> 00:33:36,639
+with glitches casual fans could correct
+
+766
+00:33:34,240 --> 00:33:40,159
+in their sleep negative
+
+767
+00:33:36,639 --> 00:33:42,600
+again um so the solutions here are keep
+
+768
+00:33:40,159 --> 00:33:46,880
+working until we get all of them which
+
+769
+00:33:42,600 --> 00:33:49,159
+is maybe not super fun um or incorporate
+
+770
+00:33:46,880 --> 00:33:51,639
+external resources such as sentiment
+
+771
+00:33:49,159 --> 00:33:52,880
+dictionaries that people created uh we
+
+772
+00:33:51,639 --> 00:33:55,960
+could do that but that's a lot of
+
+773
+00:33:52,880 --> 00:33:57,480
+engineering effort to make something
+
+774
+00:33:55,960 --> 00:34:00,639
+work
+
+775
+00:33:57,480 --> 00:34:03,720
+um another one is conjugation so we saw
+
+776
+00:34:00,639 --> 00:34:06,600
+unengaging I guess that's an example of
+
+777
+00:34:03,720 --> 00:34:08,359
+conjugation uh some other ones are
+
+778
+00:34:06,600 --> 00:34:10,520
+operatic sprawling picture that's
+
+779
+00:34:08,359 --> 00:34:12,040
+entertainingly acted magnificently shot
+
+780
+00:34:10,520 --> 00:34:15,480
+and gripping enough to sustain most of
+
+781
+00:34:12,040 --> 00:34:17,399
+its 170 minute length so here we have
+
+782
+00:34:15,480 --> 00:34:19,079
+magnificently so even if I added
+
+783
+00:34:17,399 --> 00:34:20,480
+magnificent this wouldn't have been
+
+784
+00:34:19,079 --> 00:34:23,800
+clocked
+
+785
+00:34:20,480 --> 00:34:26,599
+right um it's basically an overlong
+
+786
+00:34:23,800 --> 00:34:28,839
+episode of Tales from the Crypt so that's
+
+787
+00:34:26,599 --> 00:34:31,480
+maybe another
+
+788
+00:34:28,839 --> 00:34:33,040
+example um so some things that we could
+
+789
+00:34:31,480 --> 00:34:35,320
+do or what we would have done before the
+
+790
+00:34:33,040 --> 00:34:37,720
+modern Paradigm of machine learning is
+
+791
+00:34:35,320 --> 00:34:40,079
+we would run some sort of normalizer
+
+792
+00:34:37,720 --> 00:34:42,800
+like a stemmer or other things like this
+
+793
+00:34:40,079 --> 00:34:45,240
+in order to convert this into uh the
+
+794
+00:34:42,800 --> 00:34:48,599
+root words that we already have seen
+
+795
+00:34:45,240 --> 00:34:52,040
+somewhere in our data or have already
+
+796
+00:34:48,599 --> 00:34:54,040
+handled so that requires um conjugation
+
+797
+00:34:52,040 --> 00:34:55,879
+analysis or morphological analysis as we
+
+798
+00:34:54,040 --> 00:34:57,400
+say it in
+
+799
+00:34:55,879 --> 00:35:00,680
+technical terms
+
+800
+00:34:57,400 --> 00:35:03,960
+negation this is a tricky one so this
+
+801
+00:35:00,680 --> 00:35:06,760
+one's not nearly as dreadful as
+800
+00:34:57,400 --> 00:35:03,960
+negation this is a tricky one so this
+
+801
+00:35:00,680 --> 00:35:06,760
+one's not nearly as dreadful as expected
+
+802
+00:35:03,960 --> 00:35:08,800
+so dreadful is a pretty bad word right
+
+803
+00:35:06,760 --> 00:35:13,000
+but not nearly as dreadful as expected
+
+804
+00:35:08,800 --> 00:35:14,440
+is like a solidly neutral um you know or
+
+805
+00:35:13,000 --> 00:35:16,359
+maybe even
+
+806
+00:35:14,440 --> 00:35:18,920
+positive I would I would say that's
+
+807
+00:35:16,359 --> 00:35:20,640
+neutral but you know uh neutral or
+
+808
+00:35:18,920 --> 00:35:23,800
+positive it's definitely not
+
+809
+00:35:20,640 --> 00:35:26,359
+negative um Serving Sara doesn't serve up a
+
+810
+00:35:23,800 --> 00:35:29,480
+whole lot of laughs so laughs is
+
+811
+00:35:26,359 --> 00:35:31,880
+obviously positive but not serving them up
+
+812
+00:35:29,480 --> 00:35:34,440
+is obviously
+
+813
+00:35:31,880 --> 00:35:36,839
+negative so if negation modifies the
+
+814
+00:35:34,440 --> 00:35:38,240
+word to disregard it we would probably
+
+815
+00:35:36,839 --> 00:35:41,440
+need to do some sort of syntactic
+
+816
+00:35:38,240 --> 00:35:45,599
+analysis or semantic analysis of
+
+817
+00:35:41,440 --> 00:35:47,520
+some sort metaphor and analogy so puts a human
+
+818
+00:35:45,599 --> 00:35:50,640
+face on a land most westerners are
+
+819
+00:35:47,520 --> 00:35:52,880
+unfamiliar with uh this is
+
+820
+00:35:50,640 --> 00:35:54,960
+positive Green might want to hang on to
+
+821
+00:35:52,880 --> 00:35:58,800
+that ski mask as robbery may be the only
+
+822
+00:35:54,960 --> 00:35:58,800
+way to pay for his next project
+
+823
+00:35:58,839 --> 00:36:03,640
+so this this is saying that the movie
+
+824
+00:36:01,960 --> 00:36:05,560
+was so bad that the director will have
+
+825
+00:36:03,640 --> 00:36:08,359
+to rob people in order to get money for
+
+826
+00:36:05,560 --> 00:36:11,000
+the next project so that's kind of bad I
+
+827
+00:36:08,359 --> 00:36:12,880
+guess um has all the depth of a wading
+
+828
+00:36:11,000 --> 00:36:14,520
+pool this is kind of my favorite one
+
+829
+00:36:12,880 --> 00:36:15,880
+because it's really short and sweet but
+
+830
+00:36:14,520 --> 00:36:18,800
+you know you need to know how deep a
+
+831
+00:36:15,880 --> 00:36:21,440
+wading pool is um so that's
+
+832
+00:36:18,800 --> 00:36:22,960
+negative so the solution here I don't
+
+833
+00:36:21,440 --> 00:36:24,680
+really even know how to handle this with
+
+834
+00:36:22,960 --> 00:36:26,880
+a rule based system I have no idea how
+
+835
+00:36:24,680 --> 00:36:30,040
+we would possibly do this yeah machine
+
+836
+00:36:26,880 --> 00:36:32,400
+learning based models seem to be pretty
+
+837
+00:36:30,040 --> 00:36:37,000
+adaptive okay and then I start doing
+
+838
+00:36:32,400 --> 00:36:37,000
+these ones um anyone have a good
+
+839
+00:36:38,160 --> 00:36:46,800
+idea any any other friends who know
+
+840
+00:36:42,520 --> 00:36:50,040
+Japanese no okay um so yeah that's
+
+841
+00:36:46,800 --> 00:36:52,839
+positive um that one's negative uh and
+
+842
+00:36:50,040 --> 00:36:54,920
+the solution here is learn Japanese I
+
+843
+00:36:52,839 --> 00:36:56,800
+guess or whatever other language you
+
+844
+00:36:54,920 --> 00:37:00,040
+want to process so like obviously
+
+845
+00:36:56,800 --> 00:37:03,720
+rule-based systems don't scale very
+
+846
+00:37:00,040 --> 00:37:05,119
+well so um we've moved on but like rule
+
+847
+00:37:03,720 --> 00:37:06,319
+based systems don't scale very well
+
+848
+00:37:05,119 --> 00:37:08,160
+we're not going to be using them for
+
+849
+00:37:06,319 --> 00:37:11,400
+most of the things we do in this class
+
+850
+00:37:08,160 --> 00:37:14,240
+but I do think it's sometimes useful to
+
+851
+00:37:11,400 --> 00:37:15,640
+try to create one for your task maybe
+
+852
+00:37:14,240 --> 00:37:16,680
+right at the very beginning of a project
+
+853
+00:37:15,640 --> 00:37:18,560
+because it gives you an idea about
+
+854
+00:37:16,680 --> 00:37:21,160
+what's really hard about the task in
+
+855
+00:37:18,560 --> 00:37:22,480
+some cases so um yeah I wouldn't
+
+856
+00:37:21,160 --> 00:37:25,599
+entirely discount them I'm not
+
+857
+00:37:22,480 --> 00:37:27,400
+introducing them for no reason
+
+858
+00:37:25,599 --> 00:37:29,880
+whatsoever
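+
+One crude way a rule-based system might approximate the negation handling discussed above: flip a fixed window of words after a negator. This is an illustrative heuristic, not the lecture's code; real handling would need the syntactic or semantic analysis mentioned earlier.
+
+NEGATORS = {"not", "no", "never"}
+
+def mark_negation(text: str) -> list:
+    out, flip = [], 0
+    for w in text.lower().split():
+        if w in NEGATORS or w.endswith("n't"):
+            flip = 3  # flip the sentiment of the next three words
+        elif flip > 0:
+            out.append("NOT_" + w)  # "NOT_laughs" can then get its own dictionary entry
+            flip -= 1
+        else:
+            out.append(w)
+    return out
+
+print(mark_negation("doesn't serve up a whole lot of laughs"))
+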
+859
+00:37:27,400 --> 00:37:34,160
+so next is machine learning based analysis
+
+860
+00:37:29,880 --> 00:37:35,400
+and machine learning uh in general uh
+
+861
+00:37:34,160 --> 00:37:36,640
+here actually when I say machine
+
+862
+00:37:35,400 --> 00:37:38,160
+learning I'm going to be talking about
+
+863
+00:37:36,640 --> 00:37:39,560
+the traditional fine-tuning approach
+
+864
+00:37:38,160 --> 00:37:43,520
+where we have a training set dev set
+
+865
+00:37:39,560 --> 00:37:46,359
+test set and so we take our training set
+
+866
+00:37:43,520 --> 00:37:49,680
+we run some learning algorithm over it
+
+867
+00:37:46,359 --> 00:37:52,319
+we have a learned feature extractor f a
+
+868
+00:37:49,680 --> 00:37:55,839
+possibly learned feature extractor f
+
+869
+00:37:52,319 --> 00:37:57,880
+possibly learned scoring function w and
+
+870
+00:37:55,839 --> 00:38:00,800
+uh then we apply our inference algorithm
+
+871
+00:37:57,880 --> 00:38:02,839
+our decision rule and make decisions
+
+872
+00:38:00,800 --> 00:38:04,200
+when I say possibly learned actually the
+
+873
+00:38:02,839 --> 00:38:06,119
+first example I'm going to give of a
+
+874
+00:38:04,200 --> 00:38:07,760
+machine learning based technique uh it
+
+875
+00:38:06,119 --> 00:38:10,079
+doesn't have a learned feature extractor
+
+876
+00:38:07,760 --> 00:38:12,800
+but most things that we use nowadays do
+
+877
+00:38:10,079 --> 00:38:12,800
+have learned feature
+
+878
+00:38:13,200 --> 00:38:18,040
+extractors so our first attempt is going
+
+879
+00:38:15,640 --> 00:38:21,760
+to be a bag of words model uh and the
+
+880
+00:38:18,040 --> 00:38:27,119
+way a bag of words model works is uh
+
+881
+00:38:21,760 --> 00:38:30,160
+essentially we start out by looking up a
+
+882
+00:38:27,119 --> 00:38:33,240
+vector where one element in the vector
+
+883
+00:38:30,160 --> 00:38:36,240
+is uh is one and all the other elements
+
+884
+00:38:33,240 --> 00:38:38,040
+in the vector are zero and so if the
+
+885
+00:38:36,240 --> 00:38:40,319
+word is different the position in the
+
+886
+00:38:38,040 --> 00:38:42,839
+vector that's one will be different we
+
+887
+00:38:40,319 --> 00:38:46,280
+add all of these together and this gives
+
+888
+00:38:42,839 --> 00:38:48,200
+us a vector where each element is the
+
+889
+00:38:46,280 --> 00:38:50,359
+frequency of that word in the sentence and
+
+890
+00:38:48,200 --> 00:38:52,520
+then we multiply that by weights and we
+
+891
+00:38:50,359 --> 00:38:55,520
+get a
+
+892
+00:38:52,520 --> 00:38:57,160
+score and um here as I said this is not
+
+893
+00:38:55,520 --> 00:39:00,359
+a learned feature
+
+894
+00:38:57,160 --> 00:39:02,079
+uh vector this is basically uh sorry not
+
+895
+00:39:00,359 --> 00:39:04,359
+a learned feature extractor this is
+
+896
+00:39:02,079 --> 00:39:06,200
+basically a fixed feature extractor but
+
+897
+00:39:04,359 --> 00:39:09,839
+the weights themselves are
+
+898
+00:39:06,200 --> 00:39:11,640
+learned
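+
+A sketch of the bag-of-words scoring just described, with a made-up toy vocabulary: summing one-hot vectors gives the word-frequency vector, which is then dotted with a weight vector.
+
+import numpy as np
+
+vocab = {"i": 0, "love": 1, "hate": 2, "this": 3, "movie": 4}  # toy vocabulary
+
+def bow_features(text: str) -> np.ndarray:
+    x = np.zeros(len(vocab))      # sum of one-hot vectors
+    for w in text.lower().split():
+        if w in vocab:
+            x[vocab[w]] += 1      # each element = frequency of that word
+    return x
+
+weights = np.zeros(len(vocab))    # the learned part, one weight per word
+score = weights @ bow_features("i love this movie")  # classify by the sign of score
+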
+899
+00:39:09,839 --> 00:39:14,599
+um so my my question is I mentioned a whole lot of problems before
+
+900
+00:39:11,640 --> 00:39:17,480
+I mentioned infrequent words I mentioned
+
+901
+00:39:14,599 --> 00:39:20,760
+conjugation I mentioned uh different
+
+902
+00:39:17,480 --> 00:39:22,880
+languages I mentioned syntax and
+
+903
+00:39:20,760 --> 00:39:24,599
+metaphor so which of these do we think
+
+904
+00:39:22,880 --> 00:39:25,440
+would be fixed by this sort of learning
+
+905
+00:39:24,599 --> 00:39:27,400
+based
+
+906
+00:39:25,440 --> 00:39:29,640
+approach
+
+907
+00:39:27,400 --> 00:39:29,640
+any
+
+908
+00:39:29,920 --> 00:39:35,200
+ideas maybe not fixed maybe made
+
+909
+00:39:32,520 --> 00:39:35,200
+significantly
+
+910
+00:39:36,880 --> 00:39:41,560
+better any brave uh brave
+
+911
+00:39:44,880 --> 00:39:48,440
+people maybe maybe
+
+912
+00:39:53,720 --> 00:39:58,400
+negation okay so maybe doesn't when it
+
+913
+00:39:55,760 --> 00:39:58,400
+has a negative quality
+
+914
+00:40:02,960 --> 00:40:07,560
+yeah yeah so for the conjugation if we
+
+915
+00:40:05,520 --> 00:40:09,200
+had the conjugations of the stems mapped
+
+916
+00:40:07,560 --> 00:40:11,119
+to the same position that might fix a
+
+917
+00:40:09,200 --> 00:40:12,920
+conjugation problem but I would say if
+
+918
+00:40:11,119 --> 00:40:15,200
+you don't do that then this kind of
+
+919
+00:40:12,920 --> 00:40:18,160
+fixes conjugation a little bit but maybe
+
+920
+00:40:15,200 --> 00:40:21,319
+not not really yeah kind of fixes
+
+921
+00:40:18,160 --> 00:40:24,079
+conjugation because like they're using
+
+922
+00:40:21,319 --> 00:40:26,760
+the same there
+
+923
+00:40:24,079 --> 00:40:28,400
+probably different variations so we
+
+924
+00:40:26,760 --> 00:40:31,359
+learn how to
+
+925
+00:40:28,400 --> 00:40:33,400
+classify surrounding
+
+926
+00:40:31,359 --> 00:40:35,000
+structure yeah if it's a big enough
+
+927
+00:40:33,400 --> 00:40:36,760
+training set you might have covered the
+
+928
+00:40:35,000 --> 00:40:37,880
+various conjugations but if you haven't
+
+929
+00:40:36,760 --> 00:40:43,000
+and you don't have any rule-based
+
+930
+00:40:37,880 --> 00:40:43,000
+processing there might still be problems
+
+931
+00:40:45,400 --> 00:40:50,359
+yeah yeah so infrequent words if you
+
+932
+00:40:48,280 --> 00:40:52,560
+have a large enough training set yeah
+
+933
+00:40:50,359 --> 00:40:54,599
+you'll be able to fix it to some extent
+
+934
+00:40:52,560 --> 00:40:56,480
+so none of the problems are entirely
+
+935
+00:40:54,599 --> 00:40:57,880
+fixed but a lot of them are made better
+
+936
+00:40:56,480 --> 00:40:58,960
+different languages is also made better
+
+937
+00:40:57,880 --> 00:41:00,119
+if you have training data in that
+
+938
+00:40:58,960 --> 00:41:04,599
+language but if you don't then you're
+
+939
+00:41:00,119 --> 00:41:06,240
+out of luck so um so now what I'd like to
+
+940
+00:41:04,599 --> 00:41:10,800
+do is I'd like to look at what
+
+941
+00:41:06,240 --> 00:41:15,079
+our vectors represent so basically um in
+
+942
+00:41:10,800 --> 00:41:16,880
+uh in binary classification each word um
+
+943
+00:41:15,079 --> 00:41:19,119
+sorry so the vectors themselves
+
+944
+00:41:16,880 --> 00:41:21,880
+represent the counts of the words here
+
+945
+00:41:19,119 --> 00:41:25,319
+I'm talking about what the weight uh
+
+946
+00:41:21,880 --> 00:41:28,520
+vectors or matrices correspond to and
+947
+00:41:25,319 --> 00:41:31,640
+the weight uh vector here will be
+
+948
+00:41:28,520 --> 00:41:33,680
+positive if the word tends to be
+
+949
+00:41:31,640 --> 00:41:36,680
+positive in a binary classification
+
+950
+00:41:33,680 --> 00:41:38,400
+case in a multiclass classification case
+
+951
+00:41:36,680 --> 00:41:42,480
+we'll actually have a matrix that looks
+
+952
+00:41:38,400 --> 00:41:45,480
+like this where um each column or row uh
+
+953
+00:41:42,480 --> 00:41:47,079
+corresponds to the word and each row or
+
+954
+00:41:45,480 --> 00:41:49,319
+column corresponds to a label and it
+
+955
+00:41:47,079 --> 00:41:51,960
+will be higher if that word tends to uh
+
+956
+00:41:49,319 --> 00:41:54,800
+correlate with that uh that label a
+
+957
+00:41:51,960 --> 00:41:56,920
+little
+
+958
+00:41:54,800 --> 00:41:59,240
+bit so
+
+959
+00:41:56,920 --> 00:42:04,079
+this um training of the bag of words
+
+960
+00:41:59,240 --> 00:42:07,720
+model can be done uh so simply that
+
+961
+00:42:04,079 --> 00:42:10,200
+we uh can put it in a single slide so
+
+962
+00:42:07,720 --> 00:42:11,599
+basically here uh what we do is we start
+
+963
+00:42:10,200 --> 00:42:14,760
+out with the feature
+
+964
+00:42:11,599 --> 00:42:18,880
+weights and for each example in our data
+
+965
+00:42:14,760 --> 00:42:20,800
+set we extract features um the exact way
+
+966
+00:42:18,880 --> 00:42:23,920
+I'm extracting features is basically
+
+967
+00:42:20,800 --> 00:42:25,720
+splitting uh splitting the words using
+
+968
+00:42:23,920 --> 00:42:28,000
+the Python split function and then uh
+
+969
+00:42:25,720 --> 00:42:31,319
+counting the number of times each word
+
+970
+00:42:28,000 --> 00:42:33,160
+exists uh we then run the classifier so
+
+971
+00:42:31,319 --> 00:42:36,280
+actually running the classifier is
+
+972
+00:42:33,160 --> 00:42:38,200
+exactly the same as what we did for the
+
+973
+00:42:36,280 --> 00:42:42,640
+uh the rule based system it's just that
+
+974
+00:42:38,200 --> 00:42:47,359
+we have feature vectors instead and
+
+975
+00:42:42,640 --> 00:42:51,559
+then if the predicted value is
+
+976
+00:42:47,359 --> 00:42:55,160
+not the true value then for each of the
+
+977
+00:42:51,559 --> 00:42:56,680
+features uh in the feature space we
+
+978
+00:42:55,160 --> 00:43:02,200
+upweight
+
+979
+00:42:56,680 --> 00:43:03,599
+the um we upweight the weight by the
+
+980
+00:43:02,200 --> 00:43:06,000
+vector
+
+981
+00:43:03,599 --> 00:43:09,920
+size or by the amount of the vector
+
+982
+00:43:06,000 --> 00:43:13,240
+if y is positive and we downweight the
+
+983
+00:43:09,920 --> 00:43:16,240
+weight uh by the size of the vector if y
+
+984
+00:43:13,240 --> 00:43:18,520
+is negative so this is really really
+
+985
+00:43:16,240 --> 00:43:20,559
+simple it's uh probably the simplest
+
+986
+00:43:18,520 --> 00:43:25,079
+possible algorithm for training one of
+
+987
+00:43:20,559 --> 00:43:27,559
+these models
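+
+A sketch of that single-slide update rule in Python; feature vectors are dicts of word counts and labels y are +1 or -1, so the mistake-driven upweight/downweight is a perceptron-style update (names here are illustrative, not the lecture notebook's):
+
+def run_classifier(weights: dict, feats: dict) -> int:
+    score = sum(weights.get(f, 0.0) * v for f, v in feats.items())
+    return 1 if score > 0 else -1
+
+def train_one_pass(weights: dict, data: list) -> None:
+    for feats, y in data:                    # y is +1 or -1
+        if run_classifier(weights, feats) != y:
+            # on a mistake, move each feature's weight toward the true label
+            for f, v in feats.items():
+                weights[f] = weights.get(f, 0.0) + y * v
+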
+988
+00:43:25,079 --> 00:43:30,040
+um but I have an example of this that you can also take a
+
+989
+00:43:27,559 --> 00:43:31,960
+look at here's a trained bag of words
+
+990
+00:43:30,040 --> 00:43:33,680
+classifier and we could step through
+
+991
+00:43:31,960 --> 00:43:34,960
+this it's on exactly the same data set as
+
+992
+00:43:33,680 --> 00:43:37,240
+I did before we're training on the
+
+993
+00:43:34,960 --> 00:43:42,359
+training set
+
+994
+00:43:37,240 --> 00:43:43,640
+um and uh evaluating on the dev set um I
+
+995
+00:43:42,359 --> 00:43:45,880
+also have some extra stuff like I'm
+
+996
+00:43:43,640 --> 00:43:47,079
+shuffling the order of the data IDs
+
+997
+00:43:45,880 --> 00:43:49,440
+which is really important if you're
+
+998
+00:43:47,079 --> 00:43:53,160
+doing this sort of incremental algorithm
+
+999
+00:43:49,440 --> 00:43:54,960
+uh because uh what if what if your
+
+1000
+00:43:53,160 --> 00:43:57,400
+training data set was ordered in this
+
+1001
+00:43:54,960 --> 00:44:00,040
+way where you have all of the positive
+
+1002
+00:43:57,400 --> 00:44:00,040
+labels on
+
+1003
+00:44:00,359 --> 00:44:04,520
+top and then you have all of the
+
+1004
+00:44:02,280 --> 00:44:06,680
+negative labels on the
+
+1005
+00:44:04,520 --> 00:44:08,200
+bottom if you do something like this it
+
+1006
+00:44:06,680 --> 00:44:10,200
+would see only negative labels at the
+
+1007
+00:44:08,200 --> 00:44:11,800
+end of training and you might have
+
+1008
+00:44:10,200 --> 00:44:14,400
+problems because your model would only
+
+1009
+00:44:11,800 --> 00:44:17,440
+predict negatives so we also shuffle
+
+1010
+00:44:14,400 --> 00:44:20,319
+the data um and then step through we run the
+
+1011
+00:44:17,440 --> 00:44:22,559
+classifier and I'm going to run uh five
+
+1012
+00:44:20,319 --> 00:44:23,640
+epochs of training through the data set
+
+1013
+00:44:22,559 --> 00:44:27,160
+uh very
+
+1014
+00:44:23,640 --> 00:44:29,599
+fast and calculate our accuracy
+
+1015
+00:44:27,160 --> 00:44:33,280
+and this got 75% accuracy on the
+
+1016
+00:44:29,599 --> 00:44:36,160
+training data set and uh 56% accuracy on
+
+1017
+00:44:33,280 --> 00:44:40,000
+the dev data set so uh if you remember
+
+1018
+00:44:36,160 --> 00:44:41,520
+our rule-based classifier had 42 uh 42%
+
+1019
+00:44:40,000 --> 00:44:43,880
+accuracy and now our training based
+
+1020
+00:44:41,520 --> 00:44:45,760
+classifier has 56% accuracy but it's
+
+1021
+00:44:43,880 --> 00:44:49,359
+overfitting heavily to the training set
+
+1022
+00:44:45,760 --> 00:44:50,880
+so um basically this is a pretty strong
+
+1023
+00:44:49,359 --> 00:44:53,480
+advertisement for why we should be using
+
+1024
+00:44:50,880 --> 00:44:54,960
+machine learning you know uh the amount
+
+1025
+00:44:53,480 --> 00:44:57,800
+of code that we had for this machine
+
+1026
+00:44:54,960 --> 00:44:59,720
+learning model is basically very similar
+
+1027
+00:44:57,800 --> 00:45:02,680
+um it's not using any external libraries
+
+1028
+00:44:59,720 --> 00:45:02,680
+but we're getting better at
+
+1029
+00:45:03,599 --> 00:45:08,800
+this
+
+1030
+00:45:05,800 --> 00:45:08,800
+cool
+
+1031
+00:45:09,559 --> 00:45:16,000
+so cool any any questions
+
+1032
+00:45:13,520 --> 00:45:18,240
+here and so I'm going to talk about the
+
+1033
+00:45:16,000 --> 00:45:20,760
+connection between this algorithm and
+
+1034
+00:45:18,240 --> 00:45:22,839
+neural networks in the next class um
+
+1035
+00:45:20,760 --> 00:45:24,200
+because this actually is using a very
+
+1036
+00:45:22,839 --> 00:45:26,319
+similar training algorithm to what we
+
+1037
+00:45:24,200 --> 00:45:27,480
+use in neural networks with some uh
+
+1038
+00:45:26,319 --> 00:45:30,079
+particular
+
+1039
+00:45:27,480 --> 00:45:32,839
+assumptions cool
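+
+Filling in the epoch loop around the earlier sketch: shuffling each epoch, as discussed, plus an accuracy check. train_one_pass and run_classifier are the functions from the sketch above, and the toy data here is made up:
+
+import random
+
+# toy data in the same (features, label) format as the training sketch
+train_data = [({"love": 1.0}, 1), ({"crass": 1.0}, -1)] * 20
+
+weights = {}
+for epoch in range(5):
+    random.shuffle(train_data)   # avoid one long run of a single label
+    train_one_pass(weights, train_data)
+
+accuracy = sum(run_classifier(weights, f) == y for f, y in train_data) / len(train_data)
+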
+1040
+00:45:30,079 --> 00:45:34,800
+um so what's missing in bag of words um still handling of
+
+1041
+00:45:32,839 --> 00:45:36,880
+conjugation or compound words is not
+
+1042
+00:45:34,800 --> 00:45:39,160
+perfect uh we can do it to some extent
+
+1043
+00:45:36,880 --> 00:45:41,079
+to the point where we can uh memorize
+
+1044
+00:45:39,160 --> 00:45:44,079
+things so I love this movie I loved this
+
+1045
+00:45:41,079 --> 00:45:46,920
+movie another thing is handling word
+
+1046
+00:45:44,079 --> 00:45:49,240
+uh similarities so I love this movie and
+
+1047
+00:45:46,920 --> 00:45:50,720
+I adore this movie uh these basically
+
+1048
+00:45:49,240 --> 00:45:52,119
+mean the same thing as humans we know
+
+1049
+00:45:50,720 --> 00:45:54,200
+they mean the same thing so we should be
+
+1050
+00:45:52,119 --> 00:45:56,079
+able to take advantage of that fact to
+
+1051
+00:45:54,200 --> 00:45:57,839
+learn better models but we're not doing
+
+1052
+00:45:56,079 --> 00:46:02,760
+that in this model at the moment because
+
+1053
+00:45:57,839 --> 00:46:05,440
+each unit is uh treated as an atomic unit
+
+1054
+00:46:02,760 --> 00:46:08,040
+and there's no idea of
+
+1055
+00:46:05,440 --> 00:46:11,040
+similarity also handling of combination
+
+1056
+00:46:08,040 --> 00:46:12,760
+features so um I love this movie and I
+
+1057
+00:46:11,040 --> 00:46:14,920
+don't love this movie I hate this movie
+
+1058
+00:46:12,760 --> 00:46:17,079
+and I don't hate this movie actually
+
+1059
+00:46:14,920 --> 00:46:20,400
+this is a little bit tricky because
+
+1060
+00:46:17,079 --> 00:46:23,240
+negation words are slightly indicative
+
+1061
+00:46:20,400 --> 00:46:25,280
+of it being negative but actually what
+
+1062
+00:46:23,240 --> 00:46:28,119
+they do is they negate the other things
+
+1063
+00:46:25,280 --> 00:46:28,119
+that you're saying in the
+
+1064
+00:46:28,240 --> 00:46:36,559
+sentence
+
+1065
+00:46:30,720 --> 00:46:40,480
+so um like love is positive hate is
+
+1066
+00:46:36,559 --> 00:46:40,480
+negative but like don't
+
+1067
+00:46:50,359 --> 00:46:56,079
+love it's actually kind of like this
+
+1068
+00:46:52,839 --> 00:46:59,359
+right like um love is very positive
+
+1069
+00:46:56,079 --> 00:47:01,760
+hate is very negative but don't love is
+
+1070
+00:46:59,359 --> 00:47:04,680
+like slightly less positive than don't
+
+1071
+00:47:01,760 --> 00:47:06,160
+hate right so um it's actually kind of
+
+1072
+00:47:04,680 --> 00:47:07,559
+tricky because you need to combine them
+
+1073
+00:47:06,160 --> 00:47:10,720
+together and figure out what's going on
+
+1074
+00:47:07,559 --> 00:47:12,280
+based on that another example that a lot
+
+1075
+00:47:10,720 --> 00:47:14,160
+of people might not think of immediately
+
+1076
+00:47:12,280 --> 00:47:17,880
+but is super super common in sentiment
+
+1077
+00:47:14,160 --> 00:47:20,160
+analysis or any other thing is but so
+
+1078
+00:47:17,880 --> 00:47:22,599
+basically what but does is it throws
+
+1079
+00:47:20,160 --> 00:47:24,160
+away all the stuff that you said before
+
+1080
+00:47:22,599 --> 00:47:26,119
+it um and you can just pay attention to the
+
+1081
+00:47:24,160 --> 00:47:29,000
+stuff that comes afterward so like we
+
+1082
+00:47:26,119 --> 00:47:30,440
+could even add this to our um like if
+
+1083
+00:47:29,000 --> 00:47:31,760
+you want to add this to your rule based
+
+1084
+00:47:30,440 --> 00:47:33,240
+classifier you can do that you just
+
+1085
+00:47:31,760 --> 00:47:34,640
+search for but and delete everything
+
+1086
+00:47:33,240 --> 00:47:37,240
+before it and see if that improves your
+
+1087
+00:47:34,640 --> 00:47:39,240
+accuracy might be might be a fun very
+
+1088
+00:47:37,240 --> 00:47:43,480
+quick thing
+
+1089
+00:47:39,240 --> 00:47:44,880
+to try cool
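+
+The "but" trick from above as a few lines of Python, an illustrative preprocessing step you could bolt onto the rule-based classifier:
+
+def keep_after_but(text: str) -> str:
+    words = text.lower().split()
+    if "but" in words:
+        last = len(words) - 1 - words[::-1].index("but")
+        return " ".join(words[last + 1:])  # discard everything before the last "but"
+    return text
+
+print(keep_after_but("the cast is great but the plot goes nowhere"))
+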
+1090
+00:47:43,480 --> 00:47:46,800
+so the better solution which is what we're going to talk about for every
+
+1091
+00:47:44,880 --> 00:47:49,480
+other class other than uh other than
+
+1092
+00:47:46,800 --> 00:47:52,160
+this one is neural network models and
+
+1093
+00:47:49,480 --> 00:47:55,800
+basically uh what they do is they do a
+
+1094
+00:47:52,160 --> 00:47:59,400
+lookup of uh dense word embeddings so
+
+1095
+00:47:55,800 --> 00:48:02,520
+instead of looking up uh individual uh
+
+1096
+00:47:59,400 --> 00:48:04,640
+sparse uh vectors individual one hot
+
+1097
+00:48:02,520 --> 00:48:06,920
+vectors they look up dense word
+
+1098
+00:48:04,640 --> 00:48:09,680
+embeddings and then throw them into some
+
+1099
+00:48:06,920 --> 00:48:11,880
+complicated function to extract features
+
+1100
+00:48:09,680 --> 00:48:16,359
+and based on the features uh multiply by
+
+1101
+00:48:11,880 --> 00:48:18,280
+weights and get a score um and if you're
+
+1102
+00:48:16,359 --> 00:48:20,359
+doing text classification in the
+
+1103
+00:48:18,280 --> 00:48:22,520
+traditional way this is normally what
+
+1104
+00:48:20,359 --> 00:48:23,760
+you do um if you're doing text
+
+1105
+00:48:22,520 --> 00:48:25,960
+classification with something like
+
+1106
+00:48:23,760 --> 00:48:27,280
+prompting you're still actually doing
+
+1107
+00:48:25,960 --> 00:48:29,960
+this because you're calculating the
+
+1108
+00:48:27,280 --> 00:48:32,960
+score of the next word to predict and
+
+1109
+00:48:29,960 --> 00:48:34,720
+that's done in exactly the same way so
+
+1110
+00:48:32,960 --> 00:48:37,760
+uh even if you're using a large language
+
+1111
+00:48:34,720 --> 00:48:39,359
+model like GPT this is still probably
+
+1112
+00:48:37,760 --> 00:48:41,800
+happening under the hood unless OpenAI
+
+1113
+00:48:39,359 --> 00:48:43,400
+invented something that's very
+
+1114
+00:48:41,800 --> 00:48:45,559
+different and alien from anything else
+
+1115
+00:48:43,400 --> 00:48:48,440
+that we know of but I I'm guessing that
+
+1116
+00:48:45,559 --> 00:48:48,440
+that probably hasn't
+
+1117
+00:48:48,480 --> 00:48:52,880
+happened um one nice thing about neural
+
+1118
+00:48:50,880 --> 00:48:54,480
+networks is neural networks
+
+1119
+00:48:52,880 --> 00:48:57,559
+theoretically are powerful enough to
+
+1120
+00:48:54,480 --> 00:49:00,000
+solve any task if you make them uh deep
+
+1121
+00:48:57,559 --> 00:49:01,160
+enough or wide enough uh like if you
+
+1122
+00:49:00,000 --> 00:49:04,520
+make them wide enough and then if you
+
+1123
+00:49:01,160 --> 00:49:06,799
+make them deep it also helps further so
+
+1124
+00:49:04,520 --> 00:49:08,079
+anytime somebody says well you can't
+
+1125
+00:49:06,799 --> 00:49:11,119
+just solve that problem with neural
+
+1126
+00:49:08,079 --> 00:49:13,240
+networks you know that they're lying
+
+1127
+00:49:11,119 --> 00:49:15,720
+basically because they theoretically can
+
+1128
+00:49:13,240 --> 00:49:17,359
+solve every problem uh but you have you
+
+1129
+00:49:15,720 --> 00:49:19,799
+have issues of data you have issues of
+
+1130
+00:49:17,359 --> 00:49:23,079
+other things like that so you know they
+
+1131
+00:49:19,799 --> 00:49:23,079
+don't just necessarily work out of the
+
+1132
+00:49:23,119 --> 00:49:28,040
+box
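+
+A minimal PyTorch sketch of the dense-embedding recipe just described, with toy sizes; mean pooling stands in for the "complicated function," which in real models is a Transformer or similar:
+
+import torch
+import torch.nn as nn
+
+class EmbeddingClassifier(nn.Module):
+    def __init__(self, vocab_size=1000, dim=64, n_classes=2):
+        super().__init__()
+        self.emb = nn.Embedding(vocab_size, dim)  # dense word embeddings
+        self.out = nn.Linear(dim, n_classes)      # scoring weights
+
+    def forward(self, word_ids: torch.Tensor) -> torch.Tensor:
+        feats = self.emb(word_ids).mean(dim=0)    # feature extraction over the sentence
+        return self.out(feats)                    # one score per class
+
+scores = EmbeddingClassifier()(torch.tensor([3, 14, 159]))  # toy token ids
+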
+1133
+00:49:26,400 --> 00:49:29,319
+cool um so the final thing I'd like to talk about is the road map going
+
+1134
+00:49:28,040 --> 00:49:31,319
+forward some of the things I'm going to
+
+1135
+00:49:29,319 --> 00:49:32,799
+cover in the class and some of the
+
+1136
+00:49:31,319 --> 00:49:35,200
+logistics
+
+1137
+00:49:32,799 --> 00:49:36,799
+issues so um the first thing I'm going
+
+1138
+00:49:35,200 --> 00:49:38,240
+to talk about in the class is language
+
+1139
+00:49:36,799 --> 00:49:40,559
+modeling
+
+1140
+00:49:38,240 --> 00:49:42,720
+fundamentals and uh so this could
+
+1141
+00:49:40,559 --> 00:49:44,240
+include language models uh that just
+
+1142
+00:49:42,720 --> 00:49:46,559
+predict the next words it could include
+
+1143
+00:49:44,240 --> 00:49:50,559
+language models that predict the output
+
+1144
+00:49:46,559 --> 00:49:51,599
+given the uh the input or the prompt um
+
+1145
+00:49:50,559 --> 00:49:54,559
+I'm going to be talking about
+
+1146
+00:49:51,599 --> 00:49:56,520
+representing words uh how how we get
+
+1147
+00:49:54,559 --> 00:49:59,319
+word representations subword models other
+
+1148
+00:49:56,520 --> 00:50:01,440
+things like that uh then go kind of
+
+1149
+00:49:59,319 --> 00:50:04,200
+deeper into language modeling uh how do
+
+1150
+00:50:01,440 --> 00:50:07,799
+we do it how do we evaluate it other
+
+1151
+00:50:04,200 --> 00:50:10,920
+things um sequence encoding uh and this
+
+1152
+00:50:07,799 --> 00:50:13,240
+is going to cover things like uh
+
+1153
+00:50:10,920 --> 00:50:16,280
+Transformers uh self-attention models
+
+1154
+00:50:13,240 --> 00:50:18,559
+but also very quickly CNNs and RNNs
+
+1155
+00:50:16,280 --> 00:50:20,880
+which are useful in some
+
+1156
+00:50:18,559 --> 00:50:22,200
+cases um and then we're going to
+
+1157
+00:50:20,880 --> 00:50:24,040
+specifically go very deep into the
+
+1158
+00:50:22,200 --> 00:50:25,960
+Transformer architecture and also talk a
+
+1159
+00:50:24,040 --> 00:50:27,280
+little bit about some of the modern uh
+
+1160
+00:50:25,960 --> 00:50:30,240
+improvements to the Transformer
+
+1161
+00:50:27,280 --> 00:50:31,839
+architecture so the Transformer we're
+
+1162
+00:50:30,240 --> 00:50:33,839
+using nowadays is very different than
+
+1163
+00:50:31,839 --> 00:50:36,200
+the Transformer that was invented in
+
+1164
+00:50:33,839 --> 00:50:37,240
+2017 uh so we're going to talk well I
+
+1165
+00:50:36,200 --> 00:50:38,760
+wouldn't say very different but
+
+1166
+00:50:37,240 --> 00:50:41,359
+different enough that it's important so
+
+1167
+00:50:38,760 --> 00:50:43,280
+we're going to talk about some of those
+
+1168
+00:50:41,359 --> 00:50:45,079
+things the second thing I'd like to talk
+
+1169
+00:50:43,280 --> 00:50:47,000
+about is training and inference methods
+
+1170
+00:50:45,079 --> 00:50:48,839
+so this includes uh generation
+
+1171
+00:50:47,000 --> 00:50:52,119
+algorithms uh so we're going to have a
+
+1172
+00:50:48,839 --> 00:50:55,520
+whole class on how we generate text uh
+
+1173
+00:50:52,119 --> 00:50:58,319
+in different ways uh prompting how uh we
+
+1174
+00:50:55,520 --> 00:50:59,720
+can prompt things I hear uh world class
+
+1175
+00:50:58,319 --> 00:51:01,799
+prompt engineers make a lot of money
+
+1176
+00:50:59,720 --> 00:51:05,480
+nowadays so uh you'll want to pay
+
+1177
+00:51:01,799 --> 00:51:08,760
+attention to that one um and instruction
+
+1178
+00:51:05,480 --> 00:51:11,520
+tuning uh so how do we train models to
+
+1179
+00:51:08,760 --> 00:51:13,720
+handle a lot of different tasks and
+
+1180
+00:51:11,520 --> 00:51:15,839
+reinforcement learning so how do we uh
+
+1181
+00:51:13,720 --> 00:51:18,520
+you know like actually generate outputs
+1182
+00:51:15,839 --> 00:51:19,839
+uh kind of judge them and then learn
+
+1183
+00:51:18,520 --> 00:51:22,599
+from
+
+1184
+00:51:19,839 --> 00:51:25,880
+there also experimental design and
+
+1185
+00:51:22,599 --> 00:51:28,079
+evaluation so experimental design uh so
+
+1186
+00:51:25,880 --> 00:51:30,480
+how do we design an experiment well uh
+
+1187
+00:51:28,079 --> 00:51:32,000
+so that it backs up what we want to be
+
+1188
+00:51:30,480 --> 00:51:34,559
+uh our conclusions that we want to be
+
+1189
+00:51:32,000 --> 00:51:37,000
+backing up how do we do human annotation
+
+1190
+00:51:34,559 --> 00:51:38,880
+of data in a reliable way this is
+
+1191
+00:51:37,000 --> 00:51:41,160
+getting harder and harder as models get
+
+1192
+00:51:38,880 --> 00:51:43,359
+better and better because uh getting
+
+1193
+00:51:41,160 --> 00:51:45,000
+humans who don't care very much about
+
+1194
+00:51:43,359 --> 00:51:48,559
+the annotation task they might do worse
+
+1195
+00:51:45,000 --> 00:51:51,119
+than GPT-4 so um you need to be careful of
+
+1196
+00:51:48,559 --> 00:51:52,240
+that also debugging and interpretation
+
+1197
+00:51:51,119 --> 00:51:53,960
+techniques so what are some of the
+
+1198
+00:51:52,240 --> 00:51:55,160
+automatic techniques that you can use to
+
+1199
+00:51:53,960 --> 00:51:57,720
+quickly figure out what's going wrong
+
+1200
+00:51:55,160 --> 00:52:00,040
+with your models and improve
+
+1201
+00:51:57,720 --> 00:52:01,599
+them and uh bias and fairness
+
+1202
+00:52:00,040 --> 00:52:04,200
+considerations so it's really really
+
+1203
+00:52:01,599 --> 00:52:05,799
+important nowadays uh that models are
+
+1204
+00:52:04,200 --> 00:52:07,880
+being deployed to real people in the
+
+1205
+00:52:05,799 --> 00:52:09,880
+real world and like actually causing
+
+1206
+00:52:07,880 --> 00:52:11,760
+harm to people in some cases so we
+
+1207
+00:52:09,880 --> 00:52:15,160
+need to be worried about
+
+1208
+00:52:11,760 --> 00:52:17,000
+that next is advanced training and architectures
+
+1209
+00:52:15,160 --> 00:52:19,280
+so we're going to talk about
+
+1210
+00:52:17,000 --> 00:52:21,400
+distillation and quantization how can we
+
+1211
+00:52:19,280 --> 00:52:23,520
+make small language models uh that
+
+1212
+00:52:21,400 --> 00:52:24,880
+actually still work well like not large
+
+1213
+00:52:23,520 --> 00:52:27,559
+you can run them on your phone you can
+
+1214
+00:52:24,880 --> 00:52:29,920
+run them on your local
+
+1215
+00:52:27,559 --> 00:52:31,640
+laptop um ensembling and mixtures of
+
+1216
+00:52:29,920 --> 00:52:33,480
+experts how can we combine together
+
+1217
+00:52:31,640 --> 00:52:34,760
+multiple models in order to create
+
+1218
+00:52:33,480 --> 00:52:35,880
+models that are better than the sum of
+
+1219
+00:52:34,760 --> 00:52:38,799
+their
+
+1220
+00:52:35,880 --> 00:52:40,720
+parts and um retrieval and retrieval
+
+1221
+00:52:38,799 --> 00:52:43,920
+augmented
+
+1222
+00:52:40,720 --> 00:52:45,480
+generation long sequence models uh so
+
+1223
+00:52:43,920 --> 00:52:49,920
+how do we handle long
+
+1224
+00:52:45,480 --> 00:52:52,240
+outputs um and uh we're going to talk
+
+1225
+00:52:49,920 --> 00:52:55,760
+about applications to complex reasoning
+
+1226
+00:52:52,240 --> 00:52:57,760
+tasks code generation language agents
+
+1227
+00:52:55,760 --> 00:52:59,920
+and knowledge-based QA and information
+
+1228
+00:52:57,760 --> 00:53:04,160
+extraction I picked
+
+1229
+00:52:59,920 --> 00:53:06,760
+these because they seem to be maybe the
+1230
+00:53:04,160 --> 00:53:09,880
+most important at least in research
+
+1231
+00:53:06,760 --> 00:53:11,440
+nowadays and also they cover uh the
+
+1232
+00:53:09,880 --> 00:53:13,640
+things that when I talk to people in
+
+1233
+00:53:11,440 --> 00:53:15,280
+industry are kind of most interested in
+
+1234
+00:53:13,640 --> 00:53:17,559
+so hopefully it'll be useful regardless
+
+1235
+00:53:15,280 --> 00:53:19,799
+of uh whether you plan on doing research
+
+1236
+00:53:17,559 --> 00:53:22,839
+or or plan on doing industry related
+
+1237
+00:53:19,799 --> 00:53:24,160
+things uh by by the way the two things
+
+1238
+00:53:22,839 --> 00:53:25,920
+that when I talk to people in industry
+
+1239
+00:53:24,160 --> 00:53:29,599
+they're most interested in are RAG and
+
+1240
+00:53:25,920 --> 00:53:31,079
+code generation at the moment for now um
+
+1241
+00:53:29,599 --> 00:53:32,319
+so those are ones that you'll want to
+
+1242
+00:53:31,079 --> 00:53:34,680
+pay attention
+
+1243
+00:53:32,319 --> 00:53:36,599
+to and then finally we have a few
+
+1244
+00:53:34,680 --> 00:53:40,079
+lectures on linguistics and
+
+1245
+00:53:36,599 --> 00:53:42,720
+multilinguality um I love linguistics
+
+1246
+00:53:40,079 --> 00:53:44,839
+but uh to be honest at the moment most
+
+1247
+00:53:42,720 --> 00:53:47,760
+of our cutting edge models don't
+
+1248
+00:53:44,839 --> 00:53:49,240
+explicitly use linguistic structure um
+
+1249
+00:53:47,760 --> 00:53:50,799
+but I still think it's useful to know
+
+1250
+00:53:49,240 --> 00:53:52,760
+about it especially if you're working on
+
+1251
+00:53:50,799 --> 00:53:54,880
+multilingual things especially if you're
+
+1252
+00:53:52,760 --> 00:53:57,040
+interested in very robust generalization
+
+1253
+00:53:54,880 --> 00:53:58,920
+to new domains so we're going to talk a
+
+1254
+00:53:57,040 --> 00:54:02,599
+little bit about that and also
+
+1255
+00:53:58,920 --> 00:54:06,079
+multilingual NLP I'm going to have
+
+1256
+00:54:02,599 --> 00:54:09,119
+guest lectures so also if you have any suggestions
+
+1257
+00:54:06,079 --> 00:54:11,400
+um we have two guest lecture slots still
+
+1258
+00:54:09,119 --> 00:54:12,799
+open uh that I'm trying to fill so if
+
+1259
+00:54:11,400 --> 00:54:15,440
+you have any things that you really want
+
+1260
+00:54:12,799 --> 00:54:16,440
+to hear about um I could either add them
+
+1261
+00:54:15,440 --> 00:54:19,319
+to the
+
+1262
+00:54:16,440 --> 00:54:21,079
+existing you know content or I could
+
+1263
+00:54:19,319 --> 00:54:23,240
+invite a guest lecturer who's working on
+
+1264
+00:54:21,079 --> 00:54:24,079
+that topic so you know please feel free
+
+1265
+00:54:23,240 --> 00:54:26,760
+to tell
+
+1266
+00:54:24,079 --> 00:54:29,160
+me um then the class format and
+
+1267
+00:54:26,760 --> 00:54:32,280
+structure uh the class
+
+1268
+00:54:29,160 --> 00:54:34,000
+content the goal is to learn in detail
+
+1269
+00:54:32,280 --> 00:54:36,640
+about building NLP systems from a
+
+1270
+00:54:34,000 --> 00:54:40,520
+research perspective so this is a 700
+
+1271
+00:54:36,640 --> 00:54:43,599
+level course so it's aiming to be for
+
+1272
+00:54:40,520 --> 00:54:46,960
+people who really want to try new and
+
+1273
+00:54:43,599 --> 00:54:49,280
+innovative things in uh kind of natural
+
+1274
+00:54:46,960 --> 00:54:51,359
+language processing it's not going to
+
+1275
+00:54:49,280 --> 00:54:52,760
+focus solely on reimplementing things
+
+1276
+00:54:51,359 --> 00:54:54,319
+that have been done before including in
+1277
+00:54:52,760 --> 00:54:55,280
+the project I'm going to be expecting
+
+1278
+00:54:54,319 --> 00:54:58,480
+everybody to do something
+
+1279
+00:54:55,280 --> 00:54:59,920
+that's kind of new whether it's coming
+
+1280
+00:54:58,480 --> 00:55:01,359
+up with a new method or applying
+
+1281
+00:54:59,920 --> 00:55:03,559
+existing methods to a place where they
+
+1282
+00:55:01,359 --> 00:55:05,079
+haven't been used before or building out
+
+1283
+00:55:03,559 --> 00:55:06,640
+things for a new language or something
+
+1284
+00:55:05,079 --> 00:55:08,359
+like that so that's kind of one of the
+
+1285
+00:55:06,640 --> 00:55:11,480
+major goals of this
+
+1286
+00:55:08,359 --> 00:55:13,000
+class um learn basic and advanced topics
+
+1287
+00:55:11,480 --> 00:55:15,559
+in machine learning approaches to NLP
+
+1288
+00:55:13,000 --> 00:55:18,359
+and language models learn some basic
+
+1289
+00:55:15,559 --> 00:55:21,480
+linguistic knowledge useful in NLP uh
+
+1290
+00:55:18,359 --> 00:55:23,200
+see case studies of NLP applications and
+
+1291
+00:55:21,480 --> 00:55:25,680
+learn how to identify unique problems
+
+1292
+00:55:23,200 --> 00:55:29,039
+for each um one thing I'd like to point
+
+1293
+00:55:25,680 --> 00:55:31,160
+out is I'm not going to cover every NLP
+
+1294
+00:55:29,039 --> 00:55:32,920
+application ever because that would be
+
+1295
+00:55:31,160 --> 00:55:35,520
+absolutely impossible NLP is being used
+
+1296
+00:55:32,920 --> 00:55:37,079
+in so many different areas nowadays but
+
+1297
+00:55:35,520 --> 00:55:38,960
+what I want people to pay attention to
+
+1298
+00:55:37,079 --> 00:55:41,280
+is like even if you're not super interested
+
+1299
+00:55:38,960 --> 00:55:42,400
+in code generation for example what you
+
+1300
+00:55:41,280 --> 00:55:44,200
+can do is you can look at code
+
+1301
+00:55:42,400 --> 00:55:46,160
+generation look at how people identify
+
+1302
+00:55:44,200 --> 00:55:47,680
+problems look at the methods that people
+
+1303
+00:55:46,160 --> 00:55:50,880
+have proposed to solve those unique
+
+1304
+00:55:47,680 --> 00:55:53,039
+problems and then kind of map that try
+
+1305
+00:55:50,880 --> 00:55:54,799
+to do some generalization onto your own
+
+1306
+00:55:53,039 --> 00:55:57,799
+problems of interest so uh that's kind
+
+1307
+00:55:54,799 --> 00:56:00,280
+of the goal of the NLP
+
+1308
+00:55:57,799 --> 00:56:02,440
+applications finally uh learning how to
+
+1309
+00:56:00,280 --> 00:56:05,160
+debug when and where NLP systems fail
+
+1310
+00:56:02,440 --> 00:56:08,200
+and build improvements based on this so
+
+1311
+00:56:05,160 --> 00:56:10,200
+um ever since I was a graduate student
+
+1312
+00:56:08,200 --> 00:56:12,720
+this has been like one of the really
+
+1313
+00:56:10,200 --> 00:56:15,920
+important things that I feel like I've
+
+1314
+00:56:12,720 --> 00:56:17,440
+done well or done better than some other
+
+1315
+00:56:15,920 --> 00:56:19,280
+people and I I feel like it's a really
+
+1316
+00:56:17,440 --> 00:56:21,119
+good way to like even if you're only
+
+1317
+00:56:19,280 --> 00:56:22,680
+interested in improving accuracy knowing
+
+1318
+00:56:21,119 --> 00:56:25,039
+why your system's failing still is the
+
+1319
+00:56:22,680 --> 00:56:27,599
+best way to do that so I'm going to
+
+1320
+00:56:25,039 --> 00:56:30,559
+put a lot of emphasis on
+
+1321
+00:56:27,599 --> 00:56:32,559
+that in terms of the class format um
+
+1322
+00:56:30,559 --> 00:56:36,280
+before class for some classes there is
+1323
+00:56:32,559 --> 00:56:37,880
+recommended reading uh this can be
+
+1324
+00:56:36,280 --> 00:56:39,559
+helpful to read I'm never going to
+
+1325
+00:56:37,880 --> 00:56:41,119
+expect you to definitely have read it
+
+1326
+00:56:39,559 --> 00:56:42,480
+before the class but I would suggest
+
+1327
+00:56:41,119 --> 00:56:45,160
+that maybe you'll get more out of the
+
+1328
+00:56:42,480 --> 00:56:47,319
+class if you do that um during class
+
+1329
+00:56:45,160 --> 00:56:48,079
+we'll have the lecture and discussion
+
+1330
+00:56:47,319 --> 00:56:50,559
+with
+
+1331
+00:56:48,079 --> 00:56:52,359
+everybody um sometimes we'll have a code
+
+1332
+00:56:50,559 --> 00:56:55,839
+or data walk
+
+1333
+00:56:52,359 --> 00:56:58,760
+um actually this is a little bit old I
+
+1334
+00:56:55,839 --> 00:57:01,880
+have this slide but this year we're
+
+1335
+00:56:58,760 --> 00:57:04,160
+going to be adding more uh code and data
+
+1336
+00:57:01,880 --> 00:57:07,400
+walks during office hours and the way it
+
+1337
+00:57:04,160 --> 00:57:09,400
+will work is one of the TAs we have
+
+1338
+00:57:07,400 --> 00:57:11,160
+seven TAs who I'm going to introduce
+
+1339
+00:57:09,400 --> 00:57:15,000
+very soon but one of the TAs will be
+
+1340
+00:57:11,160 --> 00:57:16,839
+doing this kind of recitation where you
+
+1341
+00:57:15,000 --> 00:57:18,200
+um where we go over a library so if
+
+1342
+00:57:16,839 --> 00:57:19,480
+you're not familiar with the library and
+
+1343
+00:57:18,200 --> 00:57:21,960
+you want to be more familiar with the
+
+1344
+00:57:19,480 --> 00:57:23,720
+library you can join this and uh then
+
+1345
+00:57:21,960 --> 00:57:25,400
+we'll be able to do this and this will
+
+1346
+00:57:23,720 --> 00:57:28,240
+cover things like
+
+1347
+00:57:25,400 --> 00:57:31,039
+um PyTorch and SentencePiece uh we're
+
+1348
+00:57:28,240 --> 00:57:33,280
+going to start out with Hugging Face um
+
+1349
+00:57:31,039 --> 00:57:36,559
+inference stuff like
+
+1350
+00:57:33,280 --> 00:57:41,520
+vLLM uh debugging software like
+
+1351
+00:57:36,559 --> 00:57:41,520
+Zeno um what were the other
+
+1352
+00:57:41,960 --> 00:57:47,200
+ones oh the OpenAI API and LiteLLM and
+
+1353
+00:57:45,680 --> 00:57:50,520
+other stuff like that so we have lots
+
+1354
+00:57:47,200 --> 00:57:53,599
+of them planned we'll uh uh we'll update
+
+1355
+00:57:50,520 --> 00:57:54,839
+that um and then after class after
+
+1356
+00:57:53,599 --> 00:57:58,079
+almost every class we'll have a
+
+1357
+00:57:54,839 --> 00:58:00,079
+quiz um and the quiz is intended to just
+
+1358
+00:57:58,079 --> 00:58:02,000
+you know make sure that you uh paid
+
+1359
+00:58:00,079 --> 00:58:04,480
+attention to the material and are able
+
+1360
+00:58:02,000 --> 00:58:07,520
+to answer questions about it we will aim
+
+1361
+00:58:04,480 --> 00:58:09,559
+to release it on the day of the course
+
+1362
+00:58:07,520 --> 00:58:11,599
+the day of the actual lecture and it
+
+1363
+00:58:09,559 --> 00:58:14,559
+will be due at the end of the day
+
+1364
+00:58:11,599 --> 00:58:15,960
+following the lecture so um it will be
+
+1365
+00:58:14,559 --> 00:58:18,920
+three questions it probably shouldn't
+
+1366
+00:58:15,960 --> 00:58:20,680
+take a whole lot of time but um uh yeah
+
+1367
+00:58:18,920 --> 00:58:23,400
+so we'll do
+
+1368
+00:58:20,680 --> 00:58:26,319
+that in terms of assignments assignment
+
+1369
+00:58:23,400 --> 00:58:28,640
+one is going to be build your own Llama
+1370
+00:58:26,319 --> 00:58:30,200
+and so what this is going to look like
+
+1371
+00:58:28,640 --> 00:58:32,680
+is we're going to give you a partial
+
+1372
+00:58:30,200 --> 00:58:34,319
+implementation of Llama which is kind of
+
+1373
+00:58:32,680 --> 00:58:37,960
+the most popular open source language
+
+1374
+00:58:34,319 --> 00:58:40,160
+model nowadays and ask you to fill in um
+
+1375
+00:58:37,960 --> 00:58:42,839
+ask you to fill in the parts we're going
+
+1376
+00:58:40,160 --> 00:58:45,920
+to train a very small version of Llama
+
+1377
+00:58:42,839 --> 00:58:47,319
+on a small data set and get it to work
+
+1378
+00:58:45,920 --> 00:58:48,880
+and the reason why it's very small is
+
+1379
+00:58:47,319 --> 00:58:50,480
+because the smallest actual version of
+
+1380
+00:58:48,880 --> 00:58:53,039
+Llama is 7 billion
+
+1381
+00:58:50,480 --> 00:58:55,359
+parameters um and that might be a little
+
+1382
+00:58:53,039 --> 00:58:58,400
+bit difficult to train with limited
+
+1383
+00:58:55,359 --> 00:59:00,680
+resources um for assignment two we're
+
+1384
+00:58:58,400 --> 00:59:04,559
+going to try to do an NLP task from
+
+1385
+00:59:00,680 --> 00:59:06,920
+scratch and so the way this will work is
+
+1386
+00:59:04,559 --> 00:59:08,520
+we're going to give you an assignment
+
+1387
+00:59:06,920 --> 00:59:10,880
+in which we're not going to give you an
+
+1388
+00:59:08,520 --> 00:59:13,400
+actual data set and instead we're going
+
+1389
+00:59:10,880 --> 00:59:15,760
+to ask you to uh perform data creation
+
+1390
+00:59:13,400 --> 00:59:19,359
+modeling and evaluation for a specified
+
+1391
+00:59:15,760 --> 00:59:20,640
+task and so we're going to tell you uh
+
+1392
+00:59:19,359 --> 00:59:22,599
+what to do but we're not going to tell
+
+1393
+00:59:20,640 --> 00:59:26,400
+you exactly how to do it but we're going
+
+1394
+00:59:22,599 --> 00:59:29,680
+to try to give as concrete directions as
+
+1395
+00:59:26,400 --> 00:59:32,359
+we can um
+
+1396
+00:59:29,680 --> 00:59:34,160
+yeah will you be given a parameter limit
+
+1397
+00:59:32,359 --> 00:59:36,559
+on the model so that's a good question
+
+1398
+00:59:34,160 --> 00:59:39,119
+or like an expense limit or something
+
+1399
+00:59:36,559 --> 00:59:40,440
+like that um maybe actually I should
+
+1400
+00:59:39,119 --> 00:59:44,240
+take a break from the assignments and
+
+1401
+00:59:40,440 --> 00:59:46,520
+talk about compute so right now um for
+
+1402
+00:59:44,240 --> 00:59:49,319
+assignment one we're planning on having
+
+1403
+00:59:46,520 --> 00:59:51,599
+this be able to be done either on a Mac
+
+1404
+00:59:49,319 --> 00:59:53,520
+laptop with an M1 or M2 processor which
+
+1405
+00:59:51,599 --> 00:59:57,079
+I think a lot of people have or Google
+
+1406
+00:59:53,520 --> 00:59:59,839
+Colab um so it should be like
+
+1407
+00:59:57,079 --> 01:00:02,160
+sufficient to use free computational
+
+1408
+00:59:59,839 --> 01:00:03,640
+resources that you have for number two
+
+1409
+01:00:02,160 --> 01:00:06,079
+we'll think about that I think that's
+
+1410
+01:00:03,640 --> 01:00:08,280
+important we do have Google Cloud
+
+1411
+01:00:06,079 --> 01:00:11,520
+credits for $50 for everybody and I'm
+
+1412
+01:00:08,280 --> 01:00:13,440
+working to get AWS credits for more um
+
+1413
+01:00:11,520 --> 01:00:18,160
+but the cloud providers nowadays are
+
+1414
+01:00:13,440 --> 01:00:19,680
+being very stingy so um so it's uh been
+1415
+01:00:18,160 --> 01:00:22,160
+a little bit of a fight to get uh
+
+1416
+01:00:19,680 --> 01:00:23,680
+credits but uh it is very important so
+
+1417
+01:00:22,160 --> 01:00:28,480
+I'm going to try to get as many as we
+
+1418
+01:00:23,680 --> 01:00:31,119
+can um and so yeah I I think basically
+
+1419
+01:00:28,480 --> 01:00:32,280
+uh there will be some sort of like limit
+
+1420
+01:00:31,119 --> 01:00:34,480
+on the amount of things you can
+
+1421
+01:00:32,280 --> 01:00:36,240
+practically do and so because of that
+
+1422
+01:00:34,480 --> 01:00:39,920
+I'm hoping that people will rely very
+
+1423
+01:00:36,240 --> 01:00:43,359
+heavily on pre-trained models um or uh
+
+1424
+01:00:39,920 --> 01:00:46,079
+yeah pre-trained models
+
+1425
+01:00:43,359 --> 01:00:49,599
+and yeah so that that's the short
+
+1426
+01:00:46,079 --> 01:00:52,799
+story um the second thing uh
+
+1427
+01:00:49,599 --> 01:00:54,720
+assignment three is to do a survey of
+
+1428
+01:00:52,799 --> 01:00:57,920
+some sort of state-of-the-art research
+
+1429
+01:00:54,720 --> 01:01:00,760
+and do a reimplementation of
+
+1430
+01:00:57,920 --> 01:01:02,000
+this and in doing this again you will
+
+1431
+01:01:00,760 --> 01:01:03,440
+have to think about something that's
+
+1432
+01:01:02,000 --> 01:01:06,359
+feasible within computational
+
+1433
+01:01:03,440 --> 01:01:08,680
+constraints um and so you can discuss
+
+1434
+01:01:06,359 --> 01:01:11,839
+with your TAs about uh about the best
+
+1435
+01:01:08,680 --> 01:01:13,920
+way to do this um and then the final
+
+1436
+01:01:11,839 --> 01:01:15,400
+project is to perform a unique project
+
+1437
+01:01:13,920 --> 01:01:17,559
+that either improves on the state of the
+
+1438
+01:01:15,400 --> 01:01:21,000
+art with respect to whatever you would
+
+1439
+01:01:17,559 --> 01:01:23,440
+like to improve this could be uh
+
+1440
+01:01:21,000 --> 01:01:25,280
+accuracy for sure this could be
+
+1441
+01:01:23,440 --> 01:01:27,760
+efficiency
+
+1442
+01:01:25,280 --> 01:01:29,599
+it could be some sense of
+
+1443
+01:01:27,760 --> 01:01:31,520
+interpretability but if it's going to be
+
+1444
+01:01:29,599 --> 01:01:33,599
+something like interpretability you'll
+
+1445
+01:01:31,520 --> 01:01:35,440
+have to discuss with us what that means
+
+1446
+01:01:33,599 --> 01:01:37,240
+like how we measure that how we can like
+
+1447
+01:01:35,440 --> 01:01:40,839
+actually say that you did a good job
+
+1448
+01:01:37,240 --> 01:01:42,839
+with improving that um another thing
+
+1449
+01:01:40,839 --> 01:01:44,680
+that you can do is take whatever you
+
+1450
+01:01:42,839 --> 01:01:47,280
+implemented for assignment 3 and apply
+
+1451
+01:01:44,680 --> 01:01:49,039
+it to a new task or apply it to a new
+
+1452
+01:01:47,280 --> 01:01:50,760
+language that has never been examined
+
+1453
+01:01:49,039 --> 01:01:53,119
+before so these are also acceptable
+
+1454
+01:01:50,760 --> 01:01:54,240
+final projects but basically the idea is
+
+1455
+01:01:53,119 --> 01:01:55,559
+for the final project you need to do
+
+1456
+01:01:54,240 --> 01:01:57,480
+something new that hasn't been
+
+1457
+01:01:55,559 --> 01:01:59,880
+done before and create new knowledge
+
+1458
+01:01:57,480 --> 01:02:04,520
+with respect
+
+1459
+01:01:59,880 --> 01:02:07,640
+to it um so for this the instructor is me
+
+1460
+01:02:04,520 --> 01:02:09,920
+um I'm uh looking forward to you know
+
+1461
+01:02:07,640 --> 01:02:13,599
+discussing and working with all of you
+1462
+01:02:09,920 --> 01:02:16,119
+um for TAs we have seven TAs uh two of
+
+1463
+01:02:13,599 --> 01:02:18,319
+them are in transit so they're not here
+
+1464
+01:02:16,119 --> 01:02:22,279
+today um the other ones uh TAs would you
+
+1465
+01:02:18,319 --> 01:02:22,279
+mind coming up uh to introduce
+
+1466
+01:02:23,359 --> 01:02:26,359
+yourselves
+
+1467
+01:02:28,400 --> 01:02:32,839
+so um yeah Nhir and Akshai couldn't be
+
+1468
+01:02:31,599 --> 01:02:34,039
+here today because they're traveling
+
+1469
+01:02:32,839 --> 01:02:37,119
+I'll introduce them later because
+
+1470
+01:02:34,039 --> 01:02:37,119
+they're coming uh next
+
+1471
+01:02:40,359 --> 01:02:46,480
+time cool and what I'd like everybody to
+
+1472
+01:02:43,000 --> 01:02:48,680
+do is say um like you know what your
+
+1473
+01:02:46,480 --> 01:02:53,079
+name is uh what
+
+1474
+01:02:48,680 --> 01:02:55,799
+you're like maybe what you're interested
+
+1475
+01:02:53,079 --> 01:02:57,319
+in um and the goal of this is
+
+1476
+01:02:55,799 --> 01:02:59,200
+number one for everybody to know who you
+
+1477
+01:02:57,319 --> 01:03:00,720
+are and number two for everybody to know
+
+1478
+01:02:59,200 --> 01:03:03,440
+who the best person to talk to is if
+
+1479
+01:03:00,720 --> 01:03:03,440
+they're interested in
+
+1480
+01:03:04,200 --> 01:03:09,079
+particular things hi uh I'm
+
+1481
+01:03:07,000 --> 01:03:15,520
+Aila a second
+
+1482
+01:03:09,079 --> 01:03:15,520
+year I work on language and social
+
+1483
+01:03:16,200 --> 01:03:24,559
+and I'm I'm a second year PhD
+
+1484
+01:03:21,160 --> 01:03:26,799
+student with Graham and Sherry Wu my research
+
+1485
+01:03:24,559 --> 01:03:28,480
+is like at the border of NLP and human
+
+1486
+01:03:26,799 --> 01:03:31,000
+computer interaction with a lot of work
+
+1487
+01:03:28,480 --> 01:03:32,640
+on automating parts of the developer
+
+1488
+01:03:31,000 --> 01:03:35,319
+experience to make it easier for anyone
+
+1489
+01:03:32,640 --> 01:03:35,319
+to
+
+1490
+01:03:39,090 --> 01:03:42,179
+[Music]
+
+1491
+01:03:47,520 --> 01:03:53,279
+hi
+
+1492
+01:03:50,079 --> 01:03:54,680
+everyone first
+
+1493
+01:03:53,279 --> 01:03:57,119
+year
+
+1494
+01:03:54,680 --> 01:04:00,119
+[Music]
+
+1495
+01:03:57,119 --> 01:04:03,559
+I don't like updating pre-trained models I
+
+1496
+01:04:00,119 --> 01:04:03,559
+hope to not update pre-trained
+
+1497
+01:04:14,599 --> 01:04:19,400
+models yeah thanks a lot everyone and
+
+1498
+01:04:17,200 --> 01:04:19,400
+yeah
+
+1499
+01:04:20,839 --> 01:04:29,400
+thanks and so we will um we'll have people
+
+1500
+01:04:25,640 --> 01:04:30,799
+uh kind of have office hours uh every TA
+
+1501
+01:04:29,400 --> 01:04:32,880
+has office hours at a regular time
+
+1502
+01:04:30,799 --> 01:04:34,480
+during the week uh please feel free to
+
+1503
+01:04:32,880 --> 01:04:38,400
+come to their office hours or my office
+
+1504
+01:04:34,480 --> 01:04:41,960
+hours um I think they are Visha are they
+
+1505
+01:04:38,400 --> 01:04:43,880
+posted on the site or okay yeah they
+
+1506
+01:04:41,960 --> 01:04:47,240
+they either are or will be posted on the
+
+1507
+01:04:43,880 --> 01:04:49,720
+site very soon um and come by to talk
+
+1508
+01:04:47,240 --> 01:04:51,480
+about anything uh if there's nobody in
+
+1509
+01:04:49,720 --> 01:04:53,079
+my office hours I'm happy to talk about
+
+1510
+01:04:51,480 --> 01:04:54,599
+things that are unrelated but if there's
+
+1511
+01:04:53,079 --> 01:04:58,039
+lots of people waiting outside then I
+1512
+01:04:54,599 --> 01:05:00,319
+might limit it to uh like um just things
+
+1513
+01:04:58,039 --> 01:05:02,480
+about the class so cool and we have
+
+1514
+01:05:00,319 --> 01:05:04,760
+Piazza we'll be checking that regularly
+
+1515
+01:05:02,480 --> 01:05:06,839
+uh striving to get you an answer in 24
+
+1516
+01:05:04,760 --> 01:05:12,240
+hours on weekdays over weekends we might
+
+1517
+01:05:06,839 --> 01:05:16,000
+not so um yeah so that's all for today
+
+1518
+01:05:12,240 --> 01:05:16,000
+are there any questions
\ No newline at end of file
what that might + +00:01:48.719 --> 00:01:53.079 +be we often have a dichotomy between two + +00:01:51.399 --> 00:01:55.240 +major segments natural language + +00:01:53.079 --> 00:01:57.520 +understanding and natural language + +00:01:55.240 --> 00:01:59.439 +generation yeah exactly so I I would say + +00:01:57.520 --> 00:02:03.119 +that's almost perfect if you had said + +00:01:59.439 --> 00:02:06.640 +understand and generate so very good um + +00:02:03.119 --> 00:02:08.560 +so I I say natural technology to handle + +00:02:06.640 --> 00:02:11.400 +human language usually text using + +00:02:08.560 --> 00:02:13.200 +computers uh to Aid human machine + +00:02:11.400 --> 00:02:15.480 +communication and this can include + +00:02:13.200 --> 00:02:17.879 +things like question answering dialogue + +00:02:15.480 --> 00:02:20.840 +or generation of code that can be + +00:02:17.879 --> 00:02:23.239 +executed with uh + +00:02:20.840 --> 00:02:25.080 +computers it can also Aid human human + +00:02:23.239 --> 00:02:27.440 +communication and this can include + +00:02:25.080 --> 00:02:30.440 +things like machine translation or spell + +00:02:27.440 --> 00:02:32.640 +checking or assisted writing + +00:02:30.440 --> 00:02:34.560 +and then a final uh segment that people + +00:02:32.640 --> 00:02:37.400 +might think about a little bit less is + +00:02:34.560 --> 00:02:39.400 +analyzing and understanding a language + +00:02:37.400 --> 00:02:42.400 +and this includes things like syntactic + +00:02:39.400 --> 00:02:44.959 +analysis text classification entity + +00:02:42.400 --> 00:02:47.400 +recognition and linking and these can be + +00:02:44.959 --> 00:02:49.159 +used for uh various reasons not + +00:02:47.400 --> 00:02:51.000 +necessarily for direct human machine + +00:02:49.159 --> 00:02:52.720 +communication but also for like + +00:02:51.000 --> 00:02:54.400 +aggregating information across large + +00:02:52.720 --> 00:02:55.760 +things for scientific studies and other + +00:02:54.400 --> 00:02:57.519 +things like that I'll give a few + +00:02:55.760 --> 00:03:00.920 +examples of + +00:02:57.519 --> 00:03:04.040 +this um we now use an many times a day + +00:03:00.920 --> 00:03:06.480 +sometimes without even knowing it so uh + +00:03:04.040 --> 00:03:09.400 +whenever you're typing a doc in Google + +00:03:06.480 --> 00:03:11.599 +Docs there's you know spell checking and + +00:03:09.400 --> 00:03:13.959 +grammar checking going on behind it's + +00:03:11.599 --> 00:03:15.920 +gotten frighten frighteningly good + +00:03:13.959 --> 00:03:18.280 +recently that where it checks like most + +00:03:15.920 --> 00:03:20.720 +of my mistakes and rarely Flags things + +00:03:18.280 --> 00:03:22.799 +that are not mistakes so obviously they + +00:03:20.720 --> 00:03:25.080 +have powerful models running behind that + +00:03:22.799 --> 00:03:25.080 +uh + +00:03:25.640 --> 00:03:33.080 +so and it can do things like answer + +00:03:28.720 --> 00:03:34.599 +questions uh so I asked chat GPT who is + +00:03:33.080 --> 00:03:37.000 +the current president of Carnegie melan + +00:03:34.599 --> 00:03:38.920 +University and chat GPT said I did a + +00:03:37.000 --> 00:03:40.920 +quick search for more information here + +00:03:38.920 --> 00:03:43.439 +is what I found uh the current president + +00:03:40.920 --> 00:03:47.120 +of car Mel University is faram Janan he + +00:03:43.439 --> 00:03:50.040 +has been serving since July 1 etc etc so + +00:03:47.120 --> 00:03:50.040 +as far as I can tell that's + +00:03:50.400 --> 00:03:56.319 
+correct um at the same time I asked how
+
+00:03:53.799 --> 00:04:00.280
+many layers are included in the GPT 3.5
+
+00:03:56.319 --> 00:04:02.360
+turbo architecture and it said to me
+
+00:04:00.280 --> 00:04:05.400
+GPT 3.5 turbo which is an optimized
+
+00:04:02.360 --> 00:04:07.239
+version of GPT 3.5 for faster responses
+
+00:04:05.400 --> 00:04:08.959
+doesn't have a specific layered
+
+00:04:07.239 --> 00:04:11.720
+structure like the traditional GPT-3
+
+00:04:08.959 --> 00:04:13.560
+models um and I don't know if this is
+
+00:04:11.720 --> 00:04:16.600
+true or not but I'm pretty sure it's not
+
+00:04:13.560 --> 00:04:18.840
+true I'm pretty sure that you know GPT
+
+00:04:16.600 --> 00:04:20.560
+is a model that's much like other models
+
+00:04:18.840 --> 00:04:21.560
+uh so it basically just made up the spec
+
+00:04:20.560 --> 00:04:22.880
+because it didn't have any information
+
+00:04:21.560 --> 00:04:26.000
+on the Internet or couldn't talk about
+
+00:04:22.880 --> 00:04:26.000
+it so
+
+00:04:26.120 --> 00:04:33.479
+um another thing is uh NLP can translate
+
+00:04:29.639 --> 00:04:37.759
+text pretty well so I ran um Google
+
+00:04:33.479 --> 00:04:39.560
+translate uh on Japanese uh this example
+
+00:04:37.759 --> 00:04:41.639
+is a little bit old it's from uh you
+
+00:04:39.560 --> 00:04:44.639
+know a few years ago about COVID but I I
+
+00:04:41.639 --> 00:04:46.240
+retranslated it a few days ago and it
+
+00:04:44.639 --> 00:04:47.680
+comes up pretty good uh you can
+
+00:04:46.240 --> 00:04:49.639
+basically understand what's going on
+
+00:04:47.680 --> 00:04:53.520
+here it's not perfect but you can
+
+00:04:49.639 --> 00:04:56.400
+understand the uh the general uh
+
+00:04:53.520 --> 00:04:58.560
+gist at the same time uh if I put in a
+
+00:04:56.400 --> 00:05:02.280
+relatively low-resource language this is
+
+00:04:58.560 --> 00:05:05.759
+Kurdish um it has a number of problems
+
+00:05:02.280 --> 00:05:08.199
+when you try to understand it and just
+
+00:05:05.759 --> 00:05:12.400
+to give an example this is talking about
+
+00:05:08.199 --> 00:05:14.320
+uh some uh paleontology discovery it
+
+00:05:12.400 --> 00:05:15.800
+called this person a fossil scientist
+
+00:05:14.320 --> 00:05:17.440
+instead of the kind of obvious English
+
+00:05:15.800 --> 00:05:20.120
+term
+
+00:05:17.440 --> 00:05:23.520
+paleontologist um and it's talking about
+
+00:05:20.120 --> 00:05:25.240
+three different uh T-Rex species uh how
+
+00:05:23.520 --> 00:05:27.039
+T-Rex should actually be split into
+
+00:05:25.240 --> 00:05:29.639
+three species where T. rex means king of
+
+00:05:27.039 --> 00:05:31.560
+ferocious lizards T. imperator means emperor
+
+00:05:29.639 --> 00:05:33.720
+of savage lizards and then T. regina
+
+00:05:31.560 --> 00:05:35.120
+means queen of ferocious snail I'm
+
+00:05:33.720 --> 00:05:37.240
+pretty sure that's not snail I'm pretty
+
+00:05:35.120 --> 00:05:41.080
+sure that's lizard so uh you can see
+
+00:05:37.240 --> 00:05:41.080
+that this is not uh this is not perfect
+
+00:05:41.280 --> 00:05:46.680
+either some people might be thinking why
+
+00:05:43.960 --> 00:05:48.400
+Google translate and why not GPT well it
+
+00:05:46.680 --> 00:05:49.960
+turns out um according to one of the
+
+00:05:48.400 --> 00:05:51.759
+recent studies we've done GPT is even
+
+00:05:49.960 --> 00:05:55.479
+worse at these low-resource languages
+
+00:05:51.759 --> 00:05:58.120
+so I use the best thing that's out
+
+00:05:55.479 --> 00:06:00.440
+there
um another thing is language + +00:05:58.120 --> 00:06:02.039 +analysis can Aid scientific ific inquiry + +00:06:00.440 --> 00:06:03.600 +so this is an example that I've been + +00:06:02.039 --> 00:06:06.120 +using for a long time it's actually from + +00:06:03.600 --> 00:06:09.160 +Martin sap another faculty member here + +00:06:06.120 --> 00:06:12.440 +uh but I have been using it since uh + +00:06:09.160 --> 00:06:14.160 +like before he joined and it uh this is + +00:06:12.440 --> 00:06:16.039 +an example from computational social + +00:06:14.160 --> 00:06:18.599 +science uh answering questions about + +00:06:16.039 --> 00:06:20.240 +Society given observational data and + +00:06:18.599 --> 00:06:22.280 +their question was do movie scripts + +00:06:20.240 --> 00:06:24.599 +portray female or male characters with + +00:06:22.280 --> 00:06:27.520 +more power or agency in movie script + +00:06:24.599 --> 00:06:30.120 +films so it's asking kind of a so + +00:06:27.520 --> 00:06:32.160 +societal question by using NLP + +00:06:30.120 --> 00:06:35.360 +technology and the way they did it is + +00:06:32.160 --> 00:06:36.880 +they basically analyzed text trying to + +00:06:35.360 --> 00:06:43.080 +find + +00:06:36.880 --> 00:06:45.280 +uh the uh agents and patients in a a + +00:06:43.080 --> 00:06:46.479 +particular text which are the the things + +00:06:45.280 --> 00:06:49.280 +that are doing things and the things + +00:06:46.479 --> 00:06:52.639 +that things are being done to and you + +00:06:49.280 --> 00:06:54.440 +can see that essentially male characters + +00:06:52.639 --> 00:06:56.560 +in these movie scripts were given more + +00:06:54.440 --> 00:06:58.080 +power in agency and female characters + +00:06:56.560 --> 00:06:59.960 +were given less power in agency and they + +00:06:58.080 --> 00:07:02.680 +were able to do this because they had + +00:06:59.960 --> 00:07:04.840 +NLP technology that analyzed and + +00:07:02.680 --> 00:07:08.960 +extracted useful data and made turned it + +00:07:04.840 --> 00:07:11.520 +into a very easy form to do kind of + +00:07:08.960 --> 00:07:15.840 +analysis of the variety that they want + +00:07:11.520 --> 00:07:17.400 +so um I think that's a major use case of + +00:07:15.840 --> 00:07:19.400 +NLP technology that does language + +00:07:17.400 --> 00:07:20.919 +analysis nowadays turn it into a form + +00:07:19.400 --> 00:07:23.960 +that allows you to very quickly do + +00:07:20.919 --> 00:07:27.440 +aggregate queries and other things like + +00:07:23.960 --> 00:07:30.479 +this um but at the same time uh language + +00:07:27.440 --> 00:07:33.520 +analysis tools fail at very basic tasks + +00:07:30.479 --> 00:07:36.000 +so these are + +00:07:33.520 --> 00:07:38.199 +some things that I ran through a named + +00:07:36.000 --> 00:07:41.080 +entity recognizer and these were kind of + +00:07:38.199 --> 00:07:43.160 +very nice named entity recognizers uh + +00:07:41.080 --> 00:07:46.240 +that a lot of people were using for + +00:07:43.160 --> 00:07:48.039 +example Stanford core NLP and Spacey and + +00:07:46.240 --> 00:07:50.319 +both of them I just threw in the first + +00:07:48.039 --> 00:07:53.120 +thing that I found on the New York Times + +00:07:50.319 --> 00:07:55.199 +at the time and it basically made at + +00:07:53.120 --> 00:07:58.319 +least one mistake in the first sentence + +00:07:55.199 --> 00:08:00.840 +and here it recognizes Baton Rouge as an + +00:07:58.319 --> 00:08:04.720 +organization and here it recognized + +00:08:00.840 --> 00:08:07.000 +hurricane 
EA as an organization so um + +00:08:04.720 --> 00:08:08.879 +like even uh these things that we expect + +00:08:07.000 --> 00:08:10.360 +should work pretty well make pretty + +00:08:08.879 --> 00:08:13.360 +Solly + +00:08:10.360 --> 00:08:16.199 +mistakes so in the class uh basically + +00:08:13.360 --> 00:08:18.479 +what I want to cover is uh what goes + +00:08:16.199 --> 00:08:20.360 +into building uh state-of-the-art NLP + +00:08:18.479 --> 00:08:24.000 +systems that work really well on a wide + +00:08:20.360 --> 00:08:26.240 +variety of tasks um where do current + +00:08:24.000 --> 00:08:28.840 +systems + +00:08:26.240 --> 00:08:30.479 +fail and how can we make appropriate + +00:08:28.840 --> 00:08:35.000 +improvements and Achieve whatever we + +00:08:30.479 --> 00:08:37.719 +want to do with nalp and this set of + +00:08:35.000 --> 00:08:39.360 +questions that I'm asking here is + +00:08:37.719 --> 00:08:40.919 +exactly the same as the set of questions + +00:08:39.360 --> 00:08:43.519 +that I was asking two years ago before + +00:08:40.919 --> 00:08:45.480 +chat GPT uh I still think they're + +00:08:43.519 --> 00:08:46.920 +important questions but I think the + +00:08:45.480 --> 00:08:48.399 +answers to these questions is very + +00:08:46.920 --> 00:08:50.040 +different and because of that we're + +00:08:48.399 --> 00:08:52.120 +updating the class materials to try to + +00:08:50.040 --> 00:08:54.399 +cover you know the answers to these + +00:08:52.120 --> 00:08:56.000 +questions and uh in kind of the era of + +00:08:54.399 --> 00:08:58.200 +large language models and other things + +00:08:56.000 --> 00:08:59.720 +like + +00:08:58.200 --> 00:09:02.079 +that + +00:08:59.720 --> 00:09:03.360 +so that's all I have for the intro maybe + +00:09:02.079 --> 00:09:06.640 +maybe pretty straightforward are there + +00:09:03.360 --> 00:09:08.480 +any questions or comments so far if not + +00:09:06.640 --> 00:09:14.399 +I'll I'll just go + +00:09:08.480 --> 00:09:17.160 +on okay great so I want to uh first go + +00:09:14.399 --> 00:09:19.480 +into a very high Lev overview of NLP + +00:09:17.160 --> 00:09:20.839 +system building and most of the stuff + +00:09:19.480 --> 00:09:22.399 +that I want to do today is to set the + +00:09:20.839 --> 00:09:24.320 +stage for what I'm going to be talking + +00:09:22.399 --> 00:09:25.040 +about in more detail uh over the rest of + +00:09:24.320 --> 00:09:29.200 +the + +00:09:25.040 --> 00:09:31.720 +class and we could think of NLP syst + +00:09:29.200 --> 00:09:34.040 +systems through this kind of General + +00:09:31.720 --> 00:09:36.560 +framework where we want to create a + +00:09:34.040 --> 00:09:40.600 +function to map an input X into an + +00:09:36.560 --> 00:09:44.440 +output y uh where X and or Y involve + +00:09:40.600 --> 00:09:47.000 +language and uh do some people have + +00:09:44.440 --> 00:09:50.120 +favorite NLP tasks or NLP tasks that you + +00:09:47.000 --> 00:09:52.399 +want to uh want to be handling in some + +00:09:50.120 --> 00:09:57.000 +way or maybe what what do you think are + +00:09:52.399 --> 00:09:57.000 +the most popular and important NLP tasks + +00:09:58.120 --> 00:10:03.200 +nowadays + +00:10:00.800 --> 00:10:06.120 +okay so translation is maybe easy what's + +00:10:03.200 --> 00:10:06.120 +the input and output of + +00:10:11.440 --> 00:10:15.720 +translation okay yeah so uh in + +00:10:13.800 --> 00:10:17.959 +Translation inputs text in one language + +00:10:15.720 --> 00:10:21.760 +output is text in another language and + 
+00:10:17.959 --> 00:10:21.760 +then what what is a good + +00:10:27.680 --> 00:10:32.160 +translation yeah corre or or the same is + +00:10:30.320 --> 00:10:35.839 +the input basically yes um it also + +00:10:32.160 --> 00:10:37.760 +should be fluent but I agree any other + +00:10:35.839 --> 00:10:39.839 +things generation the reason why I said + +00:10:37.760 --> 00:10:41.519 +it's tough is it's pretty broad um and + +00:10:39.839 --> 00:10:43.360 +it's not like we could be doing + +00:10:41.519 --> 00:10:46.360 +generation with lots of different inputs + +00:10:43.360 --> 00:10:51.440 +but um yeah any any other things maybe a + +00:10:46.360 --> 00:10:51.440 +little bit different yeah like + +00:10:51.480 --> 00:10:55.959 +scenario a scenario and a multiple + +00:10:54.000 --> 00:10:58.200 +choice question about the scenario and + +00:10:55.959 --> 00:10:59.680 +so what would the scenario in the + +00:10:58.200 --> 00:11:01.760 +multiple choice question are probably + +00:10:59.680 --> 00:11:04.040 +the input and then the output + +00:11:01.760 --> 00:11:06.480 +is an answer to the multiple choice + +00:11:04.040 --> 00:11:07.920 +question um and then there it's kind of + +00:11:06.480 --> 00:11:12.279 +obvious like what is good it's the + +00:11:07.920 --> 00:11:14.880 +correct answer sure um interestingly I + +00:11:12.279 --> 00:11:17.440 +think a lot of llm evaluation is done on + +00:11:14.880 --> 00:11:21.160 +these multiple choice questions but I'm + +00:11:17.440 --> 00:11:22.320 +yet to encounter an actual application + +00:11:21.160 --> 00:11:24.880 +that cares about multiple choice + +00:11:22.320 --> 00:11:26.880 +question answering so uh there's kind of + +00:11:24.880 --> 00:11:30.959 +a funny disconnect there but uh yeah I + +00:11:26.880 --> 00:11:33.519 +saw hand that think about V search comp + +00:11:30.959 --> 00:11:36.360 +yeah Vector search uh that's very good + +00:11:33.519 --> 00:11:36.360 +so the input + +00:11:37.120 --> 00:11:45.000 +is can con it into or understanding and + +00:11:42.560 --> 00:11:45.000 +it to + +00:11:47.360 --> 00:11:53.760 +another okay yeah so I'd say the input + +00:11:49.880 --> 00:11:56.160 +there is a query and a document base um + +00:11:53.760 --> 00:11:57.959 +and then the output is maybe an index + +00:11:56.160 --> 00:11:59.800 +into the document or or something else + +00:11:57.959 --> 00:12:01.279 +like that sure um and then something + +00:11:59.800 --> 00:12:05.040 +that's good here here's a good question + +00:12:01.279 --> 00:12:05.040 +what what's a good result from + +00:12:06.560 --> 00:12:10.200 +that what's a good + +00:12:10.839 --> 00:12:19.279 +output be sort of simar the major + +00:12:15.560 --> 00:12:21.680 +problem there I see is how you def SAR + +00:12:19.279 --> 00:12:26.199 +and how you + +00:12:21.680 --> 00:12:29.760 +a always like you understand + +00:12:26.199 --> 00:12:33.000 +whether is actually + +00:12:29.760 --> 00:12:35.079 +yeah exactly so that um just to repeat + +00:12:33.000 --> 00:12:36.880 +it's like uh we need to have a + +00:12:35.079 --> 00:12:38.399 +similarity a good similarity metric we + +00:12:36.880 --> 00:12:40.120 +need to have a good threshold where we + +00:12:38.399 --> 00:12:41.760 +get like the ones we want and we don't + +00:12:40.120 --> 00:12:43.240 +get the ones we don't want we're going + +00:12:41.760 --> 00:12:44.959 +to talk more about that in the retrieval + +00:12:43.240 --> 00:12:48.440 +lecture exactly how we evaluate and + +00:12:44.959 --> 00:12:49.920 +stuff 
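
To make the retrieval example above concrete, here is a minimal sketch of the query-to-documents mapping with a similarity metric and a threshold. The embed() function is a toy stand-in (a bag-of-words count vector rather than a real encoder), and the threshold value is an arbitrary illustration, not anything from the lecture.

import math
from collections import Counter

def embed(text):
    # stand-in for a real encoder: just a word-count vector
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def search(query, documents, threshold=0.3):
    # input X: (query, document base); output Y: indices of the
    # documents whose similarity clears the threshold
    q = embed(query)
    return [i for i, d in enumerate(documents)
            if cosine(q, embed(d)) >= threshold]

docs = ["the movie was great", "stock prices fell today"]
print(search("great movie", docs))  # -> [0]

As the discussion above notes, both the similarity metric and the threshold are exactly where such a system succeeds or fails, and the retrieval lecture covers how to evaluate them.
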
but um yeah good so this is a good + +00:12:48.440 --> 00:12:53.279 +uh here are some good examples I have + +00:12:49.920 --> 00:12:55.519 +some examples of my own um the first one + +00:12:53.279 --> 00:12:58.360 +is uh kind of the very generic one maybe + +00:12:55.519 --> 00:13:00.800 +kind of like generation here but text in + +00:12:58.360 --> 00:13:02.959 +continuing text uh so this is language + +00:13:00.800 --> 00:13:04.160 +modeling so you have a text and then you + +00:13:02.959 --> 00:13:05.440 +have the continuation you want to + +00:13:04.160 --> 00:13:07.680 +predict the + +00:13:05.440 --> 00:13:10.480 +continuation um text and text in another + +00:13:07.680 --> 00:13:13.040 +language is translation uh text in a + +00:13:10.480 --> 00:13:15.800 +label could be text classification uh + +00:13:13.040 --> 00:13:17.760 +text in linguistic structure or uh some + +00:13:15.800 --> 00:13:21.360 +s kind of entities or something like + +00:13:17.760 --> 00:13:22.680 +that could be uh language analysis or um + +00:13:21.360 --> 00:13:24.839 +information + +00:13:22.680 --> 00:13:29.440 +extraction uh we could also have image + +00:13:24.839 --> 00:13:31.320 +and text uh which is image captioning um + +00:13:29.440 --> 00:13:33.560 +or speech and text which is speech + +00:13:31.320 --> 00:13:35.240 +recognition and I take the very broad + +00:13:33.560 --> 00:13:38.000 +view of natural language processing + +00:13:35.240 --> 00:13:39.519 +which is if it's any variety of language + +00:13:38.000 --> 00:13:41.519 +uh if you're handling language in some + +00:13:39.519 --> 00:13:42.800 +way it's natural language processing it + +00:13:41.519 --> 00:13:45.880 +doesn't necessarily have to be text + +00:13:42.800 --> 00:13:47.480 +input text output um so that's relevant + +00:13:45.880 --> 00:13:50.199 +for the projects that you're thinking + +00:13:47.480 --> 00:13:52.160 +about too at the end of this course so + +00:13:50.199 --> 00:13:55.519 +the the most common FAQ for this course + +00:13:52.160 --> 00:13:57.839 +is does my project count and if you're + +00:13:55.519 --> 00:13:59.360 +uncertain you should ask but usually + +00:13:57.839 --> 00:14:01.040 +like if it has some sort of language + +00:13:59.360 --> 00:14:05.079 +involved then I'll usually say yes it + +00:14:01.040 --> 00:14:07.920 +does kind so um if it's like uh code to + +00:14:05.079 --> 00:14:09.680 +code there that's not code is not + +00:14:07.920 --> 00:14:11.480 +natural language it is language but it's + +00:14:09.680 --> 00:14:13.000 +not natural language so that might be + +00:14:11.480 --> 00:14:15.320 +borderline we might have to discuss + +00:14:13.000 --> 00:14:15.320 +about + +00:14:15.759 --> 00:14:21.800 +that cool um so next I'd like to talk + +00:14:18.880 --> 00:14:25.240 +about methods for creating NLP systems + +00:14:21.800 --> 00:14:27.839 +um and there's a lot of different ways + +00:14:25.240 --> 00:14:29.720 +to create MLP systems all of these are + +00:14:27.839 --> 00:14:32.880 +alive and well in + +00:14:29.720 --> 00:14:35.759 +2024 uh the first one is Rule uh + +00:14:32.880 --> 00:14:37.959 +rule-based system creation and so the + +00:14:35.759 --> 00:14:40.399 +way this works is like let's say you + +00:14:37.959 --> 00:14:42.480 +want to build a text classifier you just + +00:14:40.399 --> 00:14:46.560 +write the simple python function that + +00:14:42.480 --> 00:14:48.639 +classifies things into uh sports or + +00:14:46.560 --> 00:14:50.240 +other and the way it classifies it into + 
+00:14:48.639 --> 00:14:52.959 +sports or other is it checks whether + +00:14:50.240 --> 00:14:55.160 +baseball soccer football and Tennis are + +00:14:52.959 --> 00:14:59.399 +included in the document and classifies + +00:14:55.160 --> 00:15:01.959 +it into uh Sports if so uh other if not + +00:14:59.399 --> 00:15:05.279 +so has anyone written something like + +00:15:01.959 --> 00:15:09.720 +this maybe not a text classifier but um + +00:15:05.279 --> 00:15:11.880 +you know to identify entities or uh + +00:15:09.720 --> 00:15:14.279 +split words + +00:15:11.880 --> 00:15:16.680 +or something like + +00:15:14.279 --> 00:15:18.399 +that has anybody not ever written + +00:15:16.680 --> 00:15:22.800 +anything like + +00:15:18.399 --> 00:15:24.639 +this yeah that's what I thought so um + +00:15:22.800 --> 00:15:26.079 +rule-based systems are very convenient + +00:15:24.639 --> 00:15:28.920 +when you don't really care about how + +00:15:26.079 --> 00:15:30.759 +good your system is um or you're doing + +00:15:28.920 --> 00:15:32.360 +that's really really simple and like + +00:15:30.759 --> 00:15:35.600 +it'll be perfect even if you do the very + +00:15:32.360 --> 00:15:37.079 +simple thing and so I I think it's worth + +00:15:35.600 --> 00:15:39.959 +talking a little bit about them and I'll + +00:15:37.079 --> 00:15:43.319 +talk a little bit about that uh this + +00:15:39.959 --> 00:15:45.680 +time the second thing which like very + +00:15:43.319 --> 00:15:47.680 +rapidly over the course of maybe three + +00:15:45.680 --> 00:15:50.279 +years or so has become actually maybe + +00:15:47.680 --> 00:15:52.720 +the dominant Paradigm in NLP is + +00:15:50.279 --> 00:15:56.360 +prompting uh in prompting a language + +00:15:52.720 --> 00:15:58.560 +model and the way this works is uh you + +00:15:56.360 --> 00:16:00.720 +ask a language model if the following + +00:15:58.560 --> 00:16:03.079 +sent is about sports reply Sports + +00:16:00.720 --> 00:16:06.120 +otherwise reply other and you feed it to + +00:16:03.079 --> 00:16:08.480 +your favorite LM uh usually that's GPT + +00:16:06.120 --> 00:16:11.399 +something or other uh sometimes it's an + +00:16:08.480 --> 00:16:14.440 +open source model of some variety and + +00:16:11.399 --> 00:16:17.759 +then uh it will give you the + +00:16:14.440 --> 00:16:20.639 +answer and then finally uh fine-tuning + +00:16:17.759 --> 00:16:22.240 +uh so you take some paired data and you + +00:16:20.639 --> 00:16:23.600 +do machine learning from paired data + +00:16:22.240 --> 00:16:25.680 +where you have something like I love to + +00:16:23.600 --> 00:16:27.440 +play baseball uh the stock price is + +00:16:25.680 --> 00:16:29.519 +going up he got a hatrick yesterday he + +00:16:27.440 --> 00:16:32.759 +is wearing tennis shoes and you assign + +00:16:29.519 --> 00:16:35.319 +all these uh labels to them training a + +00:16:32.759 --> 00:16:38.160 +model and you can even start out with a + +00:16:35.319 --> 00:16:41.480 +prompting based model and fine-tune a a + +00:16:38.160 --> 00:16:41.480 +language model + +00:16:42.920 --> 00:16:49.399 +also so one major consideration when + +00:16:47.519 --> 00:16:52.000 +you're Building Systems like this is the + +00:16:49.399 --> 00:16:56.440 +data requirements for building such a + +00:16:52.000 --> 00:16:59.319 +system and for rules or prompting where + +00:16:56.440 --> 00:17:02.240 +it's just based on intuition really no + +00:16:59.319 --> 00:17:04.640 +data is needed whatsoever it you don't + +00:17:02.240 --> 00:17:08.240 +need 
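
The rule-based sports classifier and the prompt described above might look roughly like the following. The keyword list comes straight from the description; the function name and the exact prompt wording are illustrative guesses, not the lecture's code.

SPORTS_WORDS = {"baseball", "soccer", "football", "tennis"}

def classify_sports(document):
    # rule: "sports" if any keyword appears in the document, else "other"
    words = set(document.lower().split())
    return "sports" if words & SPORTS_WORDS else "other"

# the prompting version of the same classifier is just a string
# sent to your favorite LM:
PROMPT = ("If the following sentence is about sports reply 'sports', "
          "otherwise reply 'other'.\n\nSentence: {sentence}")

print(classify_sports("I love to play baseball"))       # -> sports
print(classify_sports("he got a hat trick yesterday"))  # -> other: no
# keyword fires, which is exactly the weakness of hand-written rules
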
a single example and you can start + +00:17:04.640 --> 00:17:11.000 +writing rules or like just just to give + +00:17:08.240 --> 00:17:12.640 +an example the rules and prompts I wrote + +00:17:11.000 --> 00:17:14.679 +here I didn't look at any examples and I + +00:17:12.640 --> 00:17:17.240 +just wrote them uh so this is something + +00:17:14.679 --> 00:17:20.000 +that you could start out + +00:17:17.240 --> 00:17:21.559 +with uh the problem is you also have no + +00:17:20.000 --> 00:17:24.720 +idea how well it works if you don't have + +00:17:21.559 --> 00:17:26.760 +any data whatsoever right so um you'll + +00:17:24.720 --> 00:17:30.400 +you might be in trouble if you think + +00:17:26.760 --> 00:17:30.400 +something should be working + +00:17:30.919 --> 00:17:34.440 +so normally the next thing that people + +00:17:32.919 --> 00:17:36.880 +move to nowadays when they're building + +00:17:34.440 --> 00:17:39.559 +practical systems is rules are prompting + +00:17:36.880 --> 00:17:41.240 +based on spot checks so that basically + +00:17:39.559 --> 00:17:42.919 +means that you start out with a + +00:17:41.240 --> 00:17:45.840 +rule-based system or a prompting based + +00:17:42.919 --> 00:17:47.240 +system and then you go in and you run it + +00:17:45.840 --> 00:17:48.720 +on some data that you're interested in + +00:17:47.240 --> 00:17:50.799 +you just kind of qualitatively look at + +00:17:48.720 --> 00:17:52.160 +the data and say oh it's messing up here + +00:17:50.799 --> 00:17:53.440 +then you go in and fix your prompt a + +00:17:52.160 --> 00:17:54.919 +little bit or you go in and fix your + +00:17:53.440 --> 00:17:57.320 +rules a little bit or something like + +00:17:54.919 --> 00:18:00.400 +that so uh this is kind of the second + +00:17:57.320 --> 00:18:00.400 +level of difficulty + +00:18:01.400 --> 00:18:04.640 +so the third level of difficulty would + +00:18:03.159 --> 00:18:07.400 +be something like rules are prompting + +00:18:04.640 --> 00:18:09.039 +with rigorous evaluation and so here you + +00:18:07.400 --> 00:18:12.840 +would create a development set with + +00:18:09.039 --> 00:18:14.840 +inputs and outputs uh so you uh create + +00:18:12.840 --> 00:18:17.039 +maybe 200 to 2,000 + +00:18:14.840 --> 00:18:20.080 +examples um + +00:18:17.039 --> 00:18:21.720 +and then evaluate your actual accuracy + +00:18:20.080 --> 00:18:23.880 +so you need an evaluation metric you + +00:18:21.720 --> 00:18:26.120 +need other things like this this is the + +00:18:23.880 --> 00:18:28.400 +next level of difficulty but if you're + +00:18:26.120 --> 00:18:30.240 +going to be a serious you know NLP + +00:18:28.400 --> 00:18:33.000 +engineer or something like this you + +00:18:30.240 --> 00:18:34.720 +definitely will be doing this a lot I + +00:18:33.000 --> 00:18:37.760 +feel and + +00:18:34.720 --> 00:18:40.360 +then so that here now you start needing + +00:18:37.760 --> 00:18:41.960 +a depth set and a test set and then + +00:18:40.360 --> 00:18:46.280 +finally fine-tuning you need an + +00:18:41.960 --> 00:18:48.480 +additional training set um and uh this + +00:18:46.280 --> 00:18:52.240 +will generally be a lot bigger than 200 + +00:18:48.480 --> 00:18:56.080 +to 2,000 examples and generally the rule + +00:18:52.240 --> 00:18:56.080 +is that every time you + +00:18:57.320 --> 00:19:01.080 +double + +00:18:59.520 --> 00:19:02.400 +every time you double your training set + +00:19:01.080 --> 00:19:07.480 +size you get about a constant + +00:19:02.400 --> 00:19:07.480 +Improvement so if you start + 
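
A sketch of the "rigorous evaluation" level just described: collect a small labeled dev set (the 200 to 2,000 examples mentioned above) and score the system's outputs with a metric such as accuracy. The toy data here is invented for illustration.

def accuracy(predictions, labels):
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)

preds = ["sports", "other", "other"]   # system outputs on the dev set
gold = ["sports", "sports", "other"]   # human-annotated labels
print(accuracy(preds, gold))           # -> 0.666...
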
+00:19:07.799 --> 00:19:15.080 +out if you start out down here with + +00:19:12.240 --> 00:19:17.039 +um zero shot accuracy with a language + +00:19:15.080 --> 00:19:21.559 +model you you create a small printing + +00:19:17.039 --> 00:19:21.559 +set and you get you know a pretty big + +00:19:22.000 --> 00:19:29.120 +increase and then every time you double + +00:19:26.320 --> 00:19:30.799 +it it increases by constant fact it's + +00:19:29.120 --> 00:19:32.480 +kind of like just in general in machine + +00:19:30.799 --> 00:19:37.360 +learning this is a trend that we tend to + +00:19:32.480 --> 00:19:40.679 +see so um So based on this + +00:19:37.360 --> 00:19:41.880 +uh there's kind of like you get a big + +00:19:40.679 --> 00:19:44.200 +gain from having a little bit of + +00:19:41.880 --> 00:19:45.760 +training data but the gains very quickly + +00:19:44.200 --> 00:19:48.919 +drop off and you start spending a lot of + +00:19:45.760 --> 00:19:48.919 +time annotating + +00:19:51.000 --> 00:19:55.880 +an so um yeah this is the the general + +00:19:54.760 --> 00:19:58.280 +overview of the different types of + +00:19:55.880 --> 00:20:00.000 +system building uh any any question + +00:19:58.280 --> 00:20:01.559 +questions about this or comments or + +00:20:00.000 --> 00:20:04.000 +things like + +00:20:01.559 --> 00:20:05.840 +this I think one thing that's changed + +00:20:04.000 --> 00:20:08.159 +really drastically from the last time I + +00:20:05.840 --> 00:20:09.600 +taught this class is the fact that + +00:20:08.159 --> 00:20:11.000 +number one and number two are the things + +00:20:09.600 --> 00:20:13.799 +that people are actually doing in + +00:20:11.000 --> 00:20:15.360 +practice uh which was you know people + +00:20:13.799 --> 00:20:16.679 +who actually care about systems are + +00:20:15.360 --> 00:20:18.880 +doing number one and number two is the + +00:20:16.679 --> 00:20:20.440 +main thing it used to be that if you + +00:20:18.880 --> 00:20:22.679 +were actually serious about building a + +00:20:20.440 --> 00:20:24.320 +system uh you really needed to do the + +00:20:22.679 --> 00:20:27.080 +funing and now it's kind of like more + +00:20:24.320 --> 00:20:27.080 +optional + +00:20:27.159 --> 00:20:30.159 +so + +00:20:44.039 --> 00:20:50.960 +yeah + +00:20:46.320 --> 00:20:53.960 +so it's it's definitely an empirical + +00:20:50.960 --> 00:20:53.960 +observation + +00:20:54.720 --> 00:21:01.080 +um in terms of the theoretical + +00:20:57.640 --> 00:21:03.120 +background I am not I can't immediately + +00:21:01.080 --> 00:21:05.840 +point to a + +00:21:03.120 --> 00:21:10.039 +particular paper that does that but I + +00:21:05.840 --> 00:21:12.720 +think if you think about + +00:21:10.039 --> 00:21:14.720 +the I I think I have seen that they do + +00:21:12.720 --> 00:21:17.039 +exist in the past but I I can't think of + +00:21:14.720 --> 00:21:19.000 +it right now I can try to uh try to come + +00:21:17.039 --> 00:21:23.720 +up with an example of + +00:21:19.000 --> 00:21:23.720 +that so yeah I I should take + +00:21:26.799 --> 00:21:31.960 +notes or someone wants to share one on + +00:21:29.360 --> 00:21:33.360 +Piaza uh if you have any ideas and want + +00:21:31.960 --> 00:21:34.520 +to share on Patza I'm sure that would be + +00:21:33.360 --> 00:21:35.640 +great it'd be great to have a discussion + +00:21:34.520 --> 00:21:39.320 +on + +00:21:35.640 --> 00:21:44.960 +Patza um Pi + +00:21:39.320 --> 00:21:46.880 +one cool okay so next I want to try to + +00:21:44.960 --> 00:21:48.200 +make a 
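
The doubling rule of thumb described above, where each doubling of the training set buys a roughly constant gain, is the same as saying accuracy grows linearly in log2 of the data size. The constants below are invented purely for illustration.

import math

def predicted_accuracy(n, zero_shot=0.55, gain_per_doubling=0.02, base_n=250):
    # accuracy improves by gain_per_doubling every time n doubles past base_n
    return zero_shot + gain_per_doubling * math.log2(n / base_n)

for n in [250, 500, 1000, 2000, 4000]:
    print(n, round(predicted_accuracy(n), 3))
# 250 0.55, 500 0.57, 1000 0.59, 2000 0.61, 4000 0.63

This also shows why the gains drop off: each successive constant improvement costs twice as much annotation as the previous one.
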
rule-based system and I'm going + +00:21:46.880 --> 00:21:49.360 +to make a rule-based system for + +00:21:48.200 --> 00:21:51.799 +sentiment + +00:21:49.360 --> 00:21:53.480 +analysis uh and this is a bad idea I + +00:21:51.799 --> 00:21:55.400 +would not encourage you to ever do this + +00:21:53.480 --> 00:21:57.440 +in real life but I want to do it here to + +00:21:55.400 --> 00:21:59.640 +show you why it's a bad idea and like + +00:21:57.440 --> 00:22:01.200 +what are some of the hard problems that + +00:21:59.640 --> 00:22:03.960 +you encounter when trying to create a + +00:22:01.200 --> 00:22:06.600 +system based on rules + +00:22:03.960 --> 00:22:08.080 +and then we'll move into building a + +00:22:06.600 --> 00:22:12.360 +machine learning base system after we + +00:22:08.080 --> 00:22:15.400 +finish this so if we look at the example + +00:22:12.360 --> 00:22:18.559 +test this is review sentiment analysis + +00:22:15.400 --> 00:22:21.799 +it's one of the most valuable uh tasks + +00:22:18.559 --> 00:22:24.039 +uh that people do in NLP nowadays + +00:22:21.799 --> 00:22:26.400 +because it allows people to know how + +00:22:24.039 --> 00:22:29.200 +customers are thinking about products uh + +00:22:26.400 --> 00:22:30.799 +improve their you know their product + +00:22:29.200 --> 00:22:32.919 +development and other things like that + +00:22:30.799 --> 00:22:34.799 +may monitor people's you know + +00:22:32.919 --> 00:22:36.760 +satisfaction with their social media + +00:22:34.799 --> 00:22:39.200 +service other things like this so + +00:22:36.760 --> 00:22:42.720 +basically the way it works is um you + +00:22:39.200 --> 00:22:44.400 +have uh outputs or you have sentences + +00:22:42.720 --> 00:22:46.720 +inputs like I hate this movie I love + +00:22:44.400 --> 00:22:48.520 +this movie I saw this movie and this + +00:22:46.720 --> 00:22:50.600 +gets mapped into positive neutral or + +00:22:48.520 --> 00:22:53.120 +negative so I hate this movie would be + +00:22:50.600 --> 00:22:55.480 +negative I love this movie positive and + +00:22:53.120 --> 00:22:59.039 +I saw this movie is + +00:22:55.480 --> 00:23:01.200 +neutral so um + +00:22:59.039 --> 00:23:05.200 +that that's the task input tax output + +00:23:01.200 --> 00:23:08.880 +labels uh Kary uh sentence + +00:23:05.200 --> 00:23:11.679 +label and in order to do this uh we + +00:23:08.880 --> 00:23:13.120 +would like to build a model um and we're + +00:23:11.679 --> 00:23:16.159 +going to build the model in a rule based + +00:23:13.120 --> 00:23:19.000 +way but it we'll still call it a model + +00:23:16.159 --> 00:23:21.600 +and the way it works is we do feature + +00:23:19.000 --> 00:23:23.159 +extraction um so we extract the Salient + +00:23:21.600 --> 00:23:25.279 +features for making the decision about + +00:23:23.159 --> 00:23:27.320 +what to Output next we do score + +00:23:25.279 --> 00:23:29.880 +calculation calculate a score for one or + +00:23:27.320 --> 00:23:32.320 +more possib ities and we have a decision + +00:23:29.880 --> 00:23:33.520 +function so we choose one of those + +00:23:32.320 --> 00:23:37.679 +several + +00:23:33.520 --> 00:23:40.120 +possibilities and so for feature + +00:23:37.679 --> 00:23:42.200 +extraction uh formally what this looks + +00:23:40.120 --> 00:23:44.240 +like is we have some function and it + +00:23:42.200 --> 00:23:48.039 +extracts a feature + +00:23:44.240 --> 00:23:51.159 +Vector for score calculation um we + +00:23:48.039 --> 00:23:54.240 +calculate the scores based on either a + 
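
Before the walkthrough continues, here is the three-part structure just outlined, feature extraction, score calculation, and a decision function, as runnable code. The word lists are illustrative stand-ins; the weights mirror the plus one, minus one, minus 0.5 scheme used in the walkthrough below.

GOOD_WORDS = {"love", "good", "great"}   # illustrative, not the lecture's lists
BAD_WORDS = {"hate", "bad", "awful"}

def extract_features(text):
    words = text.lower().split()
    return [sum(w in GOOD_WORDS for w in words),  # count of good words
            sum(w in BAD_WORDS for w in words),   # count of bad words
            1.0]                                  # bias feature, always one

def calculate_score(features, weights):
    # binary case: dot product of the weight vector and feature vector
    return sum(w * f for w, f in zip(weights, features))

def decide(score):
    # decision rule: sign of the score, with exactly 0 reserved for neutral
    if score > 0:
        return 1
    if score < 0:
        return -1
    return 0

def multiclass_decide(scores):
    # multiclass case: argmax over per-class scores
    return max(range(len(scores)), key=lambda i: scores[i])

weights = [1.0, -1.0, -0.5]
print(decide(calculate_score(extract_features("I love this movie"), weights)))  # -> 1
print(multiclass_decide([0.5, -1.5, 0.2]))  # -> 0
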
+00:23:51.159 --> 00:23:56.279 +binary classification uh where we have a + +00:23:54.240 --> 00:23:58.279 +a weight vector and we take the dot + +00:23:56.279 --> 00:24:00.120 +product with our feature vector or we + +00:23:58.279 --> 00:24:02.480 +have multi class classification where we + +00:24:00.120 --> 00:24:04.520 +have a weight Matrix and we take the + +00:24:02.480 --> 00:24:08.640 +product with uh the vector and that + +00:24:04.520 --> 00:24:08.640 +gives us you know squares over multiple + +00:24:08.919 --> 00:24:14.840 +classes and then we have a decision uh + +00:24:11.600 --> 00:24:17.520 +rule so this decision rule tells us what + +00:24:14.840 --> 00:24:20.080 +the output is going to be um does anyone + +00:24:17.520 --> 00:24:22.200 +know what a typical decision rule is + +00:24:20.080 --> 00:24:24.520 +maybe maybe so obvious that you don't + +00:24:22.200 --> 00:24:28.760 +think about it often + +00:24:24.520 --> 00:24:31.000 +but uh a threshold um so like for would + +00:24:28.760 --> 00:24:34.440 +that be for binary a single binary + +00:24:31.000 --> 00:24:37.000 +scaler score or a multiple + +00:24:34.440 --> 00:24:38.520 +class binary yeah so and then you would + +00:24:37.000 --> 00:24:39.960 +pick a threshold and if it's over the + +00:24:38.520 --> 00:24:42.919 +threshold + +00:24:39.960 --> 00:24:45.760 +you say yes and if it's under the + +00:24:42.919 --> 00:24:50.279 +threshold you say no um another option + +00:24:45.760 --> 00:24:51.679 +would be um you have a threshold and you + +00:24:50.279 --> 00:24:56.080 +say + +00:24:51.679 --> 00:24:56.080 +yes no + +00:24:56.200 --> 00:25:00.559 +obain so you know you don't give an + +00:24:58.360 --> 00:25:02.520 +answer and depending on how you're + +00:25:00.559 --> 00:25:03.720 +evaluated what what is a good classifier + +00:25:02.520 --> 00:25:07.799 +you might want to abstain some of the + +00:25:03.720 --> 00:25:10.960 +time also um for multiclass what what's + +00:25:07.799 --> 00:25:10.960 +a standard decision role for + +00:25:11.120 --> 00:25:16.720 +multiclass argmax yeah exactly so um + +00:25:14.279 --> 00:25:19.520 +basically you you find the index that + +00:25:16.720 --> 00:25:22.000 +has the highest score in you output + +00:25:19.520 --> 00:25:24.480 +it we're going to be talking about other + +00:25:22.000 --> 00:25:26.559 +decision rules also um like + +00:25:24.480 --> 00:25:29.480 +self-consistency and minimum based risk + +00:25:26.559 --> 00:25:30.760 +later uh for text generation so you can + +00:25:29.480 --> 00:25:33.000 +just keep that in mind and then we'll + +00:25:30.760 --> 00:25:36.279 +forget about it for like several + +00:25:33.000 --> 00:25:39.559 +classes um so for sentiment + +00:25:36.279 --> 00:25:42.159 +class um I have a Cod + +00:25:39.559 --> 00:25:45.159 +walk + +00:25:42.159 --> 00:25:45.159 +here + +00:25:46.240 --> 00:25:54.320 +and this is pretty simple um but if + +00:25:50.320 --> 00:25:58.559 +you're bored uh of the class and would + +00:25:54.320 --> 00:26:01.000 +like to um try out yourself you can + +00:25:58.559 --> 00:26:04.480 +Challenge and try to get a better score + +00:26:01.000 --> 00:26:06.120 +than I do um over the next few minutes + +00:26:04.480 --> 00:26:06.880 +but we have this rule based classifier + +00:26:06.120 --> 00:26:10.240 +in + +00:26:06.880 --> 00:26:12.640 +here and I will open it up in my vs + +00:26:10.240 --> 00:26:15.360 +code + +00:26:12.640 --> 00:26:18.360 +to try to create a rule-based classifier + +00:26:15.360 --> 
00:26:18.360
+and basically the way this
+
+00:26:22.799 --> 00:26:29.960
+works is
+
+00:26:25.159 --> 00:26:29.960
+that we have a feature
+
+00:26:31.720 --> 00:26:37.720
+extraction we have feature extraction we
+
+00:26:34.120 --> 00:26:40.679
+have scoring and we have um a decision
+
+00:26:37.720 --> 00:26:43.480
+rule so here for our feature extraction I
+
+00:26:40.679 --> 00:26:44.720
+have created a list of good words and a
+
+00:26:43.480 --> 00:26:46.720
+list of bad
+
+00:26:44.720 --> 00:26:48.960
+words
+
+00:26:46.720 --> 00:26:51.320
+and what we do is we just count the
+
+00:26:48.960 --> 00:26:53.000
+number of good words that appeared and
+
+00:26:51.320 --> 00:26:55.320
+count the number of bad words that
+
+00:26:53.000 --> 00:26:57.880
+appeared then we also have a bias
+
+00:26:55.320 --> 00:27:01.159
+feature so the bias feature is a feature
+
+00:26:57.880 --> 00:27:03.679
+that's always one and so what that
+
+00:27:01.159 --> 00:27:06.799
+results in is we have a dimension three
+
+00:27:03.679 --> 00:27:08.880
+feature vector um where this is like the
+
+00:27:06.799 --> 00:27:11.320
+number of good words this is the number
+
+00:27:08.880 --> 00:27:15.320
+of bad words and then you have the
+
+00:27:11.320 --> 00:27:17.760
+bias and then I also define the feature
+
+00:27:15.320 --> 00:27:20.039
+weights so that for every good word we
+
+00:27:17.760 --> 00:27:22.200
+add one to our score for every bad word
+
+00:27:20.039 --> 00:27:25.559
+we add uh we subtract one from our score
+
+00:27:22.200 --> 00:27:29.399
+and for the bias we add minus 0.5 and so we then
+
+00:27:25.559 --> 00:27:30.480
+take the dot product between
+
+00:27:29.399 --> 00:27:34.360
+these
+
+00:27:30.480 --> 00:27:36.919
+two and we get minus
+
+00:27:34.360 --> 00:27:37.640
+0.5 and that gives us uh that gives us
+
+00:27:36.919 --> 00:27:41.000
+the
+
+00:27:37.640 --> 00:27:46.000
+score so let's run
+
+00:27:41.000 --> 00:27:50.320
+that um and I read in some
+
+00:27:46.000 --> 00:27:52.600
+data and what this data looks like is
+
+00:27:50.320 --> 00:27:55.000
+basically we have a
+
+00:27:52.600 --> 00:27:57.559
+review um which says the rock is
+
+00:27:55.000 --> 00:27:59.480
+destined to be the 21st Century's new
+
+00:27:57.559 --> 00:28:01.240
+Conan and that he's going to make a
+
+00:27:59.480 --> 00:28:03.600
+splash even greater than Arnold
+
+00:28:01.240 --> 00:28:07.000
+Schwarzenegger Jean-Claude Van Damme or
+
+00:28:03.600 --> 00:28:09.519
+Steven Seagal um so this seems pretty
+
+00:28:07.000 --> 00:28:10.840
+positive right I like that's a pretty
+
+00:28:09.519 --> 00:28:13.200
+high order to be better than Arnold
+
+00:28:10.840 --> 00:28:16.080
+Schwarzenegger or Jean-Claude Van Damme uh
+
+00:28:13.200 --> 00:28:19.519
+if you're familiar with action movies um
+
+00:28:16.080 --> 00:28:22.840
+and so of course this gets a positive
+
+00:28:19.519 --> 00:28:24.120
+label and so uh we have run classifier
+
+00:28:22.840 --> 00:28:25.240
+actually maybe I should call this
+
+00:28:24.120 --> 00:28:27.600
+decision rule because this is
+
+00:28:25.240 --> 00:28:29.120
+essentially our decision rule and here
+
+00:28:27.600 --> 00:28:32.600
+basically do the thing that I mentioned
+
+00:28:29.120 --> 00:28:35.440
+here the yes no abstain or in this case
+
+00:28:32.600 --> 00:28:38.360
+positive negative neutral so if the
+
+00:28:35.440 --> 00:28:40.159
+score is greater than zero we uh return
+
+00:28:38.360 --> 00:28:42.480
+one if the score is less than zero we
+
+00:28:40.159 --> 00:28:44.679 +return negative one which is negative + +00:28:42.480 --> 00:28:47.240 +and otherwise we returns + +00:28:44.679 --> 00:28:48.760 +zero um we have an accuracy calculation + +00:28:47.240 --> 00:28:51.519 +function just calculating the outputs + +00:28:48.760 --> 00:28:55.840 +are good and + +00:28:51.519 --> 00:28:57.440 +um this is uh the overall label count in + +00:28:55.840 --> 00:28:59.919 +the in the output so we can see there + +00:28:57.440 --> 00:29:03.120 +slightly more positives than there are + +00:28:59.919 --> 00:29:06.080 +negatives and then we can run this and + +00:29:03.120 --> 00:29:10.200 +we get a a score of + +00:29:06.080 --> 00:29:14.760 +43 and so one one thing that I have + +00:29:10.200 --> 00:29:19.279 +found um is I I do a lot of kind + +00:29:14.760 --> 00:29:21.240 +of research on how to make NLP systems + +00:29:19.279 --> 00:29:23.600 +better and one of the things I found + +00:29:21.240 --> 00:29:26.679 +really invaluable + +00:29:23.600 --> 00:29:27.840 +is if you're in a situation where you + +00:29:26.679 --> 00:29:29.720 +have a + +00:29:27.840 --> 00:29:31.760 +set task and you just want to make the + +00:29:29.720 --> 00:29:33.760 +system better on the set task doing + +00:29:31.760 --> 00:29:35.159 +comprehensive error analysis and + +00:29:33.760 --> 00:29:37.320 +understanding where your system is + +00:29:35.159 --> 00:29:39.880 +failing is one of the best ways to do + +00:29:37.320 --> 00:29:42.200 +that and I would like to do a very + +00:29:39.880 --> 00:29:43.640 +rudimentary version of this here and + +00:29:42.200 --> 00:29:46.519 +what I'm doing essentially is I'm just + +00:29:43.640 --> 00:29:47.480 +randomly picking uh several examples + +00:29:46.519 --> 00:29:49.320 +that were + +00:29:47.480 --> 00:29:52.000 +correct + +00:29:49.320 --> 00:29:54.840 +um and so like let let's look at the + +00:29:52.000 --> 00:29:58.200 +examples here um here the true label is + +00:29:54.840 --> 00:30:00.760 +zero um in this predicted one um it may + +00:29:58.200 --> 00:30:03.440 +not be as cutting as Woody or as true as + +00:30:00.760 --> 00:30:05.039 +back in the Glory Days of uh weekend and + +00:30:03.440 --> 00:30:07.440 +two or three things that I know about + +00:30:05.039 --> 00:30:09.640 +her but who else engaged in film Mak + +00:30:07.440 --> 00:30:12.679 +today is so cognizant of the cultural + +00:30:09.640 --> 00:30:14.480 +and moral issues involved in the process + +00:30:12.679 --> 00:30:17.600 +so what words in here are a good + +00:30:14.480 --> 00:30:20.840 +indication that this is a neutral + +00:30:17.600 --> 00:30:20.840 +sentence any + +00:30:23.760 --> 00:30:28.399 +ideas little bit tough + +00:30:26.240 --> 00:30:30.919 +huh starting to think maybe we should be + +00:30:28.399 --> 00:30:30.919 +using machine + +00:30:31.480 --> 00:30:37.440 +learning + +00:30:34.080 --> 00:30:40.320 +um even by the intentionally low + +00:30:37.440 --> 00:30:41.559 +standards of fratboy humor sority boys + +00:30:40.320 --> 00:30:43.840 +is a + +00:30:41.559 --> 00:30:46.080 +Bowser I think frat boy is maybe + +00:30:43.840 --> 00:30:47.360 +negative sentiment if you're familiar + +00:30:46.080 --> 00:30:50.360 +with + +00:30:47.360 --> 00:30:51.960 +us us I don't have any negative + +00:30:50.360 --> 00:30:54.519 +sentiment but the people who say it that + +00:30:51.960 --> 00:30:55.960 +way have negative senent maybe so if we + +00:30:54.519 --> 00:31:01.080 +wanted to go in and do that we could + 
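
A rudimentary version of the error analysis just demonstrated: sample a few dev examples where the prediction disagrees with the true label and eyeball them. The (label, text) data format and the function name are assumptions for the sake of the sketch.

import random

def sample_errors(dev_data, classify_fn, k=5, seed=0):
    # dev_data: list of (true_label, text) pairs
    errors = [(y, x) for y, x in dev_data if classify_fn(x) != y]
    random.seed(seed)
    return random.sample(errors, min(k, len(errors)))

# usage, with any classifier function:
# for y, x in sample_errors(dev_data, my_classifier):
#     print("true label:", y, "| text:", x)
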
+00:30:55.960 --> 00:31:01.080 +maybe I won't save this but + +00:31:01.519 --> 00:31:08.919 +uh + +00:31:04.240 --> 00:31:11.840 +um oh whoops I'll go back and fix it uh + +00:31:08.919 --> 00:31:14.840 +crass crass is pretty obviously negative + +00:31:11.840 --> 00:31:14.840 +right so I can add + +00:31:17.039 --> 00:31:21.080 +crass actually let me just add + +00:31:21.760 --> 00:31:29.159 +CR and then um I'll go back and have our + +00:31:26.559 --> 00:31:29.159 +train accurate + +00:31:32.159 --> 00:31:36.240 +wa maybe maybe I need to run the whole + +00:31:33.960 --> 00:31:36.240 +thing + +00:31:36.960 --> 00:31:39.960 +again + +00:31:40.960 --> 00:31:45.880 +and that budg the training accuracy a + +00:31:43.679 --> 00:31:50.360 +little um the dev test accuracy not very + +00:31:45.880 --> 00:31:53.919 +much so I could go through and do this + +00:31:50.360 --> 00:31:53.919 +um let me add + +00:31:54.000 --> 00:31:58.320 +unengaging so I could go through and do + +00:31:56.000 --> 00:32:01.720 +this all day and you probably be very + +00:31:58.320 --> 00:32:01.720 +bored on + +00:32:04.240 --> 00:32:08.360 +engage but I won't do that uh because we + +00:32:06.919 --> 00:32:10.679 +have much more important things to be + +00:32:08.360 --> 00:32:14.679 +doing + +00:32:10.679 --> 00:32:16.440 +um and uh so anyway we um we could go + +00:32:14.679 --> 00:32:18.919 +through and design all the features here + +00:32:16.440 --> 00:32:21.279 +but like why is this complicated like + +00:32:18.919 --> 00:32:22.600 +the the reason why it was complicated + +00:32:21.279 --> 00:32:25.840 +became pretty + +00:32:22.600 --> 00:32:27.840 +clear from the uh from the very + +00:32:25.840 --> 00:32:29.639 +beginning uh the very first example I + +00:32:27.840 --> 00:32:32.200 +showed you which was that was a really + +00:32:29.639 --> 00:32:34.720 +complicated sentence like all of us + +00:32:32.200 --> 00:32:36.240 +could see that it wasn't like really + +00:32:34.720 --> 00:32:38.679 +strongly positive it wasn't really + +00:32:36.240 --> 00:32:40.519 +strongly negative it was kind of like in + +00:32:38.679 --> 00:32:42.919 +the middle but it was in the middle and + +00:32:40.519 --> 00:32:44.600 +it said it in a very long way uh you + +00:32:42.919 --> 00:32:46.120 +know not using any clearly positive + +00:32:44.600 --> 00:32:47.639 +sentiment words not using any clearly + +00:32:46.120 --> 00:32:49.760 +negative sentiment + +00:32:47.639 --> 00:32:53.760 +words + +00:32:49.760 --> 00:32:56.519 +um so yeah basically I I + +00:32:53.760 --> 00:33:00.559 +improved um but what are the difficult + +00:32:56.519 --> 00:33:03.720 +cases uh that we saw here so the first + +00:33:00.559 --> 00:33:07.639 +one is low frequency + +00:33:03.720 --> 00:33:09.760 +words so um here's an example the action + +00:33:07.639 --> 00:33:11.519 +switches between past and present but + +00:33:09.760 --> 00:33:13.120 +the material link is too tenuous to + +00:33:11.519 --> 00:33:16.840 +Anchor the emotional connections at + +00:33:13.120 --> 00:33:19.519 +purport to span a 125 year divide so + +00:33:16.840 --> 00:33:21.080 +this is negative um tenuous is kind of a + +00:33:19.519 --> 00:33:22.799 +negative word purport is kind of a + +00:33:21.080 --> 00:33:24.760 +negative word but it doesn't appear very + +00:33:22.799 --> 00:33:26.159 +frequently so I would need to spend all + +00:33:24.760 --> 00:33:29.720 +my time looking for these words and + +00:33:26.159 --> 00:33:32.480 +trying to them in um here's yet another 
+ +00:33:29.720 --> 00:33:34.240 +horse franchise mucking up its storyline + +00:33:32.480 --> 00:33:36.639 +with glitches casual fans could correct + +00:33:34.240 --> 00:33:40.159 +in their sleep negative + +00:33:36.639 --> 00:33:42.600 +again um so the solutions here are keep + +00:33:40.159 --> 00:33:46.880 +working until we get all of them which + +00:33:42.600 --> 00:33:49.159 +is maybe not super fun um or incorporate + +00:33:46.880 --> 00:33:51.639 +external resources such as sentiment + +00:33:49.159 --> 00:33:52.880 +dictionaries that people created uh we + +00:33:51.639 --> 00:33:55.960 +could do that but that's a lot of + +00:33:52.880 --> 00:33:57.480 +engineering effort to make something + +00:33:55.960 --> 00:34:00.639 +work + +00:33:57.480 --> 00:34:03.720 +um another one is conjugation so we saw + +00:34:00.639 --> 00:34:06.600 +unengaging I guess that's an example of + +00:34:03.720 --> 00:34:08.359 +conjugation uh some other ones are + +00:34:06.600 --> 00:34:10.520 +operatic sprawling picture that's + +00:34:08.359 --> 00:34:12.040 +entertainingly acted magnificently shot + +00:34:10.520 --> 00:34:15.480 +and gripping enough to sustain most of + +00:34:12.040 --> 00:34:17.399 +its 170 minute length so here we have + +00:34:15.480 --> 00:34:19.079 +magnificently so even if I added + +00:34:17.399 --> 00:34:20.480 +magnificent this wouldn't have been + +00:34:19.079 --> 00:34:23.800 +clocked + +00:34:20.480 --> 00:34:26.599 +right um it's basically an overlong + +00:34:23.800 --> 00:34:28.839 +episode of tales from the cryp so that's + +00:34:26.599 --> 00:34:31.480 +maybe another + +00:34:28.839 --> 00:34:33.040 +example um so some things that we could + +00:34:31.480 --> 00:34:35.320 +do or what we would have done before the + +00:34:33.040 --> 00:34:37.720 +modern Paradigm of machine learning is + +00:34:35.320 --> 00:34:40.079 +we would run some sort of normalizer + +00:34:37.720 --> 00:34:42.800 +like a stemmer or other things like this + +00:34:40.079 --> 00:34:45.240 +in order to convert this into uh the + +00:34:42.800 --> 00:34:48.599 +root wordss that we already have seen + +00:34:45.240 --> 00:34:52.040 +somewhere in our data or have already + +00:34:48.599 --> 00:34:54.040 +handed so that requires um conjugation + +00:34:52.040 --> 00:34:55.879 +analysis or morphological analysis as we + +00:34:54.040 --> 00:34:57.400 +say it in + +00:34:55.879 --> 00:35:00.680 +technicals + +00:34:57.400 --> 00:35:03.960 +negation this is a tricky one so this + +00:35:00.680 --> 00:35:06.760 +one's not nearly as Dreadful as expected + +00:35:03.960 --> 00:35:08.800 +so Dreadful is a pretty bad word right + +00:35:06.760 --> 00:35:13.000 +but not nearly as Dreadful as expected + +00:35:08.800 --> 00:35:14.440 +is like a solidly neutral um you know or + +00:35:13.000 --> 00:35:16.359 +maybe even + +00:35:14.440 --> 00:35:18.920 +positive I would I would say that's + +00:35:16.359 --> 00:35:20.640 +neutral but you know uh neutral or + +00:35:18.920 --> 00:35:23.800 +positive it's definitely not + +00:35:20.640 --> 00:35:26.359 +negative um serving s doesn't serve up a + +00:35:23.800 --> 00:35:29.480 +whole lot of laughs so laughs is + +00:35:26.359 --> 00:35:31.880 +obviously positive but not serving UPS + +00:35:29.480 --> 00:35:34.440 +is obviously + +00:35:31.880 --> 00:35:36.839 +negative so if negation modifies the + +00:35:34.440 --> 00:35:38.240 +word disregard it now we would probably + +00:35:36.839 --> 00:35:41.440 +need to do some sort of syntactic + +00:35:38.240 
--> 00:35:45.599 +analysis or semantic analysis of + +00:35:41.440 --> 00:35:47.520 +some metaphor an analogy so puts a human + +00:35:45.599 --> 00:35:50.640 +face on a land most westerners are + +00:35:47.520 --> 00:35:52.880 +unfamiliar though uh this is + +00:35:50.640 --> 00:35:54.960 +positive green might want to hang on to + +00:35:52.880 --> 00:35:58.800 +that ski mask as robbery may be the only + +00:35:54.960 --> 00:35:58.800 +way to pay for this next project + +00:35:58.839 --> 00:36:03.640 +so this this is saying that the movie + +00:36:01.960 --> 00:36:05.560 +was so bad that the director will have + +00:36:03.640 --> 00:36:08.359 +to rob people in order to get money for + +00:36:05.560 --> 00:36:11.000 +the next project so that's kind of bad I + +00:36:08.359 --> 00:36:12.880 +guess um has all the depth of a waiting + +00:36:11.000 --> 00:36:14.520 +pool this is kind of my favorite one + +00:36:12.880 --> 00:36:15.880 +because it's really short and sweet but + +00:36:14.520 --> 00:36:18.800 +you know you need to know how deep a + +00:36:15.880 --> 00:36:21.440 +waiting pool is um so that's + +00:36:18.800 --> 00:36:22.960 +negative so the solution here I don't + +00:36:21.440 --> 00:36:24.680 +really even know how to handle this with + +00:36:22.960 --> 00:36:26.880 +a rule based system I have no idea how + +00:36:24.680 --> 00:36:30.040 +we would possibly do this yeah machine + +00:36:26.880 --> 00:36:32.400 +learning based models seem to be pretty + +00:36:30.040 --> 00:36:37.000 +adaptive okay and then I start doing + +00:36:32.400 --> 00:36:37.000 +these ones um anyone have a good + +00:36:38.160 --> 00:36:46.800 +idea any any other friends who know + +00:36:42.520 --> 00:36:50.040 +Japanese no okay um so yeah that's + +00:36:46.800 --> 00:36:52.839 +positive um that one's negative uh and + +00:36:50.040 --> 00:36:54.920 +the solution here is learn Japanese I + +00:36:52.839 --> 00:36:56.800 +guess or whatever other language you + +00:36:54.920 --> 00:37:00.040 +want to process so like obviously + +00:36:56.800 --> 00:37:03.720 +rule-based systems don't scale very + +00:37:00.040 --> 00:37:05.119 +well so um we've moved but like rule + +00:37:03.720 --> 00:37:06.319 +based systems don't scale very well + +00:37:05.119 --> 00:37:08.160 +we're not going to be using them for + +00:37:06.319 --> 00:37:11.400 +most of the things we do in this class + +00:37:08.160 --> 00:37:14.240 +but I do think it's sometimes useful to + +00:37:11.400 --> 00:37:15.640 +try to create one for your task maybe + +00:37:14.240 --> 00:37:16.680 +right at the very beginning of a project + +00:37:15.640 --> 00:37:18.560 +because it gives you an idea about + +00:37:16.680 --> 00:37:21.160 +what's really hard about the task in + +00:37:18.560 --> 00:37:22.480 +some cases so um yeah I wouldn't + +00:37:21.160 --> 00:37:25.599 +entirely discount them I'm not + +00:37:22.480 --> 00:37:27.400 +introducing them for no reason + +00:37:25.599 --> 00:37:29.880 +whatsoever + +00:37:27.400 --> 00:37:34.160 +so next is machine learning based anal + +00:37:29.880 --> 00:37:35.400 +and machine learning uh in general uh I + +00:37:34.160 --> 00:37:36.640 +here actually when I say machine + +00:37:35.400 --> 00:37:38.160 +learning I'm going to be talking about + +00:37:36.640 --> 00:37:39.560 +the traditional fine-tuning approach + +00:37:38.160 --> 00:37:43.520 +where we have a training set Dev set + +00:37:39.560 --> 00:37:46.359 +test set and so we take our training set + +00:37:43.520 --> 00:37:49.680 +we run 
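
Looping back to the conjugation problem above: the traditional fix is to normalize words with a stemmer or morphological analyzer before feature extraction. A sketch using NLTK's Porter stemmer, though any similar normalizer would do:

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["magnificently", "magnificent", "unengaging", "engaging"]:
    print(word, "->", stemmer.stem(word))
# "magnificently" and "magnificent" should map to the same stem, so a
# weight learned for one fires for the other; note that a stemmer does
# not strip the un- prefix, so negated forms still need separate handling
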
some learning algorithm over it + +00:37:46.359 --> 00:37:52.319 +we have a learned feature extractor F A + +00:37:49.680 --> 00:37:55.839 +possibly learned feature extractor F + +00:37:52.319 --> 00:37:57.880 +possibly learned scoring function W and + +00:37:55.839 --> 00:38:00.800 +uh then we apply our inference algorithm + +00:37:57.880 --> 00:38:02.839 +our decision Rule and make decisions + +00:38:00.800 --> 00:38:04.200 +when I say possibly learned actually the + +00:38:02.839 --> 00:38:06.119 +first example I'm going to give of a + +00:38:04.200 --> 00:38:07.760 +machine learning based technique is uh + +00:38:06.119 --> 00:38:10.079 +doesn't have a learned feature extractor + +00:38:07.760 --> 00:38:12.800 +but most things that we use nowadays do + +00:38:10.079 --> 00:38:12.800 +have learned feature + +00:38:13.200 --> 00:38:18.040 +extractors so our first attempt is going + +00:38:15.640 --> 00:38:21.760 +to be a bag of words model uh and the + +00:38:18.040 --> 00:38:27.119 +way a bag of wordss model works is uh + +00:38:21.760 --> 00:38:30.160 +essentially we start out by looking up a + +00:38:27.119 --> 00:38:33.240 +Vector where one element in the vector + +00:38:30.160 --> 00:38:36.240 +is uh is one and all the other elements + +00:38:33.240 --> 00:38:38.040 +in the vector are zero and so if the + +00:38:36.240 --> 00:38:40.319 +word is different the position in the + +00:38:38.040 --> 00:38:42.839 +vector that's one will be different we + +00:38:40.319 --> 00:38:46.280 +add all of these together and this gives + +00:38:42.839 --> 00:38:48.200 +us a vector where each element is the + +00:38:46.280 --> 00:38:50.359 +frequency of that word in the vector and + +00:38:48.200 --> 00:38:52.520 +then we multiply that by weights and we + +00:38:50.359 --> 00:38:55.520 +get a + +00:38:52.520 --> 00:38:57.160 +score and um here as I said this is not + +00:38:55.520 --> 00:39:00.359 +a learned feature + +00:38:57.160 --> 00:39:02.079 +uh Vector this is basically uh sorry not + +00:39:00.359 --> 00:39:04.359 +a learn feature extractor this is + +00:39:02.079 --> 00:39:06.200 +basically a fixed feature extractor but + +00:39:04.359 --> 00:39:09.839 +the weights themselves are + +00:39:06.200 --> 00:39:11.640 +learned um so my my question is I + +00:39:09.839 --> 00:39:14.599 +mentioned a whole lot of problems before + +00:39:11.640 --> 00:39:17.480 +I mentioned infrequent words I mentioned + +00:39:14.599 --> 00:39:20.760 +conjugation I mentioned uh different + +00:39:17.480 --> 00:39:22.880 +languages I mentioned syntax and + +00:39:20.760 --> 00:39:24.599 +metaphor so which of these do we think + +00:39:22.880 --> 00:39:25.440 +would be fixed by this sort of learning + +00:39:24.599 --> 00:39:27.400 +based + +00:39:25.440 --> 00:39:29.640 +approach + +00:39:27.400 --> 00:39:29.640 +any + +00:39:29.920 --> 00:39:35.200 +ideas maybe not fixed maybe made + +00:39:32.520 --> 00:39:35.200 +significantly + +00:39:36.880 --> 00:39:41.560 +better any Brave uh brave + +00:39:44.880 --> 00:39:48.440 +people maybe maybe + +00:39:53.720 --> 00:39:58.400 +negation okay so maybe doesn't when it + +00:39:55.760 --> 00:39:58.400 +have a negative qu + +00:40:02.960 --> 00:40:07.560 +yeah yeah so for the conjugation if we + +00:40:05.520 --> 00:40:09.200 +had the conjugations of the stems mapped + +00:40:07.560 --> 00:40:11.119 +in the same position that might fix a + +00:40:09.200 --> 00:40:12.920 +conjugation problem but I would say if + +00:40:11.119 --> 00:40:15.200 +you don't do that then this kind 
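
The bag-of-words construction just described, with a Counter standing in for the sum of one-hot word vectors; the toy weights are illustrative.

from collections import Counter

def bag_of_words(text):
    # sum of one-hot word vectors == word frequencies in this text
    return Counter(text.split())

def score(text, weights):
    # dot product with the (learned) weight vector; unknown words get 0
    return sum(count * weights.get(word, 0.0)
               for word, count in bag_of_words(text).items())

weights = {"love": 1.0, "hate": -1.0}
print(score("I love love this movie", weights))  # -> 2.0
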
of + +00:40:12.920 --> 00:40:18.160 +fixes conjugation a little bit but maybe + +00:40:15.200 --> 00:40:21.319 +not not really yeah kind of fix + +00:40:18.160 --> 00:40:24.079 +conjugation because like they're using + +00:40:21.319 --> 00:40:26.760 +the same there + +00:40:24.079 --> 00:40:28.400 +probably different variations so we + +00:40:26.760 --> 00:40:31.359 +learn how to + +00:40:28.400 --> 00:40:33.400 +classify surrounding + +00:40:31.359 --> 00:40:35.000 +structure yeah if it's a big enough + +00:40:33.400 --> 00:40:36.760 +training set you might have covered the + +00:40:35.000 --> 00:40:37.880 +various conjugations but if you haven't + +00:40:36.760 --> 00:40:43.000 +and you don't have any rule-based + +00:40:37.880 --> 00:40:43.000 +processing it it might still be problems + +00:40:45.400 --> 00:40:50.359 +yeah yeah so in frequent words if you + +00:40:48.280 --> 00:40:52.560 +have a large enough training set yeah + +00:40:50.359 --> 00:40:54.599 +you'll be able to fix it to some extent + +00:40:52.560 --> 00:40:56.480 +so none of the problems are entirely + +00:40:54.599 --> 00:40:57.880 +fixed but a lot of them are made better + +00:40:56.480 --> 00:40:58.960 +different languages is also made better + +00:40:57.880 --> 00:41:00.119 +if you have training data in that + +00:40:58.960 --> 00:41:04.599 +language but if you don't then you're + +00:41:00.119 --> 00:41:06.240 +out of BL so um so now what I'd like to + +00:41:04.599 --> 00:41:10.800 +do is I'd look to like to look at what + +00:41:06.240 --> 00:41:15.079 +our vectors represent so basically um in + +00:41:10.800 --> 00:41:16.880 +uh in binary classification each word um + +00:41:15.079 --> 00:41:19.119 +sorry so the vectors themselves + +00:41:16.880 --> 00:41:21.880 +represent the counts of the words here + +00:41:19.119 --> 00:41:25.319 +I'm talking about what the weight uh + +00:41:21.880 --> 00:41:28.520 +vectors or matrices correspond to and + +00:41:25.319 --> 00:41:31.640 +the weight uh Vector here will be + +00:41:28.520 --> 00:41:33.680 +positive if the word it tends to be + +00:41:31.640 --> 00:41:36.680 +positive if in a binary classification + +00:41:33.680 --> 00:41:38.400 +case in a multiclass classification case + +00:41:36.680 --> 00:41:42.480 +we'll actually have a matrix that looks + +00:41:38.400 --> 00:41:45.480 +like this where um each column or row uh + +00:41:42.480 --> 00:41:47.079 +corresponds to the word and each row or + +00:41:45.480 --> 00:41:49.319 +column corresponds to a label and it + +00:41:47.079 --> 00:41:51.960 +will be higher if that row tends to uh + +00:41:49.319 --> 00:41:54.800 +correlate with that uh that word tends + +00:41:51.960 --> 00:41:56.920 +to correlate that little + +00:41:54.800 --> 00:41:59.240 +bit so + +00:41:56.920 --> 00:42:04.079 +this um training of the bag of words + +00:41:59.240 --> 00:42:07.720 +model is can be done uh so simply that + +00:42:04.079 --> 00:42:10.200 +we uh can put it in a single slide so + +00:42:07.720 --> 00:42:11.599 +basically here uh what we do is we start + +00:42:10.200 --> 00:42:14.760 +out with the feature + +00:42:11.599 --> 00:42:18.880 +weights and for each example in our data + +00:42:14.760 --> 00:42:20.800 +set we extract features um the exact way + +00:42:18.880 --> 00:42:23.920 +I'm extracting features is basically + +00:42:20.800 --> 00:42:25.720 +splitting uh splitting the words using + +00:42:23.920 --> 00:42:28.000 +the python split function and then uh + +00:42:25.720 --> 00:42:31.319 +Counting number of times 
each word
+
+00:42:28,000 --> 00:42:33,160
+exists uh we then run the classifier so
+
+00:42:31,319 --> 00:42:36,280
+actually running the classifier is
+
+00:42:33,160 --> 00:42:38,200
+exactly the same as what we did for the
+
+00:42:36,280 --> 00:42:42,640
+uh the rule based system it's just that
+
+00:42:38,200 --> 00:42:47,359
+we have feature vectors instead and
+
+00:42:42,640 --> 00:42:51,559
+then if the predicted value is
+
+00:42:47,359 --> 00:42:55,160
+not the true value y then for each of the
+
+00:42:51,559 --> 00:42:56,680
+features uh in the feature space we
+
+00:42:55,160 --> 00:43:02,200
+upweight
+
+00:42:56,680 --> 00:43:03,599
+the um we upweight the weight by the
+
+00:43:02,200 --> 00:43:06,000
+value
+
+00:43:03,599 --> 00:43:09,920
+of the feature vector
+
+00:43:06,000 --> 00:43:13,240
+if y is positive and we downweight the
+
+00:43:09,920 --> 00:43:16,240
+weight uh by the value of the feature vector if y
+
+00:43:13,240 --> 00:43:18,520
+is negative so this is really really
+
+00:43:16,240 --> 00:43:20,559
+simple it's uh probably the simplest
+
+00:43:18,520 --> 00:43:25,079
+possible algorithm for training one of
+
+00:43:20,559 --> 00:43:27,559
+these models um but I have an
+
+00:43:25,079 --> 00:43:30,040
+example in this that you can also take a
+
+00:43:27,559 --> 00:43:31,960
+look at here's a trained bag of words
+
+00:43:30,040 --> 00:43:33,680
+classifier and we could step through
+
+00:43:31,960 --> 00:43:34,960
+this is on exactly the same data set as
+
+00:43:33,680 --> 00:43:37,240
+I did before we're training on the
+
+00:43:34,960 --> 00:43:42,359
+training set
+
+00:43:37,240 --> 00:43:43,640
+um and uh evaluating on the dev set um I
+
+00:43:42,359 --> 00:43:45,880
+also have some extra stuff like I'm
+
+00:43:43,640 --> 00:43:47,079
+shuffling the order of the data IDs
+
+00:43:45,880 --> 00:43:49,440
+which is really important if you're
+
+00:43:47,079 --> 00:43:53,160
+doing this sort of incremental algorithm
+
+00:43:49,440 --> 00:43:54,960
+uh because uh what if what if your
+
+00:43:53,160 --> 00:43:57,400
+training data set was ordered in this
+
+00:43:54,960 --> 00:44:00,040
+way where you have all of the positive
+
+00:43:57,400 --> 00:44:00,040
+labels on
+
+00:44:00,359 --> 00:44:04,520
+top and then you have all of the
+
+00:44:02,280 --> 00:44:06,680
+negative labels on the
+
+00:44:04,520 --> 00:44:08,200
+bottom if you do something like this it
+
+00:44:06,680 --> 00:44:10,200
+would see only negative labels at the
+
+00:44:08,200 --> 00:44:11,800
+end of training and you might have
+
+00:44:10,200 --> 00:44:14,400
+problems because your model would only
+
+00:44:11,800 --> 00:44:17,440
+predict negatives so we also shuffle the
+
+00:44:14,400 --> 00:44:20,319
+data um and then step through we run the
+
+00:44:17,440 --> 00:44:22,559
+classifier and I'm going to run uh five
+
+00:44:20,319 --> 00:44:23,640
+epochs of training through the data set
+
+00:44:22,559 --> 00:44:27,160
+uh very
+
+00:44:23,640 --> 00:44:29,599
+fast and calculate our accuracy
+
+00:44:27,160 --> 00:44:33,280
+and this got 75% accuracy on the
+
+00:44:29,599 --> 00:44:36,160
+training data set and uh 56% accuracy on
+
+00:44:33,280 --> 00:44:40,000
+the dev data set so uh if you remember
+
+00:44:36,160 --> 00:44:41,520
+our rule-based classifier had 42 uh 42%
+
+00:44:40,000 --> 00:44:43,880
+accuracy and now our training based
+
+00:44:41,520 --> 00:44:45,760
+classifier has 56% accuracy but it's
+
+00:44:43,880 --> 00:44:49,359
+overfitting heavily to the training set
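+
+A sketch of the one-slide training loop just walked through, in the same
+hedged spirit as the earlier snippet: it reuses extract_features and score
+from above, and the structure is a perceptron-style update rather than the
+course's exact code:
+
+import random
+
+def train(data, epochs=5):
+    # data: list of (text, y) pairs with y in {+1, -1}
+    weights = {}
+    for _ in range(epochs):
+        random.shuffle(data)  # avoid seeing all positives, then all negatives
+        for text, y in data:
+            features = extract_features(text)
+            pred = 1 if score(features, weights) > 0 else -1
+            if pred != y:  # update only when the prediction is wrong
+                for word, count in features.items():
+                    # upweight by the feature value if y is positive,
+                    # downweight by the feature value if y is negative
+                    weights[word] = weights.get(word, 0.0) + y * count
+    return weights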
+
+00:44:45,760 --> 00:44:50,880
+so um basically this is a pretty strong
+
+00:44:49,359 --> 00:44:53,480
+advertisement for why we should be using
+
+00:44:50,880 --> 00:44:54,960
+machine learning you know the amount
+
+00:44:53,480 --> 00:44:57,800
+of code that we had for this machine
+
+00:44:54,960 --> 00:44:59,720
+learning model is basically very similar
+
+00:44:57,800 --> 00:45:02,680
+um it's not using any external libraries
+
+00:44:59,720 --> 00:45:02,680
+but we're getting better at
+
+00:45:03,599 --> 00:45:08,800
+this
+
+00:45:05,800 --> 00:45:08,800
+cool
+
+00:45:09,559 --> 00:45:16,000
+so cool any any questions
+
+00:45:13,520 --> 00:45:18,240
+here and so I'm going to talk about the
+
+00:45:16,000 --> 00:45:20,760
+connection between this algorithm and
+
+00:45:18,240 --> 00:45:22,839
+neural networks in the next class um
+
+00:45:20,760 --> 00:45:24,200
+because this actually is using a very
+
+00:45:22,839 --> 00:45:26,319
+similar training algorithm to what we
+
+00:45:24,200 --> 00:45:27,480
+use in neural networks with some uh
+
+00:45:26,319 --> 00:45:30,079
+particular
+
+00:45:27,480 --> 00:45:32,839
+assumptions cool um so what's missing in
+
+00:45:30,079 --> 00:45:34,800
+bag of words um still handling of
+
+00:45:32,839 --> 00:45:36,880
+conjugation or compound words is not
+
+00:45:34,800 --> 00:45:39,160
+perfect we can do it to some extent
+
+00:45:36,880 --> 00:45:41,079
+to the point where we can uh memorize
+
+00:45:39,160 --> 00:45:44,079
+things so I love this movie I love this
+
+00:45:41,079 --> 00:45:46,920
+movie another thing is handling word
+
+00:45:44,079 --> 00:45:49,240
+uh similarities so I love this movie and
+
+00:45:46,920 --> 00:45:50,720
+I adore this movie uh these basically
+
+00:45:49,240 --> 00:45:52,119
+mean the same thing as humans we know
+
+00:45:50,720 --> 00:45:54,200
+they mean the same thing so we should be
+
+00:45:52,119 --> 00:45:56,079
+able to take advantage of that fact to
+
+00:45:54,200 --> 00:45:57,839
+learn better models but we're not doing
+
+00:45:56,079 --> 00:46:02,760
+that in this model at the moment because
+
+00:45:57,839 --> 00:46:05,440
+each unit is uh treated as an atomic unit
+
+00:46:02,760 --> 00:46:08,040
+and there's no idea of
+
+00:46:05,440 --> 00:46:11,040
+similarity also handling of combination
+
+00:46:08,040 --> 00:46:12,760
+features so um I love this movie and I
+
+00:46:11,040 --> 00:46:14,920
+don't love this movie I hate this movie
+
+00:46:12,760 --> 00:46:17,079
+and I don't hate this movie actually
+
+00:46:14,920 --> 00:46:20,400
+this is a little bit tricky because
+
+00:46:17,079 --> 00:46:23,240
+negative words are slightly indicative
+
+00:46:20,400 --> 00:46:25,280
+of it being negative but actually what
+
+00:46:23,240 --> 00:46:28,119
+they do is they negate the other things
+
+00:46:25,280 --> 00:46:28,119
+that you're saying in the
+
+00:46:28,240 --> 00:46:36,559
+sentence
+
+00:46:30,720 --> 00:46:40,480
+so um like love is positive hate is
+
+00:46:36,559 --> 00:46:40,480
+negative but like don't
+
+00:46:50,359 --> 00:46:56,079
+love it's actually kind of like this
+
+00:46:52,839 --> 00:46:59,359
+right like um love is very positive
+
+00:46:56,079 --> 00:47:01,760
+hate is very negative but don't love is
+
+00:46:59,359 --> 00:47:04,680
+like slightly less positive than don't
+
+00:47:01,760 --> 00:47:06,160
+hate right so um it's actually kind of
+
+00:47:04,680 --> 00:47:07,559
+tricky because you need to combine them
+
+00:47:06,160 --> 00:47:10,720
+together and figure out what's going on
+based on that
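+
+A small illustration of the combination problem just described, continuing
+the sketches above; adding bigram features is one classical workaround, my
+example rather than something covered in this lecture:
+
+def bigram_features(text):
+    # unigram counts give "don't" one fixed weight no matter what it negates;
+    # bigrams let "don't_love" and "don't_hate" get their own learned weights
+    words = text.split()
+    feats = Counter(words)
+    feats.update("_".join(pair) for pair in zip(words, words[1:]))
+    return feats
+
+print(bigram_features("I don't love this movie"))
+# includes unigrams like "love" plus bigrams like "don't_love"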
+00:47:07,559 --> 00:47:12,280
+another example that a lot
+
+00:47:10,720 --> 00:47:14,160
+of people might not think of immediately
+
+00:47:12,280 --> 00:47:17,880
+but is super super common in sentiment
+
+00:47:14,160 --> 00:47:20,160
+analysis or any other thing is "but" so
+
+00:47:17,880 --> 00:47:22,599
+basically what "but" does is it throws
+
+00:47:20,160 --> 00:47:24,160
+away all the stuff that you said before
+
+00:47:22,599 --> 00:47:26,119
+it um and you can just pay attention to the
+
+00:47:24,160 --> 00:47:29,000
+stuff that comes afterward so like we
+
+00:47:26,119 --> 00:47:30,440
+could even add this to our um like if
+
+00:47:29,000 --> 00:47:31,760
+you want to add this to your rule based
+
+00:47:30,440 --> 00:47:33,240
+classifier you can do that you just
+
+00:47:31,760 --> 00:47:34,640
+search for "but" and delete everything
+
+00:47:33,240 --> 00:47:37,240
+before it and see if that improves your
+
+00:47:34,640 --> 00:47:39,240
+accuracy might be might be a fun very
+
+00:47:37,240 --> 00:47:43,480
+quick thing
+
+00:47:39,240 --> 00:47:44,880
+to try cool so the better solution which is
+
+00:47:43,480 --> 00:47:46,800
+what we're going to talk about for every
+
+00:47:44,880 --> 00:47:49,480
+other class other than uh other than
+
+00:47:46,800 --> 00:47:52,160
+this one is neural network models and
+
+00:47:49,480 --> 00:47:55,800
+basically uh what they do is they do a
+
+00:47:52,160 --> 00:47:59,400
+lookup of uh dense word embeddings so
+
+00:47:55,800 --> 00:48:02,520
+instead of looking up uh individual uh
+
+00:47:59,400 --> 00:48:04,640
+sparse uh vectors individual one hot
+
+00:48:02,520 --> 00:48:06,920
+vectors they look up dense word
+
+00:48:04,640 --> 00:48:09,680
+embeddings and then throw them into some
+
+00:48:06,920 --> 00:48:11,880
+complicated function to extract features
+
+00:48:09,680 --> 00:48:16,359
+and based on the features uh multiply by
+
+00:48:11,880 --> 00:48:18,280
+weights and get a score um and if you're
+
+00:48:16,359 --> 00:48:20,359
+doing text classification in the
+
+00:48:18,280 --> 00:48:22,520
+traditional way this is normally what
+
+00:48:20,359 --> 00:48:23,760
+you do um if you're doing text
+
+00:48:22,520 --> 00:48:25,960
+classification with something like
+
+00:48:23,760 --> 00:48:27,280
+prompting you're still actually doing
+
+00:48:25,960 --> 00:48:29,960
+this because you're calculating the
+
+00:48:27,280 --> 00:48:32,960
+score of the next word to predict and
+
+00:48:29,960 --> 00:48:34,720
+that's done in exactly the same way so
+
+00:48:32,960 --> 00:48:37,760
+uh even if you're using a large language
+
+00:48:34,720 --> 00:48:39,359
+model like GPT this is still probably
+
+00:48:37,760 --> 00:48:41,800
+happening under the hood unless OpenAI
+
+00:48:39,359 --> 00:48:43,400
+invented something that's very
+
+00:48:41,800 --> 00:48:45,559
+different and alien from anything else
+
+00:48:43,400 --> 00:48:48,440
+that we know of but I I'm guessing that
+
+00:48:45,559 --> 00:48:48,440
+that probably hasn't happened
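+
+A sketch of the dense lookup-and-score recipe just described, assuming
+PyTorch and my own simplifications (mean pooling stands in for the
+"complicated function" of a real architecture):
+
+import torch
+import torch.nn as nn
+
+class DenseClassifier(nn.Module):
+    def __init__(self, vocab_size, emb_dim=64, hidden=128, n_labels=2):
+        super().__init__()
+        self.emb = nn.Embedding(vocab_size, emb_dim)  # dense word embeddings
+        self.extractor = nn.Sequential(               # learned feature extractor
+            nn.Linear(emb_dim, hidden), nn.Tanh())
+        self.weights = nn.Linear(hidden, n_labels)    # learned scoring weights
+
+    def forward(self, word_ids):
+        pooled = self.emb(word_ids).mean(dim=0)       # embed and pool the words
+        return self.weights(self.extractor(pooled))   # one score per label
+
+scores = DenseClassifier(vocab_size=10000)(torch.tensor([5, 42, 7]))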
+
+00:48:48,480 --> 00:48:52,880
+um one nice thing about neural
+
+00:48:50,880 --> 00:48:54,480
+networks is neural networks
+
+00:48:52,880 --> 00:48:57,559
+theoretically are powerful enough to
+
+00:48:54,480 --> 00:49:00,000
+solve any task if you make them uh deep
+
+00:48:57,559 --> 00:49:01,160
+enough or wide enough uh like if you
+
+00:49:00,000 --> 00:49:04,520
+make them wide enough and then if you
+
+00:49:01,160 --> 00:49:06,799
+make them deep it also helps further so
+
+00:49:04,520 --> 00:49:08,079
+anytime somebody says well you can't
+
+00:49:06,799 --> 00:49:11,119
+just solve that problem with neural
+
+00:49:08,079 --> 00:49:13,240
+networks you know that they're lying
+
+00:49:11,119 --> 00:49:15,720
+basically because they theoretically can
+
+00:49:13,240 --> 00:49:17,359
+solve every problem uh but you have you
+
+00:49:15,720 --> 00:49:19,799
+have issues of data you have issues of
+
+00:49:17,359 --> 00:49:23,079
+other things like that so you know they
+
+00:49:19,799 --> 00:49:23,079
+don't just necessarily work
+
+00:49:23,119 --> 00:49:28,040
+out of the box cool um so the final thing I'd like
+
+00:49:26,400 --> 00:49:29,319
+to talk about is the road map going
+
+00:49:28,040 --> 00:49:31,319
+forward some of the things I'm going to
+
+00:49:29,319 --> 00:49:32,799
+cover in the class and some of the
+
+00:49:31,319 --> 00:49:35,200
+logistics
+
+00:49:32,799 --> 00:49:36,799
+issues so um the first thing I'm going
+
+00:49:35,200 --> 00:49:38,240
+to talk about in the class is language
+
+00:49:36,799 --> 00:49:40,559
+modeling
+
+00:49:38,240 --> 00:49:42,720
+fundamentals and uh so this could
+
+00:49:40,559 --> 00:49:44,240
+include language models uh that just
+
+00:49:42,720 --> 00:49:46,559
+predict the next words it could include
+
+00:49:44,240 --> 00:49:50,559
+language models that predict the output
+
+00:49:46,559 --> 00:49:51,599
+given the uh the input or the prompt um
+
+00:49:50,559 --> 00:49:54,559
+I'm going to be talking about
+
+00:49:51,599 --> 00:49:56,520
+representing words uh how how we get
+
+00:49:54,559 --> 00:49:59,319
+word representations subword models other
+
+00:49:56,520 --> 00:50:01,440
+things like that uh then go kind of
+
+00:49:59,319 --> 00:50:04,200
+deeper into language modeling uh how do
+
+00:50:01,440 --> 00:50:07,799
+we do it how do we evaluate it other
+
+00:50:04,200 --> 00:50:10,920
+things um sequence encoding uh and this
+
+00:50:07,799 --> 00:50:13,240
+is going to cover things like uh
+
+00:50:10,920 --> 00:50:16,280
+Transformers uh self-attention models
+
+00:50:13,240 --> 00:50:18,559
+but also very quickly CNNs and RNNs
+
+00:50:16,280 --> 00:50:20,880
+which are useful in some
+
+00:50:18,559 --> 00:50:22,200
+cases um and then we're going to
+
+00:50:20,880 --> 00:50:24,040
+specifically go very deep into the
+
+00:50:22,200 --> 00:50:25,960
+Transformer architecture and also talk a
+
+00:50:24,040 --> 00:50:27,280
+little bit about some of the modern uh
+
+00:50:25,960 --> 00:50:30,240
+improvements to the Transformer
+
+00:50:27,280 --> 00:50:31,839
+architecture so the Transformer we're
+
+00:50:30,240 --> 00:50:33,839
+using nowadays is very different than
+
+00:50:31,839 --> 00:50:36,200
+the Transformer that was invented in
+
+00:50:33,839 --> 00:50:37,240
+2017 uh so we're going to talk well I
+
+00:50:36,200 --> 00:50:38,760
+wouldn't say very different but
+
+00:50:37,240 --> 00:50:41,359
+different enough that it's important so
+
+00:50:38,760 --> 00:50:43,280
+we're going to talk about some of those
+
+00:50:41,359 --> 00:50:45,079
+things second thing I'd like to talk
+
+00:50:43,280 --> 00:50:47,000
+about is training and inference methods
+
+00:50:45,079 --> 00:50:48,839
+so this includes uh generation
+
+00:50:47,000 --> 00:50:52,119
+algorithms uh so we're going to have a
+
+00:50:48,839 --> 00:50:55,520
+whole class on how we generate text uh
+
+00:50:52,119 --> 00:50:58,319
+in
different ways uh prompting how uh we
+
+00:50:55,520 --> 00:50:59,720
+can prompt things I hear uh world class
+
+00:50:58,319 --> 00:51:01,799
+prompt engineers make a lot of money
+
+00:50:59,720 --> 00:51:05,480
+nowadays so uh you'll want to pay
+
+00:51:01,799 --> 00:51:08,760
+attention to that one um and instruction
+
+00:51:05,480 --> 00:51:11,520
+tuning uh so how do we train models to
+
+00:51:08,760 --> 00:51:13,720
+handle a lot of different tasks and
+
+00:51:11,520 --> 00:51:15,839
+reinforcement learning so how do we uh
+
+00:51:13,720 --> 00:51:18,520
+you know like actually generate outputs
+
+00:51:15,839 --> 00:51:19,839
+uh kind of judge them and then learn
+
+00:51:18,520 --> 00:51:22,599
+from
+
+00:51:19,839 --> 00:51:25,880
+there also experimental design and
+
+00:51:22,599 --> 00:51:28,079
+evaluation so experimental design uh so
+
+00:51:25,880 --> 00:51:30,480
+how do we design an experiment well uh
+
+00:51:28,079 --> 00:51:32,000
+so that it backs up what we want to be
+
+00:51:30,480 --> 00:51:34,559
+uh our conclusions that we want to be
+
+00:51:32,000 --> 00:51:37,000
+backing up how do we do human annotation
+
+00:51:34,559 --> 00:51:38,880
+of data in a reliable way this is
+
+00:51:37,000 --> 00:51:41,160
+getting harder and harder as models get
+
+00:51:38,880 --> 00:51:43,359
+better and better because uh
+
+00:51:41,160 --> 00:51:45,000
+humans who don't care very much about
+
+00:51:43,359 --> 00:51:48,559
+the annotation task might do worse
+
+00:51:45,000 --> 00:51:51,119
+than GPT-4 so um you need to be careful of
+
+00:51:48,559 --> 00:51:52,240
+that also debugging and interpretation
+
+00:51:51,119 --> 00:51:53,960
+techniques so what are some of the
+
+00:51:52,240 --> 00:51:55,160
+automatic techniques that you can do to
+
+00:51:53,960 --> 00:51:57,720
+quickly figure out what's going wrong
+
+00:51:55,160 --> 00:52:00,040
+with your models and improve
+
+00:51:57,720 --> 00:52:01,599
+them and uh bias and fairness
+
+00:52:00,040 --> 00:52:04,200
+considerations so it's really really
+
+00:52:01,599 --> 00:52:05,799
+important nowadays uh that models are
+
+00:52:04,200 --> 00:52:07,880
+being deployed to real people in the
+
+00:52:05,799 --> 00:52:09,880
+real world and like actually causing
+
+00:52:07,880 --> 00:52:11,760
+harm to people in some cases that we
+
+00:52:09,880 --> 00:52:15,160
+need to be worried about
+
+00:52:11,760 --> 00:52:17,000
+that advanced training and architectures
+
+00:52:15,160 --> 00:52:19,280
+so we're going to talk about
+
+00:52:17,000 --> 00:52:21,400
+distillation and quantization how can we
+
+00:52:19,280 --> 00:52:23,520
+make small language models uh that
+
+00:52:21,400 --> 00:52:24,880
+actually still work well like not large
+
+00:52:23,520 --> 00:52:27,559
+you can run them on your phone you can
+
+00:52:24,880 --> 00:52:29,920
+run them on your local
+
+00:52:27,559 --> 00:52:31,640
+laptop um ensembling and mixtures of
+
+00:52:29,920 --> 00:52:33,480
+experts how can we combine together
+
+00:52:31,640 --> 00:52:34,760
+multiple models in order to create
+
+00:52:33,480 --> 00:52:35,880
+models that are better than the sum of
+
+00:52:34,760 --> 00:52:38,799
+their
+
+00:52:35,880 --> 00:52:40,720
+parts and um retrieval and retrieval
+
+00:52:38,799 --> 00:52:43,920
+augmented
+
+00:52:40,720 --> 00:52:45,480
+generation long sequence models uh so
+
+00:52:43,920 --> 00:52:49,920
+how do we handle long
+
+00:52:45,480 --> 00:52:52,240
+outputs um and uh we're going to
talk
+
+00:52:49,920 --> 00:52:55,760
+about applications to complex reasoning
+
+00:52:52,240 --> 00:52:57,760
+tasks code generation language agents
+
+00:52:55,760 --> 00:52:59,920
+and knowledge-based QA and information
+
+00:52:57,760 --> 00:53:04,160
+extraction I picked
+
+00:52:59,920 --> 00:53:06,760
+these because they seem to be maybe the
+
+00:53:04,160 --> 00:53:09,880
+most important at least in research
+
+00:53:06,760 --> 00:53:11,440
+nowadays and also they cover uh the
+
+00:53:09,880 --> 00:53:13,640
+things that when I talk to people in
+
+00:53:11,440 --> 00:53:15,280
+industry they are kind of most interested in
+
+00:53:13,640 --> 00:53:17,559
+so hopefully it'll be useful regardless
+
+00:53:15,280 --> 00:53:19,799
+of uh whether you plan on doing research
+
+00:53:17,559 --> 00:53:22,839
+or or plan on doing industry related
+
+00:53:19,799 --> 00:53:24,160
+things uh by by the way the two things
+
+00:53:22,839 --> 00:53:25,920
+that when I talk to people in industry
+
+00:53:24,160 --> 00:53:29,599
+they're most interested in are RAG and
+
+00:53:25,920 --> 00:53:31,079
+code generation at the moment for now um
+
+00:53:29,599 --> 00:53:32,319
+so those are ones that you'll want to
+
+00:53:31,079 --> 00:53:34,680
+pay attention
+
+00:53:32,319 --> 00:53:36,599
+to and then finally we have a few
+
+00:53:34,680 --> 00:53:40,079
+lectures on linguistics and
+
+00:53:36,599 --> 00:53:42,720
+multilinguality um I love linguistics
+
+00:53:40,079 --> 00:53:44,839
+but uh to be honest at the moment most
+
+00:53:42,720 --> 00:53:47,760
+of our cutting edge models don't
+
+00:53:44,839 --> 00:53:49,240
+explicitly use linguistic structure um
+
+00:53:47,760 --> 00:53:50,799
+but I still think it's useful to know
+
+00:53:49,240 --> 00:53:52,760
+about it especially if you're working on
+
+00:53:50,799 --> 00:53:54,880
+multilingual things especially if you're
+
+00:53:52,760 --> 00:53:57,040
+interested in very robust generalization
+
+00:53:54,880 --> 00:53:58,920
+to new models so we're going to talk a
+
+00:53:57,040 --> 00:54:02,599
+little bit about that and also
+
+00:53:58,920 --> 00:54:06,079
+multilingual NLP I'm going to have
+
+00:54:02,599 --> 00:54:09,119
+a guest lecture so also if you have any suggestions
+
+00:54:06,079 --> 00:54:11,400
+um we have two guest lecture slots still
+
+00:54:09,119 --> 00:54:12,799
+open uh that I'm trying to fill so if
+
+00:54:11,400 --> 00:54:15,440
+you have any things that you really want
+
+00:54:12,799 --> 00:54:16,440
+to hear about um I could either add them
+
+00:54:15,440 --> 00:54:19,319
+to the
+
+00:54:16,440 --> 00:54:21,079
+existing you know content or I could
+
+00:54:19,319 --> 00:54:23,240
+invite a guest lecturer who's working on
+
+00:54:21,079 --> 00:54:24,079
+that topic so you know please feel free
+
+00:54:23,240 --> 00:54:26,760
+to tell
+
+00:54:24,079 --> 00:54:29,160
+me um then the class format and
+
+00:54:26,760 --> 00:54:32,280
+structure uh the class
+
+00:54:29,160 --> 00:54:34,000
+content my goal is to learn in detail
+
+00:54:32,280 --> 00:54:36,640
+about building NLP systems from a
+
+00:54:34,000 --> 00:54:40,520
+research perspective so this is a 700
+
+00:54:36,640 --> 00:54:43,599
+level course so it's aiming to be for
+
+00:54:40,520 --> 00:54:46,960
+people who really want to try new and
+
+00:54:43,599 --> 00:54:49,280
+innovative things in uh kind of natural
+
+00:54:46,960 --> 00:54:51,359
+language processing it's not going to
+
+00:54:49,280 --> 00:54:52,760
+focus solely on reimplementing
things
+
+00:54:51,359 --> 00:54:54,319
+that have been done before including in
+
+00:54:52,760 --> 00:54:55,280
+the project I'm going to be expecting
+
+00:54:54,319 --> 00:54:58,480
+everybody to do something
+
+00:54:55,280 --> 00:54:59,920
+that's kind of new whether it's coming
+
+00:54:58,480 --> 00:55:01,359
+up with a new method or applying
+
+00:54:59,920 --> 00:55:03,559
+existing methods to a place where they
+
+00:55:01,359 --> 00:55:05,079
+haven't been used before or building out
+
+00:55:03,559 --> 00:55:06,640
+things for a new language or something
+
+00:55:05,079 --> 00:55:08,359
+like that so that's kind of one of the
+
+00:55:06,640 --> 00:55:11,480
+major goals of this
+
+00:55:08,359 --> 00:55:13,000
+class um learn basic and advanced topics
+
+00:55:11,480 --> 00:55:15,559
+in machine learning approaches to NLP
+
+00:55:13,000 --> 00:55:18,359
+and language models learn some basic
+
+00:55:15,559 --> 00:55:21,480
+linguistic knowledge useful in NLP uh
+
+00:55:18,359 --> 00:55:23,200
+see case studies of NLP applications and
+
+00:55:21,480 --> 00:55:25,680
+learn how to identify unique problems
+
+00:55:23,200 --> 00:55:29,039
+for each um one thing I'd like to point
+
+00:55:25,680 --> 00:55:31,160
+out is I'm not going to cover every NLP
+
+00:55:29,039 --> 00:55:32,920
+application ever because that would be
+
+00:55:31,160 --> 00:55:35,520
+absolutely impossible NLP is being used
+
+00:55:32,920 --> 00:55:37,079
+in so many different areas nowadays but
+
+00:55:35,520 --> 00:55:38,960
+what I want people to pay attention to
+
+00:55:37,079 --> 00:55:41,280
+like even if you're not super interested
+
+00:55:38,960 --> 00:55:42,400
+in code generation for example what you
+
+00:55:41,280 --> 00:55:44,200
+can do is you can look at code
+
+00:55:42,400 --> 00:55:46,160
+generation look at how people identify
+
+00:55:44,200 --> 00:55:47,680
+problems look at the methods that people
+
+00:55:46,160 --> 00:55:50,880
+have proposed to solve those unique
+
+00:55:47,680 --> 00:55:53,039
+problems and then kind of map that try
+
+00:55:50,880 --> 00:55:54,799
+to do some generalization onto your own
+
+00:55:53,039 --> 00:55:57,799
+problems of interest so uh that's kind
+
+00:55:54,799 --> 00:56:00,280
+of the goal of the NLP
+
+00:55:57,799 --> 00:56:02,440
+applications finally uh learning how to
+
+00:56:00,280 --> 00:56:05,160
+debug when and where NLP systems fail
+
+00:56:02,440 --> 00:56:08,200
+and build improvements based on this so
+
+00:56:05,160 --> 00:56:10,200
+um ever since I was a graduate student
+
+00:56:08,200 --> 00:56:12,720
+this has been like one of the really
+
+00:56:10,200 --> 00:56:15,920
+important things that I feel like I've
+
+00:56:12,720 --> 00:56:17,440
+done well or done better than some other
+
+00:56:15,920 --> 00:56:19,280
+people and I I feel like it's a really
+
+00:56:17,440 --> 00:56:21,119
+good way to like even if you're only
+
+00:56:19,280 --> 00:56:22,680
+interested in improving accuracy knowing
+
+00:56:21,119 --> 00:56:25,039
+why your system's failing still is the
+
+00:56:22,680 --> 00:56:27,599
+best way to do that so I'm going to
+
+00:56:25,039 --> 00:56:30,559
+put a lot of emphasis on
+
+00:56:27,599 --> 00:56:32,559
+that in terms of the class format um
+
+00:56:30,559 --> 00:56:36,280
+before class for some classes there are
+
+00:56:32,559 --> 00:56:37,880
+recommended readings uh these can be
+
+00:56:36,280 --> 00:56:39,559
+helpful to read I'm never going to
+
+00:56:37,880 --> 00:56:41,119
+expect you to definitely have read it
+
+00:56:39,559 --> 00:56:42,480
+before the class but I would suggest
+
+00:56:41,119 --> 00:56:45,160
+that maybe you'll get more out of the
+
+00:56:42,480 --> 00:56:47,319
+class if you do that um during class
+
+00:56:45,160 --> 00:56:48,079
+we'll have the lecture um and discussion
+
+00:56:47,319 --> 00:56:50,559
+with
+
+00:56:48,079 --> 00:56:52,359
+everybody um sometimes we'll have a code
+
+00:56:50,559 --> 00:56:55,839
+or data walk
+
+00:56:52,359 --> 00:56:58,760
+um actually this is a a little bit old I
+
+00:56:55,839 --> 00:57:01,880
+I have this slide this year we're
+
+00:56:58,760 --> 00:57:04,160
+going to be adding more uh code and data
+
+00:57:01,880 --> 00:57:07,400
+walks during office hours and the way it
+
+00:57:04,160 --> 00:57:09,400
+will work is one of the TAs we have
+
+00:57:07,400 --> 00:57:11,160
+seven TAs who I'm going to introduce
+
+00:57:09,400 --> 00:57:15,000
+very soon but one of the TAs will be
+
+00:57:11,160 --> 00:57:16,839
+doing this kind of recitation where you
+
+00:57:15,000 --> 00:57:18,200
+um where we go over a library so if
+
+00:57:16,839 --> 00:57:19,480
+you're not familiar with the library and
+
+00:57:18,200 --> 00:57:21,960
+you want to be more familiar with the
+
+00:57:19,480 --> 00:57:23,720
+library you can join this and uh then
+
+00:57:21,960 --> 00:57:25,400
+we'll be able to do this and this will
+
+00:57:23,720 --> 00:57:28,240
+cover things like
+
+00:57:25,400 --> 00:57:31,039
+um PyTorch and SentencePiece uh we're
+
+00:57:28,240 --> 00:57:33,280
+going to start out with Hugging Face um
+
+00:57:31,039 --> 00:57:36,559
+inference stuff like
+
+00:57:33,280 --> 00:57:41,520
+vLLM uh debugging software like
+
+00:57:36,559 --> 00:57:41,520
+Zeno um what were the other
+
+00:57:41,960 --> 00:57:47,200
+ones oh the OpenAI API and LiteLLM
+
+00:57:45,680 --> 00:57:50,520
+other stuff like that so we we have lots
+
+00:57:47,200 --> 00:57:53,599
+of them planned we'll uh uh we'll update
+
+00:57:50,520 --> 00:57:54,839
+that um and then after class after
+
+00:57:53,599 --> 00:57:58,079
+almost every class we'll have a question
+
+00:57:54,839 --> 00:58:00,079
+quiz um and the quiz is intended to just
+
+00:57:58,079 --> 00:58:02,000
+you know make sure that you uh paid
+
+00:58:00,079 --> 00:58:04,480
+attention to the material and are able
+
+00:58:02,000 --> 00:58:07,520
+to answer questions about it we will aim
+
+00:58:04,480 --> 00:58:09,559
+to release it on the day of the course
+
+00:58:07,520 --> 00:58:11,599
+the day of the actual lecture and it
+
+00:58:09,559 --> 00:58:14,559
+will be due at the end of the day
+
+00:58:11,599 --> 00:58:15,960
+following the lecture so um it will be
+
+00:58:14,559 --> 00:58:18,920
+three questions it probably shouldn't
+
+00:58:15,960 --> 00:58:20,680
+take a whole lot of time but um uh yeah
+
+00:58:18,920 --> 00:58:23,400
+so we'll have
+
+00:58:20,680 --> 00:58:26,319
+that in terms of assignments assignment
+
+00:58:23,400 --> 00:58:28,640
+one is going to be build your own llama
+
+00:58:26,319 --> 00:58:30,200
+and so what this is going to look like
+
+00:58:28,640 --> 00:58:32,680
+is we're going to give you a partial
+
+00:58:30,200 --> 00:58:34,319
+implementation of llama which is kind of
+
+00:58:32,680 --> 00:58:37,960
+the most popular open source language
+
+00:58:34,319 --> 00:58:40,160
+model nowadays and ask you to fill in um
+
+00:58:37,960 --> 00:58:42,839
+ask you to fill in the parts we're going
+00:58:40,160 --> 00:58:45,920
+to train a very small version of llama
+
+00:58:42,839 --> 00:58:47,319
+on a small data set and get it to work
+
+00:58:45,920 --> 00:58:48,880
+and the reason why it's very small is
+
+00:58:47,319 --> 00:58:50,480
+because the smallest actual version of
+
+00:58:48,880 --> 00:58:53,039
+llama is 7 billion
+
+00:58:50,480 --> 00:58:55,359
+parameters um and that might be a little
+
+00:58:53,039 --> 00:58:58,400
+bit difficult to train with your
+
+00:58:55,359 --> 00:59:00,680
+resources um for assignment two we're
+
+00:58:58,400 --> 00:59:04,559
+going to try to do an NLP task from
+
+00:59:00,680 --> 00:59:06,920
+scratch and so the way this will work is
+
+00:59:04,559 --> 00:59:08,520
+we're going to give you an assignment
+
+00:59:06,920 --> 00:59:10,880
+where we're not going to give you an
+
+00:59:08,520 --> 00:59:13,400
+actual data set and instead we're going
+
+00:59:10,880 --> 00:59:15,760
+to ask you to uh perform data creation
+
+00:59:13,400 --> 00:59:19,359
+modeling and evaluation for a specified
+
+00:59:15,760 --> 00:59:20,640
+task and so we're going to tell you uh
+
+00:59:19,359 --> 00:59:22,599
+what to do but we're not going to tell
+
+00:59:20,640 --> 00:59:26,400
+you exactly how to do it but we're going
+
+00:59:22,599 --> 00:59:29,680
+to try to give as concrete directions as
+
+00:59:26,400 --> 00:59:32,359
+we can um
+
+00:59:29,680 --> 00:59:34,160
+yeah will you be given a parameter limit
+
+00:59:32,359 --> 00:59:36,559
+on the model so that's a good question
+
+00:59:34,160 --> 00:59:39,119
+or like an expense limit or something
+
+00:59:36,559 --> 00:59:40,440
+like that um maybe actually I should
+
+00:59:39,119 --> 00:59:44,240
+take a break from the assignments and
+
+00:59:40,440 --> 00:59:46,520
+talk about compute so right now um for
+
+00:59:44,240 --> 00:59:49,319
+assignment one we're planning on having
+
+00:59:46,520 --> 00:59:51,599
+this be able to be done either on a Mac
+
+00:59:49,319 --> 00:59:53,520
+laptop with an M1 or M2 processor which
+
+00:59:51,599 --> 00:59:57,079
+I think a lot of people have or Google
+
+00:59:53,520 --> 00:59:59,839
+Colab um so it should be like
+
+00:59:57,079 --> 01:00:02,160
+sufficient to use free computational
+
+00:59:59,839 --> 01:00:03,640
+resources that you have for number two
+
+01:00:02,160 --> 01:00:06,079
+we'll think about that I think that's
+
+01:00:03,640 --> 01:00:08,280
+important we do have $50 of Google Cloud
+
+01:00:06,079 --> 01:00:11,520
+credits for everybody and I'm
+
+01:00:08,280 --> 01:00:13,440
+working to get AWS credits for more um
+
+01:00:11,520 --> 01:00:18,160
+but the cloud providers nowadays are
+
+01:00:13,440 --> 01:00:19,680
+being very stingy so um so it's uh been
+
+01:00:18,160 --> 01:00:22,160
+a little bit of a fight to get uh
+
+01:00:19,680 --> 01:00:23,680
+credits but I I it is very important so
+
+01:00:22,160 --> 01:00:28,480
+I'm going to try to get as as many as we
+
+01:00:23,680 --> 01:00:31,119
+can um and so yeah I I think basically
+
+01:00:28,480 --> 01:00:32,280
+uh there will be some sort of like limit
+
+01:00:31,119 --> 01:00:34,480
+on the amount of things you can
+
+01:00:32,280 --> 01:00:36,240
+practically do and so because of that
+
+01:00:34,480 --> 01:00:39,920
+I'm hoping that people will rely very
+
+01:00:36,240 --> 01:00:43,359
+heavily on pre-trained models um or uh
+
+01:00:39,920 --> 01:00:46,079
+yeah pre-trained models
+
+01:00:43,359 --> 01:00:49,599
+and yeah so that that's the short
+01:00:46,079 --> 01:00:52,799
+story um the second thing uh
+
+01:00:49,599 --> 01:00:54,720
+assignment three is to do a survey of
+
+01:00:52,799 --> 01:00:57,920
+some sort of state-of-the-art
+
+01:00:54,720 --> 01:01:00,760
+research and do a reimplementation of
+
+01:00:57,920 --> 01:01:02,000
+this and in doing this again you will
+
+01:01:00,760 --> 01:01:03,440
+have to think about something that's
+
+01:01:02,000 --> 01:01:06,359
+feasible within computational
+
+01:01:03,440 --> 01:01:08,680
+constraints um and so you can discuss
+
+01:01:06,359 --> 01:01:11,839
+with your TAs about uh about the best
+
+01:01:08,680 --> 01:01:13,920
+way to do this um and then the final
+
+01:01:11,839 --> 01:01:15,400
+project is to perform a unique project
+
+01:01:13,920 --> 01:01:17,559
+that either improves on the state of the
+
+01:01:15,400 --> 01:01:21,000
+art with respect to whatever you would
+
+01:01:17,559 --> 01:01:23,440
+like to improve this could be uh
+
+01:01:21,000 --> 01:01:25,280
+accuracy for sure this could be
+
+01:01:23,440 --> 01:01:27,760
+efficiency
+
+01:01:25,280 --> 01:01:29,599
+it could be some sense of
+
+01:01:27,760 --> 01:01:31,520
+interpretability but if it's going to be
+
+01:01:29,599 --> 01:01:33,599
+something like interpretability you'll
+
+01:01:31,520 --> 01:01:35,440
+have to discuss with us what that means
+
+01:01:33,599 --> 01:01:37,240
+like how we measure that how we can like
+
+01:01:35,440 --> 01:01:40,839
+actually say that you did a good job
+
+01:01:37,240 --> 01:01:42,839
+with improving that um another thing
+
+01:01:40,839 --> 01:01:44,680
+that you can do is take whatever you
+
+01:01:42,839 --> 01:01:47,280
+implemented for assignment 3 and apply
+
+01:01:44,680 --> 01:01:49,039
+it to a new task or apply it to a new
+
+01:01:47,280 --> 01:01:50,760
+language that has never been examined
+
+01:01:49,039 --> 01:01:53,119
+before so these are also acceptable
+
+01:01:50,760 --> 01:01:54,240
+final projects but basically the idea is
+
+01:01:53,119 --> 01:01:55,559
+for the final project you need to do
+
+01:01:54,240 --> 01:01:57,480
+something new that hasn't been
+
+01:01:55,559 --> 01:01:59,880
+done before and create new knowledge
+
+01:01:57,480 --> 01:02:04,520
+with respect
+
+01:01:59,880 --> 01:02:07,640
+to it um so for this the instructor is me
+
+01:02:04,520 --> 01:02:09,920
+um I'm uh looking forward to you know
+
+01:02:07,640 --> 01:02:13,599
+discussing and working with all of you
+
+01:02:09,920 --> 01:02:16,119
+um for TAs we have seven TAs uh two of
+
+01:02:13,599 --> 01:02:18,319
+them are in transit so they're not here
+
+01:02:16,119 --> 01:02:22,279
+today um the other ones uh TAs would you
+
+01:02:18,319 --> 01:02:22,279
+mind coming up uh to introduce
+
+01:02:23,359 --> 01:02:26,359
+yourselves
+
+01:02:28,400 --> 01:02:32,839
+so um yeah Nhir and Akshai couldn't be
+
+01:02:31,599 --> 01:02:34,039
+here today because they're traveling
+
+01:02:32,839 --> 01:02:37,119
+I'll introduce them later because
+
+01:02:34,039 --> 01:02:37,119
+they're coming uh next
+
+01:02:40,359 --> 01:02:46,480
+time cool and what I'd like everybody to
+
+01:02:43,000 --> 01:02:48,680
+do is say um like you know what your
+
+01:02:46,480 --> 01:02:53,079
+name is uh what
+
+01:02:48,680 --> 01:02:55,799
+your like maybe what you're interested
+
+01:02:53,079 --> 01:02:57,319
+in um and the reason the goal of this is
+
+01:02:55,799 --> 01:02:59,200
+number one for everybody to know who you
+01:02:57,319 --> 01:03:00,720
+are and number two for everybody to know
+
+01:02:59,200 --> 01:03:03,440
+who the best person to talk to is if
+
+01:03:00,720 --> 01:03:03,440
+they're interested in something
+
+01:03:04,200 --> 01:03:09,079
+particular hi uh I'm
+
+01:03:07,000 --> 01:03:15,520
+Aila second
+
+01:03:09,079 --> 01:03:15,520
+year I work on language and social
+
+01:03:16,200 --> 01:03:24,559
+and I'm I'm a second year PhD
+
+01:03:21,160 --> 01:03:26,799
+student Grand and Shar with you my research
+
+01:03:24,559 --> 01:03:28,480
+like started at the border of NLP and
+
+01:03:26,799 --> 01:03:31,000
+computer interaction with a lot of work
+
+01:03:28,480 --> 01:03:32,640
+on automating parts of the developer
+
+01:03:31,000 --> 01:03:35,319
+experience to make it easier for anyone
+
+01:03:32,640 --> 01:03:35,319
+to
+
+01:03:39,090 --> 01:03:42,179
+[Music]
+
+01:03:47,520 --> 01:03:53,279
+orif
+
+01:03:50,079 --> 01:03:54,680
+everyone first
+
+01:03:53,279 --> 01:03:57,119
+year
+
+01:03:54,680 --> 01:04:00,119
+[Music]
+
+01:03:57,119 --> 01:04:03,559
+I don't like updating primar models I
+
+01:04:00,119 --> 01:04:03,559
+hope to not update primar
+
+01:04:14,599 --> 01:04:19,400
+models yeah thanks a lot everyone and
+
+01:04:17,200 --> 01:04:19,400
+yeah
+
+01:04:20,839 --> 01:04:29,400
+thanks and so we will um we'll have people
+
+01:04:25,640 --> 01:04:30,799
+uh kind of have office hours uh every TA
+
+01:04:29,400 --> 01:04:32,880
+has office hours at a regular time
+
+01:04:30,799 --> 01:04:34,480
+during the week uh please feel free to
+
+01:04:32,880 --> 01:04:38,400
+come to their office hours or my office
+
+01:04:34,480 --> 01:04:41,960
+hours um I think they are visha are they
+
+01:04:38,400 --> 01:04:43,880
+posted on the site or okay yeah they
+
+01:04:41,960 --> 01:04:47,240
+they either are or will be posted on the
+
+01:04:43,880 --> 01:04:49,720
+site very soon um and come by to talk
+
+01:04:47,240 --> 01:04:51,480
+about anything uh if there's nobody in
+
+01:04:49,720 --> 01:04:53,079
+my office hours I'm happy to talk about
+
+01:04:51,480 --> 01:04:54,599
+things that are unrelated but if there's
+
+01:04:53,079 --> 01:04:58,039
+lots of people waiting outside I
+
+01:04:54,599 --> 01:05:00,319
+might limit it to uh like um just things
+
+01:04:58,039 --> 01:05:02,480
+about the class so cool and we have
+
+01:05:00,319 --> 01:05:04,760
+Piazza we'll be checking that regularly
+
+01:05:02,480 --> 01:05:06,839
+uh striving to get you an answer in 24
+
+01:05:04,760 --> 01:05:12,240
+hours on weekdays over weekends we might
+
+01:05:06,839 --> 01:05:16,000
+not so um yeah so that's all for today
+
+01:05:12,240 --> 01:05:16,000
+are there any questions
diff --git a/CMU Advanced NLP 2024 (10) Retrieval and RAG/CMU Advanced NLP 2024 (10) Retrieval and RAG.mp4 b/CMU Advanced NLP 2024 (10) Retrieval and RAG/CMU Advanced NLP 2024 (10) Retrieval and RAG.mp4
new file mode 100644
index 0000000000000000000000000000000000000000..0450cbc450c6bee1ea166cb26dbb3d53e4f8a0d5
--- /dev/null
+++ b/CMU Advanced NLP 2024 (10) Retrieval and RAG/CMU Advanced NLP 2024 (10) Retrieval and RAG.mp4
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:81d3898858e07098de0177379421d5ba13d45835e73a1a41b3b7696c17d01774
+size 54642972
diff --git a/CMU Advanced NLP 2024 (10) Retrieval and RAG/metadata.json b/CMU Advanced NLP 2024 (10) Retrieval and RAG/metadata.json
new file mode 100644
index 
0000000000000000000000000000000000000000..b013bc462d6de16022ccb1b61a414a56acb9786c
--- /dev/null
+++ b/CMU Advanced NLP 2024 (10) Retrieval and RAG/metadata.json
@@ -0,0 +1,4 @@
+{
+    "url": "https://www.youtube.com/watch?v=WQYi-1mvGDM",
+    "title": "CMU Advanced NLP 2024 (10) Retrieval and RAG"
+}
\ No newline at end of file
diff --git a/CMU Advanced NLP 2024 (10) Retrieval and RAG/transcript.srt b/CMU Advanced NLP 2024 (10) Retrieval and RAG/transcript.srt
new file mode 100644
index 0000000000000000000000000000000000000000..45fd4f797e42a00006c7cad78b7ddcd8f3371f33
--- /dev/null
+++ b/CMU Advanced NLP 2024 (10) Retrieval and RAG/transcript.srt
@@ -0,0 +1,5379 @@
+1
+00:00:00,040 --> 00:00:03,880
+so today I'm going to talk about
+
+2
+00:00:01,319 --> 00:00:06,680
+retrieval and retrieval augmented
+
+3
+00:00:03,880 --> 00:00:09,040
+generation so if we look at our standard
+
+4
+00:00:06,680 --> 00:00:10,880
+prompting flow normally what we do is we
+
+5
+00:00:09,040 --> 00:00:14,160
+combine together a prompt template with
+
+6
+00:00:10,880 --> 00:00:16,600
+an input so if we say please answer this
+
+7
+00:00:14,160 --> 00:00:18,720
+question I think Vin Diesel has been a
+
+8
+00:00:16,600 --> 00:00:21,000
+voice actor for several pictures and TV
+
+9
+00:00:18,720 --> 00:00:24,000
+series do you know what their names
+
+10
+00:00:21,000 --> 00:00:25,400
+are we could get a response from a
+
+11
+00:00:24,000 --> 00:00:26,840
+language model but there are several
+
+12
+00:00:25,400 --> 00:00:30,840
+problems with
+
+13
+00:00:26,840 --> 00:00:33,680
+this the first is accuracy issues
+
+14
+00:00:30,840 --> 00:00:36,160
+the models generally have a knowledge
+
+15
+00:00:33,680 --> 00:00:38,879
+cut off so the parameters are usually
+
+16
+00:00:36,160 --> 00:00:41,120
+only updated to a particular time so for
+
+17
+00:00:38,879 --> 00:00:43,200
+example if a new Vin Diesel TV series
+
+18
+00:00:41,120 --> 00:00:44,960
+comes out then the model that was
+
+19
+00:00:43,200 --> 00:00:47,440
+trained up to a certain time point won't
+
+20
+00:00:44,960 --> 00:00:51,000
+be able to know anything about
+
+21
+00:00:47,440 --> 00:00:53,600
+it there's also issues of private data
+
+22
+00:00:51,000 --> 00:00:55,320
+so data stored in private text or data
+
+23
+00:00:53,600 --> 00:00:57,840
+repositories is not suitable for
+
+24
+00:00:55,320 --> 00:01:02,600
+training for a number of reasons number
+
+25
+00:00:57,840 --> 00:01:05,199
+one it's not available to particular
+
+26
+00:01:02,600 --> 00:01:07,799
+language model training providers such
+
+27
+00:01:05,199 --> 00:01:10,720
+as you know OpenAI or Google or anybody
+
+28
+00:01:07,799 --> 00:01:13,840
+else like this the second thing is
+
+29
+00:01:10,720 --> 00:01:16,799
+access control issues so even if you're
+
+30
+00:01:13,840 --> 00:01:17,840
+within an organization that has lots of
+
+31
+00:01:16,799 --> 00:01:20,799
+private data and you can train a
+
+32
+00:01:17,840 --> 00:01:22,600
+language model on that certain people in
+
+33
+00:01:20,799 --> 00:01:24,200
+the organization may have access to
+
+34
+00:01:22,600 --> 00:01:27,640
+certain varieties of data and other
+
+35
+00:01:24,200 --> 00:01:29,400
+people may not so it's not just solely
+
+36
+00:01:27,640 --> 00:01:31,520
+an issue of third party providers it's
+
+37
+00:01:29,400 --> 00:01:33,840
+an issue of organization level access
+
+38
+00:01:31,520 --> 00:01:36,159
+control in
+general in addition there are
learning
+
+39
+00:01:36,159 --> 00:01:40,320
+failures so even for data that the model
+
+40
+00:01:38,920 --> 00:01:42,640
+was trained on it might not be
+
+41
+00:01:40,320 --> 00:01:44,399
+sufficient to get the right answer and
+
+42
+00:01:42,640 --> 00:01:47,799
+this is particularly the case for very
+
+43
+00:01:44,399 --> 00:01:52,320
+very large uh training data sets and
+
+44
+00:01:47,799 --> 00:01:53,920
+models that are you know modestly sized
+
+45
+00:01:52,320 --> 00:01:55,880
+because the models very often won't be
+
+46
+00:01:53,920 --> 00:01:58,360
+able to learn from a single look at a
+
+47
+00:01:55,880 --> 00:02:02,039
+particular fact or or whatever else like
+
+48
+00:01:58,360 --> 00:02:02,039
+this especially if it appeared early in
+
+49
+00:02:02,159 --> 00:02:08,160
+training another thing is even if the
+
+50
+00:02:05,240 --> 00:02:10,599
+answer is correct it might not be
+
+51
+00:02:08,160 --> 00:02:13,440
+verifiable so you might want to be very
+
+52
+00:02:10,599 --> 00:02:15,000
+sure that the model is not making any
+
+53
+00:02:13,440 --> 00:02:17,640
+accuracy
+
+54
+00:02:15,000 --> 00:02:19,040
+mistakes and so in order to do that very
+
+55
+00:02:17,640 --> 00:02:21,879
+often a human will want to go back to
+
+56
+00:02:19,040 --> 00:02:21,879
+the source of the
+
+57
+00:02:22,200 --> 00:02:27,319
+data so to solve this there's a method
+
+58
+00:02:25,480 --> 00:02:29,200
+called retrieval augmented generation
+
+59
+00:02:27,319 --> 00:02:30,280
+which will also be the topic of our
+
+60
+00:02:29,200 --> 00:02:32,599
+second assignment
+
+61
+00:02:30,280 --> 00:02:35,680
+here and the way it works is you
+
+62
+00:02:32,599 --> 00:02:38,319
+retrieve relevant passages
+
+63
+00:02:35,680 --> 00:02:40,680
+efficiently ones that kind of entail the
+
+64
+00:02:38,319 --> 00:02:42,480
+answer to a question and then read the
+
+65
+00:02:40,680 --> 00:02:46,080
+passages to answer the
+
+66
+00:02:42,480 --> 00:02:48,599
+query so we have documents like this we
+
+67
+00:02:46,080 --> 00:02:52,360
+have a query based on the query we perform
+
+68
+00:02:48,599 --> 00:02:55,360
+retrieval we get a whole bunch of uh
+
+69
+00:02:52,360 --> 00:02:57,560
+passages we do reading and then we get
+
+70
+00:02:55,360 --> 00:02:57,560
+the
+
+71
+00:02:58,280 --> 00:03:04,440
+answer so this is in fact implemented in
+
+72
+00:03:01,720 --> 00:03:07,599
+many or even most uh language modeling
+
+73
+00:03:04,440 --> 00:03:09,840
+providers including OpenAI so to give
+
+74
+00:03:07,599 --> 00:03:11,480
+an example I asked the question that I
+
+75
+00:03:09,840 --> 00:03:12,879
+just said about Vin Diesel's voice
+
+76
+00:03:11,480 --> 00:03:16,599
+acting and TV
+
+77
+00:03:12,879 --> 00:03:19,760
+series and ChatGPT gave me an answer
+
+78
+00:03:16,599 --> 00:03:22,440
+and you can see that ChatGPT's answer
+
+79
+00:03:19,760 --> 00:03:24,720
+includes several places with quotes um
+
+80
+00:03:22,440 --> 00:03:28,159
+they the little blue quotes
+
+81
+00:03:24,720 --> 00:03:30,760
+there and if you click on the quote it
+
+82
+00:03:28,159 --> 00:03:33,120
+tells you where the information source
+
+83
+00:03:30,760 --> 00:03:35,000
+came from and so this one says Behind
+
+84
+00:03:33,120 --> 00:03:37,760
+The Voice Actors Vin
+
+85
+00:03:35,000 --> 00:03:39,920
+Diesel and Behind The Voice Actors TV
+
+86
+00:03:37,760 --> 00:03:42,959
+shows Big Mouth Vin
+
+87
+00:03:39,920 --> 00:03:45,640
+Diesel
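+
+A minimal retrieve-then-read sketch of the flow just described; the index
+and llm objects and their search and generate methods are placeholders for
+whatever retriever and language model you use, not anything specific from
+the lecture:
+
+def rag_answer(query, index, llm):
+    passages = index.search(query, k=5)   # retrieve relevant passages
+    context = "\n\n".join(passages)
+    prompt = ("Answer the question using only the passages below.\n\n"
+              f"{context}\n\nQuestion: {query}\nAnswer:")
+    return llm.generate(prompt)           # read the passages and answer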
+88
+00:03:42,959 --> 00:03:48,640
+now if we look closer into this answer
+we'll see that
+
+89
+00:03:45,640 --> 00:03:49,959
+it's not perfect even though it is uh
+
+90
+00:03:48,640 --> 00:03:52,519
+performing retrieval augmented
+
+91
+00:03:49,959 --> 00:03:54,840
+generation so for example I only asked
+
+92
+00:03:52,519 --> 00:03:57,200
+about TV series but it's giving me lots
+
+93
+00:03:54,840 --> 00:03:59,680
+of things about movies where it says
+
+94
+00:03:57,200 --> 00:04:01,319
+Groot in Guardians of the Galaxy Volume
+
+95
+00:03:59,680 --> 00:04:04,480
+3 a 2023
+
+96
+00:04:01,319 --> 00:04:07,200
+movie and in fact uh Vin Diesel was not
+
+97
+00:04:04,480 --> 00:04:10,920
+even voicing a character named Groot here
+
+98
+00:04:07,200 --> 00:04:13,480
+so that's definitely an accuracy
+
+99
+00:04:10,920 --> 00:04:15,079
+mistake and separately there's a place
+
+100
+00:04:13,480 --> 00:04:17,639
+where it says additionally though the
+
+101
+00:04:15,079 --> 00:04:19,959
+website for Big Mouth lists Vin Diesel it
+
+102
+00:04:17,639 --> 00:04:22,040
+appears to be a misunderstanding or error
+
+103
+00:04:19,959 --> 00:04:25,360
+as Nick Kroll is credited as the voice
+
+104
+00:04:22,040 --> 00:04:27,800
+of Vin Diesel in that show so there
+
+105
+00:04:25,360 --> 00:04:30,039
+actually Nick Kroll was acting as Vin
+
+106
+00:04:27,800 --> 00:04:32,800
+Diesel but that's um kind of a
+
+107
+00:04:30,039 --> 00:04:34,600
+misunderstanding of the reader model but
+
+108
+00:04:32,800 --> 00:04:36,600
+anyway you can get the general idea here
+
+109
+00:04:34,600 --> 00:04:40,199
+you can also see that it's not perfect
+
+110
+00:04:36,600 --> 00:04:42,720
+even for very strong models like GPT-
+
+111
+00:04:40,199 --> 00:04:44,800
+4 so now I'd like to go into the actual
+
+112
+00:04:42,720 --> 00:04:46,759
+methodology that we use for this uh we
+
+113
+00:04:44,800 --> 00:04:50,360
+have retrieval
+
+114
+00:04:46,759 --> 00:04:53,160
+methods and for the retrieval methods we
+
+115
+00:04:50,360 --> 00:04:55,160
+have uh quite a few different options
+
+116
+00:04:53,160 --> 00:04:57,960
+I'm going to go through each one of them
+
+117
+00:04:55,160 --> 00:05:00,960
+at a time so sparse retrieval document
+
+118
+00:04:57,960 --> 00:05:04,240
+level dense retrieval token level dense
+
+119
+00:05:00,960 --> 00:05:08,039
+retrieval cross-encoder reranking and
+
+120
+00:05:04,240 --> 00:05:09,320
+blackbox retrieval so blackbox retrieval
+
+121
+00:05:08,039 --> 00:05:11,280
+I'm not really going to go into it a
+
+122
+00:05:09,320 --> 00:05:16,000
+whole lot basically this is just asking
+
+123
+00:05:11,280 --> 00:05:17,560
+a blackbox search engine to retrieve uh
+
+124
+00:05:16,000 --> 00:05:20,000
+you know the relevant context and
+
+125
+00:05:17,560 --> 00:05:22,560
+getting the top several results
+
+126
+00:05:20,000 --> 00:05:24,039
+nonetheless this is a pretty you know
+
+127
+00:05:22,560 --> 00:05:26,800
+reasonable method to do it if you want
+
+128
+00:05:24,039 --> 00:05:29,080
+to do search over you know lots of data
+
+129
+00:05:26,800 --> 00:05:32,759
+that exists on the internet already and
+
+130
+00:05:29,080 --> 00:05:36,600
+that in fact is what ChatGPT does it looks
+
+131
+00:05:32,759 --> 00:05:39,240
+up on Bing by generating a query to
+
+132
+00:05:36,600 --> 00:05:41,560
+Bing so anyway let's go into the actual
+
+133
+00:05:39,240 --> 00:05:43,840
+methods that you develop and control
+
+134
+00:05:41,560 --> 00:05:46,600
+yourself so the first one is sparse
+
+135
+00:05:43,840 --> 00:05:48,479
+retrieval and the way this works is you
+
+136
+00:05:46,600 --> 00:05:50,440
+express the query and document as a
+
+137
+00:05:48,479 --> 00:05:53,680
+sparse word frequency vector usually
+
+138
+00:05:50,440 --> 00:05:58,759
+normalized by length and so if I ask uh
+
+139
+00:05:53,680 --> 00:06:01,720
+query what is NLP we get a vector where
+
+140
+00:05:58,759 --> 00:06:04,120
+each row of the vector corresponds to a
+
+141
+00:06:01,720 --> 00:06:07,919
+different
+
+142
+00:06:04,120 --> 00:06:12,960
+token and we asked what is
+
+143
+00:06:07,919 --> 00:06:16,360
+NLP and so uh the places for what NLP
+
+144
+00:06:12,960 --> 00:06:18,199
+and is will all have a non-zero value
+
+145
+00:06:16,360 --> 00:06:20,199
+and everything else will have a zero
+
+146
+00:06:18,199 --> 00:06:21,720
+value and we also normalize by the
+
+147
+00:06:20,199 --> 00:06:24,120
+length of the vectors so we get something
+
+148
+00:06:21,720 --> 00:06:24,120
+like
+
+149
+00:06:24,840 --> 00:06:28,440
+0.33, 0.33, 0.33 then we have a whole bunch of
+
+150
+00:06:26,759 --> 00:06:30,720
+documents so the first document says
+
+151
+00:06:28,440 --> 00:06:31,759
+what is life candy is life someone really
+
+152
+00:06:30,720 --> 00:06:33,960
+likes
+
+153
+00:06:31,759 --> 00:06:36,000
+candy we also have another one that says
+
+154
+00:06:33,960 --> 00:06:38,360
+NLP is an acronym for natural language
+
+155
+00:06:36,000 --> 00:06:39,479
+processing so this is a pretty good uh
+
+156
+00:06:38,360 --> 00:06:42,479
+you
+
+157
+00:06:39,479 --> 00:06:44,840
+know answer to our
+
+158
+00:06:42,479 --> 00:06:48,039
+question then we also have I like to do
+
+159
+00:06:44,840 --> 00:06:49,360
+good research on NLP which is you know a
+
+160
+00:06:48,039 --> 00:06:51,360
+nice sentiment but not a very good
+
+161
+00:06:49,360 --> 00:06:54,400
+answer to our question I
+
+162
+00:06:51,360 --> 00:06:59,479
+guess so if we look at the vectors here
+
+163
+00:06:54,400 --> 00:07:03,280
+we have uh what and candy and is have uh
+
+164
+00:06:59,479 --> 00:07:07,120
+a fairly high
+
+165
+00:07:03,280 --> 00:07:12,520
+score and we have here NLP and is have a
+
+166
+00:07:07,120 --> 00:07:16,479
+high score and NLP has a nonzero
+
+167
+00:07:12,520 --> 00:07:18,400
+score so based on this um we find the
+
+168
+00:07:16,479 --> 00:07:20,560
+document with the highest
+
+169
+00:07:18,400 --> 00:07:22,039
+inner product or cosine similarity in
+
+170
+00:07:20,560 --> 00:07:24,360
+the document
+
+171
+00:07:22,039 --> 00:07:27,000
+collection and so if we take the inner
+
+172
+00:07:24,360 --> 00:07:28,759
+product between these vectors we
+
+173
+00:07:27,000 --> 00:07:31,280
+actually see that the first one got the
+
+174
+00:07:28,759 --> 00:07:34,479
+highest score because of its
+
+175
+00:07:31,280 --> 00:07:37,440
+relatively high values for the words
+
+176
+00:07:34,479 --> 00:07:37,440
+what and
+
+177
+00:07:38,160 --> 00:07:43,759
+is
+
+178
+00:07:40,199 --> 00:07:46,720
+so as you can see common words like what
+
+179
+00:07:43,759 --> 00:07:49,000
+and is can get a high score kind of
+
+180
+00:07:46,720 --> 00:07:51,800
+regardless of whether a document is very
+
+181
+00:07:49,000 --> 00:07:53,919
+relevant and so one way we can fix this
+
+182
+00:07:51,800 --> 00:07:55,960
+is through something called term
+
+183
+00:07:53,919 --> 00:07:59,479
+weighting and the way that term weighting
+
+184
+00:07:55,960 --> 00:08:02,680
+works is in addition to having
this
+
+185
+00:07:59,479 --> 00:08:04,599
+vector that
+
+186
+00:08:02,680 --> 00:08:07,680
+calculates
+
+187
+00:08:04,599 --> 00:08:10,680
+the frequency within a particular
+
+188
+00:08:07,680 --> 00:08:13,639
+document we also have an upweighting
+
+189
+00:08:10,680 --> 00:08:15,599
+term that gives higher weight to low
+
+190
+00:08:13,639 --> 00:08:18,199
+frequency words because low frequency
+
+191
+00:08:15,599 --> 00:08:20,280
+words like NLP tend to be more
+
+192
+00:08:18,199 --> 00:08:22,759
+informative about whether the document
+
+193
+00:08:20,280 --> 00:08:25,240
+is relevant than high frequency words
+
+194
+00:08:22,759 --> 00:08:27,080
+like what and is because these high
+
+195
+00:08:25,240 --> 00:08:31,320
+frequency words like what and is could
+
+196
+00:08:27,080 --> 00:08:34,279
+happen kind of regardless of whether
+
+197
+00:08:31,320 --> 00:08:36,680
+the you know document is relevant to the
+
+198
+00:08:34,279 --> 00:08:41,800
+particular terms the person is asking
+
+199
+00:08:36,680 --> 00:08:44,000
+about so one well used and easy to
+
+200
+00:08:41,800 --> 00:08:46,560
+understand version of this is uh TF-IDF
+
+201
+00:08:44,000 --> 00:08:48,839
+or term frequency inverse document
+
+202
+00:08:46,560 --> 00:08:51,200
+frequency so the way we define term
+
+203
+00:08:48,839 --> 00:08:52,959
+frequency is exactly what I talked about
+
+204
+00:08:51,200 --> 00:08:56,959
+before so it's basically the frequency
+
+205
+00:08:52,959 --> 00:08:59,839
+of the term uh t in the document d
+
+206
+00:08:56,959 --> 00:09:01,640
+normalized by the total term frequency
+
+207
+00:08:59,839 --> 00:09:03,680
+within the document so that that's what
+
+208
+00:09:01,640 --> 00:09:06,800
+I already showed in the previous
+
+209
+00:09:03,680 --> 00:09:09,360
+slide and then inverse document frequency is a
+
+210
+00:09:06,800 --> 00:09:13,760
+little bit more involved but basically
+
+211
+00:09:09,360 --> 00:09:15,760
+the way this works is we have log of the
+
+212
+00:09:13,760 --> 00:09:18,160
+total number of documents in the
+
+213
+00:09:15,760 --> 00:09:24,040
+collection divided
+
+214
+00:09:18,160 --> 00:09:26,760
+by the number of uh documents this
+
+215
+00:09:24,040 --> 00:09:30,279
+term appeared in any
+
+216
+00:09:26,760 --> 00:09:33,360
+document and so if a term appears in many
+
+217
+00:09:30,279 --> 00:09:36,120
+documents it will
+
+218
+00:09:33,360 --> 00:09:39,240
+have a low IDF score uh one that's close
+
+219
+00:09:36,120 --> 00:09:41,519
+to zero but if it rarely appears it will
+
+220
+00:09:39,240 --> 00:09:44,120
+have a high IDF score so basically this
+
+221
+00:09:41,519 --> 00:09:45,040
+is upweighting our infrequent terms and
+
+222
+00:09:44,120 --> 00:09:47,560
+then for
+
+223
+00:09:45,040 --> 00:09:51,320
+TF-IDF uh we basically multiply these two
+
+224
+00:09:47,560 --> 00:09:53,120
+terms together and we upweight the low
+
+225
+00:09:51,320 --> 00:09:55,640
+frequency
+
+226
+00:09:53,120 --> 00:09:58,400
+words there's another version of this
+
+227
+00:09:55,640 --> 00:10:03,640
+called BM25 that is uh widely used
+
+228
+00:10:00,519 --> 00:10:05,800
+um this is more involved so I'm not
+
+229
+00:10:03,640 --> 00:10:08,120
+going to go into all of the details but
+
+230
+00:10:05,800 --> 00:10:12,399
+basically if you remember back to the
+
+231
+00:10:08,120 --> 00:10:13,720
+lecture on count-based language models
+
+232
+00:10:12,399 --> 00:10:14,880
+there were a bunch of smoothing
+
+233
+00:10:13,720 --> 00:10:18,839
+techniques for these count-based
+
+234
+00:10:14,880 --> 00:10:21,839
+language models and this uses uh kind of
+
+235
+00:10:18,839 --> 00:10:25,839
+a multiplicative additive smoothing
+
+236
+00:10:21,839 --> 00:10:27,160
+term to upweight things instead of using
+
+237
+00:10:25,839 --> 00:10:30,200
+the term
+
+238
+00:10:27,160 --> 00:10:33,399
+frequency and uh the actual formula is
+
+239
+00:10:30,200 --> 00:10:37,240
+here k and b are kind of
+
+240
+00:10:33,399 --> 00:10:39,360
+hyperparameters and um avgdl is the
+
+241
+00:10:37,240 --> 00:10:40,639
+average document length the details of
+
+242
+00:10:39,360 --> 00:10:42,120
+this are not really important but
+
+243
+00:10:40,639 --> 00:10:43,800
+basically what you should know is that
+
+244
+00:10:42,120 --> 00:10:45,639
+this is doing some smoothing on the term
+
+245
+00:10:43,800 --> 00:10:48,240
+frequencies and you can look in more
+
+246
+00:10:45,639 --> 00:10:48,240
+detail if you're
+
+247
+00:10:49,160 --> 00:10:54,920
+interested so now that we have this sort
+
+248
+00:10:52,880 --> 00:10:57,959
+of term
+
+249
+00:10:54,920 --> 00:11:00,320
+based uh sparse vector we would like to
+
+250
+00:10:57,959 --> 00:11:03,320
+use this to look up relevant documents
+
+251
+00:11:00,320 --> 00:11:06,000
+in a collection very quickly because you
+
+252
+00:11:03,320 --> 00:11:08,000
+know we might have a collection that's
+
+253
+00:11:06,000 --> 00:11:09,720
+extremely large like as large as the
+
+254
+00:11:08,000 --> 00:11:12,320
+entire internet like what Google is
+
+255
+00:11:09,720 --> 00:11:14,160
+doing when it searches and so in order
+
+256
+00:11:12,320 --> 00:11:16,240
+to solve this we need a data structure
+
+257
+00:11:14,160 --> 00:11:17,279
+that allows for efficient sparse lookup
+
+258
+00:11:16,240 --> 00:11:19,480
+of
+
+259
+00:11:17,279 --> 00:11:23,720
+vectors and so we have all of these
+
+260
+00:11:19,480 --> 00:11:27,279
+sparse vectors like this
+
+261
+00:11:23,720 --> 00:11:31,240
+and we uh basically turn this into an
+
+262
+00:11:27,279 --> 00:11:34,720
+index where we have something like a you
+
+263
+00:11:31,240 --> 00:11:37,920
+know Python style dictionary or map that
+
+264
+00:11:34,720 --> 00:11:41,079
+has as the key each uh word we would
+
+265
+00:11:37,920 --> 00:11:45,000
+like to look up and as the value
+
+266
+00:11:41,079 --> 00:11:48,480
+the corresponding um indices of the
+
+267
+00:11:45,000 --> 00:11:50,480
+documents so for example what in our case
+
+268
+00:11:48,480 --> 00:11:54,200
+here only appears in document one so it
+
+269
+00:11:50,480 --> 00:11:56,279
+would point to document one candy uh
+
+270
+00:11:54,200 --> 00:11:58,560
+also appears in document one NLP appears
+
+271
+00:11:56,279 --> 00:11:59,839
+in two and three and so you can create
+
+272
+00:11:58,560 --> 00:12:02,760
+this index like this and this is
+
+273
+00:11:59,839 --> 00:12:02,760
+called an inverted
+
+274
+00:12:03,079 --> 00:12:08,760
+index this is an important application
+
+275
+00:12:06,000 --> 00:12:11,600
+of course so there's lots of software
+
+276
+00:12:08,760 --> 00:12:14,920
+the most kind of typical software for this
+
+277
+00:12:11,600 --> 00:12:18,760
+is Apache Lucene so if you want to build
+
+278
+00:12:14,920 --> 00:12:21,639
+a big index uh to look up vectors using
+
+279
+00:12:18,760 --> 00:12:24,160
+this sparse index like this you can uh
+
+280
+00:12:21,639 --> 00:12:24,160
+take a look at Lucene
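+
+A toy sketch in Python tying together the TF-IDF weighting defined above and
+the inverted index just described; the three documents paraphrase the
+lecture's example, and none of this is the course's actual code or Lucene:
+
+import math
+from collections import Counter, defaultdict
+
+docs = ["what is life candy is life someone really likes candy",
+        "NLP is an acronym for natural language processing",
+        "I like to do good research on NLP"]
+
+# term frequency: within-document counts normalized by document length
+tfs = []
+for d in docs:
+    counts = Counter(d.split())
+    total = sum(counts.values())
+    tfs.append({w: c / total for w, c in counts.items()})
+
+def idf(word):
+    # log of collection size over the number of documents containing the word
+    df = sum(1 for tf in tfs if word in tf)
+    return math.log(len(docs) / df) if df else 0.0
+
+def tfidf_score(query, tf):
+    return sum(tf.get(w, 0.0) * idf(w) for w in query.split())
+
+# inverted index: each word points at the ids of documents containing it,
+# so only candidate documents ever need to be scored
+inverted = defaultdict(set)
+for doc_id, d in enumerate(docs, start=1):
+    for word in d.split():
+        inverted[word].add(doc_id)
+
+print(inverted["NLP"])  # {2, 3}
+candidates = set().union(*(inverted[w] for w in "what is NLP".split()))
+for i in sorted(candidates):
+    print(i, round(tfidf_score("what is NLP", tfs[i - 1]), 3))
+
+Note that in a collection of only three documents the idf estimates are very
+noisy; in a realistic collection, frequent words like what and is appear in
+many documents, which drives their idf (and their influence) toward zero.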
275
00:12:03,079 --> 00:12:08,760
This is an important application

276
00:12:06,000 --> 00:12:11,600
of course so there's lots of software

277
00:12:08,760 --> 00:12:14,920
the most kind of typical software for this

278
00:12:11,600 --> 00:12:18,760
is Apache Lucene so if you want to build

279
00:12:14,920 --> 00:12:21,639
a big index uh to look up vectors using

280
00:12:18,760 --> 00:12:24,160
a sparse index like this you can uh

281
00:12:21,639 --> 00:12:24,160
take a look at

282
00:12:26,160 --> 00:12:30,880
Lucene so the next thing I'd like to talk

283
00:12:28,399 --> 00:12:33,199
about is dense retrieval and the way

284
00:12:30,880 --> 00:12:36,000
dense retrieval works is you encode the

285
00:12:33,199 --> 00:12:37,240
document and query into a dense vector

286
00:12:36,000 --> 00:12:40,240
and find the nearest

287
00:12:37,240 --> 00:12:42,160
neighbor in order to do this encoding

288
00:12:40,240 --> 00:12:44,639
you can use a number of things you can

289
00:12:42,160 --> 00:12:47,440
use out of the box embeddings or you can

290
00:12:44,639 --> 00:12:49,959
use learned embeddings specifically

291
00:12:47,440 --> 00:12:53,519
created for the purpose of

292
00:12:49,959 --> 00:12:56,240
retrieval and so what we do is we take

293
00:12:53,519 --> 00:12:57,920
all of these uh documents here we

294
00:12:56,240 --> 00:12:59,920
convert them into embeddings using

295
00:12:57,920 --> 00:13:04,040
whatever embedding method that we want

296
00:12:59,920 --> 00:13:05,920
to use we then have a query and we take

297
00:13:04,040 --> 00:13:07,720
that query and we match it and find the

298
00:13:05,920 --> 00:13:10,040
nearest neighbor

299
00:13:07,720 --> 00:13:13,120
here so if you're just using out of the

300
00:13:10,040 --> 00:13:14,839
box embeddings you don't need to um you

301
00:13:13,120 --> 00:13:15,880
know do anything special for retrieval

302
00:13:14,839 --> 00:13:18,440
you can just take your favorite

303
00:13:15,880 --> 00:13:22,800
embeddings like the Sentence-BERT

304
00:13:18,440 --> 00:13:25,639
embeddings or the OpenAI uh Ada

305
00:13:22,800 --> 00:13:27,240
embeddings or something like this but

306
00:13:25,639 --> 00:13:29,519
actually the type of embeddings you need

307
00:13:27,240 --> 00:13:32,040
for retrieval are kind of

308
00:13:29,519 --> 00:13:33,519
very special and because of that it's

309
00:13:32,040 --> 00:13:36,160
important

310
00:13:33,519 --> 00:13:38,600
to, if you're very serious about doing a

311
00:13:36,160 --> 00:13:39,800
good job of retrieval, it's important to use

312
00:13:38,600 --> 00:13:41,360
embeddings that were specifically

313
00:13:39,800 --> 00:13:45,040
tailored for

314
00:13:41,360 --> 00:13:47,680
retrieval and the reason why it is

315
00:13:45,040 --> 00:13:50,079
important to do this is several-fold but

316
00:13:47,680 --> 00:13:53,800
the most intuitive way to think about it

317
00:13:50,079 --> 00:13:57,600
is if we think about uh the things that

318
00:13:53,800 --> 00:13:59,440
TF-IDF does, TF-IDF is giving a very high

319
00:13:57,600 --> 00:14:03,000
weight to

320
00:13:59,440 --> 00:14:04,959
contentful words and rare words and

321
00:14:03,000 --> 00:14:06,639
we're not guaranteed that any random

322
00:14:04,959 --> 00:14:10,600
embedding that we get is going to do

323
00:14:06,639 --> 00:14:13,800
that so for example if we just take the

324
00:14:10,600 --> 00:14:16,160
average word embeddings of every word in

325
00:14:13,800 --> 00:14:20,160
a sequence it's going to give the same

326
00:14:16,160 --> 00:14:22,320
weight to all of the words um in the

327
00:14:20,160 --> 00:14:24,680
output and in fact common words tend to

328
00:14:22,320 --> 00:14:27,959
have slightly higher norms than

329
00:14:24,680 --> 00:14:29,639
infrequent words and so that would

330
+00:14:27,959 --> 00:14:31,880 +actually upli common wordss which is + +331 +00:14:29,639 --> 00:14:34,639 +kind of exactly the opposite thing we + +332 +00:14:31,880 --> 00:14:36,480 +want so how do we learn retrieval + +333 +00:14:34,639 --> 00:14:39,160 +oriented + +334 +00:14:36,480 --> 00:14:40,920 +embeddings the normal way we do this is + +335 +00:14:39,160 --> 00:14:43,399 +we select positive and negative + +336 +00:14:40,920 --> 00:14:46,839 +documents and then train using a + +337 +00:14:43,399 --> 00:14:50,240 +contrastive loss and so an example of + +338 +00:14:46,839 --> 00:14:52,519 +this is we have a query and then we have + +339 +00:14:50,240 --> 00:14:55,519 +negative documents for that query and we + +340 +00:14:52,519 --> 00:14:58,199 +have positive documents for that query + +341 +00:14:55,519 --> 00:15:00,079 +and uh we form formulate a hinge loss or + +342 +00:14:58,199 --> 00:15:04,000 +maybe some sort of probabilistic loss + +343 +00:15:00,079 --> 00:15:06,560 +similar to the Hench loss and uh do fine + +344 +00:15:04,000 --> 00:15:06,560 +tuning of the + +345 +00:15:07,160 --> 00:15:13,440 +embeddings so if + +346 +00:15:09,399 --> 00:15:16,320 +you have gold standard positive + +347 +00:15:13,440 --> 00:15:18,800 +documents then this is relatively easy + +348 +00:15:16,320 --> 00:15:21,040 +to train uh because you just need the + +349 +00:15:18,800 --> 00:15:23,800 +positive documents and then you can get + +350 +00:15:21,040 --> 00:15:25,959 +Negative documents in a number of ways + +351 +00:15:23,800 --> 00:15:29,279 +one common way of getting negative + +352 +00:15:25,959 --> 00:15:32,279 +documents is you just form a batch of + +353 +00:15:29,279 --> 00:15:34,560 +data and given that batch of data you + +354 +00:15:32,279 --> 00:15:37,480 +take all of the other documents in the + +355 +00:15:34,560 --> 00:15:39,480 +batch um all of the documents in the + +356 +00:15:37,480 --> 00:15:42,839 +batch that are positive for some other + +357 +00:15:39,480 --> 00:15:46,399 +query and you use those as negative + +358 +00:15:42,839 --> 00:15:49,000 +documents so you sample 32 query + +359 +00:15:46,399 --> 00:15:50,759 +document pairs you use the aligned ones + +360 +00:15:49,000 --> 00:15:53,759 +as positive documents and then use the + +361 +00:15:50,759 --> 00:15:57,440 +31 other ones as negative documents and + +362 +00:15:53,759 --> 00:16:00,279 +this is both effective and efficient + +363 +00:15:57,440 --> 00:16:02,000 +because you can kind of learned from the + +364 +00:16:00,279 --> 00:16:05,079 +query document pairs all at the same + +365 +00:16:02,000 --> 00:16:05,079 +time in an efficient + +366 +00:16:05,680 --> 00:16:13,680 +implementation however this is not + +367 +00:16:09,160 --> 00:16:16,279 +enough in many cases because that will + +368 +00:16:13,680 --> 00:16:19,040 +end up having lots of very kind of + +369 +00:16:16,279 --> 00:16:20,440 +obviously wrong documents because you + +370 +00:16:19,040 --> 00:16:23,120 +know + +371 +00:16:20,440 --> 00:16:25,360 +they're documents that are relevant for + +372 +00:16:23,120 --> 00:16:27,880 +a completely different query and it's + +373 +00:16:25,360 --> 00:16:29,880 +kind of easy to distinguish uh between + +374 +00:16:27,880 --> 00:16:32,319 +those you can just at superficial word + +375 +00:16:29,880 --> 00:16:34,519 +overlap so another common thing to do + +376 +00:16:32,319 --> 00:16:35,759 +when you're training these models is to + +377 +00:16:34,519 --> 00:16:38,160 +get hard + +378 +00:16:35,759 
366
00:16:05,680 --> 00:16:13,680
however this is not

367
00:16:09,160 --> 00:16:16,279
enough in many cases because that will

368
00:16:13,680 --> 00:16:19,040
end up having lots of very kind of

369
00:16:16,279 --> 00:16:20,440
obviously wrong documents because you

370
00:16:19,040 --> 00:16:23,120
know

371
00:16:20,440 --> 00:16:25,360
they're documents that are relevant for

372
00:16:23,120 --> 00:16:27,880
a completely different query and it's

373
00:16:25,360 --> 00:16:29,880
kind of easy to distinguish uh between

374
00:16:27,880 --> 00:16:32,319
those, you can just look at superficial word

375
00:16:29,880 --> 00:16:34,519
overlap so another common thing to do

376
00:16:32,319 --> 00:16:35,759
when you're training these models is to

377
00:16:34,519 --> 00:16:38,160
get hard

378
00:16:35,759 --> 00:16:40,680
negatives so hard negatives are

379
00:16:38,160 --> 00:16:44,360
basically negative examples that look

380
00:16:40,680 --> 00:16:49,399
plausible but are actually wrong and

381
00:16:44,360 --> 00:16:53,199
so here uh this famous method called DPR

382
00:16:49,399 --> 00:16:55,880
basically learns the uh encoders

383
00:16:53,199 --> 00:16:57,759
based on both in-batch negatives like I

384
00:16:55,880 --> 00:17:00,160
mentioned before and hard negatives that

385
00:16:57,759 --> 00:17:01,360
were created by looking up documents

386
00:17:00,160 --> 00:17:03,839
with

387
00:17:01,360 --> 00:17:06,039
BM25 and so the ones that were looked up

388
00:17:03,839 --> 00:17:07,640
by BM25 you know kind of look very

389
00:17:06,039 --> 00:17:10,039
similar superficially but they might

390
00:17:07,640 --> 00:17:12,400
have you know subtle errors in them for

391
00:17:10,039 --> 00:17:12,400
why they're

392
00:17:12,799 --> 00:17:17,160
inappropriate there's also methods to

393
00:17:15,679 --> 00:17:20,000
learn these

394
00:17:17,160 --> 00:17:23,199
retrievers based on kind of non-

395
00:17:20,000 --> 00:17:26,199
supervised data so one major bottleneck

396
00:17:23,199 --> 00:17:29,000
if you're taking the positive documents

397
00:17:26,199 --> 00:17:30,440
from human annotations of whether

398
00:17:29,000 --> 00:17:33,440
something is correct or not or human

399
00:17:30,440 --> 00:17:37,880
clickthrough logs or other things like

400
00:17:33,440 --> 00:17:40,640
this is that you need that data in order

401
00:17:37,880 --> 00:17:44,440
to start training a model so uh

402
00:17:40,640 --> 00:17:47,880
Contriever is another method that uses

403
00:17:44,440 --> 00:17:51,520
two random spans within a document as a

404
00:17:47,880 --> 00:17:54,440
positive pair and random spans from

405
00:17:51,520 --> 00:17:56,559
across documents as negative pairs and

406
00:17:54,440 --> 00:17:58,960
so this can be used for you know very

407
00:17:56,559 --> 00:18:00,039
very large scale initial pre-training of

408
00:17:58,960 --> 00:18:02,280
the

409
00:18:00,039 --> 00:18:04,520
models and then after you've done that

410
00:18:02,280 --> 00:18:06,840
large scale initial pre-training you can

411
00:18:04,520 --> 00:18:10,799
then go in and fine-tune it on you know

412
00:18:06,840 --> 00:18:10,799
actually annotated data to improve it

413
00:18:12,120 --> 00:18:18,799
further Okay so we've talked about

414
00:18:15,159 --> 00:18:21,559
training uh these dense retrieval uh

415
00:18:18,799 --> 00:18:24,559
models these uh models that look at

416
00:18:21,559 --> 00:18:27,720
dense embedding overlap for nearest

417
00:18:24,559 --> 00:18:28,919
neighbors but the problem is in order to

418
00:18:27,720 --> 00:18:30,919
calculate this you would need to

419
00:18:28,919 --> 00:18:35,159
calculate it over a very very large

420
00:18:30,919 --> 00:18:37,960
document base and just taking a product

421
00:18:35,159 --> 00:18:40,480
between the query and all of the other

422
00:18:37,960 --> 00:18:42,400
documents in the document base is

423
00:18:40,480 --> 00:18:46,080
extremely

424
00:18:42,400 --> 00:18:48,080
costly and so in order to fix this there

425
00:18:46,080 --> 00:18:49,080
are methods for approximate nearest
426
00:18:49,080 --> 00:18:52,280
neighbor

427
00:18:49,080 --> 00:18:54,200
search and these are methods that allow

428
00:18:52,280 --> 00:18:57,360
you to retrieve embeddings that have the

429
00:18:54,200 --> 00:19:00,280
maximum inner product between them in

430
00:18:57,360 --> 00:19:02,520
sublinear time and because you're doing

431
00:19:00,280 --> 00:19:03,960
the maximum inner product this is also

432
00:19:02,520 --> 00:19:06,600
often called maximum inner product

433
00:19:03,960 --> 00:19:06,600
search or

434
00:19:06,679 --> 00:19:12,360
MIPS so I'm going to introduce on a

435
00:19:09,440 --> 00:19:15,360
very high level two common methods to do

436
00:19:12,360 --> 00:19:19,320
this the first one is locality sensitive

437
00:19:15,360 --> 00:19:22,440
hashing um or this can also be called

438
00:19:19,320 --> 00:19:24,799
kind of an inverted index as well and what

439
00:19:22,440 --> 00:19:26,840
you do is you make partitions in

440
00:19:24,799 --> 00:19:29,320
continuous space and then you use it

441
00:19:26,840 --> 00:19:31,240
like an inverted index

442
00:19:29,320 --> 00:19:33,679
so let's say we have a whole bunch of

443
00:19:31,240 --> 00:19:34,919
embeddings uh I demonstrated two

444
00:19:33,679 --> 00:19:36,640
dimensional embeddings here but in

445
00:19:34,919 --> 00:19:38,440
reality this would be you know as large

446
00:19:36,640 --> 00:19:41,159
as your

447
00:19:38,440 --> 00:19:42,880
query and document

448
00:19:41,159 --> 00:19:47,120
embedding space so this would be you

449
00:19:42,880 --> 00:19:49,760
know 512 or 1024 or something like that

450
00:19:47,120 --> 00:19:53,480
and what you do is you define a whole

451
00:19:49,760 --> 00:19:56,720
bunch of planes that separate these

452
00:19:53,480 --> 00:19:59,320
points into two spaces so if this is our

453
00:19:56,720 --> 00:20:02,520
first plane all the points above the

454
00:19:59,320 --> 00:20:04,280
plane will get a one for this partition

455
00:20:02,520 --> 00:20:06,799
and all the points below the plane will

456
00:20:04,280 --> 00:20:08,840
get a zero for this partition and we do

457
00:20:06,799 --> 00:20:12,400
it similarly, we create a whole bunch

458
00:20:08,840 --> 00:20:15,840
of them and then based on this we can

459
00:20:12,400 --> 00:20:18,440
now assign sparse vectors depending on

460
00:20:15,840 --> 00:20:21,520
each of these planes so we have uh for

461
00:20:18,440 --> 00:20:24,000
example the top one uh 100 because

462
00:20:21,520 --> 00:20:26,400
it's on the right side of the blue plane

463
00:20:24,000 --> 00:20:28,760
and the um wrong side of the red and the

464
00:20:26,400 --> 00:20:30,679
green planes and then for the top right

465
00:20:28,760 --> 00:20:32,799
we have 101 because it's on the right

466
00:20:30,679 --> 00:20:37,159
side of the blue and the green planes and

467
00:20:32,799 --> 00:20:39,440
the wrong side of the red plane and so

468
00:20:37,159 --> 00:20:41,000
based on this now we have a sparse

469
00:20:39,440 --> 00:20:42,600
vector and we already know what to do

470
00:20:41,000 --> 00:20:44,640
with a sparse vector right we look it up

471
00:20:42,600 --> 00:20:49,039
in an inverted index just like we did

472
00:20:44,640 --> 00:20:51,520
for a sparse um you know sparse lookup table
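[Editor's note: a minimal sketch of the random-hyperplane locality sensitive hashing just described; the dimensions and counts are illustrative.]

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
dim, n_planes = 512, 16

# Random hyperplanes through the origin; each one splits the space in two.
planes = rng.standard_normal((n_planes, dim))

def lsh_code(vec):
    # One bit per plane: 1 if the vector is on the positive side, else 0.
    bits = (planes @ vec > 0).astype(int)
    return "".join(map(str, bits))

# Vectors with the same code land in the same bucket of an inverted index,
# so candidate neighbors are found without scanning the whole collection.
buckets = defaultdict(list)
doc_vecs = rng.standard_normal((1000, dim))
for i, v in enumerate(doc_vecs):
    buckets[lsh_code(v)].append(i)

query = rng.standard_normal(dim)
print(buckets[lsh_code(query)])  # candidate documents in the query's bucket
```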
473
00:20:49,039 --> 00:20:54,520
so that's one

474
00:20:51,520 --> 00:20:57,799
method another method uses a graph-based

475
00:20:54,520 --> 00:21:01,320
search and the basic idea behind this is

476
00:20:57,799 --> 00:21:02,480
that we create hubs uh and these hubs

477
00:21:01,320 --> 00:21:05,200
are kind

478
00:21:02,480 --> 00:21:07,960
of a small number of points that are

479
00:21:05,200 --> 00:21:09,440
close to other points in the space and

480
00:21:07,960 --> 00:21:10,880
so we create some hubs and then we

481
00:21:09,440 --> 00:21:12,200
search from there so if we have a

482
00:21:10,880 --> 00:21:16,880
similar

483
00:21:12,200 --> 00:21:19,159
looking uh set of points in the space we

484
00:21:16,880 --> 00:21:21,520
find these hubs which are something like

485
00:21:19,159 --> 00:21:24,880
cluster centroids and then based on the

486
00:21:21,520 --> 00:21:28,559
cluster centroids we then narrow down or

487
00:21:24,880 --> 00:21:31,200
greatly reduce the number of

488
00:21:28,559 --> 00:21:33,400
points that we need to be looking at and

489
00:21:31,200 --> 00:21:36,960
then we search through only those points

490
00:21:33,400 --> 00:21:38,600
in a more kind of exhaustive manner and

491
00:21:36,960 --> 00:21:41,840
you can even turn this into a tree where

492
00:21:38,600 --> 00:21:43,760
you have hubs and then you have uh kind

493
00:21:41,840 --> 00:21:46,600
of mini hubs and then you have all the

494
00:21:43,760 --> 00:21:50,200
points so this allows you to do a kind

495
00:21:46,600 --> 00:21:50,200
of tree based or graph based

496
00:21:50,600 --> 00:21:55,840
search so obviously unless you're really

497
00:21:54,159 --> 00:21:57,039
excited about these algorithms this is

498
00:21:55,840 --> 00:22:00,080
something that you probably don't want

499
00:21:57,039 --> 00:22:01,440
to be implementing yourself um and the

500
00:22:00,080 --> 00:22:03,000
good news is there's lots of very good

501
00:22:01,440 --> 00:22:04,480
libraries that help you do this in fact

502
00:22:03,000 --> 00:22:08,799
there are so many libraries it's hard to

503
00:22:04,480 --> 00:22:11,960
manage them but some libraries that

504
00:22:08,799 --> 00:22:13,799
people very commonly use, I think

505
00:22:11,960 --> 00:22:17,320
FAISS

506
00:22:13,799 --> 00:22:20,200
is a widely used one created by uh

507
00:22:17,320 --> 00:22:23,760
FAIR at Meta and Chroma DB is a

508
00:22:20,200 --> 00:22:27,720
separate one uh that is kind of an AI

509
00:22:23,760 --> 00:22:30,720
native uh embedding search database so

510
00:22:27,720 --> 00:22:30,720
both of those are good options
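[Editor's note: a minimal FAISS usage sketch for the dense lookup just described. The index type and sizes are illustrative; FAISS offers many approximate index variants beyond this exact one.]

```python
import faiss          # pip install faiss-cpu
import numpy as np

dim = 512
doc_vecs = np.random.rand(10000, dim).astype("float32")
faiss.normalize_L2(doc_vecs)            # cosine similarity via inner product

index = faiss.IndexFlatIP(dim)          # exact maximum-inner-product search
index.add(doc_vecs)                     # add all document embeddings

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)    # top-5 most similar documents
print(ids[0], scores[0])
```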
511
00:22:32,960 --> 00:22:41,120
Even with intelligent training

512
00:22:37,880 --> 00:22:42,640
of dense embeddings however there still

513
00:22:41,120 --> 00:22:45,600
are

514
00:22:42,640 --> 00:22:48,240
problems and the biggest

515
00:22:45,600 --> 00:22:51,720
problem that you face when you're

516
00:22:48,240 --> 00:22:54,000
looking at something like uh cross

517
00:22:51,720 --> 00:22:56,880
encoders um, sorry, when you're

518
00:22:54,000 --> 00:23:00,240
looking at dense embeddings is that in

519
00:22:56,880 --> 00:23:02,159
order to form a good dense embedding you

520
00:23:00,240 --> 00:23:03,840
need to kind of know in advance what

521
00:23:02,159 --> 00:23:05,799
you're looking for right because you're

522
00:23:03,840 --> 00:23:09,120
taking a long document and you're condensing

523
00:23:05,799 --> 00:23:10,679
it down into a single embedding, or a

524
00:23:09,120 --> 00:23:13,320
long passage and you're condensing it

525
00:23:10,679 --> 00:23:16,200
down to a single embedding and so if

526
00:23:13,320 --> 00:23:19,520
during that condensation process

527
00:23:16,200 --> 00:23:21,240
there's other information that

528
00:23:19,520 --> 00:23:23,159
is relevant to a query but you have to

529
00:23:21,240 --> 00:23:27,600
throw out because of the limited

530
00:23:23,159 --> 00:23:30,600
embedding capacity this causes you to

531
00:23:27,600 --> 00:23:32,320
you know essentially fail at um doing

532
00:23:30,600 --> 00:23:34,840
retrieval

533
00:23:32,320 --> 00:23:38,159
appropriately so there's a couple

534
00:23:34,840 --> 00:23:40,880
methods that can be used to fix this so

535
00:23:38,159 --> 00:23:42,279
the first method is in contrast to the

536
00:23:40,880 --> 00:23:44,159
bi-encoder which is what I've been

537
00:23:42,279 --> 00:23:47,000
talking about up to this point where

538
00:23:44,159 --> 00:23:48,520
you kind of do full encoding of queries

539
00:23:47,000 --> 00:23:52,120
full encoding of documents and then do

540
00:23:48,520 --> 00:23:53,840
inner product search for a score uh you

541
00:23:52,120 --> 00:23:56,760
can use a cross-encoder and the way the

542
00:23:53,840 --> 00:23:58,559
cross-encoder works is you append the

543
00:23:56,760 --> 00:24:00,799
query and document and then you run them

544
00:23:58,559 --> 00:24:03,400
through a model like a Transformer model

545
00:24:00,799 --> 00:24:07,840
and you calculate the output

546
00:24:03,400 --> 00:24:09,880
score so the problem with this um so

547
00:24:07,840 --> 00:24:12,480
this is great uh because it gives

548
00:24:09,880 --> 00:24:15,799
you maximum flexibility um Transformer

549
00:24:12,480 --> 00:24:18,799
models are powerful you can uh assess

550
00:24:15,799 --> 00:24:20,520
relevance very well the problem with

551
00:24:18,799 --> 00:24:22,200
this is this precludes approximate

552
00:24:20,520 --> 00:24:23,720
nearest neighbor lookup because now

553
00:24:22,200 --> 00:24:25,799
you're running through you know many

554
00:24:23,720 --> 00:24:28,880
many nonlinearities

555
00:24:25,799 --> 00:24:32,760
here so this can only be used for

556
00:24:28,880 --> 00:24:34,360
reranking documents um or even if

557
00:24:32,760 --> 00:24:36,880
you're doing retrieval, doing retrieval

558
00:24:34,360 --> 00:24:39,679
over a very very small number of

559
00:24:36,880 --> 00:24:41,960
documents but if you really want maximal

560
00:24:39,679 --> 00:24:44,080
accuracy I definitely would recommend uh

561
00:24:41,960 --> 00:24:45,720
doing something like this because it can

562
00:24:44,080 --> 00:24:47,960
allow you to do kind of a second pass

563
00:24:45,720 --> 00:24:49,360
filtering over the most relevant looking

564
00:24:47,960 --> 00:24:52,399
documents to identify the ones you

565
00:24:49,360 --> 00:24:52,399
really want to add to your context
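[Editor's note: a minimal cross-encoder reranking sketch. The checkpoint name is an assumption, one common public MS MARCO reranker; substitute whatever cross-encoder you prefer.]

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "cross-encoder/ms-marco-MiniLM-L-6-v2"   # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

query = "what is nlp"
candidates = ["NLP is a subfield of AI ...", "Candy is a sweet treat ..."]

# Query and candidate are fed through the model *together*, so attention can
# compare them token by token; the output logit is a relevance score.
inputs = tokenizer([query] * len(candidates), candidates,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

reranked = sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1])
print(reranked)
```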
566
00:24:54,240 --> 00:24:58,240
so then there are also

567
00:24:56,760 --> 00:25:01,360
approaches that are kind of in the

568
00:24:58,240 --> 00:25:02,159
middle of these two uh the most famous

569
00:25:01,360 --> 00:25:05,880
one is

570
00:25:02,159 --> 00:25:08,320
ColBERT and I called this token level

571
00:25:05,880 --> 00:25:10,840
dense retrieval it's also called uh late

572
00:25:08,320 --> 00:25:12,720
interaction in the ColBERT paper but

573
00:25:10,840 --> 00:25:14,919
the way it works is you use

574
00:25:12,720 --> 00:25:18,440
contextualized representations of all

575
00:25:14,919 --> 00:25:19,440
query and document tokens to compute a

576
00:25:18,440 --> 00:25:23,559
retrieval

577
00:25:19,440 --> 00:25:26,919
score and so you do offline indexing of

578
00:25:23,559 --> 00:25:29,159
every token in the document and then

579
00:25:26,919 --> 00:25:31,399
based on this offline indexing of

580
00:25:29,159 --> 00:25:35,320
every token in the document you then

581
00:25:31,399 --> 00:25:38,760
have a query encoder and you do matching

582
00:25:35,320 --> 00:25:41,799
between each token in the query and the

583
00:25:38,760 --> 00:25:43,399
highest scoring tokens in each

584
00:25:41,799 --> 00:25:46,320
document

585
00:25:43,399 --> 00:25:48,399
and the reason why this is good is it

586
00:25:46,320 --> 00:25:49,600
still allows you to encode all of the

587
00:25:48,399 --> 00:25:52,120
tokens in the

588
00:25:49,600 --> 00:25:55,440
document but each of these

589
00:25:52,120 --> 00:25:59,679
similarity searches is still just

590
00:25:55,440 --> 00:26:03,559
a kind of maximum inner product search and

591
00:25:59,679 --> 00:26:06,279
because of this this allows you to do

592
00:26:03,559 --> 00:26:07,960
each of these searches efficiently and

593
00:26:06,279 --> 00:26:09,840
doesn't preclude you from running it

594
00:26:07,960 --> 00:26:12,919
over an entire

595
00:26:09,840 --> 00:26:16,399
database the downside to this method uh

596
00:26:12,919 --> 00:26:19,120
may already be obvious but in the

597
00:26:16,399 --> 00:26:22,200
traditional bi-encoder we have a single

598
00:26:19,120 --> 00:26:26,880
vector for each document but here we

599
00:26:22,200 --> 00:26:29,320
have one vector for um each token in the

600
00:26:26,880 --> 00:26:31,880
document so basically your vector

601
00:26:29,320 --> 00:26:34,399
database gets n times larger where n is

602
00:26:31,880 --> 00:26:36,679
the number of tokens in the document and

603
00:26:34,399 --> 00:26:38,080
there are certain methods to make this

604
00:26:36,679 --> 00:26:41,559
better like you can compress each

605
00:26:38,080 --> 00:26:42,960
document to a smaller number of vectors uh but

606
00:26:41,559 --> 00:26:45,880
still this is definitely going to be

607
00:26:42,960 --> 00:26:48,399
more costly than looking up a single

608
00:26:45,880 --> 00:26:50,360
vector per document so this is definitely something to

609
00:26:48,399 --> 00:26:53,520
consider if you want to get you know

610
00:26:50,360 --> 00:26:55,159
very good scores and ColBERT is a good

611
00:26:53,520 --> 00:26:59,600
implementation of that to start with if

612
00:26:55,159 --> 00:26:59,600
you're interested in trying it out
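[Editor's note: a minimal sketch of the late-interaction (MaxSim) scoring just described; in the real ColBERT system these embeddings come from a trained encoder, and the sizes here are illustrative.]

```python
import torch
import torch.nn.functional as F

# Late interaction: every query token finds its best-matching document token,
# and those maxima are summed into one retrieval score.
def late_interaction_score(query_tok_embs, doc_tok_embs):
    # query_tok_embs: [n_query_tokens, dim]; doc_tok_embs: [n_doc_tokens, dim]
    sim = query_tok_embs @ doc_tok_embs.T   # all pairwise inner products
    return sim.max(dim=1).values.sum()      # max over doc tokens, sum over query tokens

q = F.normalize(torch.randn(8, 128), dim=-1)
d = F.normalize(torch.randn(200, 128), dim=-1)
print(late_interaction_score(q, d))
```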
613
00:27:00,480 --> 00:27:07,000
so this is a final thing this is uh

614
00:27:03,080 --> 00:27:08,679
something that is a little bit uh

615
00:27:07,000 --> 00:27:10,080
different than all the other things I

616
00:27:08,679 --> 00:27:12,399
talked about before but I've used it

617
00:27:10,080 --> 00:27:15,840
myself and it actually can be pretty

618
00:27:12,399 --> 00:27:18,799
effective um it was also made at CMU

619
00:27:15,840 --> 00:27:24,399
by Luyu Gao so I would like to promote our

620
00:27:18,799 --> 00:27:26,880
CMU work of course but um the idea

621
00:27:24,399 --> 00:27:28,080
behind a hypothetical document

622
00:27:26,880 --> 00:27:30,320
embedding

623
00:27:28,080 --> 00:27:33,440
is that it's actually somewhat difficult

624
00:27:30,320 --> 00:27:36,200
to match a query and a document right

625
00:27:33,440 --> 00:27:38,919
because a query is a very short possibly

626
00:27:36,200 --> 00:27:42,240
ungrammatical output that's asking a

627
00:27:38,919 --> 00:27:44,799
question and then a document is a very

628
00:27:42,240 --> 00:27:49,440
long output that's written in a

629
00:27:44,799 --> 00:27:50,799
different prose style and you know

630
00:27:49,440 --> 00:27:53,159
it might have lots of irrelevant

631
00:27:50,799 --> 00:27:54,519
information or boilerplate or fluff

632
00:27:53,159 --> 00:27:57,640
or something like

633
00:27:54,519 --> 00:28:00,640
that so the idea behind a hypothetical

634
00:27:57,640 --> 00:28:03,120
document embedding is that it's easier

635
00:28:00,640 --> 00:28:05,279
to match a document to a document than

636
00:28:03,120 --> 00:28:08,159
it is to match a query to a

637
00:28:05,279 --> 00:28:10,159
document but the input to our model is a

638
00:28:08,159 --> 00:28:14,360
query right so what do we

639
00:28:10,159 --> 00:28:17,919
do and so essentially what we do is we

640
00:28:14,360 --> 00:28:20,399
then take a large language model we feed

641
00:28:17,919 --> 00:28:23,320
it the query in a prompt and say

642
00:28:20,399 --> 00:28:25,399
generate a document that looks like it

643
00:28:23,320 --> 00:28:30,080
should be the answer to this

644
00:28:25,399 --> 00:28:32,120
query and so then the LLM goes in and

645
00:28:30,080 --> 00:28:34,440
it generates a document and hopefully

646
00:28:32,120 --> 00:28:38,440
this document looks more similar to the

647
00:28:34,440 --> 00:28:41,440
documents you want to retrieve than the

648
00:28:38,440 --> 00:28:44,039
um than the original query does and I've

649
00:28:41,440 --> 00:28:47,240
actually found this to be relatively

650
00:28:44,039 --> 00:28:51,880
effective at improving accuracy

651
00:28:47,240 --> 00:28:53,200
on kind of difficult uh tasks especially

652
00:28:51,880 --> 00:28:55,840
ones that are out of domain from the

653
00:28:53,200 --> 00:28:58,000
trained models that I'm using
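[Editor's note: a minimal sketch of the hypothetical-document-embedding idea. `generate` stands in for whatever LLM API you use and `embed` for your embedding model; both, and the prompt wording, are assumptions, not the authors' code.]

```python
def hyde_search(query, generate, embed, doc_index, k=10):
    prompt = (
        "Write a short passage that looks like it could be a document "
        f"answering this question:\n\n{query}\n\nPassage:"
    )
    hypothetical_doc = generate(prompt)   # LLM writes a fake answer document
    vec = embed(hypothetical_doc)         # embed the fake document...
    return doc_index.search(vec, k)       # ...and match document-to-document
```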
654
00:28:55,840 --> 00:29:01,880
so I've gone through a whole bunch

655
00:28:58,000 --> 00:29:04,039
of methods and I would like to finish up

656
00:29:01,880 --> 00:29:05,679
this section by giving some insight

657
00:29:04,039 --> 00:29:11,399
about which one you should be

658
00:29:05,679 --> 00:29:14,559
using so my impression right now is

659
00:29:11,399 --> 00:29:17,760
that a good baseline to start out with is

660
00:29:14,559 --> 00:29:20,679
something like BM25 it's very easy to

661
00:29:17,760 --> 00:29:23,080
start out and compared to embedding

662
00:29:20,679 --> 00:29:26,120
based models it tends to be relatively

663
00:29:23,080 --> 00:29:28,279
robust to new domains so if you have a

664
00:29:26,120 --> 00:29:30,559
new domain you're more or less guaranteed

665
00:29:28,279 --> 00:29:32,240
that BM25 will give you some performance

666
00:29:30,559 --> 00:29:35,320
whereas embeddings may be really good

667
00:29:32,240 --> 00:29:38,399
but they may be really bad uh depending

668
00:29:35,320 --> 00:29:40,880
on how out of domain that is compared to

669
00:29:38,399 --> 00:29:42,799
your underlying embedding

670
00:29:40,880 --> 00:29:44,760
model

671
00:29:42,799 --> 00:29:48,039
however if you want to get the

672
00:29:44,760 --> 00:29:51,080
highest accuracy definitely tuned models

673
00:29:48,039 --> 00:29:53,200
are going to be better and if you're not

674
00:29:51,080 --> 00:29:56,039
worried about computation efficiency

675
00:29:53,200 --> 00:29:58,480
using something like ColBERT um with kind

676
00:29:56,039 --> 00:30:01,320
of the token level retrieval will

677
00:29:58,480 --> 00:30:05,559
definitely give you uh good accuracy

678
00:30:01,320 --> 00:30:08,559
here however there's better support for

679
00:30:05,559 --> 00:30:12,159
bi-encoder style models um in kind of

680
00:30:08,559 --> 00:30:15,240
standard vector databases like FAISS and

681
00:30:12,159 --> 00:30:17,519
uh Chroma and other things like that so

682
00:30:15,240 --> 00:30:19,799
if you want a kind of easier method to

683
00:30:17,519 --> 00:30:23,279
get started very quickly then using a bi-

684
00:30:19,799 --> 00:30:23,279
encoder is probably the best way to

685
00:30:25,080 --> 00:30:31,080
go okay so now moving on to actual

686
00:30:28,279 --> 00:30:33,159
retrieval augmented generation models we

687
00:30:31,080 --> 00:30:38,360
have uh retriever reader

688
00:30:33,159 --> 00:30:40,880
models and the way these work is

689
00:30:38,360 --> 00:30:43,279
basically the simplest way they can work

690
00:30:40,880 --> 00:30:45,799
is you basically just chain retrieval

691
00:30:43,279 --> 00:30:47,640
and reading together so you use an out-of-

692
00:30:45,799 --> 00:30:52,519
the-box retriever and an out-of-the-box

693
00:30:47,640 --> 00:30:54,039
reader model and you have your query uh

694
00:30:52,519 --> 00:30:56,159
you could for example look something up

695
00:30:54,039 --> 00:30:58,039
on Google get a whole bunch of passages

696
00:30:56,159 --> 00:30:59,760
and then feed them into a GPT model

697
00:30:58,039 --> 00:31:03,919
and get an

698
00:30:59,760 --> 00:31:06,960
answer this overall is quite effective

699
00:31:03,919 --> 00:31:09,159
um it's easy to implement and it

700
00:31:06,960 --> 00:31:10,600
will give you decent results so

701
00:31:09,159 --> 00:31:15,480
definitely it's something to be worth

702
00:31:10,600 --> 00:31:20,720
thinking about uh for assignment two in

703
00:31:15,480 --> 00:31:24,799
the um in the class you're required to

704
00:31:20,720 --> 00:31:26,679
only use uh kind of public models or

705
00:31:24,799 --> 00:31:29,760
open source implementations so you could

706
00:31:26,679 --> 00:31:34,360
still replace that with Apache Lucene

707
00:31:29,760 --> 00:31:36,360
and then um you know any standard LLM

708
00:31:34,360 --> 00:31:39,159
and that could be you know Llama, Llama

709
00:31:36,360 --> 00:31:41,600
Chat, or Mistral or Mixtral or something

710
00:31:39,159 --> 00:31:45,360
like that so uh you could
711
00:31:41,600 --> 00:31:48,120
definitely feel free to do something like

712
00:31:45,360 --> 00:31:51,559
that um of course the passages are

713
00:31:48,120 --> 00:31:53,200
concatenated to the context and so

714
00:31:51,559 --> 00:31:54,799
because the passages are concatenated to

715
00:31:53,200 --> 00:31:56,679
the context the context can get relatively

716
00:31:54,799 --> 00:31:58,399
long and expensive and other things like

717
00:31:56,679 --> 00:32:01,960
that but it's just something you have to

718
00:31:58,399 --> 00:32:01,960
deal with when you're using

719
00:32:02,600 --> 00:32:07,480
RAG there are methods also for retriever

720
00:32:05,799 --> 00:32:11,600
and generator end-to-end

721
00:32:07,480 --> 00:32:14,720
training so this is the paper actually

722
00:32:11,600 --> 00:32:17,600
where the name RAG came from and I'll

723
00:32:14,720 --> 00:32:20,200
use that as an example here uh but

724
00:32:17,600 --> 00:32:21,600
basically um there are several methods

725
00:32:20,200 --> 00:32:23,399
that propose to train the retriever and

726
00:32:21,600 --> 00:32:27,440
reader to improve

727
00:32:23,399 --> 00:32:31,240
accuracy and specifically the RAG paper by

728
00:32:27,440 --> 00:32:33,200
Lewis et al., the way it trained the um

729
00:32:31,240 --> 00:32:35,639
reader was to maximize generation

730
00:32:33,200 --> 00:32:38,600
likelihood given a single retrieved

731
00:32:35,639 --> 00:32:40,279
document and for the retriever it

732
00:32:38,600 --> 00:32:41,880
maximized overall likelihood by

733
00:32:40,279 --> 00:32:44,480
optimizing the mixture weight over

734
00:32:41,880 --> 00:32:46,559
documents so here's kind of a

735
00:32:44,480 --> 00:32:50,480
schematic uh which is you have your

736
00:32:46,559 --> 00:32:54,039
query encoder um you run the retriever

737
00:32:50,480 --> 00:32:57,760
with uh maximum inner product search it

738
00:32:54,039 --> 00:33:00,919
gives you several documents and each

739
00:32:57,760 --> 00:33:05,880
document has a score and then based on

740
00:33:00,919 --> 00:33:09,399
the documents and the scores you

741
00:33:05,880 --> 00:33:11,200
generate uh with each document in the

742
00:33:09,399 --> 00:33:15,360
context and

743
00:33:11,200 --> 00:33:17,080
then sum together the probabilities

744
00:33:15,360 --> 00:33:18,639
multiplied by the weights and I have the

745
00:33:17,080 --> 00:33:20,320
actual equations here because I think

746
00:33:18,639 --> 00:33:23,039
it'll be a little bit easier to

747
00:33:20,320 --> 00:33:25,760
understand after looking at the

748
00:33:23,039 --> 00:33:28,360
equations so generation is a mixture

749
00:33:25,760 --> 00:33:31,440
model and you pick a document and

750
00:33:28,360 --> 00:33:36,519
generate from the document this

751
00:33:31,440 --> 00:33:40,080
p(z|x) is the probability of

752
00:33:36,519 --> 00:33:44,679
picking that document given the query x

753
00:33:40,080 --> 00:33:48,880
and then this p_theta of the next token given x, z, and all of the

754
00:33:44,679 --> 00:33:51,480
previous tokens is basically the uh

755
00:33:48,880 --> 00:33:54,840
probability of the next token given that

756
00:33:51,480 --> 00:33:56,559
you have this particular document so you

757
00:33:54,840 --> 00:34:00,840
can see that this is basically linearly
758
00:33:56,559 --> 00:34:00,840
interpolating between the multiple

759
00:34:01,559 --> 00:34:05,760
documents and if we look, this can be

760
00:34:04,600 --> 00:34:09,039
considered the retriever and this

761
00:34:05,760 --> 00:34:09,039
the generator, the retriever and the

762
00:34:10,839 --> 00:34:16,119
reader one really important thing here

763
00:34:13,639 --> 00:34:17,760
uh that enables end-to-end training is

764
00:34:16,119 --> 00:34:19,639
they have this probability of the

765
00:34:17,760 --> 00:34:22,919
retriever be based on

766
00:34:19,639 --> 00:34:25,480
embeddings and so here we have the

767
00:34:22,919 --> 00:34:29,040
document embedding and the query

768
00:34:25,480 --> 00:34:31,440
embedding and the probability is

769
00:34:29,040 --> 00:34:33,320
proportional to the inner product of

770
00:34:31,440 --> 00:34:36,599
these, exponentiated, so you're basically

771
00:34:33,320 --> 00:34:38,839
taking a softmax over uh the inner

772
00:34:36,599 --> 00:34:40,599
product between the

773
00:34:38,839 --> 00:34:44,200
two

774
00:34:40,599 --> 00:34:47,919
and this adjusts the retriever to give

775
00:34:44,200 --> 00:34:49,560
higher similarities to helpful

776
00:34:47,919 --> 00:34:52,560
documents

777
00:34:49,560 --> 00:34:52,560
and

778
00:34:54,040 --> 00:35:02,800
so because the probability of the

779
00:34:59,800 --> 00:35:04,839
retriever model here is included in the

780
00:35:02,800 --> 00:35:07,160
end-to-end probability you don't actually

781
00:35:04,839 --> 00:35:10,680
need any annotations

782
00:35:07,160 --> 00:35:12,839
about which documents are useful you can

783
00:35:10,680 --> 00:35:16,680
just train all of this end to end and

784
00:35:12,839 --> 00:35:19,480
let backprop do its thing to update the

785
00:35:16,680 --> 00:35:22,640
uh the retriever as well
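[Editor's note: a minimal sketch of the mixture likelihood just described, p(y|x) = sum_z p(z|x) * prod_i p(y_i|x, z, y_<i), computed in log space; the tensor shapes and random inputs are illustrative.]

```python
import torch

# retriever_scores: inner products between the query embedding and each of
# the k retrieved document embeddings; token_logprobs[z, i] is the reader's
# log-probability of target token i when document z is in the context.
def rag_sequence_logprob(retriever_scores, token_logprobs):
    doc_logprobs = torch.log_softmax(retriever_scores, dim=0)   # log p(z|x)
    seq_logprobs = token_logprobs.sum(dim=1)                    # log prod_i p(y_i|x,z,y_<i)
    return torch.logsumexp(doc_logprobs + seq_logprobs, dim=0)  # log p(y|x)

scores = torch.randn(5, requires_grad=True)   # e.g. 5 retrieved documents
tok_lp = -torch.rand(5, 12)                   # 12-token target, per document
loss = -rag_sequence_logprob(scores, tok_lp)  # gradients reach the retriever too
print(loss)
```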
786
00:35:19,480 --> 00:35:25,000
one important issue when training

787
00:35:22,640 --> 00:35:27,480
models like this is that the search

788
00:35:25,000 --> 00:35:30,400
index will become stale so what do I

789
00:35:27,480 --> 00:35:34,760
mean by this if we go back to our

790
00:35:30,400 --> 00:35:34,760
previous uh thing about dense

791
00:35:35,480 --> 00:35:43,560
models creating this blue search index

792
00:35:39,800 --> 00:35:45,400
on the right side of the figure here is

793
00:35:43,560 --> 00:35:48,680
very costly so like let's say you want

794
00:35:45,400 --> 00:35:50,520
to embed a million documents or a

795
00:35:48,680 --> 00:35:55,240
billion documents if you're a big search

796
00:35:50,520 --> 00:35:58,200
engine company so doing this is very

797
00:35:55,240 --> 00:36:00,599
slow and

798
00:35:58,200 --> 00:36:01,920
in contrast doing lookup with kind of

799
00:36:00,599 --> 00:36:04,160
these approximate nearest neighbor

800
00:36:01,920 --> 00:36:05,440
searches is sublinear time or even you

801
00:36:04,160 --> 00:36:08,119
know log time so you can do it

802
00:36:05,440 --> 00:36:12,319
relatively quickly

803
00:36:08,119 --> 00:36:15,680
so it's fine to do lookup over this big

804
00:36:12,319 --> 00:36:17,520
index but if you start updating this

805
00:36:15,680 --> 00:36:19,920
document embedding you need to recreate

806
00:36:17,520 --> 00:36:23,760
the entire index and that would be you

807
00:36:19,920 --> 00:36:27,240
know very computationally costly

808
00:36:23,760 --> 00:36:30,119
so the solution to this proposed in this RAG

809
00:36:27,240 --> 00:36:33,640
paper by Lewis et al. is uh we only

810
00:36:30,119 --> 00:36:35,640
train the query embeddings and we keep

811
00:36:33,640 --> 00:36:39,640
the document embeddings

812
00:36:35,640 --> 00:36:41,920
fixed there's other alternatives like um

813
00:36:39,640 --> 00:36:45,000
there was a paper called REALM uh from

814
00:36:41,920 --> 00:36:48,040
early in retrieval-based modeling and in

815
00:36:45,000 --> 00:36:50,040
that method they basically had

816
00:36:48,040 --> 00:36:51,520
an asynchronous process that was going

817
00:36:50,040 --> 00:36:55,760
through and using the most recent

818
00:36:51,520 --> 00:36:59,960
document embedder to re-update the

819
00:36:55,760 --> 00:37:03,359
search index during training but that is

820
00:36:59,960 --> 00:37:05,960
uh you know kind of a very onerous

821
00:37:03,359 --> 00:37:07,800
process so I think it's quite common to

822
00:37:05,960 --> 00:37:11,000
use kind of a fixed document embedding

823
00:37:07,800 --> 00:37:11,000
and update only the

824
00:37:12,079 --> 00:37:17,720
queries another thing to think about is

825
00:37:14,359 --> 00:37:21,160
when do we do retrieval um so there's a

826
00:37:17,720 --> 00:37:23,079
bunch of different methods the RAG paper

827
00:37:21,160 --> 00:37:24,440
that I mentioned before did this only

828
00:37:23,079 --> 00:37:26,359
once right at the very beginning of

829
00:37:24,440 --> 00:37:29,400
generation it grabbed a single document

830
00:37:26,359 --> 00:37:32,560
and generated the entire output this is

831
00:37:29,400 --> 00:37:34,800
the default method used by most

832
00:37:32,560 --> 00:37:37,240
systems however there's other options as

833
00:37:34,800 --> 00:37:39,640
well you can retrieve uh several times

834
00:37:37,240 --> 00:37:43,040
during generation as

835
00:37:39,640 --> 00:37:44,480
necessary and the way this works uh we

836
00:37:43,040 --> 00:37:46,280
can do this either by generating a

837
00:37:44,480 --> 00:37:48,480
search token uh saying that we should

838
00:37:46,280 --> 00:37:50,200
start searching or searching when the

839
00:37:48,480 --> 00:37:52,640
model is

840
00:37:50,200 --> 00:37:55,920
uncertain and another way is to do this

841
00:37:52,640 --> 00:37:58,079
every token so we can do this by finding

842
00:37:55,920 --> 00:37:59,760
similar final embeddings and using this

843
00:37:58,079 --> 00:38:02,240
to influence the

844
00:37:59,760 --> 00:38:04,720
probabilities or approximating attention

845
00:38:02,240 --> 00:38:06,440
with nearest neighbors so I'm going to

846
00:38:04,720 --> 00:38:08,920
explain about each of these in a bit

847
00:38:06,440 --> 00:38:12,480
more detail

848
00:38:08,920 --> 00:38:16,119
so triggering retrieval with generated

849
00:38:12,480 --> 00:38:19,720
tokens um was proposed in Toolformer by

850
00:38:16,119 --> 00:38:22,119
Schick et al. and the way it works is

851
00:38:19,720 --> 00:38:25,000
you generate tokens that trigger

852
00:38:22,119 --> 00:38:27,880
retrieval or other tools so in this

853
00:38:25,000 --> 00:38:30,079
particular method it uh had several

854
00:38:27,880 --> 00:38:32,000
tools including asking a QA model or
855
00:38:30,079 --> 00:38:34,800
getting a calculator or having a machine

856
00:38:32,000 --> 00:38:37,200
translation system but with respect to

857
00:38:34,800 --> 00:38:40,000
retrieval augmented generation it had

858
00:38:37,200 --> 00:38:41,560
this essentially Wiki search

859
00:38:40,000 --> 00:38:43,680
functionality that would look up

860
00:38:41,560 --> 00:38:46,680
something in Wikipedia and then use that

861
00:38:43,680 --> 00:38:46,680
to influence the final

862
00:38:46,760 --> 00:38:52,200
probabilities

863
00:38:48,800 --> 00:38:55,160
and the way this was trained is training

864
00:38:52,200 --> 00:38:59,800
was done in an iterative manner where it

865
00:38:55,160 --> 00:38:59,800
basically generated uh kind

866
00:39:00,000 --> 00:39:05,680
of examples of tools being useful and

867
00:39:04,359 --> 00:39:09,560
when the

868
00:39:05,680 --> 00:39:14,160
tools improved the probability of the

869
00:39:09,560 --> 00:39:16,119
following output then that would be kind

870
00:39:14,160 --> 00:39:19,560
of treated as a positive example and

871
00:39:16,119 --> 00:39:21,520
used to further train the model so this

872
00:39:19,560 --> 00:39:23,400
was really influential and in fact this

873
00:39:21,520 --> 00:39:27,000
is how things are implemented in Chat-

874
00:39:23,400 --> 00:39:29,319
GPT nowadays not only for um doing

875
00:39:27,000 --> 00:39:33,400
retrieval but also doing other tools

876
00:39:29,319 --> 00:39:35,200
like um for example uh generating code

877
00:39:33,400 --> 00:39:37,440
or generating images or other things

878
00:39:35,200 --> 00:39:37,440
like

879
00:39:38,200 --> 00:39:45,079
this another option is to trigger

880
00:39:40,920 --> 00:39:48,240
retrieval uh with uncertainty estimates

881
00:39:45,079 --> 00:39:52,280
so FLARE, this is a paper by my student

882
00:39:48,240 --> 00:39:55,160
Zhengbao Jiang, um where we try to generate

883
00:39:52,280 --> 00:39:58,560
content and then do retrieval if the

884
00:39:55,160 --> 00:40:01,800
language model certainty is low so

885
00:39:58,560 --> 00:40:05,599
here's a schematic of how this works but

886
00:40:01,800 --> 00:40:09,160
basically um if we have

887
00:40:05,599 --> 00:40:13,440
some uh retrieved documents we can say

888
00:40:09,160 --> 00:40:16,560
generate a summary about Joe Biden and

889
00:40:13,440 --> 00:40:19,560
when it generates a summary maybe for

890
00:40:16,560 --> 00:40:20,960
the first output um the language model

891
00:40:19,560 --> 00:40:22,960
has high

892
00:40:20,960 --> 00:40:24,240
confidence and because the language

893
00:40:22,960 --> 00:40:25,359
model has high confidence we just

894
00:40:24,240 --> 00:40:27,520
generate the

895
00:40:25,359 --> 00:40:29,599
output

896
00:40:27,520 --> 00:40:31,839
however in the next step it might

897
00:40:29,599 --> 00:40:33,599
generate something like saying Joe Biden

898
00:40:31,839 --> 00:40:35,680
attended the University of Pennsylvania

899
00:40:33,599 --> 00:40:37,160
where he earned a law degree but the

900
00:40:35,680 --> 00:40:39,000
model might not be very certain about

901
00:40:37,160 --> 00:40:41,560
this it might have a low probability of

902
00:40:39,000 --> 00:40:45,839
certain important entities and so based

903
00:40:41,560 --> 00:40:48,839
on this uh we then form a query where
904
00:40:45,839 --> 00:40:52,119
what we do is essentially we blank out

905
00:40:48,839 --> 00:40:55,079
the low probability parts of this and we

906
00:40:52,119 --> 00:40:57,200
do a search and so this is also a little

907
00:40:55,079 --> 00:41:00,240
bit like the hypothetical document

908
00:40:57,200 --> 00:41:02,520
embeddings method where we basically create

909
00:41:00,240 --> 00:41:04,040
a document that we think will look

910
00:41:02,520 --> 00:41:07,119
similar to the document that we want to

911
00:41:04,040 --> 00:41:09,480
find we use that to create search

912
00:41:07,119 --> 00:41:11,359
results and then we generate the output

913
00:41:09,480 --> 00:41:13,880
and then we continue doing that and

914
00:41:11,359 --> 00:41:15,960
whenever we have a high confidence

915
00:41:13,880 --> 00:41:18,800
output like the one here we don't do any

916
00:41:15,960 --> 00:41:20,040
retrieval we just you know generate uh

917
00:41:18,800 --> 00:41:21,880
directly from the parameters of the

918
00:41:20,040 --> 00:41:23,960
model but whenever we have low

919
00:41:21,880 --> 00:41:27,400
confidence outputs we do the retrieval

920
00:41:23,960 --> 00:41:30,400
and base the output on this and so I

921
00:41:27,400 --> 00:41:33,119
think this is uh you know a nice method

922
00:41:30,400 --> 00:41:35,000
that could potentially be uh used the

923
00:41:33,119 --> 00:41:36,920
downside to that is you might sometimes

924
00:41:35,000 --> 00:41:38,920
need to generate twice because you would

925
00:41:36,920 --> 00:41:40,480
generate the output once and then find

926
00:41:38,920 --> 00:41:42,720
the low confidence parts and generate

927
00:41:40,480 --> 00:41:45,400
again but you know if you really care

928
00:41:42,720 --> 00:41:47,319
about the uh kind of quality of the

929
00:41:45,400 --> 00:41:49,640
output this is I think a reasonable

930
00:41:47,319 --> 00:41:49,640
thing to do
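[Editor's note: a minimal sketch of confidence-triggered retrieval in the style just described, not the authors' code. `lm_generate` (assumed to return the next sentence's tokens, their probabilities, and a done flag) and `retrieve` are stand-ins for your own LM and retriever; the threshold is illustrative.]

```python
def generate_with_flare(question, lm_generate, retrieve, threshold=0.4, max_sents=10):
    answer = []
    for _ in range(max_sents):
        # Draft the next sentence and get per-token probabilities.
        sent, probs, finished = lm_generate(question, " ".join(answer))
        if min(probs) < threshold:
            # Low confidence: blank out the uncertain tokens, use the rest as
            # a search query, and regenerate grounded in retrieved passages.
            query = " ".join(t for t, p in zip(sent, probs) if p >= threshold)
            sent, probs, finished = lm_generate(question, " ".join(answer),
                                                passages=retrieve(query))
        answer.extend(sent)
        if finished:
            break
    return " ".join(answer)
```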
931
00:41:50,160 --> 00:41:54,920
okay so now moving on to the token by

932
00:41:53,000 --> 00:41:59,800
token retrieval

933
00:41:54,920 --> 00:42:03,560
methods the kind of original or one of

934
00:41:59,800 --> 00:42:05,200
the methods that popularized this idea

935
00:42:03,560 --> 00:42:08,720
of token by token retrieval is something

936
00:42:05,200 --> 00:42:10,760
called the kNN-LM and the way it works is it

937
00:42:08,720 --> 00:42:13,839
retrieves similar

938
00:42:10,760 --> 00:42:16,680
examples and then uses the following

939
00:42:13,839 --> 00:42:20,880
tokens from these

940
00:42:16,680 --> 00:42:23,800
examples and this is kind of like a very

941
00:42:20,880 --> 00:42:25,839
powerful count-based bigram model in a way

942
00:42:23,800 --> 00:42:28,440
so if you remember back to when we were

943
00:42:25,839 --> 00:42:32,920
talking about count-based language models

944
00:42:28,440 --> 00:42:36,440
what we would do is we would take the

945
00:42:32,920 --> 00:42:39,400
previous token and we would calculate

946
00:42:36,440 --> 00:42:41,319
the probability of the next token by

947
00:42:39,400 --> 00:42:43,040
summing up together all of the next

948
00:42:41,319 --> 00:42:44,800
tokens and dividing by the total number

949
00:42:43,040 --> 00:42:49,240
of times that previous token

950
00:42:44,800 --> 00:42:52,720
occurred and so given that background uh

951
00:42:49,240 --> 00:42:56,760
we can talk about how the kNN-LM

952
00:42:52,720 --> 00:43:00,319
works so we have the text context x

953
00:42:56,760 --> 00:43:02,240
and we want to generate a target output

954
00:43:00,319 --> 00:43:04,839
separately from this we have all of the

955
00:43:02,240 --> 00:43:06,440
training contexts so this is all of the

956
00:43:04,839 --> 00:43:09,920
contexts that appeared in our training

957
00:43:06,440 --> 00:43:13,520
data and we encode all of these training

958
00:43:09,920 --> 00:43:15,720
contexts specifically by calculating the

959
00:43:13,520 --> 00:43:18,559
representation of the final layer or

960
00:43:15,720 --> 00:43:21,119
near the final layer of the model and so

961
00:43:18,559 --> 00:43:23,200
we encode that as

962
00:43:21,119 --> 00:43:25,240
representations separately from that we

963
00:43:23,200 --> 00:43:27,920
remember the next word that appeared

964
00:43:25,240 --> 00:43:29,720
after this context

965
00:43:27,920 --> 00:43:32,920
so now we have a data store consisting

966
00:43:29,720 --> 00:43:35,040
of representations and next words we then

967
00:43:32,920 --> 00:43:38,440
take the representation of the current

968
00:43:35,040 --> 00:43:40,880
context and we calculate the distance

969
00:43:38,440 --> 00:43:43,400
between the current context and all of

970
00:43:40,880 --> 00:43:47,119
the other similar contexts in the

971
00:43:43,400 --> 00:43:49,839
database we take the nearest k so we

972
00:43:47,119 --> 00:43:52,440
take the top uh k examples here which

973
00:43:49,839 --> 00:43:55,240
would be Hawaii, Illinois, and

974
00:43:52,440 --> 00:43:57,520
Hawaii we then do uh some sort of

975
00:43:55,240 --> 00:44:01,440
normalization based on the

976
00:43:57,520 --> 00:44:05,200
distance and this gives us a probability

977
00:44:01,440 --> 00:44:06,680
distribution over all of the next tokens

978
00:44:05,200 --> 00:44:10,599
sometimes these tokens are duplicated

979
00:44:06,680 --> 00:44:13,599
multiple times and so we aggregate all

980
00:44:10,599 --> 00:44:15,800
of these counts to be Hawaii for example

981
00:44:13,599 --> 00:44:18,839
0.8 and Illinois

982
00:44:15,800 --> 00:44:21,839
0.2 and then we interpolate this with

983
00:44:18,839 --> 00:44:24,040
the probability given by the standard

984
00:44:21,839 --> 00:44:26,440
language model using an interpolation

985
00:44:24,040 --> 00:44:28,400
coefficient lambda and this gives us our

986
00:44:26,440 --> 00:44:31,000
final

987
00:44:28,400 --> 00:44:34,559
probability
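[Editor's note: a minimal sketch of the kNN-LM interpolation just described; the datastore here is random toy data, and the distance normalization is one simple choice.]

```python
import torch

# context_vec: the LM's representation of the current context.
# datastore_keys / datastore_next_ids: cached training-context representations
# and the id of the word that followed each of them.
def knn_lm_probs(context_vec, datastore_keys, datastore_next_ids,
                 lm_probs, vocab_size, k=3, lam=0.25):
    dists = torch.cdist(context_vec[None], datastore_keys)[0]  # L2 distances
    knn = dists.topk(k, largest=False)                         # k nearest contexts
    weights = torch.softmax(-knn.values, dim=0)                # closer = heavier
    knn_probs = torch.zeros(vocab_size)
    # Aggregate duplicates: e.g. two "Hawaii" neighbors add their weights.
    knn_probs.index_add_(0, datastore_next_ids[knn.indices], weights)
    return lam * knn_probs + (1 - lam) * lm_probs              # interpolate

keys, next_ids = torch.randn(100, 64), torch.randint(0, 50, (100,))
probs = knn_lm_probs(torch.randn(64), keys, next_ids,
                     torch.full((50,), 1 / 50), vocab_size=50)
print(probs.sum())  # still a valid distribution (sums to 1)
```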
988
00:44:31,000 --> 00:44:38,000
so the nice thing about this is it allows us to explicitly ground

989
00:44:34,559 --> 00:44:42,079
our outputs in individual

990
00:44:38,000 --> 00:44:45,319
examples uh and it's a pretty effective

991
00:44:42,079 --> 00:44:48,760
way to improve the probability of models,

992
00:44:45,319 --> 00:44:53,839
improve translation, and other stuff like

993
00:44:48,760 --> 00:44:56,119
this the disadvantage of doing this is

994
00:44:53,839 --> 00:44:59,319
that it kind of adds

995
00:44:56,119 --> 00:45:01,800
an extra component to the model it adds

996
00:44:59,319 --> 00:45:05,440
extra

997
00:45:01,800 --> 00:45:08,520
um kind of hyperparameters like lambda

998
00:45:05,440 --> 00:45:11,680
and things like this so it is a little

999
00:45:08,520 --> 00:45:16,960
bit finicky and it doesn't work in all

1000
00:45:11,680 --> 00:45:21,440
situations and so another method that we

1001
00:45:16,960 --> 00:45:23,559
uh proposed, by Amanda Bertsch who gave

1002
00:45:21,440 --> 00:45:26,920
the uh previous lecture on generation in

1003
00:45:23,559 --> 00:45:29,240
this class, is Unlimiformer and basically

1004
00:45:26,920 --> 00:45:32,680
what Unlimiformer does is it notes that

1005
00:45:29,240 --> 00:45:36,079
attention itself is an inner product

1006
00:45:32,680 --> 00:45:40,440
search and it does top-k

1007
00:45:36,079 --> 00:45:42,680
attention and the way we do this is we

1008
00:45:40,440 --> 00:45:45,160
first process the input with a sliding

1009
00:45:42,680 --> 00:45:47,480
window and then perform attention using

1010
00:45:45,160 --> 00:45:49,960
a vector index so if we have a really

1011
00:45:47,480 --> 00:45:54,280
long input that we want to encode what

1012
00:45:49,960 --> 00:45:56,559
we do is we first encode chunks so we

1013
00:45:54,280 --> 00:46:01,960
encode for example AB

1014
00:45:56,559 --> 00:46:03,839
then we encode CD and we encode EF we

1015
00:46:01,960 --> 00:46:06,240
concatenate them together into a big

1016
00:46:03,839 --> 00:46:07,800
index of one long input so in a way

1017
00:46:06,240 --> 00:46:10,920
this is similar to what they did in the

1018
00:46:07,800 --> 00:46:12,720
kNN-LM you know concatenate all of these

1019
00:46:10,920 --> 00:46:16,520
embeddings into a single

1020
00:46:12,720 --> 00:46:18,680
input but the difference is that this is

1021
00:46:16,520 --> 00:46:21,640
done with

1022
00:46:18,680 --> 00:46:24,280
um the values that we are attending to

1023
00:46:21,640 --> 00:46:27,559
as opposed to just the final

1024
00:46:24,280 --> 00:46:30,079
layer and

1025
00:46:27,559 --> 00:46:33,680
the interesting thing about this is now

1026
00:46:30,079 --> 00:46:36,200
we have an index of one long input and

1027
00:46:33,680 --> 00:46:39,800
when we want to do our next version of

1028
00:46:36,200 --> 00:46:42,240
attention we do kNN search from the

1029
00:46:39,800 --> 00:46:44,280
query we take the retrieved hidden

1030
00:46:42,240 --> 00:46:47,880
states and then we just do attention

1031
00:46:44,280 --> 00:46:50,440
over them so the nice thing about this

1032
00:46:47,880 --> 00:46:53,079
is in the extreme case this makes no

1033
00:46:50,440 --> 00:46:55,240
changes to the model what I mean by this

1034
00:46:53,079 --> 00:46:57,520
is let's say our input was small enough

1035
00:46:55,240 --> 00:47:02,240
that we could encode it in only a single

1036
00:46:57,520 --> 00:47:06,400
chunk and for kNN search we also did

1037
00:47:02,240 --> 00:47:09,559
um you know exact kNN

1038
00:47:06,400 --> 00:47:12,400
search over all of the embeddings in the

1039
00:47:09,559 --> 00:47:14,680
chunk in that case this would just be

1040
00:47:12,400 --> 00:47:16,520
normal attention it's exactly the same

1041
00:47:14,680 --> 00:47:18,640
as normal

1042
00:47:16,520 --> 00:47:20,160
attention however there are some

1043
00:47:18,640 --> 00:47:24,000
approximations that go in here like

1044
00:47:20,160 --> 00:47:26,359
when we encode chunks they might not be

1045
00:47:24,000 --> 00:47:29,839
exactly the same as if we encoded the entire thing together and we're also
1046
00:47:26,359 --> 00:47:33,640
chopping off some of the values with

1047
00:47:29,839 --> 00:47:35,800
very low um kind of inner products and

1048
00:47:33,640 --> 00:47:37,400
so because of this there are some

1049
00:47:35,800 --> 00:47:38,760
approximations being made but in the

1050
00:47:37,400 --> 00:47:40,160
extreme case if we made no

1051
00:47:38,760 --> 00:47:41,880
approximations this would just be

1052
00:47:40,160 --> 00:47:44,359
exactly the same model as we were using

1053
00:47:41,880 --> 00:47:46,160
before so I find this pretty attractive

1054
00:47:44,359 --> 00:47:48,760
and uh you know empirically it gives

1055
00:47:46,160 --> 00:47:51,720
very good results over long

1056
00:47:48,760 --> 00:47:53,440
distances and you know we can always

1057
00:47:51,720 --> 00:47:56,240
make our approximations better and

1058
00:47:53,440 --> 00:47:57,680
improve this model as well so I think

1059
00:47:56,240 --> 00:48:00,960
this is an attractive method that you

1060
00:47:57,680 --> 00:48:00,960
might be interested in taking a look

1061
00:48:02,240 --> 00:48:06,200
at okay for the final part of this I'd

1062
00:48:04,559 --> 00:48:08,079
like to talk about long context

1063
00:48:06,200 --> 00:48:12,400
Transformers and these are models that

1064
00:48:08,079 --> 00:48:15,119
are explicitly trained in a way that

1065
00:48:12,400 --> 00:48:16,920
allows you to attend to longer contexts

1066
00:48:15,119 --> 00:48:18,839
in an efficient

1067
00:48:16,920 --> 00:48:21,960
manner

1068
00:48:18,839 --> 00:48:23,680
so one way that we can train over longer

1069
00:48:21,960 --> 00:48:25,880
context is just append all of the

1070
00:48:23,680 --> 00:48:28,040
context together and in fact shortly

1071
00:48:25,880 --> 00:48:32,200
after Transformers came out uh this

1072
00:48:28,040 --> 00:48:34,280
paper by Voita et al. demonstrated that um

1073
00:48:32,200 --> 00:48:36,160
doing this can learn you know

1074
00:48:34,280 --> 00:48:38,119
interesting document level phenomena so

1075
00:48:36,160 --> 00:48:40,440
it can identify when

1076
00:48:38,119 --> 00:48:42,480
multiple uh words refer to the same

1077
00:48:40,440 --> 00:48:43,680
thing, or coreference, and other things

1078
00:48:42,480 --> 00:48:45,640
like

1079
00:48:43,680 --> 00:48:47,720
this however the problem with

1080
00:48:45,640 --> 00:48:51,119
Transformers is that computation is

1081
00:48:47,720 --> 00:48:52,799
quadratic in the sentence length because

1082
00:48:51,119 --> 00:48:54,599
you're multiplying all of the query

1083
00:48:52,799 --> 00:48:56,799
vectors by all of the key

1084
00:48:54,599 --> 00:48:59,480
vectors

1085
00:48:56,799 --> 00:49:02,799
and that basically causes a big problem

1086
00:48:59,480 --> 00:49:02,799
if your sequences become very

1087
00:49:03,480 --> 00:49:09,760
long so if we go back to what we did in

1088
00:49:07,480 --> 00:49:12,400
RNNs uh from the very beginning of the

1089
00:49:09,760 --> 00:49:14,359
class, RNNs don't have this

1090
00:49:12,400 --> 00:49:16,280
problem because computation is linear in

1091
00:49:14,359 --> 00:49:20,440
the length of the sequence you just pass

1092
00:49:16,280 --> 00:49:22,200
along the RNN state and every single
+1094
+00:49:20,440 --> 00:49:23,839
+time you do the same computation over it
+
+1095
+00:49:22,200 --> 00:49:26,559
+so there's no quadratic term in
+
+1096
+00:49:23,839 --> 00:49:32,400
+calculating RNNs
+
+1097
+00:49:26,559 --> 00:49:34,880
+another thing is that when doing RNNs
+
+1098
+00:49:32,400 --> 00:49:37,680
+you can actually pass state infinitely
+
+1099
+00:49:34,880 --> 00:49:39,040
+during the forward pass by just
+
+1100
+00:49:37,680 --> 00:49:40,240
+calculating the hidden state and then
+
+1101
+00:49:39,040 --> 00:49:42,119
+throwing away the rest of the
+
+1102
+00:49:40,240 --> 00:49:43,359
+computation graph that was used in
+
+1103
+00:49:42,119 --> 00:49:45,160
+calculating that hidden state and
+
+1104
+00:49:43,359 --> 00:49:48,319
+there's no approximation that goes on
+
+1105
+00:49:45,160 --> 00:49:49,680
+there so unlike Unlimiformer that I
+
+1106
+00:49:48,319 --> 00:49:51,640
+was talking about before where we needed
+
+1107
+00:49:49,680 --> 00:49:54,119
+to make approximations none need to be
+
+1108
+00:49:51,640 --> 00:49:56,400
+made in this
+
+1109
+00:49:54,119 --> 00:50:00,200
+case however there is a problem with
+
+1110
+00:49:56,400 --> 00:50:02,040
+doing backprop uh because in order to
+
+1111
+00:50:00,200 --> 00:50:05,839
+do backprop normally you maintain the
+
+1112
+00:50:02,040 --> 00:50:09,720
+entire you know state of the computation
+
+1113
+00:50:05,839 --> 00:50:12,400
+graph and so a common method to
+
+1114
+00:50:09,720 --> 00:50:15,280
+fix this is basically you pass along the
+
+1115
+00:50:12,400 --> 00:50:16,920
+RNN state from the previous sentence but
+
+1116
+00:50:15,280 --> 00:50:19,240
+you just don't do backprop into the
+
+1117
+00:50:16,920 --> 00:50:21,200
+previous sentence and this is called
+
+1118
+00:50:19,240 --> 00:50:24,040
+truncated backprop or truncated back
+
+1119
+00:50:21,200 --> 00:50:27,280
+propagation through time and this allows
+
+1120
+00:50:24,040 --> 00:50:30,160
+you to essentially train models with
+
+1121
+00:50:27,280 --> 00:50:32,319
+infinite context um or at least models
+
+1122
+00:50:30,160 --> 00:50:33,720
+that can pass along context infinitely
+
+1123
+00:50:32,319 --> 00:50:36,359
+even if you're not backpropping into
+
+1124
+00:50:33,720 --> 00:50:36,359
+the encoder
+
+1125
+00:50:37,480 --> 00:50:43,520
+there so of course a problem with this
+
+1126
+00:50:40,720 --> 00:50:45,880
+over long contexts is recurrence uh
+
+1127
+00:50:43,520 --> 00:50:47,520
+recurrent models can be slow due to the
+
+1128
+00:50:45,880 --> 00:50:51,400
+kind of sequential dependence they're
+
+1129
+00:50:47,520 --> 00:50:54,280
+not ideal for um you know running on
+
+1130
+00:50:51,400 --> 00:50:57,359
+GPUs or things like that and this is
+
+1131
+00:50:54,280 --> 00:51:01,960
+improved by recent architectures like
+
+1132
+00:50:57,359 --> 00:51:05,359
+Mamba and RWKV which are more conducive
+
+1133
+00:51:01,960 --> 00:51:07,079
+to GPU-based training um while still
+
+1134
+00:51:05,359 --> 00:51:08,599
+maintaining linear time complexity and
+
+1135
+00:51:07,079 --> 00:51:11,480
+so I'm looking forward to talking about
+
+1136
+00:51:08,599 --> 00:51:11,480
+that more in a future
+
+1137
+00:51:13,000 --> 00:51:17,559
+class so actually if we take this idea
+
+1138
+00:51:15,880 --> 00:51:20,440
+of truncated back propagation through
+
+1139
+00:51:17,559 --> 00:51:22,359
+time this can also be applied to
+
+1140
+00:51:20,440 --> 00:51:25,440
+Transformers
+
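+As a concrete picture of truncated backpropagation through time, here is a
+minimal PyTorch-style sketch (the sizes, the toy loss, and the variable names
+are illustrative assumptions, not details from the lecture):
+
+import torch
+import torch.nn as nn
+
+rnn = nn.RNN(input_size=16, hidden_size=32, batch_first=True)
+readout = nn.Linear(32, 16)
+opt = torch.optim.SGD(list(rnn.parameters()) + list(readout.parameters()), lr=0.1)
+
+h = torch.zeros(1, 1, 32)  # hidden state carried across segments indefinitely
+for segment in torch.randn(10, 1, 8, 16):  # 10 segments of 8 steps each
+    out, h = rnn(segment, h)
+    loss = readout(out).pow(2).mean()  # stand-in for a real language modeling loss
+    opt.zero_grad()
+    loss.backward()
+    opt.step()
+    h = h.detach()  # truncate: keep the state but drop its computation graph
+
+The detach call is the whole trick: state flows forward forever, but each
+backward pass stops at the segment boundary.
+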
+1141
+00:51:22,359 --> 00:51:27,880
+and there's a really nice
+paper Transformer-XL also created by
+
+1142
+00:51:25,440 --> 00:51:31,119
+Zihang Dai who was formerly at
+
+1143
+00:51:27,880 --> 00:51:33,119
+CMU and what this does is it attempts
+
+1144
+00:51:31,119 --> 00:51:35,760
+to fix vectors from the previous
+
+1145
+00:51:33,119 --> 00:51:39,440
+sentence so if we have a standard
+
+1146
+00:51:35,760 --> 00:51:40,720
+Transformer uh in a Transformer-XL
+
+1147
+00:51:39,440 --> 00:51:44,640
+normally what we do in the standard
+
+1148
+00:51:40,720 --> 00:51:48,480
+Transformer is each vector attends back
+
+1149
+00:51:44,640 --> 00:51:50,920
+to all the other vectors in the current
+
+1150
+00:51:48,480 --> 00:51:53,839
+context what Transformer-XL does
+
+1151
+00:51:50,920 --> 00:51:56,359
+instead is when you have a new segment
+
+1152
+00:51:53,839 --> 00:51:58,960
+that you want to do backprop
+
+1153
+00:51:56,359 --> 00:52:01,200
+into um you have a new segment that you
+
+1154
+00:51:58,960 --> 00:52:03,960
+want to basically train over you also
+
+1155
+00:52:01,200 --> 00:52:06,400
+attend to all of the previous tokens in
+
+1156
+00:52:03,960 --> 00:52:07,640
+the previous segment but you don't do
+
+1157
+00:52:06,400 --> 00:52:10,319
+backprop into
+
+1158
+00:52:07,640 --> 00:52:12,079
+them so this is essentially truncated
+
+1159
+00:52:10,319 --> 00:52:14,480
+backpropagation through time from the
+
+1160
+00:52:12,079 --> 00:52:17,760
+Transformer
+
+1161
+00:52:14,480 --> 00:52:19,520
+perspective this is also really nice
+
+1162
+00:52:17,760 --> 00:52:21,200
+because what it allows you to do is if
+
+1163
+00:52:19,520 --> 00:52:25,880
+you have a multi-layer
+
+1164
+00:52:21,200 --> 00:52:27,720
+Transformer it allows you to attend far
+
+1165
+00:52:25,880 --> 00:52:30,520
+back so if you look at the last layer
+
+1166
+00:52:27,720 --> 00:52:33,520
+it's attending um to things in the
+
+1167
+00:52:30,520 --> 00:52:36,599
+previous context window but the second
+
+1168
+00:52:33,520 --> 00:52:39,760
+to last layer is attending to things in
+
+1169
+00:52:36,599 --> 00:52:41,520
+the um not just one context window
+
+1170
+00:52:39,760 --> 00:52:44,079
+before but multiple context windows
+
+1171
+00:52:41,520 --> 00:52:45,760
+before and actually this allows you to
+
+1172
+00:52:44,079 --> 00:52:47,880
+very effectively attend to a very long
+
+1173
+00:52:45,760 --> 00:52:51,720
+context because each time kind of the
+
+1174
+00:52:47,880 --> 00:52:54,799
+context expands in an exponential
+
+1175
+00:52:51,720 --> 00:52:56,520
+manner so um recently there's a popular
+
+1176
+00:52:54,799 --> 00:52:57,799
+model called Mistral that I'm sure a lot
+
+1177
+00:52:56,520 --> 00:52:59,480
+of people have heard about and this is
+
+1178
+00:52:57,799 --> 00:53:01,920
+using sliding window attention which is
+
+1179
+00:52:59,480 --> 00:53:04,160
+essentially the same mechanism proposed
+
+1180
+00:53:01,920 --> 00:53:09,240
+by Transformer-XL so this method is
+
+1181
+00:53:04,160 --> 00:53:09,240
+still uh used in uh very practical systems
+
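+A minimal sketch of the segment-level recurrence just described, assuming
+single-head attention and omitting the causal mask and Transformer-XL's
+relative position encodings (the function and tensor names are illustrative):
+
+import torch
+
+def segment_attention(q, k_cur, v_cur, k_mem, v_mem):
+    # k_mem, v_mem are cached from the previous segment; detach() means we
+    # attend to them but never backprop into them, which is exactly the
+    # truncated-BPTT trick viewed from the Transformer perspective
+    k = torch.cat([k_mem.detach(), k_cur], dim=0)
+    v = torch.cat([v_mem.detach(), v_cur], dim=0)
+    scores = (q @ k.T) / (k.shape[-1] ** 0.5)
+    return torch.softmax(scores, dim=-1) @ v
+
+Stacking several such layers is what makes the effective context grow with
+depth: each additional layer lets the top of the network see one more
+segment into the past.
+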
+1182
+00:53:10,400 --> 00:53:17,359
+another paper that has been
+
+1183
+00:53:13,440 --> 00:53:19,319
+pretty influential in this general area
+
+1184
+00:53:17,359 --> 00:53:21,079
+is something called sparse
+
+1185
+00:53:19,319 --> 00:53:23,359
+Transformers and the way sparse
+
+1186
+00:53:21,079 --> 00:53:25,960
+Transformers work is instead of
+
+1187
+00:53:23,359 --> 00:53:29,520
+attending to every
+single previous state
+
+1188
+00:53:25,960 --> 00:53:32,640
+you attend to every n previous
+
+1189
+00:53:29,520 --> 00:53:34,599
+states and what this allows you to do is
+
+1190
+00:53:32,640 --> 00:53:37,119
+this allows you to essentially create
+
+1191
+00:53:34,599 --> 00:53:40,319
+something like the strided uh
+
+1192
+00:53:37,119 --> 00:53:42,079
+convolutions or um pyramidal recurrent
+
+1193
+00:53:40,319 --> 00:53:45,520
+neural networks that I talked about
+
+1194
+00:53:42,079 --> 00:53:49,760
+earlier um so what this looks like
+
+1195
+00:53:45,520 --> 00:53:51,079
+essentially is you have um this like if
+
+1196
+00:53:49,760 --> 00:53:54,880
+you have a particular state it might
+
+1197
+00:53:51,079 --> 00:53:56,480
+attend to all of the previous n tokens
+
+1198
+00:53:54,880 --> 00:54:00,240
+but then it
+
+1199
+00:53:56,480 --> 00:54:04,400
+also attends to all of the
+
+1200
+00:54:00,240 --> 00:54:06,880
+previous um kind of m chunks so you kind
+
+1201
+00:54:04,400 --> 00:54:08,920
+of have a combination of local and
+
+1202
+00:54:06,880 --> 00:54:11,640
+global
+
+1203
+00:54:08,920 --> 00:54:14,760
+attention or not local and global but
+
+1204
+00:54:11,640 --> 00:54:16,760
+local and kind of longer range attention
+
+1205
+00:54:14,760 --> 00:54:18,760
+and this can be very effective because
+
+1206
+00:54:16,760 --> 00:54:22,319
+you can attend to you know much longer
+
+1207
+00:54:18,760 --> 00:54:24,079
+context with a minimal increase in
+
+1208
+00:54:22,319 --> 00:54:26,520
+computational
+
+1209
+00:54:24,079 --> 00:54:28,720
+complexity
+
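+One way to picture the sparse pattern just described is as a boolean
+attention mask combining a local band with strided longer-range connections;
+a small illustrative sketch (the parameter names are assumptions, not taken
+from the paper):
+
+import torch
+
+def sparse_mask(seq_len, n_local, stride):
+    i = torch.arange(seq_len)[:, None]  # query positions
+    j = torch.arange(seq_len)[None, :]  # key positions
+    causal = j <= i                     # only look backwards
+    local = (i - j) < n_local           # the previous n_local tokens
+    strided = (j % stride) == stride - 1  # every stride-th "summary" position
+    return causal & (local | strided)
+
+print(sparse_mask(8, 2, 4).int())  # 1 where attention is allowed
+
+Each position attends locally to its neighbors plus a thin set of
+longer-range positions, so cost grows far more slowly than the full
+quadratic attention matrix.
+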
+1210
+00:54:26,520 --> 00:54:31,160
+so another method that's a little bit
+
+1211
+00:54:28,720 --> 00:54:32,960
+like this uh or it's very similar in
+
+1212
+00:54:31,160 --> 00:54:34,359
+spirit but slightly different in
+
+1213
+00:54:32,960 --> 00:54:35,599
+implementation is something called the
+
+1214
+00:54:34,359 --> 00:54:37,520
+compressive
+
+1215
+00:54:35,599 --> 00:54:40,400
+Transformer and in the compressive
+
+1216
+00:54:37,520 --> 00:54:43,000
+Transformer you also have this idea of a
+
+1217
+00:54:40,400 --> 00:54:44,319
+local memory and then a longer term
+
+1218
+00:54:43,000 --> 00:54:47,200
+compressed
+
+1219
+00:54:44,319 --> 00:54:50,799
+memory but you have an explicit
+
+1220
+00:54:47,200 --> 00:54:54,319
+compression step that
+
+1221
+00:54:50,799 --> 00:54:58,079
+directly essentially generates this uh
+
+1222
+00:54:54,319 --> 00:55:00,960
+compressed memory itself and so this is a
+
+1223
+00:54:58,079 --> 00:55:04,119
+little bit more flexible I guess it
+
+1224
+00:55:00,960 --> 00:55:06,280
+allows you to take all of the you know
+
+1225
+00:55:04,119 --> 00:55:09,000
+relevant things from your local memory
+
+1226
+00:55:06,280 --> 00:55:12,000
+and compress it down so it's another
+
+1227
+00:55:09,000 --> 00:55:12,000
+method that's worth thinking
+
+1228
+00:55:12,760 --> 00:55:18,400
+about finally uh there are some very
+
+1229
+00:55:15,799 --> 00:55:20,200
+interesting methods that do low rank
+
+1230
+00:55:18,400 --> 00:55:23,039
+approximations for
+
+1231
+00:55:20,200 --> 00:55:25,920
+Transformers and so calculating the
+
+1232
+00:55:23,039 --> 00:55:29,119
+attention matrix is expensive but this
+
+1233
+00:55:25,920 --> 00:55:31,640
+is a matrix and because it's a matrix we
+
+1234
+00:55:29,119 --> 00:55:32,640
+can also approximate it with a lower
+
+1235
+00:55:31,640 --> 00:55:35,480
+rank
+
+1236
+00:55:32,640 --> 00:55:38,559
+matrix and
+there's a couple methods that
+
+1237
+00:55:35,480 --> 00:55:40,599
+do things uh like this uh the first one
+
+1238
+00:55:38,559 --> 00:55:42,680
+is something called Linformer which
+
+1239
+00:55:40,599 --> 00:55:44,520
+adds low rank linear projections into
+
+1240
+00:55:42,680 --> 00:55:47,319
+the model at appropriate
+
+1241
+00:55:44,520 --> 00:55:50,359
+places and um there's another one called
+
+1242
+00:55:47,319 --> 00:55:52,200
+Nyströmformer which approximates using the
+
+1243
+00:55:50,359 --> 00:55:54,440
+Nyström method which is based on sampling
+
+1244
+00:55:52,200 --> 00:55:56,520
+landmark points but basically the
+
+1245
+00:55:54,440 --> 00:56:00,319
+general idea behind this is normally
+
+1246
+00:55:56,520 --> 00:56:03,400
+we do this kind of softmax over you know
+
+1247
+00:56:00,319 --> 00:56:06,240
+a very large attention vector but
+
+1248
+00:56:03,400 --> 00:56:08,440
+instead we can approximate the softmax
+
+1249
+00:56:06,240 --> 00:56:11,520
+by having some low rank vectors kind of
+
+1250
+00:56:08,440 --> 00:56:12,799
+like what we used in LoRA and uh
+
+1251
+00:56:11,520 --> 00:56:16,440
+nonetheless get a reasonable
+
+1252
+00:56:12,799 --> 00:56:16,440
+approximation of the softmax used in attention
+
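+A rough sketch of the Linformer-style idea, where keys and values are
+projected from sequence length n down to a small rank k before attention is
+computed (the sizes and names here are illustrative assumptions):
+
+import torch
+import torch.nn as nn
+
+n, d, k = 1024, 64, 128  # sequence length, head dimension, low rank
+proj_k = nn.Linear(n, k, bias=False)  # learned projections along the
+proj_v = nn.Linear(n, k, bias=False)  # sequence-length axis
+
+q = torch.randn(n, d)
+key = torch.randn(n, d)
+val = torch.randn(n, d)
+
+k_low = proj_k(key.T).T  # (k, d): n key rows compressed to k rows
+v_low = proj_v(val.T).T  # (k, d)
+attn = torch.softmax(q @ k_low.T / d ** 0.5, dim=-1)  # (n, k), not (n, n)
+out = attn @ v_low  # (n, d) as usual
+
+The attention map is now n-by-k instead of n-by-n, so for a fixed rank k the
+cost grows linearly in the sequence length.
+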
+1253
+00:56:17,799 --> 00:56:24,039
+okay so we're nearing the end of
+
+1254
+00:56:21,520 --> 00:56:26,000
+what I want to talk about today and
+
+1255
+00:56:24,039 --> 00:56:29,720
+finally the thing that I'd like to talk
+
+1256
+00:56:26,000 --> 00:56:33,240
+about is benchmarks for long context models
+
+1257
+00:56:29,720 --> 00:56:35,000
+and there's a few benchmarks one very
+
+1258
+00:56:33,240 --> 00:56:37,359
+well-known one is something called Long
+
+1259
+00:56:35,000 --> 00:56:40,599
+Range Arena this is a composite
+
+1260
+00:56:37,359 --> 00:56:43,000
+benchmark containing mostly non-NLP
+
+1261
+00:56:40,599 --> 00:56:45,280
+tasks and it's definitely used for long
+
+1262
+00:56:43,000 --> 00:56:46,760
+sequence modeling but the results on the
+
+1263
+00:56:45,280 --> 00:56:49,400
+Long Range Arena actually tend to
+
+1264
+00:56:46,760 --> 00:56:51,599
+diverge uh somewhat from the results
+
+1265
+00:56:49,400 --> 00:56:54,440
+that you get for long-distance language
+
+1266
+00:56:51,599 --> 00:56:56,520
+modeling so in addition to this another
+
+1267
+00:56:54,440 --> 00:56:58,400
+benchmark that I uh personally like and
+
+1268
+00:56:56,520 --> 00:57:01,960
+have used a bit is something called
+
+1269
+00:56:58,400 --> 00:57:05,720
+SCROLLS which uh combines together a
+
+1270
+00:57:01,960 --> 00:57:07,960
+whole bunch of kind of QA style or
+
+1271
+00:57:05,720 --> 00:57:10,920
+summarization style tasks that have very
+
+1272
+00:57:07,960 --> 00:57:13,280
+long contexts including over narratives
+
+1273
+00:57:10,920 --> 00:57:15,680
+or books or government reports or other
+
+1274
+00:57:13,280 --> 00:57:17,280
+things like that so you can also take a
+
+1275
+00:57:15,680 --> 00:57:20,680
+look at this if you're interested in
+
+1276
+00:57:17,280 --> 00:57:20,680
+kind of benchmarking longer range
+
+1277
+00:57:21,839 --> 00:57:28,280
+models okay the final thing I'd like to
+
+1278
+00:57:24,559 --> 00:57:30,280
+talk about is now that we have retriever
+
+1279
+00:57:28,280 --> 00:57:31,680
+models we have reader models we maybe
+
+1280
+00:57:30,280 --> 00:57:34,000
+even have reader models that can
+
+1281
+00:57:31,680 --> 00:57:35,520
+effectively use very long contexts like
+
+1282
+00:57:34,000 --> 00:57:37,880
+the ones that we retrieve over whole
+
+1283
+00:57:35,520 --> 00:57:39,240
+documents how do we effectively use them
+
+1284
+00:57:37,880 --> 00:57:43,640
+in our
+
+1285
+00:57:39,240 --> 00:57:46,680
+models so there was a very nice paper um
+
+1286
+00:57:43,640 --> 00:57:48,880
+by Nelson Liu at Stanford about a
+
+1287
+00:57:46,680 --> 00:57:51,160
+phenomenon that was called lost in the
+
+1288
+00:57:48,880 --> 00:57:53,079
+middle and basically what it does is it
+
+1289
+00:57:51,160 --> 00:57:55,119
+demonstrates that many many different
+
+1290
+00:57:53,079 --> 00:57:57,720
+models including state-of-the-art
+
+1291
+00:57:55,119 --> 00:58:00,799
+models pay less attention to things in
+
+1292
+00:57:57,720 --> 00:58:03,960
+the middle of long context windows and
+
+1293
+00:58:00,799 --> 00:58:06,760
+so if we have an answer and we put it in
+
+1294
+00:58:03,960 --> 00:58:09,200
+you know the first position in
+
+1295
+00:58:06,760 --> 00:58:12,280
+you know a concatenated context or the
+
+1296
+00:58:09,200 --> 00:58:13,799
+20th position in a concatenated context
+
+1297
+00:58:12,280 --> 00:58:15,240
+it tends to attend more to the ones at
+
+1298
+00:58:13,799 --> 00:58:18,359
+the beginning or the
+
+1299
+00:58:15,240 --> 00:58:19,480
+end in contrast the ones in the middle
+
+1300
+00:58:18,359 --> 00:58:22,760
+kind of get
+
+1301
+00:58:19,480 --> 00:58:26,680
+lost hence the name lost in the middle
+
+1302
+00:58:22,760 --> 00:58:29,520
+and the problem with this is you know if
+
+1303
+00:58:26,680 --> 00:58:32,480
+we are doing something like retrieval and
+
+1304
+00:58:29,520 --> 00:58:34,160
+reading then that's maybe not such a
+
+1305
+00:58:32,480 --> 00:58:35,680
+huge problem because we could just put
+
+1306
+00:58:34,160 --> 00:58:37,680
+you know the highest scoring documents
+
+1307
+00:58:35,680 --> 00:58:39,920
+at the beginning that might even be more
+
+1308
+00:58:37,680 --> 00:58:42,440
+effective than uh you know concatenating
+
+1309
+00:58:39,920 --> 00:58:44,160
+lots of low scoring documents together
+
+1310
+00:58:42,440 --> 00:58:45,559
+but if we want to read a really long
+
+1311
+00:58:44,160 --> 00:58:48,839
+document and synthesize something
+
+1312
+00:58:45,559 --> 00:58:52,200
+without doing kind of another uh scoring
+
+1313
+00:58:48,839 --> 00:58:54,200
+step uh that can be an issue and also
+
+1314
+00:58:52,200 --> 00:58:56,359
+you know our retriever is not perfect so
+
+1315
+00:58:54,200 --> 00:58:58,799
+we would like the reader
+
+1316
+00:58:56,359 --> 00:59:00,520
+model to do a good job with the outputs
+
+1317
+00:58:58,799 --> 00:59:04,839
+that it
+
+1318
+00:59:00,520 --> 00:59:06,359
+has so there are methods uh to ensure
+
+1319
+00:59:04,839 --> 00:59:09,440
+use of relevant
+
+1320
+00:59:06,359 --> 00:59:12,119
+context so of course better retrievers
+
+1321
+00:59:09,440 --> 00:59:14,880
+make more relevant context you can do
+
+1322
+00:59:12,119 --> 00:59:16,240
+you know reranking or other things like
+
+1323
+00:59:14,880 --> 00:59:17,280
+that and only include the context that
+
+1324
+00:59:16,240 --> 00:59:19,680
+looks most
+
+1325
+00:59:17,280 --> 00:59:22,880
+relevant um or you know refine your
+
+1326
+00:59:19,680 --> 00:59:25,200
+reader model but there's also methods
+
+1327
+00:59:22,880 --> 00:59:28,720
+that can decide whether context should
+
+1328
+00:59:25,200 --> 00:59:32,400
+be used in the first place so um there
+
+1329
+00:59:28,720 --> 00:59:35,440
+are methods uh to decide whether to use
+
+1330
+00:59:32,400 --> 00:59:37,559
+whether to include passages or not and
+
+1331
+00:59:35,440 --> 00:59:39,920
+also uh recently we proposed a method to
+
+1332
+00:59:37,559 --> 00:59:42,640
+filter down to parts of retrieved
+
+1333
+00:59:39,920 --> 00:59:44,920
+passages uh to have only appropriate
+
+1334
+00:59:42,640 --> 00:59:47,480
+content and this is a model uh that we
+
+1335
+00:59:44,920 --> 00:59:49,319
+called FilCo it basically filters the
+
+1336
+00:59:47,480 --> 00:59:52,160
+context down to the most relevant
+
+1337
+00:59:49,319 --> 00:59:53,920
+content that we think is appropriate and
+
+1338
+00:59:52,160 --> 00:59:56,960
+that allows us to get better results
+
+1339
+00:59:53,920 --> 00:59:56,960
+when it's fed to the
+
+1340
+00:59:57,079 --> 01:00:03,640
+generator so that's all I have for today
+
+1341
+01:00:00,319 --> 01:00:06,200
+um thank you for watching the video and
+
+1342
+01:00:03,640 --> 01:00:08,599
+for people in the class I'll be happy to
+
+1343
+01:00:06,200 --> 01:00:13,079
+take questions on Piazza or during the
+
+1344
+01:00:08,599 --> 01:00:13,079
+office hours that I had planned thanks a
+
+1345
+01:00:15,319 --> 01:00:18,319
+lot
\ No newline at end of file
diff --git a/CMU Advanced NLP 2024 (10) Retrieval and RAG/transcript.vtt b/CMU Advanced NLP 2024 (10) Retrieval and RAG/transcript.vtt
new file mode 100644
index 0000000000000000000000000000000000000000..fd4387d7ffb2776774ee04ed4aa0d08c202f428c
--- /dev/null
+++ b/CMU Advanced NLP 2024 (10) Retrieval and RAG/transcript.vtt
@@ -0,0 +1,4036 @@
+WEBVTT
+
+00:00:00.040 --> 00:00:03.880
+so today I'm going to talk about
+
+00:00:01.319 --> 00:00:06.680
+retrieval and retrieval augmented
+
+00:00:03.880 --> 00:00:09.040
+generation so if we look at our standard
+
+00:00:06.680 --> 00:00:10.880
+prompting flow normally what we do is we
+
+00:00:09.040 --> 00:00:14.160
+combine together a prompt template with
+
+00:00:10.880 --> 00:00:16.600
+an input so if we say please answer this
+
+00:00:14.160 --> 00:00:18.720
+question I think Vin Diesel has been a
+
+00:00:16.600 --> 00:00:21.000
+voice actor for several characters in TV
+
+00:00:18.720 --> 00:00:24.000
+series do you know what their names
+
+00:00:21.000 --> 00:00:25.400
+are we could get a response from a
+
+00:00:24.000 --> 00:00:26.840
+language model but there are several
+
+00:00:25.400 --> 00:00:30.840
+problems with
+
+00:00:26.840 --> 00:00:33.680
+this the first is accuracy issues
+
+00:00:30.840 --> 00:00:36.160
+the models generally have a knowledge
+
+00:00:33.680 --> 00:00:38.879
+cut off so the parameters are usually
+
+00:00:36.160 --> 00:00:41.120
+only updated to a particular time so for
+
+00:00:38.879 --> 00:00:43.200
+example if a new Vin Diesel TV series
+
+00:00:41.120 --> 00:00:44.960
+comes out then the model that was
+
+00:00:43.200 --> 00:00:47.440
+trained up to a certain time point won't
+
+00:00:44.960 --> 00:00:51.000
+be able to know anything about
+
+00:00:47.440 --> 00:00:53.600
+it there's also issues of private data
+
+00:00:51.000 --> 00:00:55.320
+so data stored in private text or data
+
+00:00:53.600 --> 00:00:57.840
+repositories is not suitable for
+
+00:00:55.320 --> 00:01:02.600
+training for a number of reasons number
+
+00:00:57.840 --> 00:01:05.199
+one it's not available to particular
+
+00:01:02.600 --> 00:01:07.799
+language model training providers such
+
+00:01:05.199 --> 00:01:10.720
+as you know OpenAI or Google or
+anybody
+
+00:01:07.799 --> 00:01:13.840
+else like this the second thing is
+
+00:01:10.720 --> 00:01:16.799
+access control issues so even if you're
+
+00:01:13.840 --> 00:01:17.840
+within an organization that has lots of
+
+00:01:16.799 --> 00:01:20.799
+private data and you can train a
+
+00:01:17.840 --> 00:01:22.600
+language model on that certain people in
+
+00:01:20.799 --> 00:01:24.200
+the organization may have access to
+
+00:01:22.600 --> 00:01:27.640
+certain varieties of data and other
+
+00:01:24.200 --> 00:01:29.400
+people may not so it's not just solely
+
+00:01:27.640 --> 00:01:31.520
+an issue of third party providers it's
+
+00:01:29.400 --> 00:01:33.840
+an issue of organization level access
+
+00:01:31.520 --> 00:01:36.159
+control in
+
+00:01:33.840 --> 00:01:38.920
+general in addition there are learning
+
+00:01:36.159 --> 00:01:40.320
+failures so even for data that the model
+
+00:01:38.920 --> 00:01:42.640
+was trained on it might not be
+
+00:01:40.320 --> 00:01:44.399
+sufficient to get the right answer and
+
+00:01:42.640 --> 00:01:47.799
+this is particularly the case for very
+
+00:01:44.399 --> 00:01:52.320
+very large uh training data sets and
+
+00:01:47.799 --> 00:01:53.920
+models that are you know modestly sized
+
+00:01:52.320 --> 00:01:55.880
+because the models very often won't be
+
+00:01:53.920 --> 00:01:58.360
+able to learn from a single look at a
+
+00:01:55.880 --> 00:02:02.039
+particular fact or whatever else like
+
+00:01:58.360 --> 00:02:02.039
+this especially if it's early in
+
+00:02:02.159 --> 00:02:08.160
+training another thing is even if the
+
+00:02:05.240 --> 00:02:10.599
+answer is correct it might not be
+
+00:02:08.160 --> 00:02:13.440
+verifiable so you might want to be very
+
+00:02:10.599 --> 00:02:15.000
+sure that the model is not making any
+
+00:02:13.440 --> 00:02:17.640
+accuracy
+
+00:02:15.000 --> 00:02:19.040
+mistakes and so in order to do that very
+
+00:02:17.640 --> 00:02:21.879
+often a human will want to go back to
+
+00:02:19.040 --> 00:02:21.879
+the source of the
+
+00:02:22.200 --> 00:02:27.319
+data so to solve this there's a method
+
+00:02:25.480 --> 00:02:29.200
+called retrieval augmented generation
+
+00:02:27.319 --> 00:02:30.280
+which will also be the topic of our
+
+00:02:29.200 --> 00:02:32.599
+second assignment
+
+00:02:30.280 --> 00:02:35.680
+here and the way it works is you
+
+00:02:32.599 --> 00:02:38.319
+retrieve relevant passages
+
+00:02:35.680 --> 00:02:40.680
+efficiently ones that kind of entail the
+
+00:02:38.319 --> 00:02:42.480
+answer to a question and then read the
+
+00:02:40.680 --> 00:02:46.080
+passages to answer the
+
+00:02:42.480 --> 00:02:48.599
+query so we have documents like this we
+
+00:02:46.080 --> 00:02:52.360
+have a query based on the query we perform
+
+00:02:48.599 --> 00:02:55.360
+retrieval we get a whole bunch of uh
+
+00:02:52.360 --> 00:02:57.560
+passages we do reading and then we get
+
+00:02:55.360 --> 00:02:57.560
+the answer
+
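+The retrieve-then-read flow just described can be sketched in a few lines;
+here retriever.search and llm.generate are hypothetical placeholder
+interfaces, not any particular library's API:
+
+def rag_answer(query, retriever, llm, k=5):
+    passages = retriever.search(query, top_k=k)  # retrieve relevant passages
+    context = "\n\n".join(p.text for p in passages)
+    prompt = ("Answer the question using the passages below.\n\n"
+              f"Passages:\n{context}\n\n"
+              f"Question: {query}\nAnswer:")
+    return llm.generate(prompt)  # the reader answers from the passages
+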
+00:02:58.280 --> 00:03:04.440
+so this is in fact implemented in
+
+00:03:01.720 --> 00:03:07.599
+many or even most uh language modeling
+
+00:03:04.440 --> 00:03:09.840
+providers including OpenAI so to give
+
+00:03:07.599 --> 00:03:11.480
+an example I asked the question that I
+
+00:03:09.840 --> 00:03:12.879
+just said about Vin Diesel's voice
+
+00:03:11.480 --> 00:03:16.599
+acting and TV
+
+00:03:12.879 --> 00:03:19.760
+series and ChatGPT gave me an answer
+
+00:03:16.599 --> 00:03:22.440
+and you can see
+that ChatGPT's answer
+
+00:03:19.760 --> 00:03:24.720
+includes several places with quotes um
+
+00:03:22.440 --> 00:03:28.159
+the little blue quotes
+
+00:03:24.720 --> 00:03:30.760
+there and if you click on the quote it
+
+00:03:28.159 --> 00:03:33.120
+tells you where the information source
+
+00:03:30.760 --> 00:03:35.000
+came from and so this one says Behind
+
+00:03:33.120 --> 00:03:37.760
+the Voice Actors Vin
+
+00:03:35.000 --> 00:03:39.920
+Diesel and Behind the Voice Actors TV
+
+00:03:37.760 --> 00:03:42.959
+shows Big Mouth Vin
+
+00:03:39.920 --> 00:03:45.640
+Diesel now if we look
+
+00:03:42.959 --> 00:03:48.640
+closer into this answer we'll see that
+
+00:03:45.640 --> 00:03:49.959
+it's not perfect even though it is uh
+
+00:03:48.640 --> 00:03:52.519
+performing retrieval augmented
+
+00:03:49.959 --> 00:03:54.840
+generation so for example I only asked
+
+00:03:52.519 --> 00:03:57.200
+about TV series but it's giving me lots
+
+00:03:54.840 --> 00:03:59.680
+of things about movies where it says
+
+00:03:57.200 --> 00:04:01.319
+Groot in Guardians of the Galaxy Volume
+
+00:03:59.680 --> 00:04:04.480
+3 2023
+
+00:04:01.319 --> 00:04:07.200
+movie and in fact uh Vin Diesel was not
+
+00:04:04.480 --> 00:04:10.920
+even voicing a character named Groot here
+
+00:04:07.200 --> 00:04:13.480
+so that's definitely an accuracy
+
+00:04:10.920 --> 00:04:15.079
+mistake and separately there's a place
+
+00:04:13.480 --> 00:04:17.639
+where it says additionally though the
+
+00:04:15.079 --> 00:04:19.959
+website for Big Mouth lists Vin Diesel it
+
+00:04:17.639 --> 00:04:22.040
+appears to be a misunderstanding or error
+
+00:04:19.959 --> 00:04:25.360
+as Nick Kroll is credited as the voice
+
+00:04:22.040 --> 00:04:27.800
+of Vin Diesel in that show so there
+
+00:04:25.360 --> 00:04:30.039
+actually Nick Kroll was acting as Vin
+
+00:04:27.800 --> 00:04:32.800
+Diesel but that's um kind of a
+
+00:04:30.039 --> 00:04:34.600
+misunderstanding of the reader model but
+
+00:04:32.800 --> 00:04:36.600
+anyway you can get the general idea here
+
+00:04:34.600 --> 00:04:40.199
+you can also see that it's not perfect
+
+00:04:36.600 --> 00:04:42.720
+even for very strong models like GPT-
+
+00:04:40.199 --> 00:04:44.800
+4 so now I'd like to go into the actual
+
+00:04:42.720 --> 00:04:46.759
+methodology that we use for this uh we
+
+00:04:44.800 --> 00:04:50.360
+have retrieval
+
+00:04:46.759 --> 00:04:53.160
+methods and for the retrieval methods we
+
+00:04:50.360 --> 00:04:55.160
+have uh quite a few different options
+
+00:04:53.160 --> 00:04:57.960
+I'm going to go through each one of them
+
+00:04:55.160 --> 00:05:00.960
+at a time so sparse retrieval document
+
+00:04:57.960 --> 00:05:04.240
+level dense retrieval token level dense
+
+00:05:00.960 --> 00:05:08.039
+retrieval cross-encoder reranking and
+
+00:05:04.240 --> 00:05:09.320
+blackbox retrieval so blackbox retrieval
+
+00:05:08.039 --> 00:05:11.280
+I'm not really going to go into it a
+
+00:05:09.320 --> 00:05:16.000
+whole lot basically this is just asking
+
+00:05:11.280 --> 00:05:17.560
+a blackbox search engine to retrieve uh
+
+00:05:16.000 --> 00:05:20.000
+you know the relevant context and
+
+00:05:17.560 --> 00:05:22.560
+getting the top several results
+
+00:05:20.000 --> 00:05:24.039
+nonetheless this is a pretty you know
+
+00:05:22.560 --> 00:05:26.800
+reasonable method to do it if you want
+
+00:05:24.039 --> 00:05:29.080
+to do search over you know lots of data
+
+00:05:26.800 --> 00:05:32.759
+that
+exists on the internet already and
+
+00:05:29.080 --> 00:05:36.600
+that is what ChatGPT does it looks
+
+00:05:32.759 --> 00:05:39.240
+up on Bing by generating a query to
+
+00:05:36.600 --> 00:05:41.560
+Bing so anyway let's go into the actual
+
+00:05:39.240 --> 00:05:43.840
+methods that you develop and control
+
+00:05:41.560 --> 00:05:46.600
+yourself so the first one is sparse
+
+00:05:43.840 --> 00:05:48.479
+retrieval and the way this works is you
+
+00:05:46.600 --> 00:05:50.440
+express the query and document as a
+
+00:05:48.479 --> 00:05:53.680
+sparse word frequency vector usually
+
+00:05:50.440 --> 00:05:58.759
+normalized by length and so if I ask uh
+
+00:05:53.680 --> 00:06:01.720
+query what is NLP we get a vector where
+
+00:05:58.759 --> 00:06:04.120
+each row of the vector corresponds to a
+
+00:06:01.720 --> 00:06:07.919
+different
+
+00:06:04.120 --> 00:06:12.960
+token and we asked what is
+
+00:06:07.919 --> 00:06:16.360
+NLP and so uh the places for what NLP
+
+00:06:12.960 --> 00:06:18.199
+and is will all have a non-zero value
+
+00:06:16.360 --> 00:06:20.199
+and everything else will have a zero
+
+00:06:18.199 --> 00:06:21.720
+value and we also normalize by the
+
+00:06:20.199 --> 00:06:24.120
+length of the vector so we get something
+
+00:06:21.720 --> 00:06:24.120
+like
+
+00:06:24.840 --> 00:06:28.440
+.33 .33 .33 then we have a whole bunch of
+
+00:06:26.759 --> 00:06:30.720
+documents so the first document says
+
+00:06:28.440 --> 00:06:31.759
+what is life candy is life someone really
+
+00:06:30.720 --> 00:06:33.960
+likes
+
+00:06:31.759 --> 00:06:36.000
+candy we also have another one that says
+
+00:06:33.960 --> 00:06:38.360
+NLP is an acronym for natural language
+
+00:06:36.000 --> 00:06:39.479
+processing so this is a pretty good uh
+
+00:06:38.360 --> 00:06:42.479
+you
+
+00:06:39.479 --> 00:06:44.840
+know answer to our
+
+00:06:42.479 --> 00:06:48.039
+question then we also have I like to do
+
+00:06:44.840 --> 00:06:49.360
+good research on NLP which is you know a
+
+00:06:48.039 --> 00:06:51.360
+nice sentiment but not a very good
+
+00:06:49.360 --> 00:06:54.400
+answer to our question I
+
+00:06:51.360 --> 00:06:59.479
+guess so if we look at the vectors here
+
+00:06:54.400 --> 00:07:03.280
+we have uh what and candy and is have uh
+
+00:06:59.479 --> 00:07:07.120
+a fairly high
+
+00:07:03.280 --> 00:07:12.520
+score and we have here NLP and is have a
+
+00:07:07.120 --> 00:07:16.479
+high score and NLP has a nonzero
+
+00:07:12.520 --> 00:07:18.400
+score so based on this um we find the
+
+00:07:16.479 --> 00:07:20.560
+document similarity with the highest
+
+00:07:18.400 --> 00:07:22.039
+inner product or cosine similarity in
+
+00:07:20.560 --> 00:07:24.360
+the document
+
+00:07:22.039 --> 00:07:27.000
+collection and so if we take the inner
+
+00:07:24.360 --> 00:07:28.759
+product between these vectors we
+
+00:07:27.000 --> 00:07:31.280
+actually see that the first one got the
+
+00:07:28.759 --> 00:07:34.479
+highest score because of its
+
+00:07:31.280 --> 00:07:37.440
+relatively high values for the words
+
+00:07:34.479 --> 00:07:37.440
+what and
+
+00:07:38.160 --> 00:07:43.759
+is
+
+00:07:40.199 --> 00:07:46.720
+so as you can see common words like what
+
+00:07:43.759 --> 00:07:49.000
+and is can get a high score kind of
+
+00:07:46.720 --> 00:07:51.800
+regardless of whether a document is very relevant
+
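+The example above can be reproduced in a few lines of Python; a minimal
+sketch of length-normalized word frequency vectors and inner product scoring
+(ignoring tokenization details like punctuation and casing):
+
+from collections import Counter
+
+def tf_vector(text):
+    words = text.lower().split()
+    return {w: c / len(words) for w, c in Counter(words).items()}
+
+def inner_product(q, d):
+    return sum(weight * d.get(w, 0.0) for w, weight in q.items())
+
+query = tf_vector("what is NLP")
+docs = ["what is life candy is life someone really likes candy",
+        "NLP is an acronym for natural language processing",
+        "I like to do good research on NLP"]
+print([inner_product(query, tf_vector(d)) for d in docs])
+# the first document scores highest, purely on the common words "what" and "is"
+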
+00:07:49.000 --> 00:07:53.919
+and so one way we can fix this
+
+00:07:51.800 --> 00:07:55.960
+is through something called term
+
+00:07:53.919 --> 00:07:59.479
+weighting and the way that term weighting
+
+00:07:55.960 --> 00:08:02.680
+works is in addition to having this
+
+00:07:59.479 --> 00:08:04.599
+vector that
+
+00:08:02.680 --> 00:08:07.680
+calculates
+
+00:08:04.599 --> 00:08:10.680
+the frequency within a particular
+
+00:08:07.680 --> 00:08:13.639
+document we also have an upweighting
+
+00:08:10.680 --> 00:08:15.599
+term that gives higher weight to low
+
+00:08:13.639 --> 00:08:18.199
+frequency words because low frequency
+
+00:08:15.599 --> 00:08:20.280
+words like NLP tend to be more
+
+00:08:18.199 --> 00:08:22.759
+informative about whether the document
+
+00:08:20.280 --> 00:08:25.240
+is relevant than high frequency words
+
+00:08:22.759 --> 00:08:27.080
+like what and is because these high
+
+00:08:25.240 --> 00:08:31.320
+frequency words like what and is could
+
+00:08:27.080 --> 00:08:34.279
+happen kind of regardless of whether
+
+00:08:31.320 --> 00:08:36.680
+the you know document is relevant to the
+
+00:08:34.279 --> 00:08:41.800
+particular terms the person is asking
+
+00:08:36.680 --> 00:08:44.000
+about so one well-used and easy to
+
+00:08:41.800 --> 00:08:46.560
+understand version of this is uh TF-IDF
+
+00:08:44.000 --> 00:08:48.839
+or term frequency inverse document
+
+00:08:46.560 --> 00:08:51.200
+frequency so the way we define term
+
+00:08:48.839 --> 00:08:52.959
+frequency is exactly what I talked about
+
+00:08:51.200 --> 00:08:56.959
+before so it's basically the frequency
+
+00:08:52.959 --> 00:08:59.839
+of the term t in the document d
+
+00:08:56.959 --> 00:09:01.640
+normalized by the total term frequency
+
+00:08:59.839 --> 00:09:03.680
+within the document so that's what
+
+00:09:01.640 --> 00:09:06.800
+I already showed in the previous
+
+00:09:03.680 --> 00:09:09.360
+slide and then inverse document frequency is a
+
+00:09:06.800 --> 00:09:13.760
+little bit more involved but basically
+
+00:09:09.360 --> 00:09:15.760
+the way this works is we have log of the
+
+00:09:13.760 --> 00:09:18.160
+total number of documents in the
+
+00:09:15.760 --> 00:09:24.040
+collection divided
+
+00:09:18.160 --> 00:09:26.760
+by the total number of uh times this
+
+00:09:24.040 --> 00:09:30.279
+term appeared in any particular
+
+00:09:26.760 --> 00:09:33.360
+document and so if a term appears many
+
+00:09:30.279 --> 00:09:36.120
+times in any particular document it will
+
+00:09:33.360 --> 00:09:39.240
+have a low IDF score uh one that's close
+
+00:09:36.120 --> 00:09:41.519
+to zero but if it rarely appears it will
+
+00:09:39.240 --> 00:09:44.120
+have a high IDF score so basically this
+
+00:09:41.519 --> 00:09:45.040
+is upweighting our infrequent terms and
+
+00:09:44.120 --> 00:09:47.560
+then for
+
+00:09:45.040 --> 00:09:51.320
+TF-IDF uh we basically multiply these two
+
+00:09:47.560 --> 00:09:53.120
+terms together and we upweight the low
+
+00:09:51.320 --> 00:09:55.640
+frequency words
+
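+Written out as code, the two terms just defined multiply together like this
+(a toy sketch; it assumes the term occurs in at least one document so the
+IDF denominator is nonzero):
+
+import math
+
+def tf(term, doc_tokens):
+    return doc_tokens.count(term) / len(doc_tokens)
+
+def idf(term, all_doc_tokens):
+    n_containing = sum(term in doc for doc in all_doc_tokens)
+    return math.log(len(all_doc_tokens) / n_containing)  # rare terms score high
+
+def tf_idf(term, doc_tokens, all_doc_tokens):
+    return tf(term, doc_tokens) * idf(term, all_doc_tokens)
+
+docs = [d.lower().split() for d in
+        ["what is life candy is life someone really likes candy",
+         "NLP is an acronym for natural language processing",
+         "I like to do good research on NLP"]]
+print(tf_idf("nlp", docs[1], docs))  # 0.125 * log(3/2), roughly 0.051
+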
+00:09:53.120 --> 00:10:00.519
+there's another version of this
+
+00:09:55.640 --> 00:10:03.640
+called BM25 that is uh widely used
+
+00:10:00.519 --> 00:10:05.800
+um this is more involved so I'm not
+
+00:10:03.640 --> 00:10:08.120
+going to go into all of the details but
+
+00:10:05.800 --> 00:10:12.399
+basically if you remember back to the
+
+00:10:08.120 --> 00:10:13.720
+lecture on count-based language models
+
+00:10:12.399 --> 00:10:14.880
+there were a bunch of smoothing
+
+00:10:13.720 --> 00:10:18.839
+techniques for these count-based
+
+00:10:14.880 --> 00:10:21.839
+language models and this uses uh kind of
+
+00:10:18.839 --> 00:10:25.839
+a multiplicative additive smoothing
+
+00:10:21.839 --> 00:10:27.160
+term to upweight things instead of using
+
+00:10:25.839 --> 00:10:30.200
+the term
+
+00:10:27.160 --> 00:10:33.399
+frequency and uh the actual formula is
+
+00:10:30.200 --> 00:10:37.240
+here k and b are kind of
+
+00:10:33.399 --> 00:10:39.360
+hyperparameters and um average DL is
+
+00:10:37.240 --> 00:10:40.639
+the average document length the details of
+
+00:10:39.360 --> 00:10:42.120
+this are not really important but
+
+00:10:40.639 --> 00:10:43.800
+basically what you should know is that
+
+00:10:42.120 --> 00:10:45.639
+this is doing some smoothing on the term
+
+00:10:43.800 --> 00:10:48.240
+frequencies and you can look in more
+
+00:10:45.639 --> 00:10:48.240
+detail if you're
+
+00:10:49.160 --> 00:10:54.920
+interested so now that we have this sort
+
+00:10:52.880 --> 00:10:57.959
+of term
+
+00:10:54.920 --> 00:11:00.320
+based uh sparse vector we would like to
+
+00:10:57.959 --> 00:11:03.320
+use this to look up relevant documents
+
+00:11:00.320 --> 00:11:06.000
+in a collection very quickly because you
+
+00:11:03.320 --> 00:11:08.000
+know we might have a collection that's
+
+00:11:06.000 --> 00:11:09.720
+extremely large like as large as the
+
+00:11:08.000 --> 00:11:12.320
+entire internet like what Google is
+
+00:11:09.720 --> 00:11:14.160
+doing when it searches and so in order
+
+00:11:12.320 --> 00:11:16.240
+to solve this we need a data structure
+
+00:11:14.160 --> 00:11:17.279
+that allows for efficient sparse lookup
+
+00:11:16.240 --> 00:11:19.480
+of
+
+00:11:17.279 --> 00:11:23.720
+vectors and so we have all of these
+
+00:11:19.480 --> 00:11:27.279
+sparse vectors like this
+
+00:11:23.720 --> 00:11:31.240
+and we uh basically turn this into an
+
+00:11:27.279 --> 00:11:34.720
+index where we have something like a you
+
+00:11:31.240 --> 00:11:37.920
+know Python style dictionary or map that
+
+00:11:34.720 --> 00:11:41.079
+has as its key each uh word we would
+
+00:11:37.920 --> 00:11:45.000
+like to look up and as its value
+
+00:11:41.079 --> 00:11:48.480
+the corresponding um index of that
+
+00:11:45.000 --> 00:11:50.480
+document so for example what in our case
+
+00:11:48.480 --> 00:11:54.200
+here only appears in document one so it
+
+00:11:50.480 --> 00:11:56.279
+would point to document one candy uh
+
+00:11:54.200 --> 00:11:58.560
+also appears in document one NLP appears
+
+00:11:56.279 --> 00:11:59.839
+in two and three and so you can create
+
+00:11:58.560 --> 00:12:02.760
+this index like this and this is
+
+00:11:59.839 --> 00:12:02.760
+called an inverted
+
+00:12:03.079 --> 00:12:08.760
+index this is an important application
+
+00:12:06.000 --> 00:12:11.600
+of course so there's lots of software
+
+00:12:08.760 --> 00:12:14.920
+the most kind of prototypical software for this
+
+00:12:11.600 --> 00:12:18.760
+is Apache Lucene so if you want to build
+
+00:12:14.920 --> 00:12:21.639
+a big index uh to look up vectors using
+
+00:12:18.760 --> 00:12:24.160
+this sparse index like this you can uh
+
+00:12:21.639 --> 00:12:24.160
+take a look at Lucene
+
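+For the collection used in the example, the inverted index is literally a
+dictionary from each word to the set of documents containing it; a minimal
+sketch:
+
+from collections import defaultdict
+
+docs = {1: "what is life candy is life someone really likes candy",
+        2: "NLP is an acronym for natural language processing",
+        3: "I like to do good research on NLP"}
+
+inverted = defaultdict(set)
+for doc_id, text in docs.items():
+    for word in set(text.lower().split()):
+        inverted[word].add(doc_id)  # word -> documents it appears in
+
+print(sorted(inverted["nlp"]))   # [2, 3]
+print(sorted(inverted["what"]))  # [1]
+
+At query time you union the posting lists for the query terms and score only
+those documents, instead of scanning the whole collection.
+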
+00:12:26.160 --> 00:12:30.880
+so the next thing I'd like to talk
+
+00:12:28.399 --> 00:12:33.199
+about is dense retrieval and the way
+
+00:12:30.880 --> 00:12:36.000
+dense retrieval works is you encode the
+
+00:12:33.199 --> 00:12:37.240
+document and query into a dense vector
+
+00:12:36.000 --> 00:12:40.240
+and find the
+nearest
+
+00:12:37.240 --> 00:12:42.160
+neighbor in order to do this encoding
+
+00:12:40.240 --> 00:12:44.639
+you can use a number of things you can
+
+00:12:42.160 --> 00:12:47.440
+use out of the box embeddings or you can
+
+00:12:44.639 --> 00:12:49.959
+use learned embeddings specifically
+
+00:12:47.440 --> 00:12:53.519
+created for the purpose of
+
+00:12:49.959 --> 00:12:56.240
+retrieving and so what we do is we take
+
+00:12:53.519 --> 00:12:57.920
+all of these uh documents here we
+
+00:12:56.240 --> 00:12:59.920
+convert them into embeddings using
+
+00:12:57.920 --> 00:13:04.040
+whatever embedding method that we want
+
+00:12:59.920 --> 00:13:05.920
+to use we then have a query and we take
+
+00:13:04.040 --> 00:13:07.720
+that query and we match it and find the
+
+00:13:05.920 --> 00:13:10.040
+nearest neighbor
+
+00:13:07.720 --> 00:13:13.120
+here so if you're just using out of the
+
+00:13:10.040 --> 00:13:14.839
+box embeddings you don't need to um you
+
+00:13:13.120 --> 00:13:15.880
+know do anything special for retrieval
+
+00:13:14.839 --> 00:13:18.440
+you can just take your favorite
+
+00:13:15.880 --> 00:13:22.800
+embeddings like the Sentence-BERT
+
+00:13:18.440 --> 00:13:25.639
+embeddings or the OpenAI uh Ada
+
+00:13:22.800 --> 00:13:27.240
+embeddings or something like this but
+
+00:13:25.639 --> 00:13:29.519
+actually the type of embeddings you need
+
+00:13:27.240 --> 00:13:32.040
+for retrieval are kind of
+
+00:13:29.519 --> 00:13:33.519
+very special and because of that it's
+
+00:13:32.040 --> 00:13:36.160
+important
+
+00:13:33.519 --> 00:13:38.600
+to if you're very serious about doing a
+
+00:13:36.160 --> 00:13:39.800
+good job of retrieval it's important to use
+
+00:13:38.600 --> 00:13:41.360
+embeddings that were specifically
+
+00:13:39.800 --> 00:13:45.040
+tailored for
+
+00:13:41.360 --> 00:13:47.680
+retrieval and the reason why it is
+
+00:13:45.040 --> 00:13:50.079
+important to do this is severalfold but
+
+00:13:47.680 --> 00:13:53.800
+the most intuitive way to think about it
+
+00:13:50.079 --> 00:13:57.600
+is if we think about uh the things that
+
+00:13:53.800 --> 00:13:59.440
+TF-IDF does TF-IDF is giving a very high
+
+00:13:57.600 --> 00:14:03.000
+weight to
+
+00:13:59.440 --> 00:14:04.959
+contentful words and rare words and
+
+00:14:03.000 --> 00:14:06.639
+we're not guaranteed that any random
+
+00:14:04.959 --> 00:14:10.600
+embedding that we get is going to do
+
+00:14:06.639 --> 00:14:13.800
+that so for example if we just take the
+
+00:14:10.600 --> 00:14:16.160
+average word embeddings of every word in
+
+00:14:13.800 --> 00:14:20.160
+a sequence it's going to give the same
+
+00:14:16.160 --> 00:14:22.320
+weight to all of the words um in the
+
+00:14:20.160 --> 00:14:24.680
+output and in fact common words tend to
+
+00:14:22.320 --> 00:14:27.959
+have slightly higher norms than
+
+00:14:24.680 --> 00:14:29.639
+infrequent words and so that would
+
+00:14:27.959 --> 00:14:31.880
+actually upweight common words which is
+
+00:14:29.639 --> 00:14:34.639
+kind of exactly the opposite thing we
+
+00:14:31.880 --> 00:14:36.480
+want so how do we learn retrieval
+
+00:14:34.639 --> 00:14:39.160
+oriented
+
+00:14:36.480 --> 00:14:40.920
+embeddings the normal way we do this is
+
+00:14:39.160 --> 00:14:43.399
+we select positive and negative
+
+00:14:40.920 --> 00:14:46.839
+documents and then train using a
+
+00:14:43.399 --> 00:14:50.240
+contrastive loss and so an example of
+
+00:14:46.839 --> 00:14:52.519
+this is we have
+a query and then we have
+
+00:14:50.240 --> 00:14:55.519
+negative documents for that query and we
+
+00:14:52.519 --> 00:14:58.199
+have positive documents for that query
+
+00:14:55.519 --> 00:15:00.079
+and uh we formulate a hinge loss or
+
+00:14:58.199 --> 00:15:04.000
+maybe some sort of probabilistic loss
+
+00:15:00.079 --> 00:15:06.560
+similar to the hinge loss and uh do fine
+
+00:15:04.000 --> 00:15:06.560
+tuning of the
+
+00:15:07.160 --> 00:15:13.440
+embeddings so if
+
+00:15:09.399 --> 00:15:16.320
+you have gold standard positive
+
+00:15:13.440 --> 00:15:18.800
+documents then this is relatively easy
+
+00:15:16.320 --> 00:15:21.040
+to train uh because you just need the
+
+00:15:18.800 --> 00:15:23.800
+positive documents and then you can get
+
+00:15:21.040 --> 00:15:25.959
+negative documents in a number of ways
+
+00:15:23.800 --> 00:15:29.279
+one common way of getting negative
+
+00:15:25.959 --> 00:15:32.279
+documents is you just form a batch of
+
+00:15:29.279 --> 00:15:34.560
+data and given that batch of data you
+
+00:15:32.279 --> 00:15:37.480
+take all of the other documents in the
+
+00:15:34.560 --> 00:15:39.480
+batch um all of the documents in the
+
+00:15:37.480 --> 00:15:42.839
+batch that are positive for some other
+
+00:15:39.480 --> 00:15:46.399
+query and you use those as negative
+
+00:15:42.839 --> 00:15:49.000
+documents so you sample 32 query
+
+00:15:46.399 --> 00:15:50.759
+document pairs you use the aligned ones
+
+00:15:49.000 --> 00:15:53.759
+as positive documents and then use the
+
+00:15:50.759 --> 00:15:57.440
+31 other ones as negative documents and
+
+00:15:53.759 --> 00:16:00.279
+this is both effective and efficient
+
+00:15:57.440 --> 00:16:02.000
+because you can kind of learn from the
+
+00:16:00.279 --> 00:16:05.079
+query document pairs all at the same
+
+00:16:02.000 --> 00:16:05.079
+time in an efficient
+
+00:16:05.680 --> 00:16:13.680
+implementation however this is not
+
+00:16:09.160 --> 00:16:16.279
+enough in many cases because that will
+
+00:16:13.680 --> 00:16:19.040
+end up having lots of very kind of
+
+00:16:16.279 --> 00:16:20.440
+obviously wrong documents because you
+
+00:16:19.040 --> 00:16:23.120
+know
+
+00:16:20.440 --> 00:16:25.360
+they're documents that are relevant for
+
+00:16:23.120 --> 00:16:27.880
+a completely different query and it's
+
+00:16:25.360 --> 00:16:29.880
+kind of easy to distinguish uh between
+
+00:16:27.880 --> 00:16:32.319
+those you can just look at superficial word
+
+00:16:29.880 --> 00:16:34.519
+overlap so another common thing to do
+
+00:16:32.319 --> 00:16:35.759
+when you're training these models is to
+
+00:16:34.519 --> 00:16:38.160
+get hard
+
+00:16:35.759 --> 00:16:40.680
+negatives so hard negatives are
+
+00:16:38.160 --> 00:16:44.360
+basically negative examples that look
+
+00:16:40.680 --> 00:16:49.399
+plausible but are actually wrong and
+
+00:16:44.360 --> 00:16:53.199
+so here uh this famous method called DPR
+
+00:16:49.399 --> 00:16:55.880
+it basically learns the uh encoders
+
+00:16:53.199 --> 00:16:57.759
+based on both in-batch negatives like I
+
+00:16:55.880 --> 00:17:00.160
+mentioned before and hard negatives that
+
+00:16:57.759 --> 00:17:01.360
+were created by looking up documents
+
+00:17:00.160 --> 00:17:03.839
+with
+
+00:17:01.360 --> 00:17:06.039
+BM25 and so the ones that were looked up
+
+00:17:03.839 --> 00:17:07.640
+by BM25 you know kind of look very
+
+00:17:06.039 --> 00:17:10.039
+similar superficially but they might
+
+00:17:07.640 --> 00:17:12.400
+have you know subtle errors in them for
+
+00:17:10.039 --> 00:17:12.400
+why they're inappropriate
+
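+The in-batch negative trick described above is usually implemented as a
+cross-entropy loss over the batch similarity matrix (the probabilistic
+cousin of the hinge loss mentioned earlier); a minimal PyTorch sketch:
+
+import torch
+import torch.nn.functional as F
+
+def in_batch_contrastive_loss(q_emb, d_emb):
+    # q_emb, d_emb: (batch, dim); row i of d_emb is the positive document
+    # for query i, and the other batch - 1 rows act as (easy) negatives
+    scores = q_emb @ d_emb.T               # (batch, batch) similarities
+    labels = torch.arange(q_emb.shape[0])  # the diagonal is correct
+    return F.cross_entropy(scores, labels)
+
+loss = in_batch_contrastive_loss(torch.randn(32, 128), torch.randn(32, 128))
+
+Hard negatives, as in DPR, are simply extra document rows appended to the
+batch that are not the positive for any query.
+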
+00:17:12.799 --> 00:17:17.160
+there's also methods to
+
+00:17:15.679 --> 00:17:20.000
+learn these
+
+00:17:17.160 --> 00:17:23.199
+retrievers based on kind of not
+
+00:17:20.000 --> 00:17:26.199
+supervised data so one major bottleneck
+
+00:17:23.199 --> 00:17:29.000
+if you're taking the positive documents
+
+00:17:26.199 --> 00:17:30.440
+from human annotations of whether
+
+00:17:29.000 --> 00:17:33.440
+something is correct or not or human
+
+00:17:30.440 --> 00:17:37.880
+clickthrough logs or other things like
+
+00:17:33.440 --> 00:17:40.640
+this is that you need that data in order
+
+00:17:37.880 --> 00:17:44.440
+to start training a model so uh
+
+00:17:40.640 --> 00:17:47.880
+Contriever is another method that uses
+
+00:17:44.440 --> 00:17:51.520
+two random spans within a document as a
+
+00:17:47.880 --> 00:17:54.440
+positive pair and random spans from
+
+00:17:51.520 --> 00:17:56.559
+across documents as negative pairs and
+
+00:17:54.440 --> 00:17:58.960
+so this can be used for you know very
+
+00:17:56.559 --> 00:18:00.039
+very large scale initial pre-training of
+
+00:17:58.960 --> 00:18:02.280
+the
+
+00:18:00.039 --> 00:18:04.520
+models and then after you've done that
+
+00:18:02.280 --> 00:18:06.840
+large scale initial pre-training you can
+
+00:18:04.520 --> 00:18:10.799
+then go in and fine-tune it on you know
+
+00:18:06.840 --> 00:18:10.799
+actually annotated data to improve it
+
+00:18:12.120 --> 00:18:18.799
+further okay so we've talked about
+
+00:18:15.159 --> 00:18:21.559
+training uh these dense product uh
+
+00:18:18.799 --> 00:18:24.559
+models these uh models that look at
+
+00:18:21.559 --> 00:18:27.720
+dense embedding overlap for nearest
+
+00:18:24.559 --> 00:18:28.919
+neighbors but the problem is in order to
+
+00:18:27.720 --> 00:18:30.919
+calculate this you would need to
+
+00:18:28.919 --> 00:18:35.159
+calculate it over a very very large
+
+00:18:30.919 --> 00:18:37.960
+document base and just taking a product
+
+00:18:35.159 --> 00:18:40.480
+between the query and all of the other
+
+00:18:37.960 --> 00:18:42.400
+documents in the document base is
+
+00:18:40.480 --> 00:18:46.080
+extremely
+
+00:18:42.400 --> 00:18:48.080
+costly and so in order to fix this there
+
+00:18:46.080 --> 00:18:49.080
+are methods for approximate nearest
+
+00:18:48.080 --> 00:18:52.280
+neighbor
+
+00:18:49.080 --> 00:18:54.200
+search and these are methods that allow
+
+00:18:52.280 --> 00:18:57.360
+you to retrieve embeddings that have the
+
+00:18:54.200 --> 00:19:00.280
+maximum inner product between them in
+
+00:18:57.360 --> 00:19:02.520
+sublinear time and because you're doing
+
+00:19:00.280 --> 00:19:03.960
+the maximum inner product this is also
+
+00:19:02.520 --> 00:19:06.600
+often called maximum inner product
+
+00:19:03.960 --> 00:19:06.600
+search or
+
+00:19:06.679 --> 00:19:12.360
+MIPS so I'm going to introduce on a
+
+00:19:09.440 --> 00:19:15.360
+very high level two common methods to do
+
+00:19:12.360 --> 00:19:19.320
+this the first one is locality sensitive
+
+00:19:15.360 --> 00:19:22.440
+hashing um or this can also be called
+
+00:19:19.320 --> 00:19:24.799
+kind of inverted index as well and what
+
+00:19:22.440 --> 00:19:26.840
+you do is you make partitions in
+
+00:19:24.799 --> 00:19:29.320
+continuous space and then you use it
+
+00:19:26.840 --> 00:19:31.240
+like an inverted index
+
+00:19:29.320 --> 00:19:33.679
+so let's say we have a whole bunch of
+
+00:19:31.240 --> 00:19:34.919
+embeddings uh I demonstrated two
+
+00:19:33.679 --> 00:19:36.640
+dimensional embeddings here but in
+
+00:19:34.919 --> 00:19:38.440
+reality this would be you know as large
+
+00:19:36.640 --> 00:19:41.159
+as your word
+
+00:19:38.440 --> 00:19:42.880
+embedding your query and document
+
+00:19:41.159 --> 00:19:47.120
+embedding space so this would be you
+
+00:19:42.880 --> 00:19:49.760
+know 512 or 1024 or something like that
+
+00:19:47.120 --> 00:19:53.480
+and what you do is you define a whole
+
+00:19:49.760 --> 00:19:56.720
+bunch of planes that separate these
+
+00:19:53.480 --> 00:19:59.320
+points into two spaces so if this is our
+
+00:19:56.720 --> 00:20:02.520
+first plane all the points above the
+
+00:19:59.320 --> 00:20:04.280
+plane will get a one for this partition
+
+00:20:02.520 --> 00:20:06.799
+and all the points below the plane will
+
+00:20:04.280 --> 00:20:08.840
+get a zero for this partition and we do
+
+00:20:06.799 --> 00:20:12.400
+it similarly we create a whole bunch
+
+00:20:08.840 --> 00:20:15.840
+of them and then based on this we can
+
+00:20:12.400 --> 00:20:18.440
+now assign sparse vectors depending on
+
+00:20:15.840 --> 00:20:21.520
+each of these planes so we have uh for
+
+00:20:18.440 --> 00:20:24.000
+example the top one uh 100 because
+
+00:20:21.520 --> 00:20:26.400
+it's on the right side of the blue plane
+
+00:20:24.000 --> 00:20:28.760
+and the um wrong side of the red and the
+
+00:20:26.400 --> 00:20:30.679
+green planes and then for the top right
+
+00:20:28.760 --> 00:20:32.799
+we have 101 because it's on the right
+
+00:20:30.679 --> 00:20:37.159
+side of the blue and the green planes and
+
+00:20:32.799 --> 00:20:39.440
+the wrong side of the red plane and so
+
+00:20:37.159 --> 00:20:41.000
+based on this now we have a sparse
+
+00:20:39.440 --> 00:20:42.600
+vector and we already know what to do
+
+00:20:41.000 --> 00:20:44.640
+with a sparse vector right we look it up
+
+00:20:42.600 --> 00:20:49.039
+in an inverted index just like we did
+
+00:20:44.640 --> 00:20:51.520
+for a sparse um you know sparse lookup
+
+00:20:49.039 --> 00:20:54.520
+table so that's one
+
+00:20:51.520 --> 00:20:57.799
+method another method uses a graph-based
+
+00:20:54.520 --> 00:21:01.320
+search and the basic idea behind this is
+
+00:20:57.799 --> 00:21:02.480
+that we create hubs uh and these hubs
+
+00:21:01.320 --> 00:21:05.200
+are kind
+
+00:21:02.480 --> 00:21:07.960
+of a small number of points that are
+
+00:21:05.200 --> 00:21:09.440
+close to other points in the space and
+
+00:21:07.960 --> 00:21:10.880
+so we create some hubs and then we
+
+00:21:09.440 --> 00:21:12.200
+search from there so if we have a
+
+00:21:10.880 --> 00:21:16.880
+similar
+
+00:21:12.200 --> 00:21:19.159
+looking uh set of points in the space we
+
+00:21:16.880 --> 00:21:21.520
+find these hubs which are something like
+
+00:21:19.159 --> 00:21:24.880
+cluster centroids and then based on the
+
+00:21:21.520 --> 00:21:28.559
+cluster centroids we then narrow down or
+
+00:21:24.880 --> 00:21:31.200
+we greatly reduce the number of
+
+00:21:28.559 --> 00:21:33.400
+points that we need to be looking at and
+
+00:21:31.200 --> 00:21:36.960
+then we search through only those points
+
+00:21:33.400 --> 00:21:38.600
+in a more kind of extensive manner
+
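+The hyperplane partitioning just described fits in a few lines of NumPy; a
+toy sketch of locality sensitive hashing (the dimensions and counts are
+illustrative, and real libraries like FAISS do this far more cleverly):
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+planes = rng.normal(size=(3, 128))  # 3 random hyperplanes in 128-d space
+
+def lsh_key(vec):
+    # one bit per plane: which side of each hyperplane the point falls on
+    return tuple((planes @ vec > 0).astype(int))
+
+buckets = {}
+for i, v in enumerate(rng.normal(size=(1000, 128))):
+    buckets.setdefault(lsh_key(v), []).append(i)  # inverted index over keys
+
+query = rng.normal(size=128)
+candidates = buckets.get(lsh_key(query), [])  # only score this bucket
+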
+00:21:36.960 --> 00:21:41.840
+and you can even turn this into a tree where
+
+00:21:38.600 --> 00:21:43.760
+you have hubs and then you have uh kind
+
+00:21:41.840 --> 00:21:46.600
+of mini hubs and then you have all the
+
+00:21:43.760 --> 00:21:50.200
+points so this allows you to do a kind
+
+00:21:46.600 --> 00:21:50.200
+of tree-based or graph-based
+
+00:21:50.600 --> 00:21:55.840
+search so obviously unless you're really
+
+00:21:54.159 --> 00:21:57.039
+excited about these algorithms this is
+
+00:21:55.840 --> 00:22:00.080
+something that you probably don't want
+
+00:21:57.039 --> 00:22:01.440
+to be implementing yourself um and the
+
+00:22:00.080 --> 00:22:03.000
+good news is there's lots of very good
+
+00:22:01.440 --> 00:22:04.480
+libraries that help you do this in fact
+
+00:22:03.000 --> 00:22:08.799
+there are so many libraries it's hard to
+
+00:22:04.480 --> 00:22:11.960
+manage them but some libraries that
+
+00:22:08.799 --> 00:22:13.799
+people very commonly use I think
+
+00:22:11.960 --> 00:22:17.320
+uh FAISS
+
+00:22:13.799 --> 00:22:20.200
+is a widely used one created by uh
+
+00:22:17.320 --> 00:22:23.760
+FAIR at Meta and Chroma DB is a
+
+00:22:20.200 --> 00:22:27.720
+separate one uh that is kind of an AI
+
+00:22:23.760 --> 00:22:30.720
+native uh embedding search database so
+
+00:22:27.720 --> 00:22:30.720
+both those are good
+
+00:22:32.960 --> 00:22:41.120
+options even with intelligent training
+
+00:22:37.880 --> 00:22:42.640
+of dense embeddings however there still
+
+00:22:41.120 --> 00:22:45.600
+are
+
+00:22:42.640 --> 00:22:48.240
+problems and the biggest
+
+00:22:45.600 --> 00:22:51.720
+problem that you face when you're
+
+00:22:48.240 --> 00:22:54.000
+looking at something like uh cross
+
+00:22:51.720 --> 00:22:56.880
+encoders um that sorry when you're
+
+00:22:54.000 --> 00:23:00.240
+looking at dense embeddings is that in
+
+00:22:56.880 --> 00:23:02.159
+order to form a good dense embedding you
+
+00:23:00.240 --> 00:23:03.840
+need to kind of know in advance what
+
+00:23:02.159 --> 00:23:05.799
+you're looking for right because you're
+
+00:23:03.840 --> 00:23:09.120
+taking a long document you're condensing
+
+00:23:05.799 --> 00:23:10.679
+it down into a single embedding or a
+
+00:23:09.120 --> 00:23:13.320
+long passage and you're condensing it
+
+00:23:10.679 --> 00:23:16.200
+down to a single embedding and so if
+
+00:23:13.320 --> 00:23:19.520
+during that condensation process
+
+00:23:16.200 --> 00:23:21.240
+actually there's other information that
+
+00:23:19.520 --> 00:23:23.159
+is relevant to a query but you have to
+
+00:23:21.240 --> 00:23:27.600
+throw out because of the limited
+
+00:23:23.159 --> 00:23:30.600
+embedding capacity this causes you to
+
+00:23:27.600 --> 00:23:32.320
+you know essentially fail at um doing
+
+00:23:30.600 --> 00:23:34.840
+retrieval
+
+00:23:32.320 --> 00:23:38.159
+appropriately so there's a couple
+
+00:23:34.840 --> 00:23:40.880
+methods that can be used to fix this so
+
+00:23:38.159 --> 00:23:42.279
+the first method is in contrast to the
+
+00:23:40.880 --> 00:23:44.159
+bi-encoder which is what I've been
+
+00:23:42.279 --> 00:23:47.000
+talking about at this point where
+
+00:23:44.159 --> 00:23:48.520
+you kind of do full encoding of queries
+
+00:23:47.000 --> 00:23:52.120
+full encoding of documents and then do
+
+00:23:48.520 --> 00:23:53.840
+inner product search for a score uh you
+
+00:23:52.120 --> 00:23:56.760
+can use a cross-encoder and the way the
+
+00:23:53.840 --> 00:23:58.559
+cross-encoder works is you append the
+query and document and then you run them
+
+00:23:58.559 --> 00:24:03.400
+through a model like a Transformer model
+
+00:24:00.799 --> 00:24:07.840
+and you calculate the output
+
+00:24:03.400 --> 00:24:09.880
+score so the problem with this um so
+
+00:24:07.840 --> 00:24:12.480
+this is great uh because it gives
+
+00:24:09.880 --> 00:24:15.799
+you maximum flexibility um Transformer
+
+00:24:12.480 --> 00:24:18.799
+models are powerful you can uh assess
+
+00:24:15.799 --> 00:24:20.520
+relevance very well the problem with
+
+00:24:18.799 --> 00:24:22.200
+this is this precludes approximate
+
+00:24:20.520 --> 00:24:23.720
+nearest neighbor lookup because now
+
+00:24:22.200 --> 00:24:25.799
+you're running through you know many
+
+00:24:23.720 --> 00:24:28.880
+many nonlinearities
+
+00:24:25.799 --> 00:24:32.760
+here so this can only be used for
+
+00:24:28.880 --> 00:24:34.360
+reranking documents um or even if
+
+00:24:32.760 --> 00:24:36.880
+you're doing retrieval doing retrieval
+
+00:24:34.360 --> 00:24:39.679
+over a very very small number of
+
+00:24:36.880 --> 00:24:41.960
+documents but if you really want maximal
+
+00:24:39.679 --> 00:24:44.080
+accuracy I definitely would recommend uh
+
+00:24:41.960 --> 00:24:45.720
+doing something like this because it can
+
+00:24:44.080 --> 00:24:47.960
+allow you to do kind of a second pass
+
+00:24:45.720 --> 00:24:49.360
+filtering over the most relevant looking
+
+00:24:47.960 --> 00:24:52.399
+documents to identify the ones you
+
+00:24:49.360 --> 00:24:52.399
+really want to add to your context
+
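+A typical way to use a cross-encoder as the second pass reranker just
+described (this assumes the sentence-transformers package and one of its
+public MS MARCO checkpoints; any similar model would do):
+
+from sentence_transformers import CrossEncoder
+
+model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
+
+query = "what is NLP"
+candidates = ["NLP is an acronym for natural language processing",
+              "I like to do good research on NLP"]
+# each (query, document) pair runs through the Transformer jointly, which is
+# accurate but far too slow to apply to a whole collection
+scores = model.predict([(query, d) for d in candidates])
+reranked = [d for _, d in sorted(zip(scores, candidates), reverse=True)]
+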
The downside to this method may already be obvious: in the traditional bi-encoder we have a single vector per document, but here we have one vector per token in the document. So your vector database basically gets n times larger, where n is the number of tokens per document. There are methods to make this better, like compressing each document down to a smaller number of vectors, but it is still definitely going to be more costly than storing and looking up one vector per document. So this is something to consider; if you want very good scores, ColBERT is a good implementation to start with if you're interested in trying it out.

This is a final thing, something a little different from everything I talked about before, but I've used it myself and it can actually be pretty effective. It was also made at CMU, by Luyu Gao, so of course I would like to promote our CMU work. The idea behind a hypothetical document embedding (HyDE) is that it's actually somewhat difficult to match a query and a document: a query is a very short, possibly ungrammatical string that asks a question, whereas a document is a very long output written in a different prose style, possibly with lots of irrelevant information, boilerplate, or fluff. So the idea behind a hypothetical document embedding is that it's easier to match a document to a document than to match a query to a document. But the input to our model is a query, so what do we do? Essentially, we take a large language model, feed it the query in a prompt, and say: generate a document that looks like it should be the answer to this query. The LLM then generates a document, and hopefully this document looks more similar to the documents you want to retrieve than the original query does.
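Here is a minimal sketch of that idea. The helpers `generate(prompt)` (any instruction-following LLM) and `embed(text)` (the document encoder of a bi-encoder), and the `index` object, are hypothetical placeholders, not a specific library's API.

```python
def hyde_search(query, generate, embed, index, k=10):
    """HyDE-style retrieval: embed a generated pseudo-document, not the query."""
    # 1. Ask an LLM to write a fake passage that answers the query.
    prompt = f"Write a short passage that answers this question:\n{query}"
    hypothetical_doc = generate(prompt)
    # 2. Embed the hypothetical document in place of the raw query.
    q_vec = embed(hypothetical_doc)
    # 3. Do ordinary dense retrieval against the real document index.
    return index.search(q_vec, k)
```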
I've actually found this to be relatively effective at improving accuracy on difficult tasks, especially ones that are out of domain for the trained embedding models I'm using.

So, I've gone through a whole bunch of methods, and I'd like to finish this section by giving some insight into which one you should use. My impression right now is that a good baseline to start with is something like BM25. It's very easy to get started with, and compared to embedding-based models it tends to be relatively robust to new domains: on a new domain you're more or less guaranteed that BM25 will give you some performance, whereas embeddings may be really good or really bad, depending on how far out of domain your data is relative to the underlying embedding model. However, if you want the highest accuracy, tuned models are definitely going to be better, and if you're not worried about computational efficiency, something like ColBERT with token-level retrieval will definitely give you good accuracy. That said, there is better support for bi-encoder-style models in standard vector databases like FAISS and Chroma, so if you want an easy way to get started quickly, using a bi-encoder is probably the best way to go.
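As a quick start for the BM25 baseline mentioned above, here is a tiny example using the `rank_bm25` package (one of several open-source BM25 implementations); the whitespace tokenizer is just for illustration.

```python
from rank_bm25 import BM25Okapi

corpus = [
    "BM25 is a classic sparse retrieval method.",
    "Dense retrieval embeds queries and documents as vectors.",
    "ColBERT performs late interaction over token embeddings.",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]  # toy tokenization
bm25 = BM25Okapi(tokenized_corpus)

query = "what is sparse retrieval".lower().split()
scores = bm25.get_scores(query)              # one relevance score per document
best = max(range(len(corpus)), key=lambda i: scores[i])
print(scores, "->", corpus[best])
```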
OK, now moving on to actual retrieval-augmented generation models: we have retriever-reader models. The simplest way these can work is that you just chain retrieval and reading together, using an out-of-the-box retriever and an out-of-the-box reader model. You take your query, look something up (for example on Google), get a whole bunch of passages, feed them into a GPT-style model, and get an answer. Overall this is quite effective: it's easy to implement and it will give you decent results, so it's definitely worth thinking about. For assignment two in the class, you're required to use only public models or open-source implementations, so you could replace the search engine with Apache Lucene and the reader with any standard open LLM, such as Llama or Llama-Chat, or Mistral or Mixtral; feel free to do something like that. Of course, the passages are concatenated to the context, and because of that the context can get relatively long and expensive, but that's just something you have to deal with when you're using RAG.
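Here is a minimal sketch of that chaining, assuming two hypothetical helpers: `retriever.search(query, k)` returning a list of passages, and `llm(prompt)` returning a string. These are stand-ins for whatever retriever and open LLM you pick.

```python
def retrieve_and_read(query, retriever, llm, k=5):
    """Chain an out-of-the-box retriever and reader into a simple RAG pipeline."""
    passages = retriever.search(query, k)     # e.g. BM25 / Lucene / a bi-encoder
    context = "\n\n".join(passages)           # concatenate passages into the prompt
    prompt = (
        "Answer the question using only the passages below.\n\n"
        f"Passages:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
    return llm(prompt)
```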
There are also methods for retriever-and-generator end-to-end training. This is actually the paper where the name RAG came from, and I'll use it as the example here. Basically, several methods propose to train the retriever and the reader together to improve accuracy. Specifically, in the RAG paper by Lewis et al., the reader was trained to maximize generation likelihood given a single retrieved document, and the retriever maximized the overall likelihood by optimizing the mixture weights over documents. Here's the schematic: you have your query encoder, you run the retriever with maximum inner-product search, it gives you several documents, each with a score, and then, based on the documents and the scores, you generate with each document in the context and sum the probabilities multiplied by the weights. I have the actual equations, because I think it will be a little easier to understand after looking at them. Generation is a mixture model: you pick a document and generate from it. The term p(z|x) is the probability of picking document z given the query x, and p_theta(y_i | x, z, y_{1:i-1}) is the probability of the next token given that particular document. So you can see that this is basically linearly interpolating between the multiple documents; one part can be considered the retriever, and the other the generator, the reader.

One really important thing here, which enables end-to-end training, is that the retriever probability is based on embeddings. We have the document embedding and the query embedding, and the probability is proportional to the exponentiated inner product of the two; you're basically taking a softmax over the inner products. Training adjusts the retriever to give higher similarity to helpful documents. And because the retriever's probability is included in the end-to-end probability, you don't actually need any annotations about which documents are useful: you can just train all of this end to end and let backprop do its thing to update the retriever as well.
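For reference, the equations being described can be written as follows; this is the token-level mixture from the RAG paper, where in practice the sum runs over the top-k retrieved documents z.

```latex
% Token-level mixture over retrieved documents (RAG-token form):
p(y_i \mid x, y_{1:i-1}) \;=\; \sum_{z} p_\eta(z \mid x)\,
                               p_\theta(y_i \mid x, z, y_{1:i-1})

% Retriever: a softmax over inner products of document and query embeddings:
p_\eta(z \mid x) \;\propto\; \exp\!\left( \mathbf{d}(z)^\top \mathbf{q}(x) \right)
```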
One important issue when training models like this is that the search index becomes stale. What do I mean by this? If we go back to the earlier picture of dense retrieval, creating the search index (the blue box on the right side of the figure) is very costly: say you want to embed a million documents, or a billion documents if you're a big search-engine company. Doing that is very slow. In contrast, lookup with these approximate nearest-neighbor searches is sublinear, or even logarithmic, time, so it can be done relatively quickly. So it's fine to do lookup over this big index, but if you start updating the document embeddings, you need to recreate the entire index, and that would be very computationally costly. The solution proposed in the RAG paper by Lewis et al. is to train only the query embeddings and keep the document embeddings fixed. There are other alternatives: a paper called REALM, from early in retrieval-based modeling, basically ran an asynchronous process that kept using the most recent document embedder to re-update the search index during training. But that is a rather onerous process, so I think it's quite common to use a fixed document embedding and update only the query side.
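In PyTorch terms, "keep the document index fixed" is just freezing the document tower while the query tower keeps training. A minimal sketch, with toy linear layers standing in for the two encoders:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two towers of a bi-encoder.
doc_encoder = nn.Linear(768, 256)
query_encoder = nn.Linear(768, 256)

# Freeze the document tower so the precomputed document index stays valid.
for p in doc_encoder.parameters():
    p.requires_grad = False

# Only the query tower receives gradient updates during end-to-end training.
optimizer = torch.optim.AdamW(query_encoder.parameters(), lr=1e-5)
```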
Another thing to think about is when to do retrieval. There are a bunch of different methods. The RAG paper I mentioned before did it only once, right at the very beginning of generation: it retrieved and then generated the entire output. This is the default method used by most systems. However, there are other options as well. You can retrieve several times during generation, as necessary, either by generating a search token that says we should start searching, or by searching when the model is uncertain. And you can retrieve every token, either by finding similar final embeddings and using them to influence the probabilities, or by approximating attention with nearest neighbors. I'll explain each of these in a bit more detail.

Triggering retrieval with generated tokens was proposed by Toolformer (Schick et al.). The way it works is that the model generates tokens that trigger retrieval or other tools. That particular method had several tools, including asking a QA model, calling a calculator, and calling a machine translation system, but with respect to retrieval-augmented generation, it had essentially a wiki-search functionality that would look something up on Wikipedia and use it to influence the final probabilities. Training was done in an iterative manner: the model generated examples of tool use, and when a tool call improved the probability of the following output, that call was treated as a positive example and used to further train the model. This was really influential; in fact, this is how things are implemented in ChatGPT nowadays, not only for retrieval but also for other tools, for example generating code or generating images.
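A toy version of the search-token idea: let the model emit a special marker, intercept it, run the search, and splice the results back in before continuing. The marker syntax and the `llm`/`search` helpers are hypothetical illustrations, not Toolformer's actual format.

```python
import re

SEARCH = re.compile(r"\[SEARCH:(.*?)\]")        # hypothetical tool-call marker

def generate_with_search(prompt, llm, search, max_rounds=3):
    """Run the LLM, executing [SEARCH: ...] calls it emits along the way."""
    text = prompt
    out = llm(text)
    for _ in range(max_rounds):
        m = SEARCH.search(out)
        if m is None:
            return out                          # no tool call: finished
        results = search(m.group(1).strip())    # execute the retrieval tool
        # Keep the text up to the call, append results, continue generating.
        text += out[: m.start()] + f"\n[RESULTS: {results}]\n"
        out = llm(text)
    return out
```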
Another option is to trigger retrieval with uncertainty estimates. FLARE, a paper by my student Zhengbao Jiang, tries to generate content and then do retrieval if the language model's certainty is low. Here's a schematic of how this works. Given some retrieved documents, we might say: generate a summary about Joe Biden. When it generates the summary, maybe for the first sentence the language model has high confidence, and because of that we just keep the generated output. In the next step, however, it might generate something like "Joe Biden attended the University of Pennsylvania, where he earned a law degree," without being very certain about it: it might assign low probability to certain important entities. Based on this, we form a query: essentially, we blank out the low-probability parts and do a search. This is also a little like the hypothetical document embeddings method, in that we create text that we think will look similar to the document we want to find, use it to get search results, and then generate the output, and we keep going like that. Whenever we have a high-confidence output, like the one here, we don't do any retrieval; we just generate directly from the parameters of the model. But whenever we have low-confidence outputs, we do the retrieval and base the output on it. I think this is a nice method that could potentially be used. The downside is that you might sometimes need to generate twice, because you generate the output once, find the low-confidence parts, and then generate again. But if you really care about the quality of the output, this is, I think, a reasonable thing to do.
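A rough sketch of uncertainty-triggered retrieval in the spirit of FLARE (this is not the paper's exact procedure; `llm_step`, which returns one sentence plus its per-token probabilities and a done flag, and `retrieve` are hypothetical helpers):

```python
def uncertainty_triggered_generate(question, llm_step, retrieve, threshold=0.4):
    """Generate sentence by sentence; retrieve only when confidence is low."""
    context, answer = "", ""
    done = False
    while not done:
        sentence, token_probs, done = llm_step(question, context, answer)
        if min(token_probs) < threshold:
            # Low confidence: use the tentative sentence as a search query
            # (FLARE also masks out the lowest-probability spans first).
            context = "\n".join(retrieve(sentence))
            sentence, token_probs, done = llm_step(question, context, answer)
        answer += sentence        # keep the (possibly regenerated) sentence
    return answer
```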
OK, now moving on to the token-by-token retrieval methods. The original method here, or at least one of the methods that popularized the idea, is called the kNN-LM. The way it works is that it retrieves similar examples and then uses the tokens that followed those examples, so in a way it's like a very powerful count-based bigram model. If you remember back to when we talked about count-based language models: we would take the previous token and calculate the probability of the next token by adding up all of the observed continuations and dividing by the total number of times that previous token occurred. Given that background, here is how the kNN-LM works. We have the test context x, and we want to generate a target output. Separately, we have all of the training contexts, that is, every context that appeared in our training data, and we encode each of them, specifically as the representation at (or near) the final layer of the model. Alongside each representation, we remember the next word that appeared after that context. So now we have a datastore of representations and next words. We then take the representation of the current context, calculate the distance between it and the similar contexts in the datastore, and take the nearest k, the top-k examples, which here would be Hawaii, Illinois, and Hawaii. We then do a normalization based on distance, which gives us a probability distribution over next tokens; since tokens can be duplicated multiple times, we aggregate, for example Hawaii 0.8 and Illinois 0.2. Finally, we interpolate this with the probability given by the standard language model, using an interpolation coefficient lambda, and that gives the final probability. The nice thing is that this lets us explicitly ground our outputs in individual examples, and it's a pretty effective way to improve the likelihood of language models, improve translation, and other things like that. The disadvantage is that it adds an extra component to the model, along with extra hyperparameters like lambda, so it's a little finicky and doesn't work in all situations.
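A small numeric sketch of the kNN-LM combination (toy data; the softmax over negative distances follows the general recipe, though details such as the distance function vary):

```python
import numpy as np
from collections import defaultdict

def knn_lm_distribution(query_vec, keys, next_tokens, p_lm, k=3, lam=0.25):
    """Interpolate a kNN distribution from a datastore with the base LM."""
    dists = np.linalg.norm(keys - query_vec, axis=1)  # distance to every stored context
    nearest = np.argsort(dists)[:k]                   # indices of the k closest
    weights = np.exp(-dists[nearest])                 # softmax over negative distance
    weights /= weights.sum()
    p_knn = defaultdict(float)
    for w, idx in zip(weights, nearest):              # aggregate duplicate tokens
        p_knn[next_tokens[idx]] += w
    vocab = set(p_lm) | set(p_knn)
    return {t: lam * p_knn[t] + (1 - lam) * p_lm.get(t, 0.0) for t in vocab}
```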
Another method, proposed by Amanda Bertsch, who gave the previous lecture on generation in this class, is Unlimiformer. What Unlimiformer does is note that attention itself is an inner-product search, and so it does top-k attention. The way this works is that we first process the input with a sliding window and then perform attention using a vector index. If we have a really long input to encode, we first encode chunks: we encode, for example, AB, then CD, then EF, and we concatenate them into one big index over the single long input. In a way this is similar to what they did in the kNN-LM, concatenating all of these embeddings for one input, but the difference is that here it is done with the values we are attending to, as opposed to just the final layer. The interesting thing is that we now have an index over one long input, and when we want to do the next attention operation, we do kNN search from the attention query, take the retrieved hidden states, and just do attention over them. The nice thing is that, in the extreme case, this makes no changes to the model. What I mean is: suppose the input were small enough that we could encode it in a single chunk, and suppose the kNN search were exact search over all of the embeddings in that chunk. In that case this would just be normal attention, exactly the same as normal attention. In practice there are some approximations: when we encode chunks, they might not come out exactly the same as if we had encoded the entire input together, and we are also chopping off the values with very low inner products. So some approximations are being made, but in the limit of no approximations, this is exactly the same model as before.
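A sketch of the core top-k attention step, assuming the long input's keys and values have already been encoded chunk by chunk and stacked into tensors. Exact search is used here for clarity; a real implementation would query an approximate nearest-neighbor index instead.

```python
import torch
import torch.nn.functional as F

def topk_attention(q, keys, values, k=16):
    """q: (d,); keys, values: (n, d) for a very long encoded input."""
    scores = keys @ q                       # attention scores = inner-product search
    top = torch.topk(scores, k=min(k, keys.shape[0]))
    probs = F.softmax(top.values, dim=0)    # softmax over the retrieved states only
    return probs @ values[top.indices]      # attend over just the top-k subset

out = topk_attention(torch.randn(64), torch.randn(10_000, 64), torch.randn(10_000, 64))
```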
I find this pretty attractive, and empirically it gives very good results over long distances. We can always make the approximations better and improve the model further, so I think it's an attractive method that you might be interested in taking a look at.

OK, for the final part, I'd like to talk about long-context Transformers. These are models explicitly trained in a way that lets you attend to longer contexts in an efficient manner. One way to train over longer contexts is to simply append all of the context together; in fact, shortly after Transformers came out, a paper by Voita et al. demonstrated that doing this can learn interesting document-level phenomena, such as identifying when multiple words refer to the same thing (coreference). However, the problem with Transformers is that computation is quadratic in the sequence length, because you're multiplying all of the query vectors by all of the key vectors, and that becomes a big problem when your sequences get very long.

If we go back to RNNs, from the very beginning of the class: RNNs don't have this problem, because computation is linear in the length of the sequence. You just pass along the RNN state and do the same computation at every step, so there's no quadratic term in calculating RNNs. Another thing about RNNs is that you can pass state along indefinitely during the forward pass, simply by calculating the hidden state and throwing away the rest of the computation graph that was used to calculate it; no approximation is involved there.
So, unlike Unlimiformer, which I was talking about before, where we needed to make approximations, none need to be made in this case. However, there is a problem with doing backprop, because to do backprop you normally maintain the entire state of the computation graph. A common method to fix this is to pass along the RNN state from the previous sentence but simply not backprop into the previous sentence. This is called truncated backprop, or truncated backpropagation through time, and it lets you train models with essentially infinite context, or at least models that can pass context along indefinitely, even though you're not backpropagating into the earlier computation.

Of course, a problem with this over long contexts is that recurrent models can be slow, due to their sequential dependence; they're not ideal for running on GPUs and the like. This is improved by recent architectures like Mamba and RWKV, which are more conducive to GPU-based training while still maintaining linear time complexity, and I'm looking forward to talking about that more in a future class.

If we take this idea of truncated backpropagation through time, it can also be applied to Transformers, and there's a really nice paper, Transformer-XL, created by Zihang Dai, who was formerly at CMU. It attempts to fix the vectors from the previous segment. In a standard Transformer, each vector attends back to all the other vectors in the current context. What Transformer-XL does instead is that when you have a new segment that you want to train over, you also attend to all of the tokens in the previous segment, but you don't backprop into them.
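In code, "keep the state but don't backprop into it" is usually a single `detach()`. A minimal sketch with a toy RNN; the same pattern applies to Transformer-XL's cached segment states.

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=16, hidden_size=32, batch_first=True)
opt = torch.optim.SGD(rnn.parameters(), lr=0.01)

h = torch.zeros(1, 1, 32)              # hidden state carried across segments
for step in range(10):
    x = torch.randn(1, 8, 16)          # next segment of the token stream
    out, h = rnn(x, h)
    loss = out.pow(2).mean()           # stand-in for the real training loss
    loss.backward()
    opt.step(); opt.zero_grad()
    h = h.detach()                     # keep the state, truncate the graph
```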
So this is essentially truncated backpropagation through time from the Transformer perspective. It's also really nice because of what it allows you to do in a multi-layer Transformer: it lets you attend far back. The last layer attends to things in the previous context window, but the second-to-last layer is attending to things not just one context window back but multiple context windows back, so the effective context grows with each layer, and this lets you attend over a very long context quite effectively. Recently there's a popular model called Mistral, which I'm sure a lot of people have heard of, that uses sliding-window attention, essentially the same mechanism proposed by Transformer-XL, so this method is still used in very practical systems.

Another paper that has been pretty influential in this general area is Sparse Transformers. The way sparse Transformers work is that instead of attending to every single previous state, you attend to every n-th previous state. This lets you create something like the strided convolutions, or the pyramidal recurrent neural networks, that I talked about earlier. What this looks like, essentially, is that a particular state attends to all of the previous n tokens, but it also attends to earlier positions at a stride, so you have a combination of local attention and longer-range attention. This can be very effective, because you can attend to much longer context with a minimal increase in computational complexity.
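A sketch of what such an attention mask can look like: a causal mask that combines a local window with strided longer-range positions. This is my own illustrative construction, not the exact Sparse Transformer pattern.

```python
import torch

def local_plus_strided_mask(seq_len, window=4, stride=4):
    """True = query position i may attend to key position j."""
    i = torch.arange(seq_len).unsqueeze(1)    # query positions, (L, 1)
    j = torch.arange(seq_len).unsqueeze(0)    # key positions,   (1, L)
    causal = j <= i
    local = (i - j) < window                  # the last few tokens
    strided = (j % stride) == stride - 1      # plus every stride-th position
    return causal & (local | strided)

print(local_plus_strided_mask(8, window=2, stride=4).int())
```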
Another method that is very similar in spirit but slightly different in implementation is the Compressive Transformer. In the Compressive Transformer you also have this idea of a local memory plus a longer-term compressed memory, but there is an explicit compression step that directly generates the compressed memory itself. This is a little more flexible, I suppose: it lets you take all of the relevant things from your local memory and compress them down, so it's another method worth thinking about.

Finally, there are some very interesting methods that do low-rank approximations for Transformers. Calculating the attention matrix is expensive, but it is a matrix, and because it's a matrix we can approximate it with a lower-rank matrix. A couple of methods do things like this. The first is Linformer, which adds low-rank linear projections into the model at the appropriate places. Another is Nyströmformer, which approximates attention using the Nyström method, based on sampling landmark points. The general idea behind both is that, normally, we compute a softmax over a very large attention matrix; instead, we can approximate that softmax with some low-rank vectors, a bit like what we used in LoRA, and nonetheless get a reasonable approximation of the softmax used in attention.
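A sketch of the Linformer-style trick: project the length dimension of the keys and values down to a small k before attention, so the score matrix is n-by-k rather than n-by-n. The dimensions and the random projection here are purely illustrative; in Linformer the projection is learned.

```python
import torch
import torch.nn.functional as F

n, d, k = 1024, 64, 32                    # sequence length, head dim, low rank
Q, K, V = (torch.randn(n, d) for _ in range(3))

E = torch.randn(k, n) / n ** 0.5          # length-wise projection (learned in Linformer)
K_low, V_low = E @ K, E @ V               # (k, d) compressed keys and values

scores = Q @ K_low.T / d ** 0.5           # (n, k) instead of (n, n)
out = F.softmax(scores, dim=-1) @ V_low   # (n, d) approximate attention output
```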
OK, we're nearing the end of what I want to talk about today. Next, I'd like to talk about benchmarks for long-context models. There are a few. One very well-known one is Long Range Arena, a composite benchmark containing mostly non-NLP tasks. It's definitely used for long-sequence modeling, but results on Long Range Arena actually tend to diverge somewhat from the results you get for long-distance language modeling. In addition, another benchmark that I personally like and have used a bit is SCROLLS, which combines a whole bunch of QA-style and summarization-style tasks with very long contexts, including over narratives, books, government reports, and other things like that. You can take a look at it if you're interested in benchmarking longer-range models.

OK, the final thing I'd like to talk about: now that we have retriever models and reader models, maybe even reader models that can effectively use very long contexts, like the ones we get when we retrieve over whole documents, how do we use them effectively? There was a very nice paper by Nelson Liu at Stanford about a phenomenon it called "lost in the middle." Basically, it demonstrates that many, many different models, including state-of-the-art ones, pay less attention to things in the middle of long context windows. If we take the answer and put it in the first position of a concatenated context versus the 20th position, models tend to attend more to the ones at the beginning or the end, while the ones in the middle kind of get lost, hence the name. Now, if we're doing retrieve-and-read, this is maybe not such a huge problem, because we could just put the highest-scoring documents at the beginning; that might even be more effective than concatenating lots of low-scoring documents together.
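One cheap mitigation along those lines is to reorder retrieved passages so the highest-scoring ones sit at the edges of the prompt, where models attend best. A small sketch of that heuristic (my own illustration, not a method from the paper):

```python
def edge_first_order(passages_with_scores):
    """Place the best passages at the beginning and end; bury the rest."""
    ranked = sorted(passages_with_scores, key=lambda p: p[1], reverse=True)
    front, back = [], []
    for idx, (passage, _) in enumerate(ranked):
        (front if idx % 2 == 0 else back).append(passage)
    return front + back[::-1]      # best, 3rd, 5th, ..., 6th, 4th, 2nd-best

print(edge_first_order([("a", 0.9), ("b", 0.7), ("c", 0.5), ("d", 0.3)]))
# ['a', 'c', 'd', 'b']  -> strongest passages at the two ends
```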
But if we want to read a really long document and synthesize something without doing another scoring step, that can be an issue. Also, our retriever is not perfect, so we would like the reader model to do a good job with the outputs it's given. There are methods to ensure the use of relevant context. Of course, better retrievers produce more relevant context, and you can do reranking or other things like that and include only the context that looks most relevant, or refine your reader model. But there are also methods that decide whether context should be used in the first place: methods that decide whether or not to include passages at all, and, recently, a method we proposed to filter retrieved passages down to only the appropriate content. This is a model we called FILCO; it basically filters the context down to the most relevant content, and that gives us better results when the context is fed to the generator.

So, that's all I have for today. Thank you for watching the video, and for people in the class, I'll be happy to take questions on Piazza or during the office hours I have planned. Thanks a lot.

diff --git a/CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/CMU Advanced NLP 2024 (11) Distillation Quantization and Pruning.mp4 b/CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/CMU Advanced NLP 2024 (11) Distillation Quantization and Pruning.mp4
new file mode 100644
index 0000000000000000000000000000000000000000..6f23ffa77e7758dff329bb62a57c6dbe6d452828
--- /dev/null
+++ b/CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/CMU Advanced NLP 2024 (11) Distillation Quantization and Pruning.mp4
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a58b185362d8bffc64edfe4141f67f0804b6efa04d4bfbba63316ce1b5dd8fe
+size 65064579
diff --git a/CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/metadata.json b/CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/metadata.json
new file mode 100644
index 0000000000000000000000000000000000000000..e82b75af5d1f3525cdfa76e58fdb183c9557219a
--- /dev/null
+++ b/CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/metadata.json
@@ -0,0 +1,4 @@
+{
+  "url": "https://www.youtube.com/watch?v=s9yyH3RPhdM",
+  "title": "CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning"
+}
\ No newline at end of file
diff --git a/CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/transcript.srt b/CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/transcript.srt
new file mode 100644
index 0000000000000000000000000000000000000000..a2e61432020b81a9c3f0119e77d978c588174dfd
--- /dev/null
+++ b/CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/transcript.srt
@@ -0,0 +1,6567 @@

OK, so I'm here substituting for Graham today, because he's traveling, and we can get right into it. As everyone here knows, NLP models are now deployed at a really large scale, and we all know that training big models is expensive; I'm sure you've experienced that in homework one, and any time you train a deep network you need GPU resources. That's something we all understand. But something I think is overlooked is that inference, that is, taking a trained model, deploying it, and making predictions for users, is arguably even more expensive. If you look at the lifetime of a model, according to one analysis the inference cost probably exceeds the training cost within just one week of use. So if your model is being used by many people for months or years, the cost will greatly eclipse the training cost, which is more of a one-time cost. And this is a problem, because if we want AI systems to be able to help lots of different people in different places, including people without as many resources or as much access to compute, we want to reduce the cost of serving AI systems to the public. This is also getting harder, because models are getting bigger. There's been maybe a slight shift toward reducing model size a little in the last two years or so, but these models are still billions of parameters in size, and that is expensive to serve.
So the main question we'll be talking about in today's lecture is: how can we cheaply, efficiently, and equitably deploy NLP systems without sacrificing performance? And there's a clear answer here that I'm leading toward, which is model compression. Model compression basically means taking a trained model and then reducing the size of that model before deploying it. There are three high-level approaches that we'll talk about in today's lecture. The first is quantization, where you basically don't change the architecture or the parameters of the model, up to a certain amount of precision, and then you throw away the remainder of the precision. The second is pruning, that is, throwing out entire components or parameters of a model. And the third is distillation, where you might change all of the parameters, but you're basically condensing the knowledge of a big model into a smaller model that is trained, often from scratch, to replicate the behavior of the big model. Don't worry, I'll go into much more detail on all of these.

I think the mic stopped working, so I'm going to speak up; if anyone's having trouble hearing me, please raise your hand.

OK, so I've motivated this idea of model compression, which is very tempting, right? Just take a model, make it smaller, get the same performance, and it's cheaper to serve; nothing about that seems bad. So I think there's a natural question of why this is even possible.
more intuitive um so that's the + +92 +00:04:19,479 --> 00:04:23,320 +first question is why not just start + +93 +00:04:21,000 --> 00:04:25,479 +with a small model that we train and + +94 +00:04:23,320 --> 00:04:27,360 +then the second question would be um why + +95 +00:04:25,479 --> 00:04:29,960 +is it possible to take a big model and + +96 +00:04:27,360 --> 00:04:32,360 +throw pieces of it away without + +97 +00:04:29,960 --> 00:04:33,960 +sacrificing accuracy that does not seem + +98 +00:04:32,360 --> 00:04:34,880 +like like a given that that should be + +99 +00:04:33,960 --> 00:04:36,960 +even + +100 +00:04:34,880 --> 00:04:42,720 +possible and I'll just give you a little + +101 +00:04:36,960 --> 00:04:42,720 +intuition for why this is possible um + +102 +00:04:42,880 --> 00:04:49,199 +so Mo so this term over parameterize um + +103 +00:04:47,639 --> 00:04:51,080 +uh means how many people are familiar + +104 +00:04:49,199 --> 00:04:53,880 +with this term you can raise your + +105 +00:04:51,080 --> 00:04:56,479 +hand the the basic me uh the meaning of + +106 +00:04:53,880 --> 00:04:59,240 +this term is that you have a model that + +107 +00:04:56,479 --> 00:05:02,039 +has usually more parameters than you + +108 +00:04:59,240 --> 00:05:03,520 +have training data or more in more + +109 +00:05:02,039 --> 00:05:06,800 +casual terms you just have a lot of + +110 +00:05:03,520 --> 00:05:08,280 +parameters like way more than + +111 +00:05:06,800 --> 00:05:11,039 +statistical machine learning would say + +112 +00:05:08,280 --> 00:05:14,160 +you need so for example like gpt3 the + +113 +00:05:11,039 --> 00:05:17,639 +original gpt3 model had 170 billion + +114 +00:05:14,160 --> 00:05:20,520 +parameters which is like definitely + +115 +00:05:17,639 --> 00:05:22,960 +overparameterized um and so there's been + +116 +00:05:20,520 --> 00:05:26,400 +a lot of work in the theory community of + +117 +00:05:22,960 --> 00:05:28,240 +ml that shows that overparameterized + +118 +00:05:26,400 --> 00:05:30,919 +models models that have a huge number of + +119 +00:05:28,240 --> 00:05:34,720 +parameters are actually much easier to + +120 +00:05:30,919 --> 00:05:38,360 +train uh especially for very complicated + +121 +00:05:34,720 --> 00:05:41,680 +tasks and um the basic idea is that + +122 +00:05:38,360 --> 00:05:44,280 +training deep neural networks for for + +123 +00:05:41,680 --> 00:05:46,759 +most tasks requires optimizing a + +124 +00:05:44,280 --> 00:05:49,000 +non-convex objective which is not + +125 +00:05:46,759 --> 00:05:52,360 +guaranteed to you're not guaranteed to + +126 +00:05:49,000 --> 00:05:54,560 +find the global Optimum of a non-convex + +127 +00:05:52,360 --> 00:05:56,280 +objective but when you have a bunch of + +128 +00:05:54,560 --> 00:05:58,680 +parameters and you're trying to + +129 +00:05:56,280 --> 00:06:01,960 +basically tune the parameters to find + +130 +00:05:58,680 --> 00:06:03,440 +the best value of your objective um + +131 +00:06:01,960 --> 00:06:06,960 +having a lot of parameters sort of lets + +132 +00:06:03,440 --> 00:06:10,360 +you Sid step around saddle points or + +133 +00:06:06,960 --> 00:06:12,440 +local Optima that are not Global Optima + +134 +00:06:10,360 --> 00:06:14,720 +uh you can basically like take sort of + +135 +00:06:12,440 --> 00:06:17,120 +shortcuts around barriers in the + +136 +00:06:14,720 --> 00:06:19,880 +optimization space um this is sort of + +137 +00:06:17,120 --> 00:06:21,560 +the intuition and uh the cmu's convex + +138 +00:06:19,880 --> 00:06:24,759 
+optimization class goes into a lot more
+
+139
+00:06:21,560 --> 00:06:27,520
+detail uh for this kind of thing anyways
+
+140
+00:06:24,759 --> 00:06:28,960
+I think the intuition here is that
+
+141
+00:06:27,520 --> 00:06:30,680
+models with a lot of parameters are
+
+142
+00:06:28,960 --> 00:06:34,280
+easier to train and they lead to better
+
+143
+00:06:30,680 --> 00:06:36,000
+models um but you probably
+
+144
+00:06:34,280 --> 00:06:37,759
+don't need all those parameters for
+
+145
+00:06:36,000 --> 00:06:38,599
+inference they're more of a training
+
+146
+00:06:37,759 --> 00:06:41,000
+time
+
+147
+00:06:38,599 --> 00:06:43,919
+trick okay so now
+
+148
+00:06:41,000 --> 00:06:46,560
+um before I move on any questions about
+
+149
+00:06:43,919 --> 00:06:46,560
+the motivation
+
+150
+00:06:53,800 --> 00:07:00,840
+here okay so um we'll start with
+
+151
+00:06:57,440 --> 00:07:03,960
+quantization and uh the most obvious way
+
+152
+00:07:00,840 --> 00:07:07,039
+to do this is post-training quantization
+
+153
+00:07:03,960 --> 00:07:09,840
+so you train a model as big as you want
+
+154
+00:07:07,039 --> 00:07:12,800
+and then you just reduce the precision
+
+155
+00:07:09,840 --> 00:07:15,240
+of say all of the weights in that model
+
+156
+00:07:12,800 --> 00:07:18,160
+so for example we can revisit this
+
+157
+00:07:15,240 --> 00:07:21,479
+uh slide that Graham had shown two
+
+158
+00:07:18,160 --> 00:07:25,400
+lectures ago if you have a 65 billion
+
+159
+00:07:21,479 --> 00:07:27,879
+parameter model like Llama um and you
+
+160
+00:07:25,400 --> 00:07:31,199
+have full precision so four
+
+161
+00:07:27,879 --> 00:07:33,520
+bytes of precision so like 32-bit
+
+162
+00:07:31,199 --> 00:07:36,840
+floats just loading that model into
+
+163
+00:07:33,520 --> 00:07:40,199
+memory would take 260 GB of GPU
+
+164
+00:07:36,840 --> 00:07:44,000
+memory which is more than most single
+
+165
+00:07:40,199 --> 00:07:46,879
+GPUs that you could buy um but if
+
+166
+00:07:44,000 --> 00:07:49,599
+you instead reduce the precision of the
+
+167
+00:07:46,879 --> 00:07:51,199
+parameters of the weights in that model
+
+168
+00:07:49,599 --> 00:07:54,159
+you see like a pretty
+
+169
+00:07:51,199 --> 00:07:56,039
+massive decrease which is linear in the
+
+170
+00:07:54,159 --> 00:07:58,440
+reduction in
+
+171
+00:07:56,039 --> 00:08:01,199
+precision at the most extreme case if
+
+172
+00:07:58,440 --> 00:08:04,280
+you replaced each float32 with a single
+
+173
+00:08:01,199 --> 00:08:07,039
+bit so zero or one um then you would
+
+174
+00:08:04,280 --> 00:08:11,039
+only have like an 8 gigabyte model which
+
+175
+00:08:07,039 --> 00:08:14,199
+you could probably load into most
+
+176
+00:08:11,039 --> 00:08:15,919
+GPUs and so uh I think there's clearly
+
+177
+00:08:14,199 --> 00:08:17,000
+an attractive proposition here in terms
+
+178
+00:08:15,919 --> 00:08:20,680
+of the
+
+179
+00:08:17,000 --> 00:08:23,159
+costs uh but before we go into some of
+
+180
+00:08:20,680 --> 00:08:26,400
+the nitty-gritty here um just a
+
+181
+00:08:23,159 --> 00:08:29,520
+refresher from computer systems
+
+182
+00:08:26,400 --> 00:08:30,919
+so neural nets uh typically
+
+183
+00:08:29,520 --> 00:08:33,800
+represent weights as floating point
+
+184
+00:08:30,919 --> 00:08:36,760
+numbers in order to express
+
+185
+00:08:33,800 --> 00:08:39,760
+um to have a broader range of values in
+
+186
+00:08:36,760 --> 00:08:42,240
+the 
model and so floating points at
+
+187
+00:08:39,760 --> 00:08:43,560
+least in like the IEEE standard which
+
+188
+00:08:42,240 --> 00:08:46,640
+is the most
+
+189
+00:08:43,560 --> 00:08:48,720
+common you have three pieces you have
+
+190
+00:08:46,640 --> 00:08:49,760
+like a sign bit which says is it
+
+191
+00:08:48,720 --> 00:08:52,680
+positive or
+
+192
+00:08:49,760 --> 00:08:56,000
+negative a fraction field uh a
+
+193
+00:08:52,680 --> 00:08:58,200
+fractional piece um which specifies sort
+
+194
+00:08:56,000 --> 00:09:00,959
+of
+
+195
+00:08:58,200 --> 00:09:03,640
+the
+
+196
+00:09:00,959 --> 00:09:06,240
+uh it specifies the precision of the
+
+197
+00:09:03,640 --> 00:09:08,640
+values and then the exponent which uh
+
+198
+00:09:06,240 --> 00:09:11,440
+scales how big or small the
+
+199
+00:09:08,640 --> 00:09:13,680
+float is so we can give an example here
+
+200
+00:09:11,440 --> 00:09:16,839
+so here
+
+201
+00:09:13,680 --> 00:09:19,200
+um we have we can do float16 where we
+
+202
+00:09:16,839 --> 00:09:23,279
+have like 10 bits of fraction so this
+
+203
+00:09:19,200 --> 00:09:25,680
+gives a lot more precision of
+
+204
+00:09:23,279 --> 00:09:27,800
+uh of what the number could be then the
+
+205
+00:09:25,680 --> 00:09:29,839
+exponent is five bits so you have up to
+
+206
+00:09:27,800 --> 00:09:33,720
+like 2 to the power of five
+
+207
+00:09:29,839 --> 00:09:36,000
+or 2 to the negative five um as like the scaling factor
+
+208
+00:09:33,720 --> 00:09:40,160
+here um and then the sign which is
+
+209
+00:09:36,000 --> 00:09:45,760
+positive one or negative one um and
+
+210
+00:09:40,160 --> 00:09:47,680
+so float16 is uh is pretty common but
+
+211
+00:09:45,760 --> 00:09:49,920
+for machine learning it's often not
+
+212
+00:09:47,680 --> 00:09:52,360
+enough because especially when you're
+
+213
+00:09:49,920 --> 00:09:54,600
+trying to train a neural net you often
+
+214
+00:09:52,360 --> 00:09:58,480
+have very small or very big values like
+
+215
+00:09:54,600 --> 00:10:00,279
+underflow or overflow and therefore uh a
+
+216
+00:09:58,480 --> 00:10:02,399
+really popular data type that was
+
+217
+00:10:00,279 --> 00:10:06,519
+designed just for machine learning is
+
+218
+00:10:02,399 --> 00:10:07,880
+called bfloat16 and
+
+219
+00:10:06,519 --> 00:10:09,680
+the idea is you're just moving some of
+
+220
+00:10:07,880 --> 00:10:11,600
+the bits from the fraction part to the
+
+221
+00:10:09,680 --> 00:10:14,279
+exponent part so you can have a
+
+222
+00:10:11,600 --> 00:10:16,920
+larger range but within that range you
+
+223
+00:10:14,279 --> 00:10:19,600
+may have fewer choices of values to
+
+224
+00:10:16,920 --> 00:10:21,519
+choose from but it just works um it
+
+225
+00:10:19,600 --> 00:10:23,480
+handles like some of the problems
+
+226
+00:10:21,519 --> 00:10:27,560
+that we face in machine
+
+227
+00:10:23,480 --> 00:10:30,720
+learning uh anyway so these are floating
+
+228
+00:10:27,560 --> 00:10:32,560
+point types but as you can imagine um
+
+229
+00:10:30,720 --> 00:10:35,440
+once you get below
+
+230
+00:10:32,560 --> 00:10:37,959
+16 bits you're really impacting the amount of
+
+231
+00:10:35,440 --> 00:10:39,480
+things you can represent in a float
+
+232
+00:10:37,959 --> 00:10:41,639
+given that you need like one bit for
+
+233
+00:10:39,480 --> 00:10:43,320
+the sign
+
+234
+00:10:41,639 --> 00:10:46,600
+uh and
+
+235
+00:10:43,320 --> 00:10:48,279
+then if your range of the exponent is
+
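+To make the float16 versus bfloat16 trade-off concrete, here is a minimal
+sketch using PyTorch's built-in dtypes (an illustrative addition, not code
+from the lecture; the printed values are approximate):
+
+```python
+import torch
+
+# float16: 5 exponent bits, 10 fraction bits -> overflows a bit above 65504
+# bfloat16: 8 exponent bits, 7 fraction bits -> float32-like range, coarser values
+x = torch.tensor([1e5, 1.0009765625], dtype=torch.float32)
+
+print(x.to(torch.float16))   # [inf, 1.0010]: 1e5 overflows, small value kept exactly
+print(x.to(torch.bfloat16))  # [~1e5, 1.0000]: range survives, low-order digits lost
+```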
+236
+00:10:46,600 --> 00:10:50,000
+small then suddenly you don't
+
+237
+00:10:48,279 --> 00:10:52,959
+have that much of a range of values
+
+238
+00:10:50,000 --> 00:10:56,040
+you can represent at all so
+
+239
+00:10:52,959 --> 00:10:59,839
+um a really popular way to like get
+
+240
+00:10:56,040 --> 00:11:02,320
+a really small uh footprint
+
+241
+00:10:59,839 --> 00:11:03,720
+in models is by quantizing to integers
+
+242
+00:11:02,320 --> 00:11:06,160
+and this is not that obvious because
+
+243
+00:11:03,720 --> 00:11:10,120
+you're taking a float and turning it
+
+244
+00:11:06,160 --> 00:11:12,639
+into an int um and so uh one way this is
+
+245
+00:11:10,120 --> 00:11:15,639
+done is called absmax or absolute
+
+246
+00:11:12,639 --> 00:11:20,160
+maximum quantization where you basically
+
+247
+00:11:15,639 --> 00:11:23,600
+map each number in a list of floats to a
+
+248
+00:11:20,160 --> 00:11:27,399
+range of integers so uh for
+
+249
+00:11:23,600 --> 00:11:30,800
+int8 the range would be like -127 to 127
+
+250
+00:11:27,399 --> 00:11:32,920
+because that's 255 total values and 2
+
+251
+00:11:30,800 --> 00:11:37,720
+to the power of 8 is 256
+
+252
+00:11:32,920 --> 00:11:39,600
+um and here you basically uh find the
+
+253
+00:11:37,720 --> 00:11:41,800
+absolute value that is largest in
+
+254
+00:11:39,600 --> 00:11:45,880
+the whole array and then that
+
+255
+00:11:41,800 --> 00:11:47,680
+becomes uh 127 or whatever
+
+256
+00:11:45,880 --> 00:11:49,920
+the largest value in your integer
+
+257
+00:11:47,680 --> 00:11:53,160
+range is and then everything else
+
+258
+00:11:49,920 --> 00:11:55,959
+becomes uh assigned to the closest
+
+259
+00:11:53,160 --> 00:11:58,360
+integer that's like scaled by that value
+
+260
+00:11:55,959 --> 00:12:01,880
+so here for example in this
+
+261
+00:11:58,360 --> 00:12:05,360
+example we have 20 as the largest value
+
+262
+00:12:01,880 --> 00:12:07,920
+and so uh 20 would become assigned as
+
+263
+00:12:05,360 --> 00:12:11,839
+127 and then everything else would be
+
+264
+00:12:07,920 --> 00:12:12,720
+assigned uh proportional to that uh so
+
+265
+00:12:11,839 --> 00:12:17,399
+like
+
+266
+00:12:12,720 --> 00:12:20,160
+0.5 uh is 1/40th of 20 and 1/40th of 127
+
+267
+00:12:17,399 --> 00:12:23,279
+rounds to the nearest number which is three
+
+268
+00:12:20,160 --> 00:12:26,240
+and so um this is kind of how you can go
+
+269
+00:12:23,279 --> 00:12:27,880
+beyond floats and actually represent uh
+
+270
+00:12:26,240 --> 00:12:30,920
+parameters with ints
+
+271
+00:12:27,880 --> 00:12:32,519
+in the most extreme example which should
+
+272
+00:12:30,920 --> 00:12:33,880
+also now highlight some of the issues
+
+273
+00:12:32,519 --> 00:12:36,959
+with this idea of post-training
+
+274
+00:12:33,880 --> 00:12:39,199
+quantization what if we just had a
+
+275
+00:12:36,959 --> 00:12:43,360
+binary value for every parameter zero or
+
+276
+00:12:39,199 --> 00:12:46,079
+one um and so we
+
+277
+00:12:43,360 --> 00:12:49,040
+might train a model using floats and we
+
+278
+00:12:46,079 --> 00:12:53,399
+get these parameters so we have
+
+279
+00:12:49,040 --> 00:12:56,079
+um the purple here is like the uh hidden
+
+280
+00:12:53,399 --> 00:12:58,399
+states and then the red would be the
+
+281
+00:12:56,079 --> 00:13:00,440
+activations um and these are all between
+
+282
+00:12:58,399 --> 00:13:02,839
+zero and one right but if we now
+
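+As a concrete illustration of absmax quantization with the numbers from this
+example, here is a minimal NumPy sketch (an illustrative addition, not the
+lecture's code; only 0.5 and 20 come from the example above):
+
+```python
+import numpy as np
+
+def absmax_quantize(x):
+    """Map floats to int8 in [-127, 127], scaled by the largest absolute value."""
+    scale = 127.0 / np.max(np.abs(x))       # absmax is 20 here, so scale = 127/20
+    return np.round(x * scale).astype(np.int8), scale
+
+def dequantize(q, scale):
+    return q.astype(np.float32) / scale
+
+x = np.array([0.5, -3.1, 20.0], dtype=np.float32)
+q, scale = absmax_quantize(x)
+print(q)                     # [  3 -20 127]: 0.5 * 127/20 = 3.175 rounds to 3
+print(dequantize(q, scale))  # roughly [0.47, -3.15, 20.0]: rounding error remains
+```
+
+283
+00:13:00,440 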
--> 00:13:07,560 +round them to the nearest Value Z or one + +284 +00:13:02,839 --> 00:13:09,040 +we get uh on the top there a list of + +285 +00:13:07,560 --> 00:13:11,120 +binary + +286 +00:13:09,040 --> 00:13:12,360 +values and this seems like really + +287 +00:13:11,120 --> 00:13:16,440 +attractive right because you only need + +288 +00:13:12,360 --> 00:13:18,959 +one bit per per Vector it's like really + +289 +00:13:16,440 --> 00:13:21,519 +small but now let's consider like a real + +290 +00:13:18,959 --> 00:13:23,720 +example here where we are trying to do + +291 +00:13:21,519 --> 00:13:27,760 +translation uh and then we are producing + +292 +00:13:23,720 --> 00:13:30,279 +these float valued hidden States and + +293 +00:13:27,760 --> 00:13:32,839 +activations + +294 +00:13:30,279 --> 00:13:37,440 +in this example if I just rounded up or + +295 +00:13:32,839 --> 00:13:39,000 +down each of the values um the output + +296 +00:13:37,440 --> 00:13:40,320 +vectors that would be that like the the + +297 +00:13:39,000 --> 00:13:42,880 +embedding vectors that we would then + +298 +00:13:40,320 --> 00:13:45,440 +decode to outputs um even though they're + +299 +00:13:42,880 --> 00:13:48,120 +very different in the original float + +300 +00:13:45,440 --> 00:13:50,040 +space they actually become all the same + +301 +00:13:48,120 --> 00:13:52,880 +thing here um which is definitely not + +302 +00:13:50,040 --> 00:13:54,320 +what you want so basically by reducing + +303 +00:13:52,880 --> 00:13:56,480 +the Precision you might be like + +304 +00:13:54,320 --> 00:13:59,320 +significantly impacting the range of + +305 +00:13:56,480 --> 00:14:01,880 +things you can express and this can so + +306 +00:13:59,320 --> 00:14:03,360 +this basically does not work like + +307 +00:14:01,880 --> 00:14:05,800 +turning a + +308 +00:14:03,360 --> 00:14:08,519 +complex uh set of floats of of high + +309 +00:14:05,800 --> 00:14:11,639 +Precision floats to binary numbers does + +310 +00:14:08,519 --> 00:14:13,440 +not work uh and later I'll show that + +311 +00:14:11,639 --> 00:14:14,959 +there are ways to make this work that + +312 +00:14:13,440 --> 00:14:17,720 +are a little more complicated they just + +313 +00:14:14,959 --> 00:14:19,839 +require more processing after the + +314 +00:14:17,720 --> 00:14:22,160 +initial training of your + +315 +00:14:19,839 --> 00:14:24,959 +model okay so now that I've motivated + +316 +00:14:22,160 --> 00:14:26,959 +this problem uh we can talk about a few + +317 +00:14:24,959 --> 00:14:31,040 +like methods that actually work for post + +318 +00:14:26,959 --> 00:14:33,800 +trining quantization um the first one is + +319 +00:14:31,040 --> 00:14:36,519 +uh belongs to a class of methods called + +320 +00:14:33,800 --> 00:14:38,440 +Model aware quantization and the idea + +321 +00:14:36,519 --> 00:14:40,959 +here is that if you can study the + +322 +00:14:38,440 --> 00:14:44,759 +statistics of your model you can sort of + +323 +00:14:40,959 --> 00:14:48,560 +learn um ways to represent values in a + +324 +00:14:44,759 --> 00:14:51,160 +way that is matching the actual learning + +325 +00:14:48,560 --> 00:14:54,120 +uh the actual uh distribution of Weights + +326 +00:14:51,160 --> 00:14:59,480 +in that model so for example with Bert + +327 +00:14:54,120 --> 00:15:01,320 +uh most of the weights in each layer are + +328 +00:14:59,480 --> 00:15:03,440 +are concentrated around like the mean + +329 +00:15:01,320 --> 00:15:06,240 +value there and you have a few weights + +330 +00:15:03,440 --> 
00:15:07,959
+that are very far from that mean value
+
+331
+00:15:06,240 --> 00:15:10,680
+so you can sort of fit a Gaussian
+
+332
+00:15:07,959 --> 00:15:14,440
+distribution a normal distribution to the
+
+333
+00:15:10,680 --> 00:15:16,680
+distribution of weights um and then only
+
+334
+00:15:14,440 --> 00:15:19,320
+a few weights in each layer will be at
+
+335
+00:15:16,680 --> 00:15:22,560
+the tails of this distribution so the
+
+336
+00:15:19,320 --> 00:15:24,160
+idea here is that therefore um and
+
+337
+00:15:22,560 --> 00:15:25,880
+uh just to motivate this a little
+
+338
+00:15:24,160 --> 00:15:28,680
+more
+
+339
+00:15:25,880 --> 00:15:30,480
+um if you have values at the tails of the
+
+340
+00:15:28,680 --> 00:15:32,000
+distribution they pose issues for
+
+341
+00:15:30,480 --> 00:15:33,639
+quantization because like if you're
+
+342
+00:15:32,000 --> 00:15:35,480
+using the absmax quantization I
+
+343
+00:15:33,639 --> 00:15:37,399
+mentioned before you're now defining
+
+344
+00:15:35,480 --> 00:15:39,880
+your range according to the minimum and
+
+345
+00:15:37,399 --> 00:15:41,680
+maximum values and then everything in
+
+346
+00:15:39,880 --> 00:15:43,480
+between which might be close together
+
+347
+00:15:41,680 --> 00:15:46,880
+will now be grouped into the same
+
+348
+00:15:43,480 --> 00:15:49,440
+bucket and that throws away a lot of the
+
+349
+00:15:46,880 --> 00:15:53,199
+ability to distinguish between weights
+
+350
+00:15:49,440 --> 00:15:55,600
+in your network so the idea here is that
+
+351
+00:15:53,199 --> 00:15:57,560
+you basically store the outliers
+
+352
+00:15:55,600 --> 00:15:59,800
+separately and you actually store them
+
+353
+00:15:57,560 --> 00:16:02,000
+in full precision so you're like paying
+
+354
+00:15:59,800 --> 00:16:03,959
+the full storage cost for a few
+
+355
+00:16:02,000 --> 00:16:05,319
+parameters and then everything else
+
+356
+00:16:03,959 --> 00:16:07,279
+that's like sort of concentrated
+
+357
+00:16:05,319 --> 00:16:10,959
+together gets quantized into a much
+
+358
+00:16:07,279 --> 00:16:13,360
+lower precision space um and I think
+
+359
+00:16:10,959 --> 00:16:15,279
+that this is at least in theory very
+
+360
+00:16:13,360 --> 00:16:17,720
+effective and they have strong results
+
+361
+00:16:15,279 --> 00:16:20,560
+here um
+
+362
+00:16:17,720 --> 00:16:24,880
+however a problem with that approach
+
+363
+00:16:20,560 --> 00:16:27,959
+is that um you're defining like
+
+364
+00:16:24,880 --> 00:16:31,160
+the outliers and the minimum and maximum
+
+365
+00:16:27,959 --> 00:16:33,440
+for each layer uniformly um whereas
+
+366
+00:16:31,160 --> 00:16:35,720
+instead this LLM.int8() which is actually
+
+367
+00:16:33,440 --> 00:16:38,399
+very popular in NLP um you might have
+
+368
+00:16:35,720 --> 00:16:43,399
+heard of it if you uh have
+
+369
+00:16:38,399 --> 00:16:45,079
+been building NLP systems um they go a
+
+370
+00:16:43,399 --> 00:16:48,000
+step further and instead
+
+371
+00:16:45,079 --> 00:16:51,440
+of quantizing each layer uniformly um
+
+372
+00:16:48,000 --> 00:16:53,639
+they quantize each row or column of a
+
+373
+00:16:51,440 --> 00:16:57,279
+matrix in matrix multiplication
+
+374
+00:16:53,639 --> 00:17:00,880
+separately um with the motivation that
+
+375
+00:16:57,279 --> 00:17:03,800
+most of the parameters in Transformers
+
+376
+00:17:00,880 --> 00:17:06,480
+are for matrix multiplication um and so
+
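+A hedged sketch of this row-wise (vector-wise) idea, much simplified relative
+to the real LLM.int8() method, which additionally keeps outlier feature
+dimensions in float16 (this illustrative code is not from the paper):
+
+```python
+import numpy as np
+
+def rowwise_absmax_quantize(W):
+    """One int8 scale per row, instead of one scale for the whole tensor."""
+    scales = 127.0 / np.max(np.abs(W), axis=1, keepdims=True)  # shape (rows, 1)
+    return np.round(W * scales).astype(np.int8), scales
+
+W = np.array([[0.01, -0.02, 0.03],    # row of small weights
+              [0.01, 50.0, -0.02]])   # row containing a large outlier
+Q, scales = rowwise_absmax_quantize(W)
+print(Q[0])  # still uses the full [-127, 127] range; with a single per-tensor
+             # scale of 127/50, this whole row would collapse to zeros
+```
+
+377
+00:17:03,800 --> 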
00:17:09,120 +by doing this they're able to actually + +378 +00:17:06,480 --> 00:17:10,919 +get a better quantization uh because + +379 +00:17:09,120 --> 00:17:14,160 +you're able to like have a more precise + +380 +00:17:10,919 --> 00:17:16,280 +space range of the values um for each + +381 +00:17:14,160 --> 00:17:20,120 +row or column of of a + +382 +00:17:16,280 --> 00:17:23,240 +matrix yeah just curious in the previous + +383 +00:17:20,120 --> 00:17:24,480 +slide why why exactly is the frequency + +384 +00:17:23,240 --> 00:17:30,400 +changing + +385 +00:17:24,480 --> 00:17:30,400 +Bas based on the layers um + +386 +00:17:30,880 --> 00:17:36,640 +different layers might be have might + +387 +00:17:32,880 --> 00:17:38,960 +have more concentration of weights to a + +388 +00:17:36,640 --> 00:17:40,240 +single value or like the values of + +389 +00:17:38,960 --> 00:17:42,520 +Weights in that layer might be + +390 +00:17:40,240 --> 00:17:43,720 +concentrated or might be more broad I + +391 +00:17:42,520 --> 00:17:46,200 +think that's how to think of it is that + +392 +00:17:43,720 --> 00:17:46,200 +invers + +393 +00:17:47,520 --> 00:17:52,919 +proportioners lay layer has high + +394 +00:17:50,440 --> 00:17:52,919 +frequency + +395 +00:17:53,919 --> 00:17:58,960 +compar yeah I think that's right um I + +396 +00:17:56,320 --> 00:18:00,960 +think that that the uh this paper also + +397 +00:17:58,960 --> 00:18:03,360 +discusses that problem but as you get + +398 +00:18:00,960 --> 00:18:05,280 +later into the the layers of a network + +399 +00:18:03,360 --> 00:18:08,360 +you see a lot more of these outliers + +400 +00:18:05,280 --> 00:18:08,360 +these large magnitude + +401 +00:18:09,120 --> 00:18:15,240 +values um okay so uh moving on here the + +402 +00:18:12,960 --> 00:18:17,640 +last thing I'll say is that + +403 +00:18:15,240 --> 00:18:18,679 +um there is like an overhead you're + +404 +00:18:17,640 --> 00:18:21,600 +paying when you're doing this kind of + +405 +00:18:18,679 --> 00:18:23,360 +quantization where you have to for each + +406 +00:18:21,600 --> 00:18:25,559 +Vector you have to you you are now + +407 +00:18:23,360 --> 00:18:28,000 +mapping it to a list of numbers and you + +408 +00:18:25,559 --> 00:18:30,240 +need to then decode those numbers back + +409 +00:18:28,000 --> 00:18:32,720 +into floats like decode your ins back + +410 +00:18:30,240 --> 00:18:34,720 +into floats so there's an overhead that + +411 +00:18:32,720 --> 00:18:37,520 +that costs time when you're doing + +412 +00:18:34,720 --> 00:18:39,000 +inference um so if you have like a small + +413 +00:18:37,520 --> 00:18:41,600 +model this is not going to help you go + +414 +00:18:39,000 --> 00:18:43,400 +faster most likely but if you have like + +415 +00:18:41,600 --> 00:18:45,559 +a really big model it can actually like + +416 +00:18:43,400 --> 00:18:47,600 +double your inference speed at least and + +417 +00:18:45,559 --> 00:18:49,919 +it also lets you load models into memory + +418 +00:18:47,600 --> 00:18:51,360 +that you otherwise would not be able to + +419 +00:18:49,919 --> 00:18:54,679 +so there's definitely a trade-off in + +420 +00:18:51,360 --> 00:18:54,679 +when or this is really + +421 +00:18:54,840 --> 00:18:59,960 +desirable um any questions here before + +422 +00:18:58,120 --> 00:19:02,360 +Ive move on to some more high level + +423 +00:18:59,960 --> 00:19:02,360 +stuff on + +424 +00:19:10,440 --> 00:19:16,120 +quantization so uh in terms of Hardware + +425 +00:19:13,600 --> 00:19:18,000 +concerns I think 
one of the challenge as + +426 +00:19:16,120 --> 00:19:19,159 +uh like if you're somebody who's + +427 +00:19:18,000 --> 00:19:21,559 +interested in + +428 +00:19:19,159 --> 00:19:23,360 +algorithms you might have very creative + +429 +00:19:21,559 --> 00:19:26,120 +ideas for how you might do + +430 +00:19:23,360 --> 00:19:29,039 +quantization but the problem is that the + +431 +00:19:26,120 --> 00:19:31,679 +ability for quantization to actually + +432 +00:19:29,039 --> 00:19:34,159 +be effective or make things faster is + +433 +00:19:31,679 --> 00:19:36,600 +largely limited by both hardware and + +434 +00:19:34,159 --> 00:19:38,440 +also like low-level systems like your + +435 +00:19:36,600 --> 00:19:42,400 +the framework like pytorch that you're + +436 +00:19:38,440 --> 00:19:44,720 +running your models on um so at a + +437 +00:19:42,400 --> 00:19:46,880 +hardware level some data types are like + +438 +00:19:44,720 --> 00:19:50,080 +basically just not supported by Hardware + +439 +00:19:46,880 --> 00:19:52,720 +like int three like a three bit int is + +440 +00:19:50,080 --> 00:19:56,280 +not something that processors generally + +441 +00:19:52,720 --> 00:19:57,799 +support so uh if your quantization + +442 +00:19:56,280 --> 00:19:59,559 +method uses in3 it's effectively just + +443 +00:19:57,799 --> 00:20:00,919 +using in for and then you're not getting + +444 +00:19:59,559 --> 00:20:04,080 +a speed up + +445 +00:20:00,919 --> 00:20:05,640 +there and then pytorch has its own + +446 +00:20:04,080 --> 00:20:09,360 +requirements like pytorch doesn't have + +447 +00:20:05,640 --> 00:20:12,760 +in4 a lot of modules don't support + +448 +00:20:09,360 --> 00:20:13,840 +quantization at all in py torch like a + +449 +00:20:12,760 --> 00:20:16,360 +something like an + +450 +00:20:13,840 --> 00:20:20,039 +RNN which is now like becoming popular + +451 +00:20:16,360 --> 00:20:23,039 +again uh it's it's not really supporting + +452 +00:20:20,039 --> 00:20:25,200 +quantization right now so you definitely + +453 +00:20:23,039 --> 00:20:26,880 +if you're trying to go this route for a + +454 +00:20:25,200 --> 00:20:28,840 +practical application you'll want to + +455 +00:20:26,880 --> 00:20:30,440 +know what your Hardware is what your + +456 +00:20:28,840 --> 00:20:33,159 +framework is and what you can actually + +457 +00:20:30,440 --> 00:20:36,000 +support with your with the ways you want + +458 +00:20:33,159 --> 00:20:36,000 +to compress your + +459 +00:20:39,320 --> 00:20:45,480 +models and uh one last thing I'll say is + +460 +00:20:42,760 --> 00:20:48,159 +that both of the methods I showed so far + +461 +00:20:45,480 --> 00:20:50,720 +they both have to like they have their + +462 +00:20:48,159 --> 00:20:52,440 +own customized Hardware accelerators + +463 +00:20:50,720 --> 00:20:54,919 +that they wrote to make those things + +464 +00:20:52,440 --> 00:20:57,159 +work and this is like a lot of work and + +465 +00:20:54,919 --> 00:20:59,360 +most people probably don't don't have + +466 +00:20:57,159 --> 00:21:01,480 +the time to do this so um there are + +467 +00:20:59,360 --> 00:21:03,440 +methods that I haven't shown here that + +468 +00:21:01,480 --> 00:21:06,760 +do quantization in a way that is + +469 +00:21:03,440 --> 00:21:08,640 +effective without having to rewrite your + +470 +00:21:06,760 --> 00:21:10,640 +framework or your Hardware accelerator + +471 +00:21:08,640 --> 00:21:13,120 +um but this is definitely something to + +472 +00:21:10,640 --> 00:21:13,120 +consider with + +473 +00:21:13,200 
--> 00:21:18,480 +quantization okay so now I think I've + +474 +00:21:15,120 --> 00:21:20,520 +motivated why post training quantization + +475 +00:21:18,480 --> 00:21:23,480 +is hard because you're throwing away + +476 +00:21:20,520 --> 00:21:25,919 +Precision which can make it hard to + +477 +00:21:23,480 --> 00:21:27,039 +um to get the most out of the network + +478 +00:21:25,919 --> 00:21:29,000 +that you have + +479 +00:21:27,039 --> 00:21:32,799 +trained + +480 +00:21:29,000 --> 00:21:34,200 +so uh attempting idea here is now we + +481 +00:21:32,799 --> 00:21:36,120 +know that let's say we know we're going + +482 +00:21:34,200 --> 00:21:39,320 +to quantize our model let's train the + +483 +00:21:36,120 --> 00:21:41,039 +model with quantization in mind um and + +484 +00:21:39,320 --> 00:21:43,440 +now we can revisit the example I showed + +485 +00:21:41,039 --> 00:21:46,120 +before of binarized neural networks + +486 +00:21:43,440 --> 00:21:47,919 +which like didn't work but it actually + +487 +00:21:46,120 --> 00:21:51,120 +can work if you train with the + +488 +00:21:47,919 --> 00:21:54,279 +binarization in mind so uh a paper in + +489 +00:21:51,120 --> 00:21:57,000 +2016 um they considered a case where all + +490 +00:21:54,279 --> 00:21:59,120 +of your weights were negative one or one + +491 +00:21:57,000 --> 00:22:03,200 +um activations were also negative one or + +492 +00:21:59,120 --> 00:22:06,640 +one um and they do some some like clever + +493 +00:22:03,200 --> 00:22:08,039 +statistics to make that work and then + +494 +00:22:06,640 --> 00:22:09,640 +the gradients that you back propagate + +495 +00:22:08,039 --> 00:22:13,640 +through the model are also negative like + +496 +00:22:09,640 --> 00:22:15,559 +they're also discreet um and so it kind + +497 +00:22:13,640 --> 00:22:18,320 +of it's It's probably kind of surprising + +498 +00:22:15,559 --> 00:22:21,679 +that this um that this works but they + +499 +00:22:18,320 --> 00:22:23,200 +basically are using the the core + +500 +00:22:21,679 --> 00:22:26,039 +mechanisms that we use to train neural + +501 +00:22:23,200 --> 00:22:29,880 +networks and using fancy statistics to + +502 +00:22:26,039 --> 00:22:32,760 +make this work for binary values + +503 +00:22:29,880 --> 00:22:35,559 +and by doing this they get like kind of + +504 +00:22:32,760 --> 00:22:37,039 +shockingly good results like on cfr1 + +505 +00:22:35,559 --> 00:22:40,120 +which was a very popular image + +506 +00:22:37,039 --> 00:22:42,360 +classification data set for a while um + +507 +00:22:40,120 --> 00:22:46,720 +they get like a + +508 +00:22:42,360 --> 00:22:48,640 +10% uh training sorry test set error and + +509 +00:22:46,720 --> 00:22:51,200 +a at the time like a state-of-the-art + +510 +00:22:48,640 --> 00:22:55,360 +method was at a little under + +511 +00:22:51,200 --> 00:22:57,559 +12% um so + +512 +00:22:55,360 --> 00:22:59,679 +uh like they're basically matching or + +513 +00:22:57,559 --> 00:23:02,919 +beating some of these extremely strong + +514 +00:22:59,679 --> 00:23:05,120 +models at that time and uh they used + +515 +00:23:02,919 --> 00:23:07,159 +effectively the same architecture from + +516 +00:23:05,120 --> 00:23:09,159 +some of these models but just binarized + +517 +00:23:07,159 --> 00:23:12,520 +so they kind this this was sort of a + +518 +00:23:09,159 --> 00:23:14,760 +proof of concept that if you quantize + +519 +00:23:12,520 --> 00:23:18,159 +during training um you can match + +520 +00:23:14,760 --> 00:23:21,279 +performance and get a much 
smaller model
+
+521
+00:23:18,159 --> 00:23:21,279
+which I think was a really surprising
+
+522
+00:23:22,520 --> 00:23:29,760
+finding and then uh a more recent work
+
+523
+00:23:25,440 --> 00:23:32,559
+that I think is really cool um is
+
+524
+00:23:29,760 --> 00:23:33,799
+that for doing quantization another
+
+525
+00:23:32,559 --> 00:23:36,919
+thing you can do is you can start with
+
+526
+00:23:33,799 --> 00:23:39,760
+your model that is um that is full
+
+527
+00:23:36,919 --> 00:23:42,120
+precision not quantized and then you can
+
+528
+00:23:39,760 --> 00:23:45,200
+basically train each layer one layer at
+
+529
+00:23:42,120 --> 00:23:47,400
+a time to replicate its counterpart in
+
+530
+00:23:45,200 --> 00:23:49,080
+the full precision space so you can like
+
+531
+00:23:47,400 --> 00:23:52,440
+run inputs through the full precision
+
+532
+00:23:49,080 --> 00:23:55,279
+model you get like the output the
+
+533
+00:23:52,440 --> 00:23:56,960
+probabilities of each word for example
+
+534
+00:23:55,279 --> 00:23:59,159
+and then you train your quantized
+
+535
+00:23:56,960 --> 00:24:02,240
+model to get very close to
+
+536
+00:23:59,159 --> 00:24:03,880
+those same outputs um then you do this
+
+537
+00:24:02,240 --> 00:24:07,320
+at the second layer so now you have like
+
+538
+00:24:03,880 --> 00:24:09,120
+the logits from like the
+
+539
+00:24:07,320 --> 00:24:11,080
+hidden states from the second to last
+
+540
+00:24:09,120 --> 00:24:14,520
+layer and then you train your quantized
+
+541
+00:24:11,080 --> 00:24:15,880
+layer to match those hidden states
+
+542
+00:24:14,520 --> 00:24:18,520
+and you keep doing that all the way down
+
+543
+00:24:15,880 --> 00:24:20,799
+and I think the intuition here is that
+
+544
+00:24:18,520 --> 00:24:23,360
+um by doing like layer by layer
+
+545
+00:24:20,799 --> 00:24:25,480
+distillation you're sort of uh
+
+546
+00:24:23,360 --> 00:24:27,320
+replicating not just the output which is
+
+547
+00:24:25,480 --> 00:24:29,279
+kind of sparse and hard to replicate but
+
+548
+00:24:27,320 --> 00:24:32,159
+even the
+
+549
+00:24:29,279 --> 00:24:34,080
+um like the flow of data throughout the
+
+550
+00:24:32,159 --> 00:24:36,919
+whole model step by step and you can
+
+551
+00:24:34,080 --> 00:24:39,919
+replicate that into the quantized model
+
+552
+00:24:36,919 --> 00:24:43,360
+which um which may run into issues when
+
+553
+00:24:39,919 --> 00:24:43,360
+training just end to
+
+554
+00:24:45,000 --> 00:24:49,520
+end um and then the last work here which
+
+555
+00:24:48,039 --> 00:24:53,760
+Graham already talked about two lectures
+
+556
+00:24:49,520 --> 00:24:56,720
+ago uh which is QLoRA so here they use
+
+557
+00:24:53,760 --> 00:24:58,360
+parameter efficient finetuning uh to
+
+558
+00:24:56,720 --> 00:25:01,039
+train a
+
+559
+00:24:58,360 --> 00:25:02,399
+highly quantized like four-bit model
+
+560
+00:25:01,039 --> 00:25:04,240
+and they do a bunch of other fancy
+
+561
+00:25:02,399 --> 00:25:07,240
+tricks and QLoRA is like super popular
+
+562
+00:25:04,240 --> 00:25:08,440
+right now so uh if
+
+563
+00:25:07,240 --> 00:25:11,919
+you're going to use one quantization
+
+564
+00:25:08,440 --> 00:25:11,919
+method today this probably would be
+
+565
+00:25:12,240 --> 00:25:18,760
+it okay I think I'm going to move on to
+
+566
+00:25:14,840 --> 00:25:18,760
+uh pruning now so any questions
+
+567
+00:25:26,679 --> 00:25:29,679
+here
+
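+To sketch the layer-by-layer matching objective described above in PyTorch
+(a simplified, hypothetical interface, not the paper's code; it also glosses
+over how gradients pass through the quantization itself, which in practice
+needs something like a straight-through estimator):
+
+```python
+import torch
+import torch.nn.functional as F
+
+def layerwise_distill(teacher_layers, student_layers, hidden_batches, steps=100):
+    """Train each lower-precision student layer to reproduce the outputs of
+    its full-precision teacher counterpart, one layer at a time."""
+    for t_layer, s_layer in zip(teacher_layers, student_layers):
+        opt = torch.optim.Adam(s_layer.parameters(), lr=1e-4)
+        for _ in range(steps):
+            for h in hidden_batches:         # activations feeding this layer
+                with torch.no_grad():
+                    target = t_layer(h)      # what the full-precision layer does
+                loss = F.mse_loss(s_layer(h), target)
+                opt.zero_grad()
+                loss.backward()
+                opt.step()
+```
+
+568
+00:25:30,000 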
--> 00:25:35,760 +okay so pruning is uh pretty different + +569 +00:25:33,559 --> 00:25:37,120 +than this than quantization in + +570 +00:25:35,760 --> 00:25:38,919 +quantization you are sort of like + +571 +00:25:37,120 --> 00:25:41,840 +chipping away at every parameter in your + +572 +00:25:38,919 --> 00:25:44,480 +model instead in pruning you're like + +573 +00:25:41,840 --> 00:25:46,399 +completely eliminating some parameters + +574 +00:25:44,480 --> 00:25:49,120 +and completely not changing everything + +575 +00:25:46,399 --> 00:25:52,360 +else + +576 +00:25:49,120 --> 00:25:55,399 +um so a number of parameters set to zero + +577 +00:25:52,360 --> 00:25:55,399 +and the rest are completely + +578 +00:25:55,640 --> 00:26:02,200 +unchanged and the most uh + +579 +00:25:59,240 --> 00:26:04,240 +intuitive way to do this is this idea + +580 +00:26:02,200 --> 00:26:06,840 +that if you have a bunch of parameters + +581 +00:26:04,240 --> 00:26:08,120 +um some of them are probably close to + +582 +00:26:06,840 --> 00:26:09,640 +zero in which case they're not doing + +583 +00:26:08,120 --> 00:26:13,000 +anything anyways so just make them + +584 +00:26:09,640 --> 00:26:14,880 +completely set to zero that way you can + +585 +00:26:13,000 --> 00:26:16,640 +ignore those parameters effectively they + +586 +00:26:14,880 --> 00:26:17,520 +they effectively are not doing anything + +587 +00:26:16,640 --> 00:26:20,679 +uh + +588 +00:26:17,520 --> 00:26:24,559 +and uh it's as if they don't exist so in + +589 +00:26:20,679 --> 00:26:26,240 +magnitude pruning you set to zero some + +590 +00:26:24,559 --> 00:26:29,200 +percentage of parameters that have the + +591 +00:26:26,240 --> 00:26:32,720 +least magnitude + +592 +00:26:29,200 --> 00:26:34,440 +and uh in like machine translation we + +593 +00:26:32,720 --> 00:26:37,279 +people have seen that you can remove + +594 +00:26:34,440 --> 00:26:39,360 +almost half the parameters in a model + +595 +00:26:37,279 --> 00:26:41,559 +and get almost zero change in your + +596 +00:26:39,360 --> 00:26:43,960 +Downstream performance which I think + +597 +00:26:41,559 --> 00:26:45,640 +goes back to the earlier point about + +598 +00:26:43,960 --> 00:26:47,200 +over parameterization like you need a + +599 +00:26:45,640 --> 00:26:49,679 +lot of these parameters for training the + +600 +00:26:47,200 --> 00:26:51,440 +model but in practice they're not really + +601 +00:26:49,679 --> 00:26:53,600 +doing too much and so you can just get + +602 +00:26:51,440 --> 00:26:55,640 +rid of + +603 +00:26:53,600 --> 00:26:58,200 +them and so this is a type of + +604 +00:26:55,640 --> 00:27:01,760 +unstructured pruning where you're just + +605 +00:26:58,200 --> 00:27:04,080 +um you're removing + +606 +00:27:01,760 --> 00:27:06,640 +parameters throughout the model anywhere + +607 +00:27:04,080 --> 00:27:09,039 +you see fit there's no structure to how + +608 +00:27:06,640 --> 00:27:09,039 +you're doing the + +609 +00:27:09,200 --> 00:27:16,760 +pruning um and this is related to the + +610 +00:27:12,640 --> 00:27:19,480 +lottery ticket hypothesis which was uh + +611 +00:27:16,760 --> 00:27:22,159 +this idea that when you train a full + +612 +00:27:19,480 --> 00:27:24,279 +model um there are like randomly initial + +613 +00:27:22,159 --> 00:27:26,520 +there there are sub networks of that + +614 +00:27:24,279 --> 00:27:29,520 +model that + +615 +00:27:26,520 --> 00:27:29,520 +um + +616 +00:27:37,080 --> 00:27:39,799 +the idea is that when you're training a + +617 +00:27:38,320 --> 00:27:42,120 
+big model there are sub networks that + +618 +00:27:39,799 --> 00:27:44,080 +are actually a better initialization + +619 +00:27:42,120 --> 00:27:45,720 +than the initial model so it it doesn't + +620 +00:27:44,080 --> 00:27:48,440 +it's it's a little bit unintuitive that + +621 +00:27:45,720 --> 00:27:51,159 +if you have let's say a model with 100 + +622 +00:27:48,440 --> 00:27:53,240 +with 100 billion parameters uh there's + +623 +00:27:51,159 --> 00:27:55,080 +subnetworks of this model with even if + +624 +00:27:53,240 --> 00:27:57,240 +you randomly initialize them that might + +625 +00:27:55,080 --> 00:28:00,080 +say have a billion parameters like 1% of + +626 +00:27:57,240 --> 00:28:03,279 +the size that are actually better than + +627 +00:28:00,080 --> 00:28:06,799 +the full model um and so this is related + +628 +00:28:03,279 --> 00:28:09,240 +to pruning but here um they prune the + +629 +00:28:06,799 --> 00:28:12,360 +model then they retrain it and they find + +630 +00:28:09,240 --> 00:28:15,519 +that surprisingly like here a model that + +631 +00:28:12,360 --> 00:28:19,080 +is 20% the size so it's pruned to 20% of + +632 +00:28:15,519 --> 00:28:21,080 +the original models parameters and then + +633 +00:28:19,080 --> 00:28:23,159 +retrained is actually like more + +634 +00:28:21,080 --> 00:28:26,159 +effective and generalizes better than + +635 +00:28:23,159 --> 00:28:28,880 +your original model um so the idea here + +636 +00:28:26,159 --> 00:28:30,799 +is basically finding like really good + +637 +00:28:28,880 --> 00:28:35,159 +initializations of these sub networks + +638 +00:28:30,799 --> 00:28:37,600 +can be better than uh like the most + +639 +00:28:35,159 --> 00:28:40,960 +intuitive random initialization of a big + +640 +00:28:37,600 --> 00:28:42,559 +model and so this is sort of a Step + +641 +00:28:40,960 --> 00:28:44,080 +Beyond pruning where you're pruning a + +642 +00:28:42,559 --> 00:28:46,440 +model then training on top of that and + +643 +00:28:44,080 --> 00:28:49,640 +that can improve + +644 +00:28:46,440 --> 00:28:51,000 +performance um but generally pruning I + +645 +00:28:49,640 --> 00:28:52,360 +think is not a method to improve + +646 +00:28:51,000 --> 00:28:56,200 +performance method to maintain + +647 +00:28:52,360 --> 00:28:59,519 +performance while improving the um the + +648 +00:28:56,200 --> 00:29:02,559 +efficiency and the size of your + +649 +00:28:59,519 --> 00:29:04,519 +model and uh so there's been like a lot + +650 +00:29:02,559 --> 00:29:08,279 +of cool work in pruning coming out of + +651 +00:29:04,519 --> 00:29:11,120 +CMU recently um this paper Called Wanda + +652 +00:29:08,279 --> 00:29:15,000 +uh which came from ml from folks in MLD + +653 +00:29:11,120 --> 00:29:17,559 +um the idea here is that magnitude + +654 +00:29:15,000 --> 00:29:19,240 +pruning presumes that you can just + +655 +00:29:17,559 --> 00:29:21,640 +decide which parameters you want to + +656 +00:29:19,240 --> 00:29:24,039 +throw away based on how big they are but + +657 +00:29:21,640 --> 00:29:26,720 +it doesn't consider the fact that there + +658 +00:29:24,039 --> 00:29:29,679 +are systematic differences in the size + +659 +00:29:26,720 --> 00:29:31,799 +of inputs that come in um so in the + +660 +00:29:29,679 --> 00:29:33,559 +paper they gave a a nice example which + +661 +00:29:31,799 --> 00:29:37,120 +maybe I'll I'll write on the Whiteboard + +662 +00:29:33,559 --> 00:29:37,120 +here um which is + +663 +00:29:56,320 --> 00:30:02,480 +that if your let's say your your model + 
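+To ground the magnitude pruning idea described above, a minimal PyTorch
+sketch (an illustrative addition; real implementations keep a mask around,
+and as discussed below, sparsity alone does not guarantee a speedup):
+
+```python
+import torch
+
+def magnitude_prune_(weight, sparsity=0.5):
+    """Zero out, in place, the fraction of entries with the smallest magnitude."""
+    k = int(weight.numel() * sparsity)
+    if k == 0:
+        return
+    threshold = weight.abs().flatten().kthvalue(k).values
+    weight[weight.abs() <= threshold] = 0.0
+
+W = torch.randn(4, 4)
+magnitude_prune_(W, sparsity=0.5)
+print((W == 0).float().mean())  # about half the entries are now zero
+```
+
+The Wanda scoring described next replaces this plain weight magnitude with
+the weight magnitude times the typical magnitude of the inputs it multiplies.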
+664 +00:29:59,320 --> 00:30:06,200 +was just this basic two parameter model + +665 +00:30:02,480 --> 00:30:08,399 +X and Y and you had weights A and B um + +666 +00:30:06,200 --> 00:30:10,679 +and let's say we know that the magnitude + +667 +00:30:08,399 --> 00:30:15,440 +of a is like a lot bigger than the + +668 +00:30:10,679 --> 00:30:18,320 +magnitude of B then in magnitude pruning + +669 +00:30:15,440 --> 00:30:20,360 +we would just set B to + +670 +00:30:18,320 --> 00:30:24,640 +zero and then the model would just + +671 +00:30:20,360 --> 00:30:27,440 +become a * X because the idea is that + +672 +00:30:24,640 --> 00:30:29,080 +this this parameter has a lot more + +673 +00:30:27,440 --> 00:30:31,799 +effect on the output and therefore we + +674 +00:30:29,080 --> 00:30:33,840 +don't need to consider the other one but + +675 +00:30:31,799 --> 00:30:37,720 +what if I told you now that the range of + +676 +00:30:33,840 --> 00:30:40,279 +X like the average value of x was a + +677 +00:30:37,720 --> 00:30:42,000 +thousand uh sorry it was sorry the + +678 +00:30:40,279 --> 00:30:45,840 +average value of x was one and the + +679 +00:30:42,000 --> 00:30:49,159 +average value of y was a thousand um now + +680 +00:30:45,840 --> 00:30:50,919 +in practice even though B is smaller + +681 +00:30:49,159 --> 00:30:52,640 +it's processing much larger inputs and + +682 +00:30:50,919 --> 00:30:56,760 +therefore it's going to have a outsize + +683 +00:30:52,640 --> 00:30:58,600 +impact on the output of the model um so + +684 +00:30:56,760 --> 00:31:01,880 +that's the motivation of Wanda which is + +685 +00:30:58,600 --> 00:31:04,080 +here they um decide which parameters to + +686 +00:31:01,880 --> 00:31:05,519 +to prune based on a combination of the + +687 +00:31:04,080 --> 00:31:08,679 +magnitude of that + +688 +00:31:05,519 --> 00:31:10,720 +parameter as well as the magnitude of + +689 +00:31:08,679 --> 00:31:13,200 +the actual inputs that come into that + +690 +00:31:10,720 --> 00:31:16,679 +layer of the model so they take like + +691 +00:31:13,200 --> 00:31:18,600 +data uh calibration data and then they + +692 +00:31:16,679 --> 00:31:20,960 +kind say learn what the average + +693 +00:31:18,600 --> 00:31:25,399 +magnitude of the inputs are and they use + +694 +00:31:20,960 --> 00:31:25,399 +this to decide what parameters to + +695 +00:31:26,000 --> 00:31:29,799 +prune um + +696 +00:31:28,200 --> 00:31:31,919 +okay so + +697 +00:31:29,799 --> 00:31:33,799 +uh so far I've been talking about + +698 +00:31:31,919 --> 00:31:35,799 +unstructured pruning and I think there's + +699 +00:31:33,799 --> 00:31:38,399 +a pretty clear problem with this that + +700 +00:31:35,799 --> 00:31:40,880 +makes this really not that that + +701 +00:31:38,399 --> 00:31:44,399 +effective in + +702 +00:31:40,880 --> 00:31:46,919 +practice and uh the problem is that you + +703 +00:31:44,399 --> 00:31:49,559 +can make a model sparse you can take the + +704 +00:31:46,919 --> 00:31:51,960 +vectors and make them sparse but if your + +705 +00:31:49,559 --> 00:31:53,960 +Hardware does not take advantage of that + +706 +00:31:51,960 --> 00:31:57,200 +sparsity then you're not getting any + +707 +00:31:53,960 --> 00:31:59,919 +gains in performance so for example um + +708 +00:31:57,200 --> 00:32:01,799 +here we like turn turned off half the + +709 +00:31:59,919 --> 00:32:04,880 +parameters but if we're still + +710 +00:32:01,799 --> 00:32:07,200 +multiplying zeros with other zeros in + +711 +00:32:04,880 --> 00:32:08,639 +like a dense 
operation we're doing + +712 +00:32:07,200 --> 00:32:13,559 +exactly the same amount of work we're + +713 +00:32:08,639 --> 00:32:16,440 +getting no no no benefits here um and + +714 +00:32:13,559 --> 00:32:18,880 +the reality is that right now hardware + +715 +00:32:16,440 --> 00:32:20,840 +for machine learning does not support + +716 +00:32:18,880 --> 00:32:23,480 +sparse data structures or computation + +717 +00:32:20,840 --> 00:32:25,919 +that well like Matrix Matrix + +718 +00:32:23,480 --> 00:32:29,919 +multiplications do some kinds of + +719 +00:32:25,919 --> 00:32:31,919 +complicated things under the hood um and + +720 +00:32:29,919 --> 00:32:34,559 +therefore they don't really work that + +721 +00:32:31,919 --> 00:32:37,399 +well for sparse data structures uh and + +722 +00:32:34,559 --> 00:32:39,559 +so so basically right now this is not + +723 +00:32:37,399 --> 00:32:42,519 +very effective in the current Hardware + +724 +00:32:39,559 --> 00:32:45,159 +although I hope this will change in the + +725 +00:32:42,519 --> 00:32:47,399 +future so therefore like a more + +726 +00:32:45,159 --> 00:32:50,360 +immediately useful idea is called + +727 +00:32:47,399 --> 00:32:52,480 +structured pruning and the idea here is + +728 +00:32:50,360 --> 00:32:54,679 +that instead of just picking parameters + +729 +00:32:52,480 --> 00:32:57,480 +across like willy-nilly across the whole + +730 +00:32:54,679 --> 00:33:01,960 +model you remove entire components or + +731 +00:32:57,480 --> 00:33:04,279 +entire layers uh and therefore you're + +732 +00:33:01,960 --> 00:33:07,200 +pruning the model in a way that is + +733 +00:33:04,279 --> 00:33:10,159 +structured and uh really going to make a + +734 +00:33:07,200 --> 00:33:10,159 +difference on your overall + +735 +00:33:10,480 --> 00:33:15,840 +runtime so uh Graham and one of his PhD + +736 +00:33:13,600 --> 00:33:18,799 +students a few years ago did some really + +737 +00:33:15,840 --> 00:33:20,360 +cool work on this where they showed that + +738 +00:33:18,799 --> 00:33:23,320 +if you're training a Transformer model + +739 +00:33:20,360 --> 00:33:25,480 +like Bert you usually have many heads of + +740 +00:33:23,320 --> 00:33:26,840 +attention like you guys also experienced + +741 +00:33:25,480 --> 00:33:27,559 +this for the Llama homework where you + +742 +00:33:26,840 --> 00:33:28,880 +have + +743 +00:33:27,559 --> 00:33:32,519 +I think it was eight heads of attention + +744 +00:33:28,880 --> 00:33:34,320 +there um but in practice most of these + +745 +00:33:32,519 --> 00:33:36,120 +heads of attention can be removed + +746 +00:33:34,320 --> 00:33:39,080 +Without Really + +747 +00:33:36,120 --> 00:33:41,919 +any uh negative impact on the + +748 +00:33:39,080 --> 00:33:44,039 +performance of your model and so here + +749 +00:33:41,919 --> 00:33:46,480 +they show that for Mt model you can + +750 +00:33:44,039 --> 00:33:49,159 +remove half of the attention heads + +751 +00:33:46,480 --> 00:33:50,159 +entirely and get a negligible impact on + +752 +00:33:49,159 --> 00:33:52,120 +your + +753 +00:33:50,159 --> 00:33:55,000 +performance this is different than what + +754 +00:33:52,120 --> 00:33:57,519 +we showed here where we were removing + +755 +00:33:55,000 --> 00:34:01,039 +parameters from anywhere in the model we + +756 +00:33:57,519 --> 00:34:03,559 +saw fit here we're removing entire heads + +757 +00:34:01,039 --> 00:34:05,799 +of attention even some of those heads + +758 +00:34:03,559 --> 00:34:06,799 +might have large magnitude weights some + +759 
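+A simplified sketch of what removing attention heads amounts to
+(hypothetical shapes; a real implementation slices the query, key, value,
+and output projection matrices so the computation genuinely shrinks):
+
+```python
+import torch
+
+def drop_heads(attn_heads, keep):
+    """attn_heads: (batch, heads, seq, head_dim); keep: indices of heads to retain."""
+    return attn_heads[:, keep]
+
+x = torch.randn(2, 8, 16, 64)              # 8 heads, as in the Llama assignment
+pruned = drop_heads(x, keep=[0, 2, 5, 7])  # structured: whole heads disappear
+print(pruned.shape)                        # torch.Size([2, 4, 16, 64])
+```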
+00:34:05,799 --> 00:34:09,000 +of them might have small magnitude + +760 +00:34:06,799 --> 00:34:11,159 +weights but um we can just remove the + +761 +00:34:09,000 --> 00:34:12,679 +entire attention head and this has a + +762 +00:34:11,159 --> 00:34:14,919 +immediate impact on the performance of + +763 +00:34:12,679 --> 00:34:14,919 +your + +764 +00:34:17,720 --> 00:34:24,000 +model and uh generalizing this recent + +765 +00:34:21,359 --> 00:34:27,399 +work has proposed + +766 +00:34:24,000 --> 00:34:30,240 +uh controlling even uh other kind of + +767 +00:34:27,399 --> 00:34:32,760 +components of your model so um in this + +768 +00:34:30,240 --> 00:34:35,919 +paper from from two years ago uh they + +769 +00:34:32,760 --> 00:34:39,159 +propose masking having two levels of + +770 +00:34:35,919 --> 00:34:40,720 +masks on your model the first is um like + +771 +00:34:39,159 --> 00:34:44,119 +what they call a coar mask which is + +772 +00:34:40,720 --> 00:34:46,960 +turning off large components like full + +773 +00:34:44,119 --> 00:34:48,440 +attention heads or full feed forward + +774 +00:34:46,960 --> 00:34:51,320 +layers where you replace them with + +775 +00:34:48,440 --> 00:34:53,679 +identity Matrix um and these are like + +776 +00:34:51,320 --> 00:34:57,440 +really big things to turn off and then + +777 +00:34:53,679 --> 00:35:00,119 +you could also have fine masks like um + +778 +00:34:57,440 --> 00:35:02,359 +sorry I meant uh entire self attention + +779 +00:35:00,119 --> 00:35:04,280 +layers not attention heads and then the + +780 +00:35:02,359 --> 00:35:06,920 +fine masks would control like individual + +781 +00:35:04,280 --> 00:35:10,480 +heads or um removing individual + +782 +00:35:06,920 --> 00:35:12,680 +Dimensions so uh changing your hidden + +783 +00:35:10,480 --> 00:35:16,520 +state to be from like 512 Dimensions to + +784 +00:35:12,680 --> 00:35:18,680 +200 Dimensions um and so the idea here + +785 +00:35:16,520 --> 00:35:20,880 +is they give two different levels of + +786 +00:35:18,680 --> 00:35:23,680 +granularity at which you can turn off + +787 +00:35:20,880 --> 00:35:26,160 +different components um and then these + +788 +00:35:23,680 --> 00:35:29,359 +masks then learned using some kind of + +789 +00:35:26,160 --> 00:35:32,040 +held out valid dat to learn what can we + +790 +00:35:29,359 --> 00:35:34,240 +off without totally destroying the + +791 +00:35:32,040 --> 00:35:35,960 +perance of this + +792 +00:35:34,240 --> 00:35:38,800 +model + +793 +00:35:35,960 --> 00:35:40,520 +um and in this paper they showed that + +794 +00:35:38,800 --> 00:35:43,680 +you can really get pretty far with this + +795 +00:35:40,520 --> 00:35:46,480 +idea um and the last thing I'll say + +796 +00:35:43,680 --> 00:35:49,440 +about pruning here is that + +797 +00:35:46,480 --> 00:35:53,240 +uh methods like this you're actually + +798 +00:35:49,440 --> 00:35:55,280 +learning a kind of control over your + +799 +00:35:53,240 --> 00:35:57,440 +your model so you're learning these this + +800 +00:35:55,280 --> 00:36:00,160 +set of of masks + +801 +00:35:57,440 --> 00:36:01,359 +and that's pretty expensive in terms of + +802 +00:36:00,160 --> 00:36:04,160 +training + +803 +00:36:01,359 --> 00:36:06,319 +budget it requires a lot of GPU memory + +804 +00:36:04,160 --> 00:36:09,200 +and so if you want to prune let's say a + +805 +00:36:06,319 --> 00:36:11,200 +llama 70 billion model you'll need + +806 +00:36:09,200 --> 00:36:13,000 +basically as much computer as it need as + +807 +00:36:11,200 --> 
00:36:16,359 +it took to train that model to begin + +808 +00:36:13,000 --> 00:36:18,880 +with so um one of this is a recent paper + +809 +00:36:16,359 --> 00:36:21,400 +from Graham and one of his PhD students + +810 +00:36:18,880 --> 00:36:23,079 +where they instead ask can we do pruning + +811 +00:36:21,400 --> 00:36:25,400 +without having to like compute gradients + +812 +00:36:23,079 --> 00:36:27,280 +at all so basically can we if we just + +813 +00:36:25,400 --> 00:36:31,240 +have enough memory to run the model on + +814 +00:36:27,280 --> 00:36:33,160 +our computer uh can we then use that + +815 +00:36:31,240 --> 00:36:35,760 +same computer to prune the model without + +816 +00:36:33,160 --> 00:36:38,400 +having to do any to use atom or to + +817 +00:36:35,760 --> 00:36:40,319 +compute gradients um and so I think the + +818 +00:36:38,400 --> 00:36:42,640 +idea here is really clever they + +819 +00:36:40,319 --> 00:36:45,200 +basically randomly mask out all the + +820 +00:36:42,640 --> 00:36:48,520 +different modules in the network so they + +821 +00:36:45,200 --> 00:36:50,160 +like create like 100 or a thousand + +822 +00:36:48,520 --> 00:36:52,839 +variants of this model with different + +823 +00:36:50,160 --> 00:36:55,240 +masks turned off then they measure the + +824 +00:36:52,839 --> 00:36:58,119 +performance of those like perturbed + +825 +00:36:55,240 --> 00:37:01,040 +models and then they learn a regression + +826 +00:36:58,119 --> 00:37:03,119 +of like how much does each module affect + +827 +00:37:01,040 --> 00:37:05,880 +the performance of the of the full + +828 +00:37:03,119 --> 00:37:08,560 +system and then uh you can then use + +829 +00:37:05,880 --> 00:37:11,560 +these regression weights of this uh + +830 +00:37:08,560 --> 00:37:13,000 +train of this learned regressor to + +831 +00:37:11,560 --> 00:37:14,960 +figure out which modules you can + +832 +00:37:13,000 --> 00:37:17,599 +actually turn off without impacting the + +833 +00:37:14,960 --> 00:37:17,599 +performance too + +834 +00:37:18,880 --> 00:37:27,880 +much okay um any questions + +835 +00:37:24,040 --> 00:37:29,720 +here just doing like a validation for on + +836 +00:37:27,880 --> 00:37:33,680 +these prun models + +837 +00:37:29,720 --> 00:37:33,680 +yeah like randomly prune + +838 +00:37:38,640 --> 00:37:44,000 +models model what happens to the matrix + +839 +00:37:46,000 --> 00:37:50,960 +multiplication yeah yeah that's right + +840 +00:37:48,280 --> 00:37:52,640 +like if you it depends on the yeah I + +841 +00:37:50,960 --> 00:37:54,560 +think like for self attention heads or + +842 +00:37:52,640 --> 00:37:55,880 +for feed forward feed forward layers you + +843 +00:37:54,560 --> 00:37:58,599 +can think of it as just multiplying by + +844 +00:37:55,880 --> 00:37:58,599 +the identity + +845 +00:38:04,000 --> 00:38:08,280 +um and next I'm going to move on to + +846 +00:38:05,720 --> 00:38:10,359 +distillation um so + +847 +00:38:08,280 --> 00:38:12,280 +uh if there's any questions about + +848 +00:38:10,359 --> 00:38:13,800 +anything I've covered in pruning um you + +849 +00:38:12,280 --> 00:38:14,960 +can ask now and I know I was moving + +850 +00:38:13,800 --> 00:38:17,640 +pretty fast + +851 +00:38:14,960 --> 00:38:19,760 +so I'm happy to talk what's the point of + +852 +00:38:17,640 --> 00:38:23,520 +burning or aggression like does it + +853 +00:38:19,760 --> 00:38:27,240 +generalize to other at + +854 +00:38:23,520 --> 00:38:30,400 +all yeah the idea is that so the best + +855 +00:38:27,240 --> 
00:38:33,359
+way to do this would be to consider all
+
+856
+00:38:30,400 --> 00:38:35,920
+possible choices of masks like turn off
+
+857
+00:38:33,359 --> 00:38:37,880
+every combination of modules and then
+
+858
+00:38:35,920 --> 00:38:39,599
+measure the performance then you know
+
+859
+00:38:37,880 --> 00:38:43,319
+exactly which combination is
+
+860
+00:38:39,599 --> 00:38:45,040
+the best but if you have um tens of
+
+861
+00:38:43,319 --> 00:38:48,560
+layers and tens of components in each
+
+862
+00:38:45,040 --> 00:38:50,119
+layer you'll have like millions of
+
+863
+00:38:48,560 --> 00:38:53,079
+different sub-modules you want to try
+
+864
+00:38:50,119 --> 00:38:58,040
+out so this lets you just sample from
+
+865
+00:38:53,079 --> 00:38:59,960
+that combinatorial space and then uh
+
+866
+00:38:58,040 --> 00:39:02,200
+predict the interaction between
+
+867
+00:38:59,960 --> 00:39:04,520
+the modules based
+
+868
+00:39:02,200 --> 00:39:07,359
+on what you have seen like slightly more
+
+869
+00:39:04,520 --> 00:39:09,520
+optimized than some kind of random search yeah yeah I
+
+870
+00:39:07,359 --> 00:39:12,119
+think that's right
+
+871
+00:39:09,520 --> 00:39:14,319
+yeah I think you can make an analogy for
+
+872
+00:39:12,119 --> 00:39:16,280
+this to like hyperparameter selection where
+
+873
+00:39:14,319 --> 00:39:18,960
+you can do random search but now people
+
+874
+00:39:16,280 --> 00:39:21,359
+do much fancier things like Bayesian
+
+875
+00:39:18,960 --> 00:39:23,839
+optimization uh which are trying
+
+876
+00:39:21,359 --> 00:39:25,839
+to explore the space in a more
+
+877
+00:39:23,839 --> 00:39:29,319
+structured way and so I think that this
+
+878
+00:39:25,839 --> 00:39:29,319
+is similar in
+
+879
+00:39:33,680 --> 00:39:39,200
+spirit are you like randomly just
+
+880
+00:39:35,960 --> 00:39:41,319
+choosing like for example the green
+
+881
+00:39:39,200 --> 00:39:43,520
+one there is it just like you're
+
+882
+00:39:41,319 --> 00:39:47,000
+retaining 80% of
+
+883
+00:39:43,520 --> 00:39:51,079
+them you're retaining 20% in the green
+
+884
+00:39:47,000 --> 00:39:51,079
+line so you're throwing away 80%
+
+885
+00:39:53,520 --> 00:39:58,640
+um so what this green line
+
+886
+00:39:57,079 --> 00:40:01,599
+exactly means and I did not go into
+
+887
+00:39:58,640 --> 00:40:04,960
+detail here is that you first train a
+
+888
+00:40:01,599 --> 00:40:07,079
+full model uh with all of its parameters
+
+889
+00:40:04,960 --> 00:40:08,720
+and then you apply pruning methods kind
+
+890
+00:40:07,079 --> 00:40:11,960
+of like the magnitude pruning I showed
+
+891
+00:40:08,720 --> 00:40:14,000
+before um to find like a good sub
+
+892
+00:40:11,960 --> 00:40:16,079
+network that is still pretty effective
+
+893
+00:40:14,000 --> 00:40:18,720
+after training but then they do
+
+894
+00:40:16,079 --> 00:40:21,240
+something weird which is they then
+
+895
+00:40:18,720 --> 00:40:22,160
+restore that subnetwork to its initial
+
+896
+00:40:21,240 --> 00:40:24,359
+random
+
+897
+00:40:22,160 --> 00:40:25,359
+initialization so in the beginning you
+
+898
+00:40:24,359 --> 00:40:28,359
+had
+
+899
+00:40:25,359 --> 00:40:28,359
+um
+
+900
+00:40:30,040 --> 00:40:32,920
+in the beginning you
+
+901
+00:40:49,079 --> 00:40:54,520
+had so you train this model and you
+
+902
+00:40:51,760 --> 00:40:57,880
+initially initialize it randomly
+
+903
+00:40:54,520 --> 00:41:00,880
+so you might have um 
yeah is that related to + +931 +00:42:20,960 --> 00:42:25,160 +Dropout so I think you can see + +932 +00:42:23,280 --> 00:42:29,839 +Dropout as + +933 +00:42:25,160 --> 00:42:32,480 +a version of pruning where at each step + +934 +00:42:29,839 --> 00:42:34,520 +of optimization you perform a random + +935 +00:42:32,480 --> 00:42:37,000 +pruning you like randomly prune your + +936 +00:42:34,520 --> 00:42:40,680 +network for each update and then + +937 +00:42:37,000 --> 00:42:42,760 +you um then you update the + +938 +00:42:40,680 --> 00:42:45,599 +parameters that have not been dropped + +939 +00:42:42,760 --> 00:42:48,720 +out but then in the next step you have a + +940 +00:42:45,599 --> 00:42:50,640 +totally different pruned network um so + +941 +00:42:48,720 --> 00:42:52,839 +here you're doing pruning once and then + +942 +00:42:50,640 --> 00:42:55,480 +you're training this fixed pruned + +943 +00:42:52,839 --> 00:42:56,720 +network for all the iterations whereas + +944 +00:42:55,480 --> 00:42:58,960 +in Dropout you're + +945 +00:42:56,720 --> 00:43:01,599 +doing like a random pruning each + +946 +00:42:58,960 --> 00:43:04,119 +time and they serve different purposes + +947 +00:43:01,599 --> 00:43:06,359 +is the main difference where pruning um + +948 +00:43:04,119 --> 00:43:08,680 +is primarily for reducing the size of + +949 +00:43:06,359 --> 00:43:11,520 +your model whereas Dropout is primarily + +950 +00:43:08,680 --> 00:43:14,400 +for regularizing your model to avoid + +951 +00:43:11,520
--> 00:43:17,040 +overfitting certain like to individual + +952 +00:43:14,400 --> 00:43:17,040 +um + +953 +00:43:18,559 --> 00:43:24,480 +weights like to avoid individual weights + +954 +00:43:21,040 --> 00:43:24,480 +having too much correlation with the + +955 +00:43:25,079 --> 00:43:28,079 +label + +956 +00:43:35,800 --> 00:43:41,680 +okay so I'll move on now to um the final + +957 +00:43:38,359 --> 00:43:41,680 +piece here which is + +958 +00:43:46,119 --> 00:43:50,400 +uh yeah so the last slide I forgot to + +959 +00:43:48,480 --> 00:43:51,920 +show here is that which I think is a + +960 +00:43:50,400 --> 00:43:55,200 +kind of a summarization of everything + +961 +00:43:51,920 --> 00:43:57,319 +I've taught so far about pruning is that + +962 +00:43:55,200 --> 00:44:00,640 +Wanda which is this unstructured pruning + +963 +00:43:57,319 --> 00:44:04,240 +way that that I showed earlier um even + +964 +00:44:00,640 --> 00:44:06,720 +though it achieves the same number of + +965 +00:44:04,240 --> 00:44:09,599 +parameters as like a structured pruning + +966 +00:44:06,720 --> 00:44:13,960 +method which is Bonsai the one uh from + +967 +00:44:09,599 --> 00:44:16,760 +from Graham uh it actually achieves much + +968 +00:44:13,960 --> 00:44:22,200 +less of a speed up potentially like a + +969 +00:44:16,760 --> 00:44:23,720 +negative speed up um and uh whereas a + +970 +00:44:22,200 --> 00:44:25,480 +structured pruning + +971 +00:44:23,720 --> 00:44:27,839 +method with the same amount of + +972 +00:44:25,480 --> 00:44:31,079 +parameters can be like a lot faster like + +973 +00:44:27,839 --> 00:44:32,359 +50% speed up is huge so um I think that + +974 +00:44:31,079 --> 00:44:35,319 +this shows that like unstructured + +975 +00:44:32,359 --> 00:44:39,079 +pruning can be pretty uh + +976 +00:44:35,319 --> 00:44:39,079 +ineffective if it's done + +977 +00:44:40,520 --> 00:44:46,640 +naively okay so now I'll move on to the + +978 +00:44:42,640 --> 00:44:46,640 +final piece here which is + +979 +00:44:48,079 --> 00:44:54,079 +distillation so in distillation the core + +980 +00:44:50,800 --> 00:44:56,599 +idea here is that you're um training one + +981 +00:44:54,079 --> 00:44:58,880 +model to replicate the behavior of + +982 +00:44:56,599 --> 00:44:58,880 +another + +983 +00:44:59,839 --> 00:45:03,800 +model and uh this is pretty + +984 +00:45:02,480 --> 00:45:07,400 +fundamentally different than the other + +985 +00:45:03,800 --> 00:45:09,760 +two methods we talked about so far uh in + +986 +00:45:07,400 --> 00:45:12,040 +distillation you're probably changing + +987 +00:45:09,760 --> 00:45:14,000 +every parameter in your model you might + +988 +00:45:12,040 --> 00:45:16,599 +even be having a totally different + +989 +00:45:14,000 --> 00:45:19,680 +architecture uh in the other two methods + +990 +00:45:16,599 --> 00:45:21,599 +um in quantization you are like kind of + +991 +00:45:19,680 --> 00:45:23,559 +not changing any of your parameters up + +992 +00:45:21,599 --> 00:45:25,559 +to a certain amount of precision and in + +993 +00:45:23,559 --> 00:45:28,119 +pruning you were keeping a set of your + +994 +00:45:25,559 --> 00:45:29,880 +parameters is completely fixed whereas + +995 +00:45:28,119 --> 00:45:32,599 +in distillation you're like changing + +996 +00:45:29,880 --> 00:45:36,440 +everything uh but hopefully doing it in + +997 +00:45:32,599 --> 00:45:36,440 +a way that requires many fewer + +998 +00:45:39,839 --> 00:45:45,359 +parameters and uh distillation is + +999 +00:45:42,599 --> 00:45:47,400 
+related to a really cool idea that uh is + +1000 +00:45:45,359 --> 00:45:48,400 +more classic machine learning called + +1001 +00:45:47,400 --> 00:45:50,839 +weak + +1002 +00:45:48,400 --> 00:45:53,319 +supervision which is the idea that if + +1003 +00:45:50,839 --> 00:45:55,640 +you have unlabeled text or it could be + +1004 +00:45:53,319 --> 00:45:56,880 +images or whatever uh whatever data you + +1005 +00:45:55,640 --> 00:45:59,920 +want a + +1006 +00:45:56,880 --> 00:46:02,200 +process you can produce like things that + +1007 +00:45:59,920 --> 00:46:04,200 +are like labels that you could use like + +1008 +00:46:02,200 --> 00:46:07,240 +labels but maybe are not actually + +1009 +00:46:04,200 --> 00:46:09,440 +written by humans um and then you can + +1010 +00:46:07,240 --> 00:46:11,640 +train on these as as if they were labels + +1011 +00:46:09,440 --> 00:46:14,839 +and actually get pretty good + +1012 +00:46:11,640 --> 00:46:17,680 +performance uh so + +1013 +00:46:14,839 --> 00:46:19,520 +um this a few like really famous + +1014 +00:46:17,680 --> 00:46:23,240 +examples of this is one is self trining + +1015 +00:46:19,520 --> 00:46:26,400 +where you initialize like a a model um + +1016 +00:46:23,240 --> 00:46:28,240 +maybe with a a handful of points like + +1017 +00:46:26,400 --> 00:46:30,680 +three or five + +1018 +00:46:28,240 --> 00:46:32,359 +examples you train a classifier on that + +1019 +00:46:30,680 --> 00:46:33,839 +very small number of points which is + +1020 +00:46:32,359 --> 00:46:36,720 +going to be really bad because it's not + +1021 +00:46:33,839 --> 00:46:39,440 +enough data to learn then you have that + +1022 +00:46:36,720 --> 00:46:40,960 +model make its own predictions which are + +1023 +00:46:39,440 --> 00:46:44,559 +probably pretty pretty bad on a bunch of + +1024 +00:46:40,960 --> 00:46:46,800 +unlabeled text you use those pseudo + +1025 +00:46:44,559 --> 00:46:48,680 +labels to update the model again and you + +1026 +00:46:46,800 --> 00:46:50,319 +can do this iteratively so you're like + +1027 +00:46:48,680 --> 00:46:52,559 +basically using a model to produce its + +1028 +00:46:50,319 --> 00:46:55,800 +own training data to train itself and + +1029 +00:46:52,559 --> 00:46:58,319 +you do this over and over um and this is + +1030 +00:46:55,800 --> 00:47:01,079 +is like a a pretty classic method at + +1031 +00:46:58,319 --> 00:47:02,319 +this point it's like 30 years old um and + +1032 +00:47:01,079 --> 00:47:05,200 +that's self + +1033 +00:47:02,319 --> 00:47:08,000 +trining um and then there's a few others + +1034 +00:47:05,200 --> 00:47:09,960 +that I won't go into uh just not to be + +1035 +00:47:08,000 --> 00:47:12,520 +too dense but um that are related to + +1036 +00:47:09,960 --> 00:47:15,520 +this uh and this is all pseudo labels + +1037 +00:47:12,520 --> 00:47:17,720 +are also used um when + +1038 +00:47:15,520 --> 00:47:19,400 +you let's say you don't have the ability + +1039 +00:47:17,720 --> 00:47:21,680 +to annotate thousands of examples but + +1040 +00:47:19,400 --> 00:47:24,400 +you can write like a basic rule so you + +1041 +00:47:21,680 --> 00:47:27,160 +might say if you see a movie review that + +1042 +00:47:24,400 --> 00:47:29,920 +says awesome it's positive and if you + +1043 +00:47:27,160 --> 00:47:32,240 +see if it says I hated it it's negative + +1044 +00:47:29,920 --> 00:47:34,240 +this is um probably not a good rule to + +1045 +00:47:32,240 --> 00:47:36,240 +apply for your actual classifier because + +1046 +00:47:34,240 --> 00:47:38,559 +as 
Graham had showed earlier this is + +1047 +00:47:36,240 --> 00:47:41,160 +like really brittle it requires a lot of + +1048 +00:47:38,559 --> 00:47:43,000 +work but you can use these rules to + +1049 +00:47:41,160 --> 00:47:45,559 +construct pseudo labels that you then + +1050 +00:47:43,000 --> 00:47:48,040 +train like an actual full vocabulary + +1051 +00:47:45,559 --> 00:47:49,480 +model on um and if you have enough of + +1052 +00:47:48,040 --> 00:47:52,720 +these pseudo labels with enough of these + +1053 +00:47:49,480 --> 00:47:54,400 +rules um you can actually get pretty far + +1054 +00:47:52,720 --> 00:47:56,240 +and so that idea I just described is + +1055 +00:47:54,400 --> 00:47:57,720 +called is like in a + +1056 +00:47:56,240 --> 00:48:00,680 +there's a startup called snorkel that + +1057 +00:47:57,720 --> 00:48:02,760 +does that um and they have a bunch of + +1058 +00:48:00,680 --> 00:48:05,800 +papers about this idea as + +1059 +00:48:02,760 --> 00:48:08,400 +well so uh yeah + +1060 +00:48:05,800 --> 00:48:10,559 +so I'm me mentioning weak supervision + +1061 +00:48:08,400 --> 00:48:13,200 +because this to me forms the basis of + +1062 +00:48:10,559 --> 00:48:15,960 +knowledge distillation um in knowledge + +1063 +00:48:13,200 --> 00:48:18,760 +distillation you train a small model to + +1064 +00:48:15,960 --> 00:48:21,400 +just replicate the predictions of a big + +1065 +00:48:18,760 --> 00:48:25,480 +model so the big model is producing + +1066 +00:48:21,400 --> 00:48:27,240 +pseudo labels on unlabeled text uh and + +1067 +00:48:25,480 --> 00:48:29,839 +then that becomes the target for your + +1068 +00:48:27,240 --> 00:48:31,359 +small model to match uh the One + +1069 +00:48:29,839 --> 00:48:32,880 +requirement here that I think is really + +1070 +00:48:31,359 --> 00:48:35,720 +important to note is that you do need + +1071 +00:48:32,880 --> 00:48:38,480 +unlabeled text that that matches what + +1072 +00:48:35,720 --> 00:48:41,040 +you expect as input so let's say you're + +1073 +00:48:38,480 --> 00:48:44,720 +doing a movie review classification you + +1074 +00:48:41,040 --> 00:48:46,319 +definitely would need to somehow find um + +1075 +00:48:44,720 --> 00:48:47,960 +thousands of movie reviews that look + +1076 +00:48:46,319 --> 00:48:50,960 +like the kinds that your model is going + +1077 +00:48:47,960 --> 00:48:53,520 +to expect uh and + +1078 +00:48:50,960 --> 00:48:57,599 +that's most of these methods require + +1079 +00:48:53,520 --> 00:48:57,599 +that to work um Okay so + +1080 +00:49:03,920 --> 00:49:08,040 +the there's broadly like two kinds of + +1081 +00:49:06,640 --> 00:49:10,760 +ways you can train knowledge + +1082 +00:49:08,040 --> 00:49:12,480 +distillation um the first is like the + +1083 +00:49:10,760 --> 00:49:14,640 +most obvious which is called hard + +1084 +00:49:12,480 --> 00:49:16,280 +targets where you take your unlabeled + +1085 +00:49:14,640 --> 00:49:19,160 +text you produce a label from your + +1086 +00:49:16,280 --> 00:49:20,960 +teacher model and then you now use that + +1087 +00:49:19,160 --> 00:49:24,000 +predicted like that the teacher's + +1088 +00:49:20,960 --> 00:49:26,880 +prediction as the target for your for + +1089 +00:49:24,000 --> 00:49:28,240 +your model so you might say uh llama 70 + +1090 +00:49:26,880 --> 00:49:30,240 +billion predicted positive for this + +1091 +00:49:28,240 --> 00:49:31,559 +review therefore I'm going to say this + +1092 +00:49:30,240 --> 00:49:33,760 +the label here is + +1093 +00:49:31,559 --> 00:49:37,760 
+positive this is like really easy it's + +1094 +00:49:33,760 --> 00:49:39,520 +convenient it's very intuitive um but + +1095 +00:49:37,760 --> 00:49:42,799 +another type of distillation that's even + +1096 +00:49:39,520 --> 00:49:45,799 +more effective uh pretty consistently is + +1097 +00:49:42,799 --> 00:49:48,839 +called S soft target distillation which + +1098 +00:49:45,799 --> 00:49:50,559 +is that instead of trying to match to do + +1099 +00:49:48,839 --> 00:49:52,839 +a supervised learning objective where + +1100 +00:49:50,559 --> 00:49:54,119 +you're just trying to match the label + +1101 +00:49:52,839 --> 00:49:56,160 +predicted by your + +1102 +00:49:54,119 --> 00:49:58,160 +teacher you and instead want your + +1103 +00:49:56,160 --> 00:50:01,280 +student model to produce probabilities + +1104 +00:49:58,160 --> 00:50:03,119 +over the full distribution of labels + +1105 +00:50:01,280 --> 00:50:03,920 +that matches the teacher distribution + +1106 +00:50:03,119 --> 00:50:07,480 +over + +1107 +00:50:03,920 --> 00:50:10,160 +labels um so like here the Llama 70 + +1108 +00:50:07,480 --> 00:50:11,960 +billion our teacher has predicted like + +1109 +00:50:10,160 --> 00:50:13,960 +probabilities over three different + +1110 +00:50:11,960 --> 00:50:15,599 +labels and then we want the student + +1111 +00:50:13,960 --> 00:50:17,680 +model to match those + +1112 +00:50:15,599 --> 00:50:21,440 +probabilities I think a cool thing here + +1113 +00:50:17,680 --> 00:50:23,079 +is that this is usually not possible + +1114 +00:50:21,440 --> 00:50:25,119 +with supervised learning when you have + +1115 +00:50:23,079 --> 00:50:26,799 +an annotator they usually just give you + +1116 +00:50:25,119 --> 00:50:28,200 +one answer they don't tell you how + +1117 +00:50:26,799 --> 00:50:32,000 +likely it is that they were wrong about + +1118 +00:50:28,200 --> 00:50:35,079 +that answer uh like they don't tell you + +1119 +00:50:32,000 --> 00:50:36,640 +what the next best answer was um but + +1120 +00:50:35,079 --> 00:50:38,880 +with a neural network that's that's + +1121 +00:50:36,640 --> 00:50:42,280 +teaching you you can ask that you have a + +1122 +00:50:38,880 --> 00:50:44,200 +lot more flexibility um and then this + +1123 +00:50:42,280 --> 00:50:46,200 +also changes how it's optimized so + +1124 +00:50:44,200 --> 00:50:48,760 +instead of optimizing for like the + +1125 +00:50:46,200 --> 00:50:50,400 +probability of the correct answer you + +1126 +00:50:48,760 --> 00:50:52,599 +can basically optimize for the + +1127 +00:50:50,400 --> 00:50:57,440 +difference in your the distributions + +1128 +00:50:52,599 --> 00:50:57,440 +over the answers um and + +1129 +00:51:00,280 --> 00:51:06,280 +as we can see here um here in this paper + +1130 +00:51:04,000 --> 00:51:08,960 +by Jeff Hinton and some others um they + +1131 +00:51:06,280 --> 00:51:11,559 +Ed this method for speech uh I think it + +1132 +00:51:08,960 --> 00:51:15,440 +was speech recognition and they showed + +1133 +00:51:11,559 --> 00:51:18,079 +that uh this Baseline here is + +1134 +00:51:15,440 --> 00:51:21,319 +um they took a training set and then + +1135 +00:51:18,079 --> 00:51:23,240 +threw away the labels and pseudo labeled + +1136 +00:51:21,319 --> 00:51:26,280 +those the inputs of the like the input + +1137 +00:51:23,240 --> 00:51:27,400 +speech with another model + +1138 +00:51:26,280 --> 00:51:28,960 +uh and then they showed that if you're + +1139 +00:51:27,400 --> 00:51:30,920 +using hard + +1140 +00:51:28,960 --> 00:51:33,079 +targets and 
you use like the full + +1141 +00:51:30,920 --> 00:51:35,480 +training sets inputs you can get pretty + +1142 +00:51:33,079 --> 00:51:37,280 +far but if you instead don't have that + +1143 +00:51:35,480 --> 00:51:39,799 +much unlabeled speech and you use like a + +1144 +00:51:37,280 --> 00:51:41,960 +small amount of speech using soft + +1145 +00:51:39,799 --> 00:51:44,960 +targets is like way way more + +1146 +00:51:41,960 --> 00:51:44,960 +effective + +1147 +00:51:45,880 --> 00:51:54,200 +uh okay uh I'll just jump ahead to uh I + +1148 +00:51:50,520 --> 00:51:57,200 +think a a result that is like pretty + +1149 +00:51:54,200 --> 00:51:58,680 +shocking that it works um so this came + +1150 +00:51:57,200 --> 00:52:01,680 +from uh Zack Lipton and some other + +1151 +00:51:58,680 --> 00:52:03,839 +people a few years ago um and uh the + +1152 +00:52:01,680 --> 00:52:07,240 +idea here is that you can take a model + +1153 +00:52:03,839 --> 00:52:08,960 +that is trained with supervised learning + +1154 +00:52:07,240 --> 00:52:11,960 +and here they train it on like a image + +1155 +00:52:08,960 --> 00:52:15,359 +classification task then you can + +1156 +00:52:11,960 --> 00:52:17,960 +repeatedly distill it to itself using + +1157 +00:52:15,359 --> 00:52:20,160 +soft targets so you take a bunch of + +1158 +00:52:17,960 --> 00:52:22,680 +images and then predict the distribution + +1159 +00:52:20,160 --> 00:52:24,760 +over the labels of those images and then + +1160 +00:52:22,680 --> 00:52:26,280 +train the model to basically match its + +1161 +00:52:24,760 --> 00:52:28,599 +own + +1162 +00:52:26,280 --> 00:52:31,240 +to + +1163 +00:52:28,599 --> 00:52:34,440 +um you train the model in a soft target + +1164 +00:52:31,240 --> 00:52:38,119 +distillation objective uh using itself + +1165 +00:52:34,440 --> 00:52:39,480 +as a teacher and uh it's still kind of + +1166 +00:52:38,119 --> 00:52:41,960 +bizarre to me that this works but they + +1167 +00:52:39,480 --> 00:52:45,079 +show that this pretty consistently + +1168 +00:52:41,960 --> 00:52:46,880 +improves performance of a model so I + +1169 +00:52:45,079 --> 00:52:49,799 +think that the to me the intuition here + +1170 +00:52:46,880 --> 00:52:51,040 +is that this um soft target objective + +1171 +00:52:49,799 --> 00:52:54,079 +which is different than what you were + +1172 +00:52:51,040 --> 00:52:56,319 +trained on in what you would train using + +1173 +00:52:54,079 --> 00:52:58,280 +supervised learning um it's like a + +1174 +00:52:56,319 --> 00:53:00,440 +different objective that is somehow + +1175 +00:52:58,280 --> 00:53:02,000 +conveying more information to your model + +1176 +00:53:00,440 --> 00:53:06,520 +it's conveying like uncertainties about + +1177 +00:53:02,000 --> 00:53:09,280 +the labels and um it's like a + +1178 +00:53:06,520 --> 00:53:11,640 +richer it's a richer knowledge interface + +1179 +00:53:09,280 --> 00:53:13,799 +between the teacher and the student than + +1180 +00:53:11,640 --> 00:53:15,400 +just giving a single answer and that + +1181 +00:53:13,799 --> 00:53:17,920 +this like Rich interface of knowledge + +1182 +00:53:15,400 --> 00:53:21,240 +can be really effective to me that's + +1183 +00:53:17,920 --> 00:53:22,520 +like the takeaway from this results um + +1184 +00:53:21,240 --> 00:53:24,160 +but yeah check out this paper if you + +1185 +00:53:22,520 --> 00:53:26,319 +think this is + +1186 +00:53:24,160 --> 00:53:28,200 +cool + +1187 +00:53:26,319 --> 00:53:30,319 +okay so + +1188 +00:53:28,200 --> 00:53:33,520 +uh any 
questions on what I've talked + +1189 +00:53:30,319 --> 00:53:33,520 +about so far in distillation + +1190 +00:53:34,079 --> 00:53:37,079 +here + +1191 +00:53:41,240 --> 00:53:44,920 +toble very similar to + +1192 +00:53:47,000 --> 00:53:50,920 +drop I + +1193 +00:53:52,079 --> 00:53:57,160 +drop is the example example + +1194 +00:54:04,960 --> 00:54:10,240 +oh the on yeah yeah yeah yeah I think + +1195 +00:54:07,240 --> 00:54:12,680 +that's right yeah + +1196 +00:54:10,240 --> 00:54:15,920 +yeah yeah if you think of of Dropout as + +1197 +00:54:12,680 --> 00:54:17,280 +a way to perturb the model that uh we + +1198 +00:54:15,920 --> 00:54:19,599 +have different perturbed versions of a + +1199 +00:54:17,280 --> 00:54:22,240 +model here they instead instead produce + +1200 +00:54:19,599 --> 00:54:23,960 +these perturbed versions of the model by + +1201 +00:54:22,240 --> 00:54:26,640 +distilling the previous version to + +1202 +00:54:23,960 --> 00:54:29,680 +itself um and then under under that view + +1203 +00:54:26,640 --> 00:54:31,760 +it's like they're both just ensembles of + +1204 +00:54:29,680 --> 00:54:33,760 +random perturbations of the same model + +1205 +00:54:31,760 --> 00:54:36,960 +yeah I think that that's exactly + +1206 +00:54:33,760 --> 00:54:42,480 +right I also have question about + +1207 +00:54:36,960 --> 00:54:46,440 +this so see op mod is not by initially + +1208 +00:54:42,480 --> 00:54:50,240 +then the soft label produced also has a + +1209 +00:54:46,440 --> 00:54:52,359 +low quality how can you train train the + +1210 +00:54:50,240 --> 00:54:57,079 +model subsequently using the low Quality + +1211 +00:54:52,359 --> 00:54:57,079 +Soft label B + +1212 +00:54:58,280 --> 00:55:03,079 +yeah I think that this method would + +1213 +00:55:00,280 --> 00:55:06,640 +require like pretty strong initial model + +1214 +00:55:03,079 --> 00:55:08,599 +to make this work and then given like a + +1215 +00:55:06,640 --> 00:55:10,839 +a decent enough initial model it can + +1216 +00:55:08,599 --> 00:55:12,480 +make it better that's how that's like my + +1217 +00:55:10,839 --> 00:55:14,599 +intuition on on + +1218 +00:55:12,480 --> 00:55:16,359 +that at the most extreme case if you + +1219 +00:55:14,599 --> 00:55:18,359 +start off with a random a random model + +1220 +00:55:16,359 --> 00:55:21,920 +that produces random outputs you'll + +1221 +00:55:18,359 --> 00:55:21,920 +probably never improve + +1222 +00:55:24,000 --> 00:55:27,000 +performance + +1223 +00:55:34,480 --> 00:55:41,440 +um so I I don't have a a clear answer + +1224 +00:55:38,440 --> 00:55:43,160 +but I think that um I would say it + +1225 +00:55:41,440 --> 00:55:45,760 +probably just has to be a little better + +1226 +00:55:43,160 --> 00:55:48,640 +than random chance which is probably a + +1227 +00:55:45,760 --> 00:55:54,000 +surprising answer uh there's a previous + +1228 +00:55:48,640 --> 00:55:57,000 +work um let me see if I can find it + +1229 +00:55:54,000 --> 00:55:57,000 +uh + +1230 +00:56:05,240 --> 00:56:10,000 +yeah so this this paper I recently read + +1231 +00:56:07,160 --> 00:56:12,599 +and it was pretty shocking to me they + +1232 +00:56:10,000 --> 00:56:12,599 +take + +1233 +00:56:16,280 --> 00:56:23,880 +um they take models uh sorry they + +1234 +00:56:21,799 --> 00:56:25,640 +uh they focus on I think image + +1235 +00:56:23,880 --> 00:56:28,280 +classification here and take an initial + +1236 +00:56:25,640 --> 00:56:30,839 +data set and they randomly flip the + +1237 +00:56:28,280 --> 00:56:33,160 +label for 
like a certain percentage of + +1238 +00:56:30,839 --> 00:56:35,599 +examples so like they provide the + +1239 +00:56:33,160 --> 00:56:38,559 +wrong label in the training data for some + +1240 +00:56:35,599 --> 00:56:39,920 +percentage of examples and they find + +1241 +00:56:38,559 --> 00:56:44,200 +like + +1242 +00:56:39,920 --> 00:56:47,079 +that you can replace 99% of labels with + +1243 +00:56:44,200 --> 00:56:50,400 +the wrong label and still learn + +1244 +00:56:47,079 --> 00:56:52,240 +something useful um which I think is + +1245 +00:56:50,400 --> 00:56:55,520 +like really weird and doesn't make + +1246 +00:56:52,240 --> 00:56:57,520 +sense uh and they kind of show that I + +1247 +00:56:55,520 --> 00:57:00,079 +think the idea is that this only works + +1248 +00:56:57,520 --> 00:57:03,280 +for really deep networks um and by the + +1249 +00:57:00,079 --> 00:57:05,599 +way these models I believe were not + +1250 +00:57:03,280 --> 00:57:09,839 +pre-trained so they were only learned on + +1251 +00:57:05,599 --> 00:57:12,799 +this terrible data um but + +1252 +00:57:09,839 --> 00:57:14,640 +um so I think that therefore like + +1253 +00:57:12,799 --> 00:57:15,520 +basically if I'm generalizing that finding + +1254 +00:57:14,640 --> 00:57:18,319 +here + +1255 +00:57:15,520 --> 00:57:20,400 +um I think that this + +1256 +00:57:18,319 --> 00:57:22,079 +suggests that neural nets are very + +1257 +00:57:20,400 --> 00:57:24,359 +robust to label noise and if you think + +1258 +00:57:22,079 --> 00:57:26,760 +of this kind of method as you're using a + +1259 +00:57:24,359 --> 00:57:29,559 +noisy label from a teacher um you can + +1260 +00:57:26,760 --> 00:57:31,920 +probably get pretty far with data that + +1261 +00:57:29,559 --> 00:57:34,559 +is mostly noise but still better it's + +1262 +00:57:31,920 --> 00:57:34,559 +like not pure noise
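As a concrete illustration of that robustness claim, here is a toy version of the uniform-label-noise experiment in PyTorch. The synthetic two-class task, the 80% noise rate, and the small MLP are all assumptions made so the snippet is self-contained; the actual paper runs much deeper networks on image classification.

import torch
import torch.nn as nn

torch.manual_seed(0)
# a toy two-class problem with a known true decision rule
X = torch.randn(20000, 20)
y_true = (X[:, :10].sum(dim=1) > 0).long()

# corrupt 80% of the TRAINING labels with a uniformly random label
# (uniform noise can coincide with the true label, so in expectation
# the true label is still the single most likely one)
noise_rate = 0.8
flip = torch.rand(len(y_true)) < noise_rate
y_noisy = torch.where(flip, torch.randint(0, 2, y_true.shape), y_true)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    nn.functional.cross_entropy(model(X), y_noisy).backward()
    opt.step()

# evaluate against CLEAN labels: accuracy stays far above chance
X_test = torch.randn(4000, 20)
y_test = (X_test[:, :10].sum(dim=1) > 0).long()
acc = (model(X_test).argmax(dim=1) == y_test).float().mean()
print(f"clean test accuracy: {acc:.2f}")

This only works because the noise is unbiased; the comment that follows in the discussion explains why structured, non-uniform noise is much more damaging.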
+1263 +00:57:38,079 --> 00:57:44,680 +more of a comment rather + +1264 +00:57:41,359 --> 00:57:47,400 +than a question for the similarity between this and + +1265 +00:57:44,680 --> 00:57:51,880 +Dropout I think like around + +1266 +00:57:47,400 --> 00:57:54,880 +2016 17 Yarin Gal and like his colleagues + +1267 +00:57:51,880 --> 00:57:56,559 +published a lot in this area actually + +1268 +00:57:54,880 --> 00:57:59,920 +before this paper + +1269 +00:57:56,559 --> 00:58:02,760 +so like if anyone is interested in + +1270 +00:57:59,920 --> 00:58:05,559 +this area like the keywords will be like + +1271 +00:58:02,760 --> 00:58:09,240 +aleatoric uncertainty epistemic uncertainty + +1272 +00:58:05,559 --> 00:58:12,920 +or like model uncertainty and + +1273 +00:58:09,240 --> 00:58:16,000 +MC dropout or like the dropout + +1274 +00:58:12,920 --> 00:58:19,039 +approximation so there's that and also + +1275 +00:58:16,000 --> 00:58:21,039 +on the label noise side I think the + +1276 +00:58:19,039 --> 00:58:23,240 +important premise in the previous paper + +1277 +00:58:21,039 --> 00:58:26,680 +that you mentioned was giving uniform + +1278 +00:58:23,240 --> 00:58:30,280 +noise to them if we start giving some + +1279 +00:58:26,680 --> 00:58:32,319 +like specific like biased noises the + +1280 +00:58:30,280 --> 00:58:36,240 +neural network tends to be like somewhat + +1281 +00:58:32,319 --> 00:58:38,160 +like very biased for that specific noise but + +1282 +00:58:36,240 --> 00:58:41,240 +existing computer vision label noise + +1283 +00:58:38,160 --> 00:58:43,880 +literature often goes up to like label + +1284 +00:58:41,240 --> 00:58:47,039 +noise of 90% or something like that + +already + +so even though those cases are + +1286 +00:58:47,039 --> 00:58:51,640 +somewhat like + +1287 +00:58:49,000 --> 00:58:55,599 +synthetic I think it's pretty + +1288 +00:58:51,640 --> 00:58:57,880 +interesting to see those cases but still + +1289 +00:58:55,599 --> 00:59:00,680 +like they're only injecting like a uniform + +1290 +00:58:57,880 --> 00:59:03,400 +distribution for the label so that like + +1291 +00:59:00,680 --> 00:59:04,960 +theoretically speaking the model will + +1292 +00:59:03,400 --> 00:59:07,079 +like with like enough number of + +1293 +00:59:04,960 --> 00:59:09,480 +iterations the model will be biased + +1294 +00:59:07,079 --> 00:59:09,480 +towards the + +1295 +00:59:11,760 --> 00:59:16,079 +truth yeah that's uh thanks for the + +1296 +00:59:13,920 --> 00:59:19,240 +clarification so um yeah for the + +1297 +00:59:16,079 --> 00:59:20,559 +recording uh the student pointed out + +1298 +00:59:19,240 --> 00:59:22,440 +that in the previous paper um an + +1299 +00:59:20,559 --> 00:59:25,520 +important detail is that the noise in + +1300 +00:59:22,440 --> 00:59:28,799 +the labels was uniformly sampled and + +1301 +00:59:25,520 --> 00:59:30,400 +that if you instead use um non-uniform + +1302 +00:59:28,799 --> 00:59:32,799 +random noise it can actually have a + +1303 +00:59:30,400 --> 00:59:35,599 +major impact on the ability of a deep + +1304 +00:59:32,799 --> 00:59:36,640 +network to learn the input label mapping + +1305 +00:59:35,599 --> 00:59:40,920 +um yeah I think that's a really good + +1306 +00:59:36,640 --> 00:59:40,920 +point thanks um okay so + +1307 +00:59:42,440 --> 00:59:48,920 +um so uh distillation was originally + +1308 +00:59:46,079 --> 00:59:52,799 +designed here for like um when you had a + +1309 +00:59:48,920 --> 00:59:55,160 +single label per input but in text we + +1310 +00:59:52,799 --> 00:59:58,520 +often have sequences maybe we want to + +1311 +00:59:55,160 --> 01:00:01,079 +generate uh a sentence and so how do we + +1312 +00:59:58,520 --> 01:00:04,000 +extend distillation to this sequence + +1313 +01:00:01,079 --> 01:00:06,839 +labeling setting uh and so there's like + +1314 +01:00:04,000 --> 01:00:10,839 +kind of two obvious ways really the + +1315 +01:00:06,839 --> 01:00:13,319 +first is that + +1316 +01:00:10,839 --> 01:00:15,839 +um you want to match the distribution of + +1317 +01:00:13,319 --> 01:00:17,440 +words that the teacher suggested at each + +1318 +01:00:15,839 --> 01:00:22,200 +point in your generation + +1319 +01:00:17,440 --> 01:00:25,039 +process um so uh given a prefix like um + +1320 +01:00:22,200 --> 01:00:26,880 +this movie is blank you then see the + +1321 +01:00:25,039 --> 01:00:28,319 +teacher distribution over the words and + +1322 +01:00:26,880 --> 01:00:29,880 +you try to replicate that in your + +1323 +01:00:28,319 --> 01:00:33,079 +student + +1324 +01:00:29,880 --> 01:00:36,000 +model uh and then the second idea is + +1325 +01:00:33,079 --> 01:00:38,920 +that you might now have + +1326 +01:00:36,000 --> 01:00:41,680 +um but the problem I think here is that + +1327 +01:00:38,920 --> 01:00:43,839 +as you keep generating the + +1328 +01:00:41,680 --> 01:00:45,920 +text + +1329 +01:00:43,839 --> 01:00:48,119 +uh I think this is related to an idea + +1330 +01:00:45,920 --> 01:00:50,920 +called exposure bias which is that uh as + +1331 +01:00:48,119 --> 01:00:53,160 +you generate text um the teacher and the + +1332 +01:00:50,920 --> 01:00:54,839 +student might
diverge dramatically like + +1333 +01:00:53,160 --> 01:00:57,440 +the teacher might be generating + +1334 +01:00:54,839 --> 01:00:58,920 +consistent text and it + +1335 +01:00:57,440 --> 01:01:00,240 +starts to look very different than what + +1336 +01:00:58,920 --> 01:01:03,240 +the student could have possibly + +1337 +01:01:00,240 --> 01:01:05,000 +generated so the second idea here is + +1338 +01:01:03,240 --> 01:01:08,079 +sequence level distillation where you + +1339 +01:01:05,000 --> 01:01:10,480 +instead just um generate a hard target + +1340 +01:01:08,079 --> 01:01:12,599 +from the teacher so + +1341 +01:01:10,480 --> 01:01:14,240 +you use soft targets at + +1342 +01:01:12,599 --> 01:01:17,400 +the word level and then + +1343 +01:01:14,240 --> 01:01:19,640 +at the sequence level you + +1344 +01:01:17,400 --> 01:01:20,960 +generate a full sentence from the teacher + +1345 +01:01:19,640 --> 01:01:22,920 +and you just want to maximize the + +1346 +01:01:20,960 --> 01:01:25,839 +probability of that + +1347 +01:01:22,920 --> 01:01:27,599 +pseudo-labeled gold sentence + +1348 +01:01:25,839 --> 01:01:29,000 +and uh they show that if you combine + +1349 +01:01:27,599 --> 01:01:30,160 +these two objectives together it's + +1350 +01:01:29,000 --> 01:01:32,280 +really + +1351 +01:01:30,160 --> 01:01:33,799 +effective so I think that this + +1352 +01:01:32,280 --> 01:01:36,520 +seems like the right way to + +1353 +01:01:33,799 --> 01:01:39,039 +do distillation for like generating + +1354 +01:01:36,520 --> 01:01:42,039 +sequences of + +1355 +01:01:39,039 --> 01:01:42,039 +text
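To make those two objectives concrete, here is a minimal sketch of word-level (soft) plus sequence-level (hard) distillation on toy language models. The tiny GRU models, vocabulary size, sampling loop, and equal weighting of the two terms are stand-ins chosen so the snippet is self-contained, not the setup from the sequence-level distillation paper; a real implementation would also use temperature and handle batching and padding.

import torch
import torch.nn as nn
import torch.nn.functional as F

V = 100  # toy vocabulary size

class TinyLM(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.emb = nn.Embedding(V, d)
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, V)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)  # next-token logits at every position

teacher, student = TinyLM(), TinyLM()  # pretend the teacher is trained

# sequence-level part: sample a pseudo-gold sequence from the teacher
seq = torch.zeros(1, 1, dtype=torch.long)  # start token
with torch.no_grad():
    for _ in range(20):
        probs = F.softmax(teacher(seq)[:, -1], dim=-1)
        seq = torch.cat([seq, torch.multinomial(probs, 1)], dim=1)

inp, tgt = seq[:, :-1], seq[:, 1:]
student_logits = student(inp)
teacher_logits = teacher(inp).detach()

# word-level (soft) term: match the teacher's distribution at each position
soft = F.kl_div(F.log_softmax(student_logits, dim=-1),
                F.softmax(teacher_logits, dim=-1), reduction="batchmean")
# sequence-level (hard) term: maximize likelihood of the teacher's sentence
hard = F.cross_entropy(student_logits.reshape(-1, V), tgt.reshape(-1))
loss = hard + soft  # combining the two objectives, as described above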
+1356 +01:01:44,760 --> 01:01:51,400 +um and uh one really popular + +1357 +01:01:48,319 --> 01:01:54,000 +distilled model in NLP that I use all + +1358 +01:01:51,400 --> 01:01:56,480 +the time is called DistilBERT it + +1359 +01:01:54,000 --> 01:01:57,920 +basically is just BERT um which at the + +1360 +01:01:56,480 --> 01:02:00,720 +time BERT came out was considered to + +1361 +01:01:57,920 --> 01:02:04,760 +be really big um which is kind of like + +1362 +01:02:00,720 --> 01:02:07,240 +uh comedic now but uh the idea + +1363 +01:02:04,760 --> 01:02:09,920 +here was like can we reduce the size of + +1364 +01:02:07,240 --> 01:02:12,559 +BERT in half and get the same + +1365 +01:02:09,920 --> 01:02:15,520 +performance so they use a + +1366 +01:02:12,559 --> 01:02:17,680 +couple tricks to do this first they just + +1367 +01:02:15,520 --> 01:02:19,279 +like took every other layer of BERT so if + +1368 +01:02:17,680 --> 01:02:21,960 +you had a 12 layer BERT model they + +1369 +01:02:19,279 --> 01:02:25,240 +took six layers um and they initialized + +1370 +01:02:21,960 --> 01:02:26,920 +each layer from one of the layers of the + +1371 +01:02:25,240 --> 01:02:27,799 +initial BERT model so it's not like a + +1372 +01:02:26,920 --> 01:02:32,319 +random + +1373 +01:02:27,799 --> 01:02:34,520 +initialization um then they did uh + +1374 +01:02:32,319 --> 01:02:37,279 +effectively soft target distillation + +1375 +01:02:34,520 --> 01:02:40,559 +which was effective they also tried in + +1376 +01:02:37,279 --> 01:02:42,440 +their paper um they + +1377 +01:02:40,559 --> 01:02:45,400 +combined soft target distillation with + +1378 +01:02:42,440 --> 01:02:47,799 +a real supervised + +1379 +01:02:45,400 --> 01:02:51,440 +objective from language modeling so like + +1380 +01:02:47,799 --> 01:02:53,119 +they masked tokens of text and they + +1381 +01:02:51,440 --> 01:02:55,279 +tried to train on both like what was + +1382 +01:02:53,119 --> 01:02:57,440 +actually behind the mask but also + +1383 +01:02:55,279 --> 01:03:00,279 +what the teacher would have predicted + +1384 +01:02:57,440 --> 01:03:02,799 +for that mask and they found + +1385 +01:03:00,279 --> 01:03:04,720 +surprisingly I think that um the + +1386 +01:03:02,799 --> 01:03:06,520 +supervised objective doesn't really help + +1387 +01:03:04,720 --> 01:03:08,640 +much at all so if you have a good + +1388 +01:03:06,520 --> 01:03:11,680 +teacher that's probably enough for + +1389 +01:03:08,640 --> 01:03:14,279 +distillation um and then they did + +1390 +01:03:11,680 --> 01:03:16,039 +something else to make sure that the + +1391 +01:03:14,279 --> 01:03:18,319 +embedding space like had a similar + +1392 +01:03:16,039 --> 01:03:21,760 +geometry in the small model and the big + +1393 +01:03:18,319 --> 01:03:23,599 +model um and the main finding here + +1394 +01:03:21,760 --> 01:03:26,559 +is that you can do this and get a model + +1395 +01:03:23,599 --> 01:03:28,880 +that is pretty much just as good or very + +1396 +01:03:26,559 --> 01:03:30,880 +close to it in most tasks as the big + +1397 +01:03:28,880 --> 01:03:33,599 +BERT model and this little BERT is like + +1398 +01:03:30,880 --> 01:03:36,079 +super popular people use it all the time + +1399 +01:03:33,599 --> 01:03:40,640 +uh
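A minimal sketch of that every-other-layer initialization, using generic PyTorch encoder layers as stand-ins for the actual BERT modules; the BERT-base-like sizes and the generic TransformerEncoderLayer are illustrative assumptions, not the DistilBERT codebase.

import torch.nn as nn

d_model, n_heads, d_ffn = 768, 12, 3072  # BERT-base-like sizes (assumed)

def make_layer():
    return nn.TransformerEncoderLayer(d_model, n_heads,
                                      dim_feedforward=d_ffn,
                                      batch_first=True)

teacher_layers = nn.ModuleList([make_layer() for _ in range(12)])  # "trained" teacher
student_layers = nn.ModuleList([make_layer() for _ in range(6)])   # half-depth student

# seed each student layer with every other teacher layer (0, 2, 4, ...)
# so the student starts from trained weights, not a random initialization
for i, layer in enumerate(student_layers):
    layer.load_state_dict(teacher_layers[2 * i].state_dict())

Training would then proceed with the soft-target distillation objective described earlier.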
+1400 +01:03:36,079 --> 01:03:44,640 +okay so now uh I'm gonna go a little bit + +1401 +01:03:40,640 --> 01:03:46,480 +uh off of this initial motivation of + +1402 +01:03:44,640 --> 01:03:48,440 +efficiency and talk about how + +1403 +01:03:46,480 --> 01:03:51,279 +distillation can be used to actually + +1404 +01:03:48,440 --> 01:03:52,799 +do things that you cannot do otherwise + +1405 +01:03:51,279 --> 01:03:54,359 +like unlocking capabilities and + +1406 +01:03:52,799 --> 01:03:57,440 +performance that are pretty much + +1407 +01:03:54,359 --> 01:03:58,760 +impossible using traditional learning um + +1408 +01:03:57,440 --> 01:04:01,799 +before I do that any questions about + +1409 +01:03:58,760 --> 01:04:01,799 +distillation before I move + +1410 +01:04:05,760 --> 01:04:13,160 +on how do you try to define the architecture of the + +1411 +01:04:10,079 --> 01:04:15,720 +student if you want to distill some given model + +1412 +01:04:13,160 --> 01:04:18,960 +what would you + +1413 +01:04:15,720 --> 01:04:20,559 +choose um I think you have a lot of + +1414 +01:04:18,960 --> 01:04:23,200 +flexibility in distillation unlike + +1415 +01:04:20,559 --> 01:04:26,200 +something like pruning + +1416 +01:04:23,200 --> 01:04:26,200 +um + +1417 +01:04:28,880 --> 01:04:32,200 +I think I've seen work that suggests + +1418 +01:04:30,520 --> 01:04:34,279 +that distillation is most effective when + +1419 +01:04:32,200 --> 01:04:36,400 +your student and teacher have like + +1420 +01:04:34,279 --> 01:04:38,559 +similar architectures like for + +1421 +01:04:36,400 --> 01:04:41,319 +example if your teacher is an + +1422 +01:04:38,559 --> 01:04:43,079 +autoregressive model like a GPT-2 + +1423 +01:04:41,319 --> 01:04:45,760 +or 3 you might want your student to + +1424 +01:04:43,079 --> 01:04:47,079 +be autoregressive but um my + +1425 +01:04:45,760 --> 01:04:48,400 +intuition is that there's a lot of + +1426 +01:04:47,079 --> 01:04:49,839 +flexibility here and especially if + +1427 +01:04:48,400 --> 01:04:51,880 +you're doing hard target distillation + +1428 +01:04:49,839 --> 01:04:54,279 +where you're generating sequences you + +1429 +01:04:51,880 --> 01:04:56,680 +could just treat those as labels and + +1430 +01:04:54,279 --> 01:05:00,200 +then you could train any student model + +1431 +01:04:56,680 --> 01:05:00,200 +um so I don't think it's that + +1432 +01:05:14,880 --> 01:05:20,359 +constrained okay so um I think that the + +1433 +01:05:18,200 --> 01:05:22,720 +first like thing I'll talk about here in + +1434 +01:05:20,359 --> 01:05:24,960 +this sort of new age distillation world + +1435 +01:05:22,720 --> 01:05:26,720 +is self-instruct which Graham already + +1436 +01:05:24,960 --> 01:05:29,799 +talked about two weeks ago um so I'll + +1437 +01:05:26,720 --> 01:05:32,240 +just touch on this the idea here is that + +1438 +01:05:29,799 --> 01:05:35,079 +they're doing like self-distillation sort + +1439 +01:05:32,240 --> 01:05:38,000 +of like this paper where they're taking + +1440 +01:05:35,079 --> 01:05:39,880 +a model making it generate data and then + +1441 +01:05:38,000 --> 01:05:42,079 +training that same model on that data + +1442 +01:05:39,880 --> 01:05:44,359 +that's the basic idea + +1443 +01:05:42,079 --> 01:05:45,760 +um but here they're doing something very + +1444 +01:05:44,359 --> 01:05:47,160 +specific where they take a vanilla + +1445 +01:05:45,760 --> 01:05:49,440 +language model that's just trained to + +1446 +01:05:47,160 --> 01:05:51,279 +like generate text and they're trying to + +1447 +01:05:49,440 --> 01:05:54,079 +teach it to follow instructions using + +1448 +01:05:51,279 --> 01:05:55,640 +instruction fine tuning and the way they + +1449 +01:05:54,079 --> 01:05:58,640 +accomplish this is by having this + +1450 +01:05:55,640 --> 01:06:01,640 +vanilla language model first like + +1451 +01:05:58,640 --> 01:06:05,760 +generate instructions arbitrarily like + +1452 +01:06:01,640 --> 01:06:08,760 +um write a poem about dogs and then + +1453 +01:06:05,760 --> 01:06:11,359 +produce responses to those instructions + +1454 +01:06:08,760 --> 01:06:13,720 +like a poem about dogs and then training + +1455 +01:06:11,359 --> 01:06:15,400 +that same model to now imitate its own + +1456 +01:06:13,720 --> 01:06:20,200 +behavior + +1457 +01:06:15,400 --> 01:06:21,279 +um and uh they use some tricks that like + +1458 +01:06:20,200 --> 01:06:23,520 +make this + +1459 +01:06:21,279 --> 01:06:28,200 +work and I think one of the key + +1460 +01:06:23,520 --> 01:06:28,200 +tricks that um I'll zoom in on + +1461 +01:06:28,559 --> 01:06:33,839 +here is that when you're doing data set + +1462 +01:06:31,240 --> 01:06:35,920 +generation uh the most obvious thing to + +1463 +01:06:33,839 --> 01:06:38,520 +do is you first generate the inputs then + +1464 +01:06:35,920 --> 01:06:40,480 +you like pseudo label your outputs but + +1465 +01:06:38,520 --> 01:06:42,400 +the issue here is that the quality of + +1466 +01:06:40,480 --> 01:06:45,400 +your labels is only as good as + +1467 +01:06:42,400 --> 01:06:49,039 +your teacher is so if I + +1468 +01:06:45,400 --> 01:06:51,599 +um if I first generate a text and then I + +1469 +01:06:49,039 --> 01:06:54,760 +generate the class that I think + +1470 +01:06:51,599 --> 01:06:56,920 +corresponds to that text if this class + +1471 +01:06:54,760 --> 01:06:58,920 +label is like really bad and maybe in + +1472 +01:06:56,920 --> 01:07:02,039 +like very systematic ways as was + +1473 +01:06:58,920 --> 01:07:02,039 +mentioned earlier then uh the
CR data + +1474 +01:07:02,039 --> 01:07:05,480 +will be really bad and you're not going + +1475 +01:07:03,039 --> 01:07:06,920 +to be able to learn anything useful but + +1476 +01:07:05,480 --> 01:07:10,480 +when you're generating data you don't + +1477 +01:07:06,920 --> 01:07:12,960 +need to actually do this linear process + +1478 +01:07:10,480 --> 01:07:16,240 +you can instead first generate the + +1479 +01:07:12,960 --> 01:07:17,960 +class and then generate inputs condition + +1480 +01:07:16,240 --> 01:07:20,160 +on that class so like this is kind of + +1481 +01:07:17,960 --> 01:07:21,359 +doing things backwards and you can't do + +1482 +01:07:20,160 --> 01:07:23,160 +this when you're doing real prediction + +1483 +01:07:21,359 --> 01:07:24,880 +because you don't know the class like + +1484 +01:07:23,160 --> 01:07:27,920 +like this is not useful + +1485 +01:07:24,880 --> 01:07:29,760 +in practice when you're doing prediction + +1486 +01:07:27,920 --> 01:07:33,079 +but for generating data you don't need + +1487 +01:07:29,760 --> 01:07:34,880 +to do things linearly and so um I think + +1488 +01:07:33,079 --> 01:07:36,799 +that this idea to me is like really + +1489 +01:07:34,880 --> 01:07:39,119 +important in data + +1490 +01:07:36,799 --> 01:07:42,520 +generation that you can decompose your + +1491 +01:07:39,119 --> 01:07:44,839 +task into different patterns or orders + +1492 +01:07:42,520 --> 01:07:47,680 +and then generate your data from like + +1493 +01:07:44,839 --> 01:07:49,680 +the ground up that way um and hopefully + +1494 +01:07:47,680 --> 01:07:52,920 +this way by like reducing a hard problem + +1495 +01:07:49,680 --> 01:07:55,880 +to an easy problem uh you can do a lot + +1496 +01:07:52,920 --> 01:07:59,279 +better um this is related to one other + +1497 +01:07:55,880 --> 01:08:02,119 +paper that I did not put in the + +1498 +01:07:59,279 --> 01:08:04,599 +um in the + +1499 +01:08:02,119 --> 01:08:06,119 +slides and they so in this paper they + +1500 +01:08:04,599 --> 01:08:08,599 +call this + +1501 +01:08:06,119 --> 01:08:12,160 +idea task + +1502 +01:08:08,599 --> 01:08:12,160 +asymmetry where + +1503 +01:08:18,440 --> 01:08:24,560 +uh if you have a task of going from X to + +1504 +01:08:21,719 --> 01:08:27,640 +Y that is really hard but going from y + +1505 +01:08:24,560 --> 01:08:30,120 +to X is easy then you can just start + +1506 +01:08:27,640 --> 01:08:32,839 +with a bunch of y's generate synthetic + +1507 +01:08:30,120 --> 01:08:35,239 +X's but because this direction is easy + +1508 +01:08:32,839 --> 01:08:37,279 +you can probably do pretty good at this + +1509 +01:08:35,239 --> 01:08:39,239 +and then you can now flip the data again + +1510 +01:08:37,279 --> 01:08:41,920 +and train your model to generate X to + +1511 +01:08:39,239 --> 01:08:44,600 +generate y from X but you have a lot of + +1512 +01:08:41,920 --> 01:08:46,799 +data that is like pretty good and then + +1513 +01:08:44,600 --> 01:08:49,960 +uh you can do like really surprisingly + +1514 +01:08:46,799 --> 01:08:51,799 +well using this strategy uh so in like + +1515 +01:08:49,960 --> 01:08:54,799 +this paper they were doing information + +1516 +01:08:51,799 --> 01:08:56,679 +extraction where you had um you're given + +1517 +01:08:54,799 --> 01:09:00,520 +like sentences and you wanted to extract + +1518 +01:08:56,679 --> 01:09:03,040 +triples so here like you had what film + +1519 +01:09:00,520 --> 01:09:05,359 +who is what's the location of that film + +1520 +01:09:03,040 --> 01:09:07,640 +and instead of 
and so doing this uh + +1521 +01:09:05,359 --> 01:09:10,279 +sentence to information extraction is + +1522 +01:09:07,640 --> 01:09:11,560 +pretty hard but it's pretty easy to get + +1523 +01:09:10,279 --> 01:09:14,120 +be given a bunch of entities and + +1524 +01:09:11,560 --> 01:09:15,759 +generate a sentence about those entities + +1525 +01:09:14,120 --> 01:09:18,440 +that's like pretty trivial to do I think + +1526 +01:09:15,759 --> 01:09:19,920 +with large language models so they they + +1527 +01:09:18,440 --> 01:09:22,520 +went backwards and they just took a + +1528 +01:09:19,920 --> 01:09:25,440 +bunch of triples generated text + +1529 +01:09:22,520 --> 01:09:28,679 +synthetically and used then flipped the + +1530 +01:09:25,440 --> 01:09:30,600 +order of the labels and inputs um and + +1531 +01:09:28,679 --> 01:09:33,640 +then what they found is that in terms of + +1532 +01:09:30,600 --> 01:09:36,759 +the performance here + +1533 +01:09:33,640 --> 01:09:39,400 +um it's + +1534 +01:09:36,759 --> 01:09:41,520 +like double as good as the previous best + +1535 +01:09:39,400 --> 01:09:45,120 +model here which was already really good + +1536 +01:09:41,520 --> 01:09:47,679 +for the time um so I think that this is + +1537 +01:09:45,120 --> 01:09:49,759 +like sort of an idea that I want to pass + +1538 +01:09:47,679 --> 01:09:52,560 +on that I think is is really nice uh + +1539 +01:09:49,759 --> 01:09:56,560 +that was touched on by self + +1540 +01:09:52,560 --> 01:10:00,760 +instruct okay okay um and then uh + +1541 +01:09:56,560 --> 01:10:02,440 +the one the going a little further in + +1542 +01:10:00,760 --> 01:10:04,920 +this idea of using distillation to do + +1543 +01:10:02,440 --> 01:10:06,480 +things that you couldn't do before um is + +1544 +01:10:04,920 --> 01:10:08,920 +some work that I did with graham last + +1545 +01:10:06,480 --> 01:10:12,320 +year this past year um which is called + +1546 +01:10:08,920 --> 01:10:14,560 +prompt model and the idea here is that + +1547 +01:10:12,320 --> 01:10:17,040 +let's forget that distillation is + +1548 +01:10:14,560 --> 01:10:20,760 +anything but just like a bit + +1549 +01:10:17,040 --> 01:10:24,080 +generator and now um distillation is one + +1550 +01:10:20,760 --> 01:10:25,280 +way to get training data for your model + +1551 +01:10:24,080 --> 01:10:27,000 +but there might be other ways to get + +1552 +01:10:25,280 --> 01:10:30,960 +data as well that we are like leaving on + +1553 +01:10:27,000 --> 01:10:33,080 +the table so I think the the key idea + +1554 +01:10:30,960 --> 01:10:35,840 +here is can we + +1555 +01:10:33,080 --> 01:10:37,960 +combine retrieved data existing data + +1556 +01:10:35,840 --> 01:10:40,159 +that exists on the on the internet with + +1557 +01:10:37,960 --> 01:10:42,960 +data generated from llm can we put these + +1558 +01:10:40,159 --> 01:10:46,440 +two things together and uh do even + +1559 +01:10:42,960 --> 01:10:50,480 +better so uh in this in this paper + +1560 +01:10:46,440 --> 01:10:51,920 +we uh ask the user to specify their task + +1561 +01:10:50,480 --> 01:10:55,159 +in a prompt kind of like what you use + +1562 +01:10:51,920 --> 01:10:57,560 +for gpt3 and and uh they can give a + +1563 +01:10:55,159 --> 01:11:00,920 +couple examples if they want and then + +1564 +01:10:57,560 --> 01:11:02,640 +given this prompt we first retrieve + +1565 +01:11:00,920 --> 01:11:05,000 +existing data sets that might be + +1566 +01:11:02,640 --> 01:11:06,920 +relevant to that prompt so we had like a + +1567 +01:11:05,000 
--> 01:11:09,280 +method for data set retrieval in a + +1568 +01:11:06,920 --> 01:11:12,480 +previous paper that just uses text to + +1569 +01:11:09,280 --> 01:11:17,159 +find similar data sets so if I say + +1570 +01:11:12,480 --> 01:11:19,440 +um answer biomedical questions about uh + +1571 +01:11:17,159 --> 01:11:24,159 +for cancer doctors it might find like + +1572 +01:11:19,440 --> 01:11:26,960 +the bioasq data set uh and then we take + +1573 +01:11:24,159 --> 01:11:29,840 +that retrieve data set which is likely + +1574 +01:11:26,960 --> 01:11:31,600 +to be high quality but may not match the + +1575 +01:11:29,840 --> 01:11:32,840 +task that the user actually cares about + +1576 +01:11:31,600 --> 01:11:35,320 +it might be like a little bit different + +1577 +01:11:32,840 --> 01:11:37,880 +than what the user actually wants we + +1578 +01:11:35,320 --> 01:11:40,679 +then complement this retrieve data set + +1579 +01:11:37,880 --> 01:11:43,159 +with generated data generated by a + +1580 +01:11:40,679 --> 01:11:45,719 +language model which is potentially like + +1581 +01:11:43,159 --> 01:11:48,199 +not that high quality but is much more + +1582 +01:11:45,719 --> 01:11:51,120 +likely to match the user's + +1583 +01:11:48,199 --> 01:11:52,840 +intentions uh we then did one other + +1584 +01:11:51,120 --> 01:11:54,920 +thing that's not that important which is + +1585 +01:11:52,840 --> 01:11:57,159 +um retrieving like a pre-train model as + +1586 +01:11:54,920 --> 01:11:58,840 +well like maybe you have a pre-train + +1587 +01:11:57,159 --> 01:12:00,639 +model that is in your domain that you + +1588 +01:11:58,840 --> 01:12:02,719 +want to that you can actually benefit + +1589 +01:12:00,639 --> 01:12:05,440 +from then we just put all these things + +1590 +01:12:02,719 --> 01:12:08,480 +together fine-tune this small model on + +1591 +01:12:05,440 --> 01:12:10,639 +your generated and retriev data sets and + +1592 +01:12:08,480 --> 01:12:12,880 +then um I think the really cool thing + +1593 +01:12:10,639 --> 01:12:17,440 +here was that we were able to obtain + +1594 +01:12:12,880 --> 01:12:20,400 +small models that often outperform + +1595 +01:12:17,440 --> 01:12:22,920 +gpt3 even though gpt3 was the model used + +1596 +01:12:20,400 --> 01:12:24,280 +to generate data so like we were beating + +1597 +01:12:22,920 --> 01:12:27,880 +the teacher + +1598 +01:12:24,280 --> 01:12:30,000 +uh by leveraging like both distillation + +1599 +01:12:27,880 --> 01:12:33,199 +but also taking advantage of existing + +1600 +01:12:30,000 --> 01:12:33,199 +data sets that were available on the + +1601 +01:12:33,560 --> 01:12:37,800 +internet so I think that uh generally + +1602 +01:12:36,520 --> 01:12:40,800 +this is a direction I'm really excited + +1603 +01:12:37,800 --> 01:12:43,480 +about distillation for the purpose of um + +1604 +01:12:40,800 --> 01:12:45,880 +advancing model capabilities + +1605 +01:12:43,480 --> 01:12:48,800 +um and + +1606 +01:12:45,880 --> 01:12:49,639 +uh I think that this kind of came at a + +1607 +01:12:48,800 --> 01:12:52,199 +time + +1608 +01:12:49,639 --> 01:12:53,719 +when distillation was becoming really + +1609 +01:12:52,199 --> 01:12:56,239 +popular but now it's often used used by + +1610 +01:12:53,719 --> 01:12:58,280 +a different name which is called synth + +1611 +01:12:56,239 --> 01:12:59,760 +synthetic data generation it's + +1612 +01:12:58,280 --> 01:13:00,639 +effectively the same thing as hard + +1613 +01:12:59,760 --> 01:13:03,440 +target + +1614 +01:13:00,639 --> 
01:13:04,800 +distillation uh but this is like + +1615 +01:13:03,440 --> 01:13:07,199 +probably one of the hottest research + +1616 +01:13:04,800 --> 01:13:09,199 +topics in NLP right now um and just like + +1617 +01:13:07,199 --> 01:13:13,560 +last week I saw this paper on the + +1618 +01:13:09,199 --> 01:13:15,480 +internet um that provides like a sort of + +1619 +01:13:13,560 --> 01:13:17,719 +pytorch like toolkit for doing + +1620 +01:13:15,480 --> 01:13:20,840 +distillation so they Define different + +1621 +01:13:17,719 --> 01:13:23,560 +like primitive operations um like + +1622 +01:13:20,840 --> 01:13:28,120 +generating stuff from prompt or from rag + +1623 +01:13:23,560 --> 01:13:31,639 +um doing retrieval uh doing filtering + +1624 +01:13:28,120 --> 01:13:34,239 +and ranking of examples or judging the + +1625 +01:13:31,639 --> 01:13:37,880 +input like judging your generated + +1626 +01:13:34,239 --> 01:13:39,920 +examples using another llm and uh they + +1627 +01:13:37,880 --> 01:13:41,360 +also integrate model training into this + +1628 +01:13:39,920 --> 01:13:43,880 +Loop but I think that this is like a + +1629 +01:13:41,360 --> 01:13:45,360 +really exciting Direction in terms of + +1630 +01:13:43,880 --> 01:13:49,600 +making data set generation something + +1631 +01:13:45,360 --> 01:13:51,880 +that can be uh very mature and uh like + +1632 +01:13:49,600 --> 01:13:55,320 +managed like a real engineering problem + +1633 +01:13:51,880 --> 01:13:57,639 +uh so + +1634 +01:13:55,320 --> 01:13:59,880 +uh yeah so I think that for like final + +1635 +01:13:57,639 --> 01:14:01,800 +projects I think the synthetic data is a + +1636 +01:13:59,880 --> 01:14:02,840 +really exciting Direction and this kind + +1637 +01:14:01,800 --> 01:14:06,480 +of toolkit could be something to + +1638 +01:14:02,840 --> 01:14:06,480 +consider for if you decide to go that + +1639 +01:14:06,679 --> 01:14:11,800 +route okay so we have a couple minutes + +1640 +01:14:09,040 --> 01:14:11,800 +left anyone ask + +1641 +01:14:17,880 --> 01:14:21,800 +questions otherwise uh that's + +1642 +01:14:22,280 --> 01:14:25,280 +it \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/transcript.vtt b/CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..b6d5176c4e020913325f195e1db0fabf5aa336f3 --- /dev/null +++ b/CMU Advanced NLP 2024 (11) Distillation, Quantization, and Pruning/transcript.vtt @@ -0,0 +1,4927 @@ +WEBVTT + +00:00:01.839 --> 00:00:08.520 +okay so um I'm here substituting for + +00:00:04.759 --> 00:00:08.520 +Graham today because he's traveling + +00:00:10.360 --> 00:00:14.280 +um and yeah we can just get right into + +00:00:12.639 --> 00:00:17.480 +it so + +00:00:14.280 --> 00:00:19.480 +um as everyone here knows NLP models now + +00:00:17.480 --> 00:00:23.199 +are like really deployed at a large + +00:00:19.480 --> 00:00:25.519 +scale um and we all know that training + +00:00:23.199 --> 00:00:27.519 +big models is expensive um I'm sure that + +00:00:25.519 --> 00:00:31.840 +you've experienced that in homework one + +00:00:27.519 --> 00:00:34.760 +and uh in any time you train a network a + +00:00:31.840 --> 00:00:36.640 +deep Network you need GPU resources and + +00:00:34.760 --> 00:00:38.320 +this is something that we all understand + +00:00:36.640 --> 00:00:41.360 +um but something that I think is + +00:00:38.320 --> 00:00:43.480 +overlooked is that inference so 
once you + +00:00:41.360 --> 00:00:47.160 +have a train model now deploying it and + +00:00:43.480 --> 00:00:49.520 +making predictions for users is arguably + +00:00:47.160 --> 00:00:53.440 +even more expensive like if you look at + +00:00:49.520 --> 00:00:55.600 +the lifetime of a model um it probably + +00:00:53.440 --> 00:00:58.120 +exceeds the training costs according to + +00:00:55.600 --> 00:01:01.039 +this analysis within just one week of + +00:00:58.120 --> 00:01:04.080 +use and so if your model is being used + +00:01:01.039 --> 00:01:06.119 +for months or years of many people the + +00:01:04.080 --> 00:01:07.479 +cost will like greatly eclipse the + +00:01:06.119 --> 00:01:10.360 +training costs which is more of a + +00:01:07.479 --> 00:01:13.640 +onetime cost + +00:01:10.360 --> 00:01:16.920 +um and this is a problem because if we + +00:01:13.640 --> 00:01:18.600 +want to make AI systems able to help + +00:01:16.920 --> 00:01:21.600 +lots of different people in different + +00:01:18.600 --> 00:01:25.040 +places um people without as much + +00:01:21.600 --> 00:01:27.640 +resources or access to power um we want + +00:01:25.040 --> 00:01:31.840 +to be able to reduce the cost of serving + +00:01:27.640 --> 00:01:31.840 +AI systems to the public + +00:01:32.479 --> 00:01:37.520 +and this is also getting harder because + +00:01:34.880 --> 00:01:39.840 +models are getting bigger + +00:01:37.520 --> 00:01:42.600 +um there's been like maybe a slight + +00:01:39.840 --> 00:01:45.159 +shift towards reducing model size a + +00:01:42.600 --> 00:01:48.280 +little bit in the last maybe two years + +00:01:45.159 --> 00:01:49.960 +but these models are still like billions + +00:01:48.280 --> 00:01:51.439 +of parameters in size and that is + +00:01:49.960 --> 00:01:54.759 +expensive to + +00:01:51.439 --> 00:01:57.119 +serve so uh the main question of this + +00:01:54.759 --> 00:02:00.560 +that we'll be talking about in today's + +00:01:57.119 --> 00:02:02.880 +lecture is how can we cheaply + +00:02:00.560 --> 00:02:06.320 +efficiently and equitably deploy NLP + +00:02:02.880 --> 00:02:09.319 +systems without sacrificing + +00:02:06.320 --> 00:02:10.679 +performance and uh there's like a clear + +00:02:09.319 --> 00:02:13.879 +answer here that I'll I'm kind of + +00:02:10.679 --> 00:02:16.519 +leading towards which is model + +00:02:13.879 --> 00:02:18.879 +compression + +00:02:16.519 --> 00:02:21.080 +and model compression here basically + +00:02:18.879 --> 00:02:23.160 +means taking a trained model and then + +00:02:21.080 --> 00:02:26.519 +reducing the size of that + +00:02:23.160 --> 00:02:28.760 +model before deploying it and there's + +00:02:26.519 --> 00:02:30.800 +three highle ways that we'll talk about + +00:02:28.760 --> 00:02:34.000 +in today's lecture for how we can + +00:02:30.800 --> 00:02:36.840 +compress models the first is + +00:02:34.000 --> 00:02:38.640 +quantization which is you basically + +00:02:36.840 --> 00:02:40.959 +don't really change the architecture or + +00:02:38.640 --> 00:02:42.360 +parameters of the model up to a certain + +00:02:40.959 --> 00:02:43.920 +amount of precision and then you throw + +00:02:42.360 --> 00:02:47.879 +away the remainder of the + +00:02:43.920 --> 00:02:49.959 +Precision the second is pruning so uh + +00:02:47.879 --> 00:02:51.519 +throwing out entire components or + +00:02:49.959 --> 00:02:54.879 +parameters of a + +00:02:51.519 --> 00:02:56.680 +model and the third uh is distillation + +00:02:54.879 --> 00:02:57.879 +where you might change 
all of the
+parameters but you're basically
+
+00:02:57.879 --> 00:03:02.760
+condensing the knowledge of a big
+
+00:02:59.760 --> 00:03:04.599
+model into a smaller model that is
+
+00:03:02.760 --> 00:03:07.200
+retrained often from scratch to
+
+00:03:04.599 --> 00:03:08.680
+replicate the behavior of the big model
+
+00:03:07.200 --> 00:03:11.480
+and don't worry I'll go into much more
+
+00:03:08.680 --> 00:03:11.480
+detail with all these
+
+00:03:28.680 --> 00:03:31.680
+things
+
+00:03:39.760 --> 00:03:43.280
+uh I think the mic stopped
+
+00:03:41.840 --> 00:03:45.080
+working so I'm going to speak up if
+
+00:03:43.280 --> 00:03:48.519
+anyone's having trouble hearing me just
+
+00:03:45.080 --> 00:03:48.519
+uh please raise your hand and
+
+00:03:48.680 --> 00:03:53.599
+yeah okay so I've motivated this idea of
+
+00:03:51.879 --> 00:03:55.680
+model compression which is very tempting
+
+00:03:53.599 --> 00:03:58.120
+right just take a model make it smaller
+
+00:03:55.680 --> 00:04:00.760
+get the same performance and um it's
+
+00:03:58.120 --> 00:04:02.920
+just cheaper to serve right nothing
+
+00:04:00.760 --> 00:04:04.519
+about that seems bad so I think
+
+00:04:02.920 --> 00:04:07.200
+there's a natural question of why is
+
+00:04:04.519 --> 00:04:10.040
+this even possible uh and specifically I
+
+00:04:07.200 --> 00:04:12.840
+think a natural question is instead of
+
+00:04:10.040 --> 00:04:14.680
+taking a big model and making it smaller
+
+00:04:12.840 --> 00:04:16.799
+why not just start with a small model
+
+00:04:14.680 --> 00:04:19.479
+and train it as such like that seems a
+
+00:04:16.799 --> 00:04:21.000
+little more intuitive um so that's the
+
+00:04:19.479 --> 00:04:23.320
+first question why not just start
+
+00:04:21.000 --> 00:04:25.479
+with a small model that we train and
+
+00:04:23.320 --> 00:04:27.360
+then the second question would be um why
+
+00:04:25.479 --> 00:04:29.960
+is it possible to take a big model and
+
+00:04:27.360 --> 00:04:32.360
+throw pieces of it away without
+
+00:04:29.960 --> 00:04:33.960
+sacrificing accuracy that does not seem
+
+00:04:32.360 --> 00:04:34.880
+like a given that that should be
+
+00:04:33.960 --> 00:04:36.960
+even
+
+00:04:34.880 --> 00:04:42.720
+possible and I'll just give you a little
+
+00:04:36.960 --> 00:04:42.720
+intuition for why this is possible um
+
+00:04:42.880 --> 00:04:49.199
+so this term overparameterized um
+
+00:04:47.639 --> 00:04:51.080
+uh how many people are familiar
+
+00:04:49.199 --> 00:04:53.880
+with this term you can raise your
+
+00:04:51.080 --> 00:04:56.479
+hand the basic meaning of
+
+00:04:53.880 --> 00:04:59.240
+this term is that you have a model that
+
+00:04:56.479 --> 00:05:02.039
+has usually more parameters than you
+
+00:04:59.240 --> 00:05:03.520
+have training data or in more
+
+00:05:02.039 --> 00:05:06.800
+casual terms you just have a lot of
+
+00:05:03.520 --> 00:05:08.280
+parameters like way more than
+
+00:05:06.800 --> 00:05:11.039
+statistical machine learning would say
+
+00:05:08.280 --> 00:05:14.160
+you need so for example the
+
+00:05:11.039 --> 00:05:17.639
+original GPT-3 model had 175 billion
+
+00:05:14.160 --> 00:05:20.520
+parameters which is definitely
+
+00:05:17.639 --> 00:05:22.960
+overparameterized um and so there's been
+
+00:05:20.520 --> 00:05:26.400
+a lot of work in the theory community of
+
+00:05:22.960 --> 00:05:28.240
+ML
that shows that overparameterized
+
+00:05:26.400 --> 00:05:30.919
+models models that have a huge number of
+
+00:05:28.240 --> 00:05:34.720
+parameters are actually much easier to
+
+00:05:30.919 --> 00:05:38.360
+train uh especially for very complicated
+
+00:05:34.720 --> 00:05:41.680
+tasks and um the basic idea is that
+
+00:05:38.360 --> 00:05:44.280
+training deep neural networks for
+
+00:05:41.680 --> 00:05:46.759
+most tasks requires optimizing a
+
+00:05:44.280 --> 00:05:49.000
+non-convex objective where
+
+00:05:46.759 --> 00:05:52.360
+you're not guaranteed to
+
+00:05:49.000 --> 00:05:54.560
+find the global optimum of a non-convex
+
+00:05:52.360 --> 00:05:56.280
+objective but when you have a bunch of
+
+00:05:54.560 --> 00:05:58.680
+parameters and you're trying to
+
+00:05:56.280 --> 00:06:01.960
+basically tune the parameters to find
+
+00:05:58.680 --> 00:06:03.440
+the best value of your objective um
+
+00:06:01.960 --> 00:06:06.960
+having a lot of parameters sort of lets
+
+00:06:03.440 --> 00:06:10.360
+you sidestep around saddle points or
+
+00:06:06.960 --> 00:06:12.440
+local optima that are not global optima
+
+00:06:10.360 --> 00:06:14.720
+uh you can basically take sort of
+
+00:06:12.440 --> 00:06:17.120
+shortcuts around barriers in the
+
+00:06:14.720 --> 00:06:19.880
+optimization space um this is sort of
+
+00:06:17.120 --> 00:06:21.560
+the intuition and uh CMU's convex
+
+00:06:19.880 --> 00:06:24.759
+optimization class goes into a lot more
+
+00:06:21.560 --> 00:06:27.520
+detail on this kind of thing anyways
+
+00:06:24.759 --> 00:06:28.960
+I think the intuition here is that
+
+00:06:27.520 --> 00:06:30.680
+models with a lot of parameters are
+
+00:06:28.960 --> 00:06:34.280
+easier to train and they lead to better
+
+00:06:30.680 --> 00:06:36.000
+models um but you probably
+
+00:06:34.280 --> 00:06:37.759
+don't need all those parameters for
+
+00:06:36.000 --> 00:06:38.599
+inference they're more of a training
+
+00:06:37.759 --> 00:06:41.000
+time
+
+00:06:38.599 --> 00:06:43.919
+trick okay so now
+
+00:06:41.000 --> 00:06:46.560
+um before I move on any questions about
+
+00:06:43.919 --> 00:06:46.560
+the motivation
+
+00:06:53.800 --> 00:07:00.840
+here okay so um we'll start with
+
+00:06:57.440 --> 00:07:03.960
+quantization and uh the most obvious way
+
+00:07:00.840 --> 00:07:07.039
+to do this is post-training quantization
+
+00:07:03.960 --> 00:07:09.840
+so you train a model as big as you want
+
+00:07:07.039 --> 00:07:12.800
+and then you just reduce the precision
+
+00:07:09.840 --> 00:07:15.240
+of say all of the weights in that model
+
+00:07:12.800 --> 00:07:18.160
+so for example we can revisit this
+
+00:07:15.240 --> 00:07:21.479
+uh slide that Graham had shown two
+
+00:07:18.160 --> 00:07:25.400
+lectures ago if you have a 65 billion
+
+00:07:21.479 --> 00:07:27.879
+parameter model like LLaMA um and you
+
+00:07:25.400 --> 00:07:31.199
+have full precision so four
+
+00:07:27.879 --> 00:07:33.520
+bytes per parameter so like 32-bit
+
+00:07:31.199 --> 00:07:36.840
+floats just loading that model into
+
+00:07:33.520 --> 00:07:40.199
+memory would take 260 GB of GPU
+
+00:07:36.840 --> 00:07:44.000
+memory which is more than most single
+
+00:07:40.199 --> 00:07:46.879
+GPUs that you could buy um but what if
+
+00:07:44.000 --> 00:07:49.599
+you instead reduce the precision of the
+
+00:07:46.879 --> 00:07:51.199
+weights in the model?
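+
+[Editor's note: a quick back-of-envelope check of the numbers quoted
+here. The helper function below is hypothetical, but the arithmetic is
+just parameters times bytes per parameter.]
+
+```python
+# Rough GPU memory needed just to hold a model's weights
+# (ignores activations, KV cache, gradients, and optimizer state).
+def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
+    return n_params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB
+
+for bits in (32, 16, 8, 4, 1):
+    print(f"{bits:2d}-bit: {weight_memory_gb(65e9, bits):7.1f} GB")
+# 32-bit: 260.0 GB  <- the float32 figure from the slide
+#  1-bit:   8.1 GB  <- the extreme single-bit case mentioned below
+```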
+you see a pretty
+
+00:07:49.599 --> 00:07:54.159
+massive decrease which is linear in the
+
+00:07:51.199 --> 00:07:56.039
+reduction in
+
+00:07:54.159 --> 00:07:58.440
+precision at the most extreme case if
+
+00:07:56.039 --> 00:08:01.199
+you replaced each float32 with a single
+
+00:07:58.440 --> 00:08:04.280
+bit so zero or one um then you would
+
+00:08:01.199 --> 00:08:07.039
+only have like an 8 gigabyte model which
+
+00:08:04.280 --> 00:08:11.039
+you could probably load into most
+
+00:08:07.039 --> 00:08:14.199
+GPUs and so uh I think there's clearly
+
+00:08:11.039 --> 00:08:15.919
+an attractive proposition here in terms
+
+00:08:14.199 --> 00:08:17.000
+of the
+
+00:08:15.919 --> 00:08:20.680
+costs uh but before we go into some of
+
+00:08:17.000 --> 00:08:23.159
+the nitty-gritty here um just a
+
+00:08:20.680 --> 00:08:26.400
+refresher from computer systems
+
+00:08:23.159 --> 00:08:29.520
+so neural nets typically
+
+00:08:26.400 --> 00:08:30.919
+represent weights as floating point
+
+00:08:29.520 --> 00:08:33.800
+numbers in order to express
+
+00:08:30.919 --> 00:08:36.760
+um to have a broader range of values in
+
+00:08:33.800 --> 00:08:39.760
+the model and so floating points at
+
+00:08:36.760 --> 00:08:42.240
+least in the IEEE standard which
+
+00:08:39.760 --> 00:08:43.560
+is the most
+
+00:08:42.240 --> 00:08:46.640
+common have three pieces you have
+
+00:08:43.560 --> 00:08:48.720
+a sign bit which says is it
+
+00:08:46.640 --> 00:08:49.760
+positive or
+
+00:08:48.720 --> 00:08:52.680
+negative a fraction field a
+
+00:08:49.760 --> 00:08:56.000
+fractional piece um which specifies
+
+00:08:52.680 --> 00:08:58.200
+the
+
+00:08:56.000 --> 00:09:00.959
+significant digits
+
+00:08:58.200 --> 00:09:03.640
+uh the precision of the
+
+00:09:00.959 --> 00:09:06.240
+values and then the exponent which
+
+00:09:03.640 --> 00:09:08.640
+scales how big or small the
+
+00:09:06.240 --> 00:09:11.440
+float is so we can give an example here
+
+00:09:08.640 --> 00:09:13.680
+so here
+
+00:09:11.440 --> 00:09:16.839
+um we can do float16 where we
+
+00:09:13.680 --> 00:09:19.200
+have like 10 bits of fraction so this
+
+00:09:16.839 --> 00:09:23.279
+gives a lot more precision in
+
+00:09:19.200 --> 00:09:25.680
+uh what the number could be then the
+
+00:09:23.279 --> 00:09:27.800
+exponent is five bits so the scaling
+
+00:09:25.680 --> 00:09:29.839
+factor goes up to like 2 to the 15
+
+00:09:27.800 --> 00:09:33.720
+or down to 2 to the -14
+
+00:09:29.839 --> 00:09:36.000
+here um and then the sign which is
+
+00:09:33.720 --> 00:09:40.160
+positive one or negative one um and
+
+00:09:36.000 --> 00:09:45.760
+so float16 is pretty common but
+
+00:09:40.160 --> 00:09:47.680
+for machine learning it's often not
+
+00:09:45.760 --> 00:09:49.920
+enough because especially when you're
+
+00:09:47.680 --> 00:09:52.360
+trying to train a neural net you often
+
+00:09:49.920 --> 00:09:54.600
+have very small or very big values like
+
+00:09:52.360 --> 00:09:58.480
+underflow or overflow and therefore uh a
+
+00:09:54.600 --> 00:10:00.279
+really popular data type that was
+
+00:09:58.480 --> 00:10:02.399
+designed just for machine learning is
+
+00:10:00.279 --> 00:10:06.519
+called bfloat16 where the idea is
+
+00:10:02.399 --> 00:10:07.880
+you're just moving some of the bits
+
+00:10:06.519 --> 00:10:09.680
+from the fraction
+
+00:10:07.880 --> 00:10:11.600
+over to the exponent
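+
+[Editor's note: the range-versus-resolution trade-off just described
+can be checked directly in PyTorch; a small illustrative snippet.]
+
+```python
+import torch
+
+# bfloat16 trades fraction bits for exponent bits: same 16 bits,
+# much larger representable range, coarser resolution.
+for dt in (torch.float16, torch.bfloat16):
+    fi = torch.finfo(dt)
+    print(dt, "max:", fi.max, "eps:", fi.eps)
+
+x = torch.tensor(1e5)
+print(x.to(torch.float16))   # inf: float16 overflows past ~65504
+print(x.to(torch.bfloat16))  # ~99840: coarse, but finite
+```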
+00:10:09.680 --> 00:10:14.279
+so you can have a
+
+00:10:11.600 --> 00:10:16.920
+larger range but within that range you
+
+00:10:14.279 --> 00:10:19.600
+may have fewer values to
+
+00:10:16.920 --> 00:10:21.519
+choose from but it just works um it
+
+00:10:19.600 --> 00:10:23.480
+addresses some of the problems
+
+00:10:21.519 --> 00:10:27.560
+that we face in machine
+
+00:10:23.480 --> 00:10:30.720
+learning uh anyway so these are floating
+
+00:10:27.560 --> 00:10:32.560
+point types but as you can imagine um
+
+00:10:30.720 --> 00:10:35.440
+once you get below
+
+00:10:32.560 --> 00:10:37.959
+16 bits you're really impacting the amount of
+
+00:10:35.440 --> 00:10:39.480
+things you can represent in a float
+
+00:10:37.959 --> 00:10:41.639
+given that you need one bit for
+
+00:10:39.480 --> 00:10:43.320
+the sign
+
+00:10:41.639 --> 00:10:46.600
+uh and
+
+00:10:43.320 --> 00:10:48.279
+then if the range of the exponent is
+
+00:10:46.600 --> 00:10:50.000
+small then suddenly you don't
+
+00:10:48.279 --> 00:10:52.959
+have that much of a range of values
+
+00:10:50.000 --> 00:10:56.040
+you can represent at all so
+
+00:10:52.959 --> 00:10:59.839
+um a really popular way to get a really
+
+00:10:56.040 --> 00:11:02.320
+small footprint
+
+00:10:59.839 --> 00:11:03.720
+in models is by quantizing to integers
+
+00:11:02.320 --> 00:11:06.160
+and this is not that obvious because
+
+00:11:03.720 --> 00:11:10.120
+you're taking a float and turning it
+
+00:11:06.160 --> 00:11:12.639
+into an int um and so one way this is
+
+00:11:10.120 --> 00:11:15.639
+done is called absmax or absolute
+
+00:11:12.639 --> 00:11:20.160
+maximum quantization where you basically
+
+00:11:15.639 --> 00:11:23.600
+map each number in a list of floats to a
+
+00:11:20.160 --> 00:11:27.399
+range of integers so uh for
+
+00:11:23.600 --> 00:11:30.800
+int8 the range would be -127 to 127
+
+00:11:27.399 --> 00:11:32.920
+because that's 255 total values and 2
+
+00:11:30.800 --> 00:11:37.720
+to the 8 is 256
+
+00:11:32.920 --> 00:11:39.600
+um and here you basically find
+
+00:11:37.720 --> 00:11:41.800
+the absolute value that is largest in
+
+00:11:39.600 --> 00:11:45.880
+the whole array and then that
+
+00:11:41.800 --> 00:11:47.680
+becomes 127 or whatever
+
+00:11:45.880 --> 00:11:49.920
+the largest value in your integer
+
+00:11:47.680 --> 00:11:53.160
+range is and then everything else
+
+00:11:49.920 --> 00:11:55.959
+becomes assigned to the closest
+
+00:11:53.160 --> 00:11:58.360
+integer scaled by that value
+
+00:11:55.959 --> 00:12:01.880
+so here for example in this
+
+00:11:58.360 --> 00:12:05.360
+example 20 is the largest value
+
+00:12:01.880 --> 00:12:07.920
+and so uh 20 would become assigned as
+
+00:12:05.360 --> 00:12:11.839
+127 and then everything else would be
+
+00:12:07.920 --> 00:12:12.720
+assigned proportional to that uh so
+
+00:12:11.839 --> 00:12:17.399
+like
+
+00:12:12.720 --> 00:12:20.160
+0.5 is 1/40th of 20 and 1/40th of 127
+
+00:12:17.399 --> 00:12:23.279
+rounds to the nearest integer which is 3
+
+00:12:20.160 --> 00:12:26.240
+and so um this is kind of how you can go
+
+00:12:23.279 --> 00:12:27.880
+beyond floats and actually represent
+
+00:12:26.240 --> 00:12:30.920
+parameters with ints
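+
+[Editor's note: a minimal sketch of the absmax scheme walked through
+above, reproducing the 20 -> 127 and 0.5 -> 3 example.]
+
+```python
+import numpy as np
+
+def absmax_quantize(x: np.ndarray):
+    """Map floats to int8 in [-127, 127], scaled by the largest |value|."""
+    scale = 127.0 / np.max(np.abs(x))
+    return np.round(x * scale).astype(np.int8), scale
+
+def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
+    return q.astype(np.float32) / scale
+
+x = np.array([0.5, -1.2, 20.0, -0.03])
+q, s = absmax_quantize(x)
+print(q)                 # [  3  -8 127   0] -> round(0.5 * 127/20) = 3
+print(dequantize(q, s))  # the small values come back distorted
+```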
+00:12:27.880 --> 00:12:32.519
+in the most extreme example which should
+
+00:12:30.920 --> 00:12:33.880
+also highlight some of the issues
+
+00:12:32.519 --> 00:12:36.959
+with this idea of post-training
+
+00:12:33.880 --> 00:12:39.199
+quantization what if we just had a
+
+00:12:36.959 --> 00:12:43.360
+binary value for every parameter zero or
+
+00:12:39.199 --> 00:12:46.079
+one um so we
+
+00:12:43.360 --> 00:12:49.040
+might train a model using floats and we
+
+00:12:46.079 --> 00:12:53.399
+get these parameters so we have
+
+00:12:49.040 --> 00:12:56.079
+um the purple here is the hidden
+
+00:12:53.399 --> 00:12:58.399
+states and then the red would be the
+
+00:12:56.079 --> 00:13:00.440
+activations um and these are all between
+
+00:12:58.399 --> 00:13:02.839
+zero and one right but if we now
+
+00:13:00.440 --> 00:13:07.560
+round them to the nearest value zero or one
+
+00:13:02.839 --> 00:13:09.040
+we get uh on the top there a list of
+
+00:13:07.560 --> 00:13:11.120
+binary
+
+00:13:09.040 --> 00:13:12.360
+values and this seems really
+
+00:13:11.120 --> 00:13:16.440
+attractive right because you only need
+
+00:13:12.360 --> 00:13:18.959
+one bit per value it's really
+
+00:13:16.440 --> 00:13:21.519
+small but now let's consider a real
+
+00:13:18.959 --> 00:13:23.720
+example here where we are trying to do
+
+00:13:21.519 --> 00:13:27.760
+translation uh and then we are producing
+
+00:13:23.720 --> 00:13:30.279
+these float-valued hidden states and
+
+00:13:27.760 --> 00:13:32.839
+activations
+
+00:13:30.279 --> 00:13:37.440
+in this example if I just rounded up or
+
+00:13:32.839 --> 00:13:39.000
+down each of the values um the output
+
+00:13:37.440 --> 00:13:40.320
+vectors the
+
+00:13:39.000 --> 00:13:42.880
+embedding vectors that we would then
+
+00:13:40.320 --> 00:13:45.440
+decode to outputs um even though they're
+
+00:13:42.880 --> 00:13:48.120
+very different in the original float
+
+00:13:45.440 --> 00:13:50.040
+space they actually become all the same
+
+00:13:48.120 --> 00:13:52.880
+thing here um which is definitely not
+
+00:13:50.040 --> 00:13:54.320
+what you want so basically by reducing
+
+00:13:52.880 --> 00:13:56.480
+the precision you might be
+
+00:13:54.320 --> 00:13:59.320
+significantly impacting the range of
+
+00:13:56.480 --> 00:14:01.880
+things you can express so
+
+00:13:59.320 --> 00:14:03.360
+this basically does not work
+
+00:14:01.880 --> 00:14:05.800
+turning a
+
+00:14:03.360 --> 00:14:08.519
+complex set of high
+
+00:14:05.800 --> 00:14:11.639
+precision floats to binary numbers does
+
+00:14:08.519 --> 00:14:13.440
+not work uh and later I'll show that
+
+00:14:11.639 --> 00:14:14.959
+there are ways to make this work that
+
+00:14:13.440 --> 00:14:17.720
+are a little more complicated they just
+
+00:14:14.959 --> 00:14:19.839
+require more processing after the
+
+00:14:17.720 --> 00:14:22.160
+initial training of your
+
+00:14:19.839 --> 00:14:24.959
+model okay so now that I've motivated
+
+00:14:22.160 --> 00:14:26.959
+this problem uh we can talk about a few
+
+00:14:24.959 --> 00:14:31.040
+methods that actually work
+
+00:14:26.959 --> 00:14:33.800
+for post-training quantization
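+
+[Editor's note: a tiny illustration of the failure mode just
+described -- clearly different hidden states collapse to one codeword
+under naive rounding.]
+
+```python
+import numpy as np
+
+h = np.array([[0.61, 0.12, 0.55],   # three distinct hidden-state vectors
+              [0.92, 0.33, 0.78],
+              [0.55, 0.49, 0.51]])
+
+print(np.round(h).astype(int))      # all three become [1 0 1]:
+# [[1 0 1]                          # the decoder can no longer
+#  [1 0 1]                          # tell them apart
+#  [1 0 1]]
+```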
+00:14:31.040 --> 00:14:36.519
+um the first one belongs to a class of
+
+00:14:33.800 --> 00:14:38.440
+methods called model-aware quantization
+
+00:14:36.519 --> 00:14:40.959
+and the idea here is that if you can
+
+00:14:38.440 --> 00:14:44.759
+study the statistics of your model you can sort of
+
+00:14:40.959 --> 00:14:48.560
+learn um ways to represent values in a
+
+00:14:44.759 --> 00:14:51.160
+way that matches
+
+00:14:48.560 --> 00:14:54.120
+the actual distribution of weights
+
+00:14:51.160 --> 00:14:59.480
+in that model so for example with BERT
+
+00:14:54.120 --> 00:15:01.320
+uh most of the weights in each layer
+
+00:14:59.480 --> 00:15:03.440
+are concentrated around the mean
+
+00:15:01.320 --> 00:15:06.240
+value there and you have a few weights
+
+00:15:03.440 --> 00:15:07.959
+that are very far from that mean value
+
+00:15:06.240 --> 00:15:10.680
+so you can sort of fit a Gaussian
+
+00:15:07.959 --> 00:15:14.440
+distribution a normal distribution to the
+
+00:15:10.680 --> 00:15:16.680
+distribution of weights um and then only
+
+00:15:14.440 --> 00:15:19.320
+a few weights in each layer will be at
+
+00:15:16.680 --> 00:15:22.560
+the tails of this distribution so the
+
+00:15:19.320 --> 00:15:24.160
+idea here is that therefore um and
+
+00:15:22.560 --> 00:15:25.880
+just to motivate this a little
+
+00:15:24.160 --> 00:15:28.680
+more
+
+00:15:25.880 --> 00:15:30.480
+um if you have values at the tails of the
+
+00:15:28.680 --> 00:15:32.000
+distribution they pose issues for
+
+00:15:30.480 --> 00:15:33.639
+quantization because if you're
+
+00:15:32.000 --> 00:15:35.480
+using the absmax quantization I
+
+00:15:33.639 --> 00:15:37.399
+mentioned before you're now defining
+
+00:15:35.480 --> 00:15:39.880
+your range according to the minimum and
+
+00:15:37.399 --> 00:15:41.680
+maximum values and then everything in
+
+00:15:39.880 --> 00:15:43.480
+between which might be close together
+
+00:15:41.680 --> 00:15:46.880
+will now be grouped into the same
+
+00:15:43.480 --> 00:15:49.440
+bucket and that throws away a lot of the
+
+00:15:46.880 --> 00:15:53.199
+ability to distinguish between weights
+
+00:15:49.440 --> 00:15:55.600
+in your network so the idea here is that
+
+00:15:53.199 --> 00:15:57.560
+you basically store the outliers
+
+00:15:55.600 --> 00:15:59.800
+separately and you actually store them
+
+00:15:57.560 --> 00:16:02.000
+in full precision so you're paying
+
+00:15:59.800 --> 00:16:03.959
+the full storage cost for a few
+
+00:16:02.000 --> 00:16:05.319
+parameters and then everything else
+
+00:16:03.959 --> 00:16:07.279
+that's sort of concentrated
+
+00:16:05.319 --> 00:16:10.959
+together gets quantized into a much
+
+00:16:07.279 --> 00:16:13.360
+lower precision space um and I think
+
+00:16:10.959 --> 00:16:15.279
+that this is at least in theory very
+
+00:16:13.360 --> 00:16:17.720
+effective and they have strong results
+
+00:16:15.279 --> 00:16:20.560
+here um
+
+00:16:17.720 --> 00:16:24.880
+however a problem with that approach
+
+00:16:20.560 --> 00:16:27.959
+is that um you're defining
+
+00:16:24.880 --> 00:16:31.160
+the outliers and the minimum and maximum
+
+00:16:27.959 --> 00:16:33.440
+for each layer uniformly um so
+
+00:16:31.160 --> 00:16:35.720
+instead this LLM.int8() which is actually
+
+00:16:33.440 --> 00:16:38.399
+very popular in NLP um you might have
+
+00:16:35.720 --> 00:16:43.399
+heard of it if you have
+
+00:16:38.399 --> 00:16:45.079
+been building NLP systems um they go a
+
+00:16:43.399 --> 00:16:48.000
+step further and instead
+
+00:16:45.079 --> 00:16:51.440
+of quantizing each layer uniformly um
+
+00:16:48.000 --> 00:16:53.639
+they quantize each row or column of a
+00:16:51.440 --> 00:16:57.279 +vector in m in matrix multiplication + +00:16:53.639 --> 00:17:00.880 +separately um with the motivation that + +00:16:57.279 --> 00:17:03.800 +most of the parameters in Transformers + +00:17:00.880 --> 00:17:06.480 +are for matrix multiplication um and so + +00:17:03.800 --> 00:17:09.120 +by doing this they're able to actually + +00:17:06.480 --> 00:17:10.919 +get a better quantization uh because + +00:17:09.120 --> 00:17:14.160 +you're able to like have a more precise + +00:17:10.919 --> 00:17:16.280 +space range of the values um for each + +00:17:14.160 --> 00:17:20.120 +row or column of of a + +00:17:16.280 --> 00:17:23.240 +matrix yeah just curious in the previous + +00:17:20.120 --> 00:17:24.480 +slide why why exactly is the frequency + +00:17:23.240 --> 00:17:30.400 +changing + +00:17:24.480 --> 00:17:30.400 +Bas based on the layers um + +00:17:30.880 --> 00:17:36.640 +different layers might be have might + +00:17:32.880 --> 00:17:38.960 +have more concentration of weights to a + +00:17:36.640 --> 00:17:40.240 +single value or like the values of + +00:17:38.960 --> 00:17:42.520 +Weights in that layer might be + +00:17:40.240 --> 00:17:43.720 +concentrated or might be more broad I + +00:17:42.520 --> 00:17:46.200 +think that's how to think of it is that + +00:17:43.720 --> 00:17:46.200 +invers + +00:17:47.520 --> 00:17:52.919 +proportioners lay layer has high + +00:17:50.440 --> 00:17:52.919 +frequency + +00:17:53.919 --> 00:17:58.960 +compar yeah I think that's right um I + +00:17:56.320 --> 00:18:00.960 +think that that the uh this paper also + +00:17:58.960 --> 00:18:03.360 +discusses that problem but as you get + +00:18:00.960 --> 00:18:05.280 +later into the the layers of a network + +00:18:03.360 --> 00:18:08.360 +you see a lot more of these outliers + +00:18:05.280 --> 00:18:08.360 +these large magnitude + +00:18:09.120 --> 00:18:15.240 +values um okay so uh moving on here the + +00:18:12.960 --> 00:18:17.640 +last thing I'll say is that + +00:18:15.240 --> 00:18:18.679 +um there is like an overhead you're + +00:18:17.640 --> 00:18:21.600 +paying when you're doing this kind of + +00:18:18.679 --> 00:18:23.360 +quantization where you have to for each + +00:18:21.600 --> 00:18:25.559 +Vector you have to you you are now + +00:18:23.360 --> 00:18:28.000 +mapping it to a list of numbers and you + +00:18:25.559 --> 00:18:30.240 +need to then decode those numbers back + +00:18:28.000 --> 00:18:32.720 +into floats like decode your ins back + +00:18:30.240 --> 00:18:34.720 +into floats so there's an overhead that + +00:18:32.720 --> 00:18:37.520 +that costs time when you're doing + +00:18:34.720 --> 00:18:39.000 +inference um so if you have like a small + +00:18:37.520 --> 00:18:41.600 +model this is not going to help you go + +00:18:39.000 --> 00:18:43.400 +faster most likely but if you have like + +00:18:41.600 --> 00:18:45.559 +a really big model it can actually like + +00:18:43.400 --> 00:18:47.600 +double your inference speed at least and + +00:18:45.559 --> 00:18:49.919 +it also lets you load models into memory + +00:18:47.600 --> 00:18:51.360 +that you otherwise would not be able to + +00:18:49.919 --> 00:18:54.679 +so there's definitely a trade-off in + +00:18:51.360 --> 00:18:54.679 +when or this is really + +00:18:54.840 --> 00:18:59.960 +desirable um any questions here before + +00:18:58.120 --> 00:19:02.360 +Ive move on to some more high level + +00:18:59.960 --> 00:19:02.360 +stuff on + +00:19:10.440 --> 00:19:16.120 +quantization so uh in 
terms of hardware
+
+00:19:13.600 --> 00:19:18.000
+concerns I think one of the challenges
+uh like if you're somebody who's
+
+00:19:18.000 --> 00:19:21.559
+interested in
+
+00:19:19.159 --> 00:19:23.360
+algorithms you might have very creative
+
+00:19:21.559 --> 00:19:26.120
+ideas for how you might do
+
+00:19:23.360 --> 00:19:29.039
+quantization but the problem is that the
+
+00:19:26.120 --> 00:19:31.679
+ability for quantization to actually
+
+00:19:29.039 --> 00:19:34.159
+be effective or make things faster is
+
+00:19:31.679 --> 00:19:36.600
+largely limited by both hardware and
+
+00:19:34.159 --> 00:19:38.440
+also low-level systems like the
+
+00:19:36.600 --> 00:19:42.400
+framework like PyTorch that you're
+
+00:19:38.440 --> 00:19:44.720
+running your models on um so at a
+
+00:19:42.400 --> 00:19:46.880
+hardware level some data types are
+
+00:19:44.720 --> 00:19:50.080
+basically just not supported by hardware
+
+00:19:46.880 --> 00:19:52.720
+like int3 a three-bit int is
+
+00:19:50.080 --> 00:19:56.280
+not something that processors generally
+
+00:19:52.720 --> 00:19:57.799
+support so uh if your quantization
+
+00:19:56.280 --> 00:19:59.559
+method uses int3 it's effectively just
+
+00:19:57.799 --> 00:20:00.919
+using int4 and then you're not getting
+
+00:19:59.559 --> 00:20:04.080
+a speed up
+
+00:20:00.919 --> 00:20:05.640
+there and then PyTorch has its own
+
+00:20:04.080 --> 00:20:09.360
+requirements like PyTorch doesn't have
+
+00:20:05.640 --> 00:20:12.760
+int4 a lot of modules don't support
+
+00:20:09.360 --> 00:20:13.840
+quantization at all in PyTorch like
+
+00:20:12.760 --> 00:20:16.360
+something like an
+
+00:20:13.840 --> 00:20:20.039
+RNN which is now becoming popular
+
+00:20:16.360 --> 00:20:23.039
+again uh it's not really supporting
+
+00:20:20.039 --> 00:20:25.200
+quantization right now so you definitely
+
+00:20:23.039 --> 00:20:26.880
+if you're trying to go this route for a
+
+00:20:25.200 --> 00:20:28.840
+practical application you'll want to
+
+00:20:26.880 --> 00:20:30.440
+know what your hardware is what your
+
+00:20:28.840 --> 00:20:33.159
+framework is and what you can actually
+
+00:20:30.440 --> 00:20:36.000
+support with the ways you want
+
+00:20:33.159 --> 00:20:36.000
+to compress your
+
+00:20:39.320 --> 00:20:45.480
+models and uh one last thing I'll say is
+
+00:20:42.760 --> 00:20:48.159
+that both of the methods I showed so far
+
+00:20:45.480 --> 00:20:50.720
+have their
+
+00:20:48.159 --> 00:20:52.440
+own customized hardware accelerators
+
+00:20:50.720 --> 00:20:54.919
+that they wrote to make those things
+
+00:20:52.440 --> 00:20:57.159
+work and this is a lot of work and
+
+00:20:54.919 --> 00:20:59.360
+most people probably don't have
+
+00:20:57.159 --> 00:21:01.480
+the time to do this so um there are
+
+00:20:59.360 --> 00:21:03.440
+methods that I haven't shown here that
+
+00:21:01.480 --> 00:21:06.760
+do quantization in a way that is
+
+00:21:03.440 --> 00:21:08.640
+effective without having to rewrite your
+
+00:21:06.760 --> 00:21:10.640
+framework or your hardware accelerator
+
+00:21:08.640 --> 00:21:13.120
+um but this is definitely something to
+
+00:21:10.640 --> 00:21:13.120
+consider with
+
+00:21:13.200 --> 00:21:18.480
+quantization okay so now I think I've
+
+00:21:15.120 --> 00:21:20.520
+motivated why post-training quantization
+
+00:21:18.480 --> 00:21:23.480
+is hard
because you're throwing away
+
+00:21:20.520 --> 00:21:25.919
+precision which can make it hard
+
+00:21:23.480 --> 00:21:27.039
+to get the most out of the network
+
+00:21:25.919 --> 00:21:29.000
+that you have
+
+00:21:27.039 --> 00:21:32.799
+trained
+
+00:21:29.000 --> 00:21:34.200
+so uh a tempting idea here is now
+
+00:21:32.799 --> 00:21:36.120
+let's say we know we're going
+
+00:21:34.200 --> 00:21:39.320
+to quantize our model let's train the
+
+00:21:36.120 --> 00:21:41.039
+model with quantization in mind um and
+
+00:21:39.320 --> 00:21:43.440
+now we can revisit the example I showed
+
+00:21:41.039 --> 00:21:46.120
+before of binarized neural networks
+
+00:21:43.440 --> 00:21:47.919
+which didn't work but it actually
+
+00:21:46.120 --> 00:21:51.120
+can work if you train with the
+
+00:21:47.919 --> 00:21:54.279
+binarization in mind so uh a paper in
+
+00:21:51.120 --> 00:21:57.000
+2016 um they considered a case where all
+
+00:21:54.279 --> 00:21:59.120
+of your weights were negative one or one
+
+00:21:57.000 --> 00:22:03.200
+um activations were also negative one or
+
+00:21:59.120 --> 00:22:06.640
+one um and they do some clever
+
+00:22:03.200 --> 00:22:08.039
+statistics to make that work and then
+
+00:22:06.640 --> 00:22:09.640
+the gradients that you back propagate
+
+00:22:08.039 --> 00:22:13.640
+through the model are also
+
+00:22:09.640 --> 00:22:15.559
+discrete um and so it's
+
+00:22:13.640 --> 00:22:18.320
+probably kind of surprising
+
+00:22:15.559 --> 00:22:21.679
+that this works but they
+
+00:22:18.320 --> 00:22:23.200
+basically are using the core
+
+00:22:21.679 --> 00:22:26.039
+mechanisms that we use to train neural
+
+00:22:23.200 --> 00:22:29.880
+networks and using fancy statistics to
+
+00:22:26.039 --> 00:22:32.760
+make this work for binary values
+
+00:22:29.880 --> 00:22:35.559
+and by doing this they get kind of
+
+00:22:32.760 --> 00:22:37.039
+shockingly good results like on CIFAR-10
+
+00:22:35.559 --> 00:22:40.120
+which was a very popular image
+
+00:22:37.039 --> 00:22:42.360
+classification data set for a while um
+
+00:22:40.120 --> 00:22:46.720
+they get like a
+
+00:22:42.360 --> 00:22:48.640
+10% test set error and
+
+00:22:46.720 --> 00:22:51.200
+at the time a state-of-the-art
+
+00:22:48.640 --> 00:22:55.360
+method was at a little under
+
+00:22:51.200 --> 00:22:57.559
+12% um so
+
+00:22:55.360 --> 00:22:59.679
+uh they're basically matching or
+
+00:22:57.559 --> 00:23:02.919
+beating some of these extremely strong
+
+00:22:59.679 --> 00:23:05.120
+models at that time and uh they used
+
+00:23:02.919 --> 00:23:07.159
+effectively the same architecture as
+
+00:23:05.120 --> 00:23:09.159
+some of these models but just binarized
+
+00:23:07.159 --> 00:23:12.520
+so this was sort of a
+
+00:23:09.159 --> 00:23:14.760
+proof of concept that if you quantize
+
+00:23:12.520 --> 00:23:18.159
+during training um you can match
+
+00:23:14.760 --> 00:23:21.279
+performance and get a much smaller model
+
+00:23:18.159 --> 00:23:21.279
+which I think was a really surprising
+finding
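+
+[Editor's note: the 2016 paper's exact recipe is more involved, but a
+standard trick in this family -- and a reasonable sketch of how
+gradients can flow through a hard binarization at all -- is the
+straight-through estimator.]
+
+```python
+import torch
+
+class BinarizeSTE(torch.autograd.Function):
+    """Forward: sign(x) in {-1, +1}. Backward: pass the gradient
+    straight through (clipped where |x| > 1), since sign() itself
+    has zero gradient almost everywhere."""
+    @staticmethod
+    def forward(ctx, x):
+        ctx.save_for_backward(x)
+        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))
+
+    @staticmethod
+    def backward(ctx, grad_out):
+        (x,) = ctx.saved_tensors
+        return grad_out * (x.abs() <= 1).float()
+
+w = torch.randn(4, requires_grad=True)
+loss = (BinarizeSTE.apply(w) * torch.tensor([1.0, 2.0, 3.0, 4.0])).sum()
+loss.backward()
+print(w.grad)  # nonzero, despite the hard sign() in the forward pass
+```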
+00:23:22.520 --> 00:23:29.760
+and then uh a more recent work
+
+00:23:25.440 --> 00:23:32.559
+that I think is really cool um is
+
+00:23:29.760 --> 00:23:33.799
+that for doing quantization another
+
+00:23:32.559 --> 00:23:36.919
+thing you can do is you can start with
+
+00:23:33.799 --> 00:23:39.760
+your model that is full
+
+00:23:36.919 --> 00:23:42.120
+precision not quantized and then you can
+
+00:23:39.760 --> 00:23:45.200
+basically train each layer one layer at
+
+00:23:42.120 --> 00:23:47.400
+a time to replicate its counterpart in
+
+00:23:45.200 --> 00:23:49.080
+the full precision space so you can
+
+00:23:47.400 --> 00:23:52.440
+run inputs through the full precision
+
+00:23:49.080 --> 00:23:55.279
+model you get the output the
+
+00:23:52.440 --> 00:23:56.960
+probabilities of each word for example
+
+00:23:55.279 --> 00:23:59.159
+and then you train your quantized
+
+00:23:56.960 --> 00:24:02.240
+model to get very close to
+
+00:23:59.159 --> 00:24:03.880
+those same outputs um then you do this
+
+00:24:02.240 --> 00:24:07.320
+at the second layer so now you have
+
+00:24:03.880 --> 00:24:09.120
+the
+
+00:24:07.320 --> 00:24:11.080
+hidden states from the second to last
+
+00:24:09.120 --> 00:24:14.520
+layer and then you train your quantized
+
+00:24:11.080 --> 00:24:15.880
+layer to match those hidden states
+
+00:24:14.520 --> 00:24:18.520
+and you keep doing that all the way down
+
+00:24:15.880 --> 00:24:20.799
+and I think the intuition here is that
+
+00:24:18.520 --> 00:24:23.360
+um by doing layer by layer
+
+00:24:20.799 --> 00:24:25.480
+distillation you're sort of
+
+00:24:23.360 --> 00:24:27.320
+replicating not just the output which is
+
+00:24:25.480 --> 00:24:29.279
+kind of sparse and hard to replicate but
+
+00:24:27.320 --> 00:24:32.159
+even
+
+00:24:29.279 --> 00:24:34.080
+um the flow of data throughout the
+
+00:24:32.159 --> 00:24:36.919
+whole model step by step and you can
+
+00:24:34.080 --> 00:24:39.919
+replicate that into the quantized model
+
+00:24:36.919 --> 00:24:43.360
+which may run into issues when
+
+00:24:39.919 --> 00:24:43.360
+training just end to
+
+00:24:45.000 --> 00:24:49.520
+end um and then the last work here which
+
+00:24:48.039 --> 00:24:53.760
+Graham already talked about two lectures
+
+00:24:49.520 --> 00:24:56.720
+ago which is QLoRA so here they use
+
+00:24:53.760 --> 00:24:58.360
+parameter-efficient finetuning uh to
+
+00:24:56.720 --> 00:25:01.039
+train a
+
+00:24:58.360 --> 00:25:02.399
+highly quantized like four-bit model
+
+00:25:01.039 --> 00:25:04.240
+and they do a bunch of other fancy
+
+00:25:02.399 --> 00:25:07.240
+tricks and QLoRA is super popular
+
+00:25:04.240 --> 00:25:08.440
+right now so uh if
+
+00:25:07.240 --> 00:25:11.919
+you're going to use one quantization
+
+00:25:08.440 --> 00:25:11.919
+method today this probably would be
+
+00:25:12.240 --> 00:25:18.760
+it okay I think I'm going to move on to
+
+00:25:14.840 --> 00:25:18.760
+uh pruning now so any questions
+
+00:25:26.679 --> 00:25:29.679
+here
+
+00:25:30.000 --> 00:25:35.760
+okay so pruning is pretty different
+
+00:25:33.559 --> 00:25:37.120
+than quantization in
+
+00:25:35.760 --> 00:25:38.919
+quantization you are sort of
+
+00:25:37.120 --> 00:25:41.840
+chipping away at every parameter in your
+
+00:25:38.919 --> 00:25:44.480
+model instead in pruning you're
+
+00:25:41.840 --> 00:25:46.399
+completely eliminating some parameters
+
+00:25:44.480 --> 00:25:49.120
+and not changing everything
+
+00:25:46.399 --> 00:25:52.360
+else
+
+00:25:49.120 --> 00:25:55.399
+um so a number of parameters are set to zero
+
+00:25:52.360 --> 00:25:55.399
+and the rest are
completely + +00:25:55.640 --> 00:26:02.200 +unchanged and the most uh + +00:25:59.240 --> 00:26:04.240 +intuitive way to do this is this idea + +00:26:02.200 --> 00:26:06.840 +that if you have a bunch of parameters + +00:26:04.240 --> 00:26:08.120 +um some of them are probably close to + +00:26:06.840 --> 00:26:09.640 +zero in which case they're not doing + +00:26:08.120 --> 00:26:13.000 +anything anyways so just make them + +00:26:09.640 --> 00:26:14.880 +completely set to zero that way you can + +00:26:13.000 --> 00:26:16.640 +ignore those parameters effectively they + +00:26:14.880 --> 00:26:17.520 +they effectively are not doing anything + +00:26:16.640 --> 00:26:20.679 +uh + +00:26:17.520 --> 00:26:24.559 +and uh it's as if they don't exist so in + +00:26:20.679 --> 00:26:26.240 +magnitude pruning you set to zero some + +00:26:24.559 --> 00:26:29.200 +percentage of parameters that have the + +00:26:26.240 --> 00:26:32.720 +least magnitude + +00:26:29.200 --> 00:26:34.440 +and uh in like machine translation we + +00:26:32.720 --> 00:26:37.279 +people have seen that you can remove + +00:26:34.440 --> 00:26:39.360 +almost half the parameters in a model + +00:26:37.279 --> 00:26:41.559 +and get almost zero change in your + +00:26:39.360 --> 00:26:43.960 +Downstream performance which I think + +00:26:41.559 --> 00:26:45.640 +goes back to the earlier point about + +00:26:43.960 --> 00:26:47.200 +over parameterization like you need a + +00:26:45.640 --> 00:26:49.679 +lot of these parameters for training the + +00:26:47.200 --> 00:26:51.440 +model but in practice they're not really + +00:26:49.679 --> 00:26:53.600 +doing too much and so you can just get + +00:26:51.440 --> 00:26:55.640 +rid of + +00:26:53.600 --> 00:26:58.200 +them and so this is a type of + +00:26:55.640 --> 00:27:01.760 +unstructured pruning where you're just + +00:26:58.200 --> 00:27:04.080 +um you're removing + +00:27:01.760 --> 00:27:06.640 +parameters throughout the model anywhere + +00:27:04.080 --> 00:27:09.039 +you see fit there's no structure to how + +00:27:06.640 --> 00:27:09.039 +you're doing the + +00:27:09.200 --> 00:27:16.760 +pruning um and this is related to the + +00:27:12.640 --> 00:27:19.480 +lottery ticket hypothesis which was uh + +00:27:16.760 --> 00:27:22.159 +this idea that when you train a full + +00:27:19.480 --> 00:27:24.279 +model um there are like randomly initial + +00:27:22.159 --> 00:27:26.520 +there there are sub networks of that + +00:27:24.279 --> 00:27:29.520 +model that + +00:27:26.520 --> 00:27:29.520 +um + +00:27:37.080 --> 00:27:39.799 +the idea is that when you're training a + +00:27:38.320 --> 00:27:42.120 +big model there are sub networks that + +00:27:39.799 --> 00:27:44.080 +are actually a better initialization + +00:27:42.120 --> 00:27:45.720 +than the initial model so it it doesn't + +00:27:44.080 --> 00:27:48.440 +it's it's a little bit unintuitive that + +00:27:45.720 --> 00:27:51.159 +if you have let's say a model with 100 + +00:27:48.440 --> 00:27:53.240 +with 100 billion parameters uh there's + +00:27:51.159 --> 00:27:55.080 +subnetworks of this model with even if + +00:27:53.240 --> 00:27:57.240 +you randomly initialize them that might + +00:27:55.080 --> 00:28:00.080 +say have a billion parameters like 1% of + +00:27:57.240 --> 00:28:03.279 +the size that are actually better than + +00:28:00.080 --> 00:28:06.799 +the full model um and so this is related + +00:28:03.279 --> 00:28:09.240 +to pruning but here um they prune the + +00:28:06.799 --> 00:28:12.360 +model then 
they retrain it and they find + +00:28:09.240 --> 00:28:15.519 +that surprisingly like here a model that + +00:28:12.360 --> 00:28:19.080 +is 20% the size so it's pruned to 20% of + +00:28:15.519 --> 00:28:21.080 +the original models parameters and then + +00:28:19.080 --> 00:28:23.159 +retrained is actually like more + +00:28:21.080 --> 00:28:26.159 +effective and generalizes better than + +00:28:23.159 --> 00:28:28.880 +your original model um so the idea here + +00:28:26.159 --> 00:28:30.799 +is basically finding like really good + +00:28:28.880 --> 00:28:35.159 +initializations of these sub networks + +00:28:30.799 --> 00:28:37.600 +can be better than uh like the most + +00:28:35.159 --> 00:28:40.960 +intuitive random initialization of a big + +00:28:37.600 --> 00:28:42.559 +model and so this is sort of a Step + +00:28:40.960 --> 00:28:44.080 +Beyond pruning where you're pruning a + +00:28:42.559 --> 00:28:46.440 +model then training on top of that and + +00:28:44.080 --> 00:28:49.640 +that can improve + +00:28:46.440 --> 00:28:51.000 +performance um but generally pruning I + +00:28:49.640 --> 00:28:52.360 +think is not a method to improve + +00:28:51.000 --> 00:28:56.200 +performance method to maintain + +00:28:52.360 --> 00:28:59.519 +performance while improving the um the + +00:28:56.200 --> 00:29:02.559 +efficiency and the size of your + +00:28:59.519 --> 00:29:04.519 +model and uh so there's been like a lot + +00:29:02.559 --> 00:29:08.279 +of cool work in pruning coming out of + +00:29:04.519 --> 00:29:11.120 +CMU recently um this paper Called Wanda + +00:29:08.279 --> 00:29:15.000 +uh which came from ml from folks in MLD + +00:29:11.120 --> 00:29:17.559 +um the idea here is that magnitude + +00:29:15.000 --> 00:29:19.240 +pruning presumes that you can just + +00:29:17.559 --> 00:29:21.640 +decide which parameters you want to + +00:29:19.240 --> 00:29:24.039 +throw away based on how big they are but + +00:29:21.640 --> 00:29:26.720 +it doesn't consider the fact that there + +00:29:24.039 --> 00:29:29.679 +are systematic differences in the size + +00:29:26.720 --> 00:29:31.799 +of inputs that come in um so in the + +00:29:29.679 --> 00:29:33.559 +paper they gave a a nice example which + +00:29:31.799 --> 00:29:37.120 +maybe I'll I'll write on the Whiteboard + +00:29:33.559 --> 00:29:37.120 +here um which is + +00:29:56.320 --> 00:30:02.480 +that if your let's say your your model + +00:29:59.320 --> 00:30:06.200 +was just this basic two parameter model + +00:30:02.480 --> 00:30:08.399 +X and Y and you had weights A and B um + +00:30:06.200 --> 00:30:10.679 +and let's say we know that the magnitude + +00:30:08.399 --> 00:30:15.440 +of a is like a lot bigger than the + +00:30:10.679 --> 00:30:18.320 +magnitude of B then in magnitude pruning + +00:30:15.440 --> 00:30:20.360 +we would just set B to + +00:30:18.320 --> 00:30:24.640 +zero and then the model would just + +00:30:20.360 --> 00:30:27.440 +become a * X because the idea is that + +00:30:24.640 --> 00:30:29.080 +this this parameter has a lot more + +00:30:27.440 --> 00:30:31.799 +effect on the output and therefore we + +00:30:29.080 --> 00:30:33.840 +don't need to consider the other one but + +00:30:31.799 --> 00:30:37.720 +what if I told you now that the range of + +00:30:33.840 --> 00:30:40.279 +X like the average value of x was a + +00:30:37.720 --> 00:30:42.000 +thousand uh sorry it was sorry the + +00:30:40.279 --> 00:30:45.840 +average value of x was one and the + +00:30:42.000 --> 00:30:49.159 +average value of y was a 
thousand um now
+
+00:30:45.840 --> 00:30:50.919
+in practice even though B is smaller
+
+00:30:49.159 --> 00:30:52.640
+it's processing much larger inputs and
+
+00:30:50.919 --> 00:30:56.760
+therefore it's going to have an outsized
+
+00:30:52.640 --> 00:30:58.600
+impact on the output of the model um so
+
+00:30:56.760 --> 00:31:01.880
+that's the motivation of Wanda which is
+
+00:30:58.600 --> 00:31:04.080
+here they um decide which parameters
+
+00:31:01.880 --> 00:31:05.519
+to prune based on a combination of the
+
+00:31:04.080 --> 00:31:08.679
+magnitude of that
+
+00:31:05.519 --> 00:31:10.720
+parameter as well as the magnitude of
+
+00:31:08.679 --> 00:31:13.200
+the actual inputs that come into that
+
+00:31:10.720 --> 00:31:16.679
+layer of the model so they take
+
+00:31:13.200 --> 00:31:18.600
+calibration data and then they
+
+00:31:16.679 --> 00:31:20.960
+can learn what the average
+
+00:31:18.600 --> 00:31:25.399
+magnitude of the inputs are and they use
+
+00:31:20.960 --> 00:31:25.399
+this to decide what parameters to
+
+00:31:26.000 --> 00:31:29.799
+prune um
+
+00:31:28.200 --> 00:31:31.919
+okay so
+
+00:31:29.799 --> 00:31:33.799
+uh so far I've been talking about
+
+00:31:31.919 --> 00:31:35.799
+unstructured pruning and I think there's
+
+00:31:33.799 --> 00:31:38.399
+a pretty clear problem with this that
+
+00:31:35.799 --> 00:31:40.880
+makes this really not that
+
+00:31:38.399 --> 00:31:44.399
+effective in
+
+00:31:40.880 --> 00:31:46.919
+practice and uh the problem is that you
+
+00:31:44.399 --> 00:31:49.559
+can make a model sparse you can take the
+
+00:31:46.919 --> 00:31:51.960
+vectors and make them sparse but if your
+
+00:31:49.559 --> 00:31:53.960
+hardware does not take advantage of that
+
+00:31:51.960 --> 00:31:57.200
+sparsity then you're not getting any
+
+00:31:53.960 --> 00:31:59.919
+gains in performance so for example um
+
+00:31:57.200 --> 00:32:01.799
+here we turned off half the
+
+00:31:59.919 --> 00:32:04.880
+parameters but if we're still
+
+00:32:01.799 --> 00:32:07.200
+multiplying zeros with other zeros in
+
+00:32:04.880 --> 00:32:08.639
+a dense operation we're doing
+
+00:32:07.200 --> 00:32:13.559
+exactly the same amount of work we're
+
+00:32:08.639 --> 00:32:16.440
+getting no benefits here um and
+
+00:32:13.559 --> 00:32:18.880
+the reality is that right now hardware
+
+00:32:16.440 --> 00:32:20.840
+for machine learning does not support
+
+00:32:18.880 --> 00:32:23.480
+sparse data structures or computation
+
+00:32:20.840 --> 00:32:25.919
+that well matrix
+
+00:32:23.480 --> 00:32:29.919
+multiplications do some kinds of
+
+00:32:25.919 --> 00:32:31.919
+complicated things under the hood um and
+
+00:32:29.919 --> 00:32:34.559
+therefore they don't really work that
+
+00:32:31.919 --> 00:32:37.399
+well for sparse data structures uh and
+
+00:32:34.559 --> 00:32:39.559
+so basically right now this is not
+
+00:32:37.399 --> 00:32:42.519
+very effective on current hardware
+
+00:32:39.559 --> 00:32:45.159
+although I hope this will change in the
+future
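+
+[Editor's note: an illustrative benchmark of the point above. Exact
+timings depend on your BLAS and hardware, but on typical dense kernels
+the unstructured-sparse case is no faster than the dense baseline,
+while actually removing rows is.]
+
+```python
+import time
+import torch
+
+d = 4096
+W, x = torch.randn(d, d), torch.randn(d, 256)
+
+W_unstruct = W.clone()
+W_unstruct[torch.rand_like(W) < 0.5] = 0.0  # 50% zeros, dense layout
+W_struct = W[: d // 2]                      # drop half the rows outright
+
+def bench(f, n=20):
+    t0 = time.perf_counter()
+    for _ in range(n):
+        f()
+    return (time.perf_counter() - t0) / n
+
+print(bench(lambda: W @ x))           # dense baseline
+print(bench(lambda: W_unstruct @ x))  # ~same: zeros are still multiplied
+print(bench(lambda: W_struct @ x))    # roughly 2x faster: work is gone
+```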
+00:32:42.519 --> 00:32:47.399
+so therefore a more
+
+00:32:45.159 --> 00:32:50.360
+immediately useful idea is called
+
+00:32:47.399 --> 00:32:52.480
+structured pruning and the idea here is
+
+00:32:50.360 --> 00:32:54.679
+that instead of just picking parameters
+
+00:32:52.480 --> 00:32:57.480
+willy-nilly across the whole
+
+00:32:54.679 --> 00:33:01.960
+model you remove entire components or
+
+00:32:57.480 --> 00:33:04.279
+entire layers uh and therefore you're
+
+00:33:01.960 --> 00:33:07.200
+pruning the model in a way that is
+
+00:33:04.279 --> 00:33:10.159
+structured and really going to make a
+
+00:33:07.200 --> 00:33:10.159
+difference on your overall
+
+00:33:10.480 --> 00:33:15.840
+runtime so uh Graham and one of his PhD
+
+00:33:13.600 --> 00:33:18.799
+students a few years ago did some really
+
+00:33:15.840 --> 00:33:20.360
+cool work on this where they showed that
+
+00:33:18.799 --> 00:33:23.320
+if you're training a Transformer model
+
+00:33:20.360 --> 00:33:25.480
+like BERT you usually have many heads of
+
+00:33:23.320 --> 00:33:26.840
+attention you also experienced
+
+00:33:25.480 --> 00:33:27.559
+this in the Llama homework where you
+
+00:33:26.840 --> 00:33:28.880
+have
+
+00:33:27.559 --> 00:33:32.519
+I think it was eight heads of attention
+
+00:33:28.880 --> 00:33:34.320
+there um but in practice most of these
+
+00:33:32.519 --> 00:33:36.120
+heads of attention can be removed
+
+00:33:34.320 --> 00:33:39.080
+without really
+
+00:33:36.120 --> 00:33:41.919
+any negative impact on the
+
+00:33:39.080 --> 00:33:44.039
+performance of your model and so here
+
+00:33:41.919 --> 00:33:46.480
+they show that for an MT model you can
+
+00:33:44.039 --> 00:33:49.159
+remove half of the attention heads
+
+00:33:46.480 --> 00:33:50.159
+entirely and get a negligible impact on
+
+00:33:49.159 --> 00:33:52.120
+your
+
+00:33:50.159 --> 00:33:55.000
+performance this is different than what
+
+00:33:52.120 --> 00:33:57.519
+we showed here where we were removing
+
+00:33:55.000 --> 00:34:01.039
+parameters from anywhere in the model we
+
+00:33:57.519 --> 00:34:03.559
+saw fit here we're removing entire heads
+
+00:34:01.039 --> 00:34:05.799
+of attention some of those heads
+
+00:34:03.559 --> 00:34:06.799
+might have large magnitude weights some
+
+00:34:05.799 --> 00:34:09.000
+of them might have small magnitude
+
+00:34:06.799 --> 00:34:11.159
+weights but um we can just remove the
+
+00:34:09.000 --> 00:34:12.679
+entire attention head and this has an
+
+00:34:11.159 --> 00:34:14.919
+immediate impact on the efficiency of
+
+00:34:12.679 --> 00:34:14.919
+your
+
+00:34:17.720 --> 00:34:24.000
+model and uh generalizing this recent
+
+00:34:21.359 --> 00:34:27.399
+work has proposed
+
+00:34:24.000 --> 00:34:30.240
+controlling even other kinds of
+
+00:34:27.399 --> 00:34:32.760
+components of your model so um in this
+
+00:34:30.240 --> 00:34:35.919
+paper from two years ago uh they
+
+00:34:32.760 --> 00:34:39.159
+propose having two levels of
+
+00:34:35.919 --> 00:34:40.720
+masks on your model the first is
+
+00:34:39.159 --> 00:34:44.119
+what they call a coarse mask which is
+
+00:34:40.720 --> 00:34:46.960
+turning off large components like
+
+00:34:44.119 --> 00:34:48.440
+entire self-attention layers or full feed-forward
+
+00:34:46.960 --> 00:34:51.320
+layers where you replace them with
+
+00:34:48.440 --> 00:34:53.679
+an identity matrix um and these are
+
+00:34:51.320 --> 00:34:57.440
+really big things to turn off and then
+
+00:34:53.679 --> 00:35:00.119
+you could also have fine masks
+
+00:34:57.440 --> 00:35:02.359
+and the
+
+00:35:00.119 --> 00:35:04.280
+fine masks would control individual
+
+00:35:02.359 --> 00:35:06.920
+heads or um removing individual
+00:35:06.920 --> 00:35:12.680
+dimensions so uh changing your hidden
+
+00:35:10.480 --> 00:35:16.520
+state from like 512 dimensions to
+
+00:35:12.680 --> 00:35:18.680
+200 dimensions um and so the idea here
+
+00:35:16.520 --> 00:35:20.880
+is they give two different levels of
+
+00:35:18.680 --> 00:35:23.680
+granularity at which you can turn off
+
+00:35:20.880 --> 00:35:26.160
+different components um and then these
+
+00:35:23.680 --> 00:35:29.359
+masks are then learned using some kind of
+
+00:35:26.160 --> 00:35:32.040
+held-out validation data to learn what can we
+
+00:35:29.359 --> 00:35:34.240
+turn off without totally destroying the
+
+00:35:32.040 --> 00:35:35.960
+performance of this
+
+00:35:34.240 --> 00:35:38.800
+model
+
+00:35:35.960 --> 00:35:40.520
+um and in this paper they showed that
+
+00:35:38.800 --> 00:35:43.680
+you can really get pretty far with this
+
+00:35:40.520 --> 00:35:46.480
+idea um and the last thing I'll say
+
+00:35:43.680 --> 00:35:49.440
+about pruning here is that
+
+00:35:46.480 --> 00:35:53.240
+with methods like this you're actually
+
+00:35:49.440 --> 00:35:55.280
+learning a kind of control over your
+
+00:35:53.240 --> 00:35:57.440
+model so you're learning this
+
+00:35:55.280 --> 00:36:00.160
+set of masks
+
+00:35:57.440 --> 00:36:01.359
+and that's pretty expensive in terms of
+
+00:36:00.160 --> 00:36:04.160
+training
+
+00:36:01.359 --> 00:36:06.319
+budget it requires a lot of GPU memory
+
+00:36:04.160 --> 00:36:09.200
+and so if you want to prune let's say a
+
+00:36:06.319 --> 00:36:11.200
+Llama 70 billion model you'll need
+
+00:36:09.200 --> 00:36:13.000
+basically as much compute as
+
+00:36:11.200 --> 00:36:16.359
+it took to train that model to begin
+
+00:36:13.000 --> 00:36:18.880
+with so um this is a recent paper
+
+00:36:16.359 --> 00:36:21.400
+from Graham and one of his PhD students
+
+00:36:18.880 --> 00:36:23.079
+where they instead ask can we do pruning
+
+00:36:21.400 --> 00:36:25.400
+without having to compute gradients
+
+00:36:23.079 --> 00:36:27.280
+at all so basically if we just
+
+00:36:25.400 --> 00:36:31.240
+have enough memory to run the model on
+
+00:36:27.280 --> 00:36:33.160
+our computer uh can we then use that
+
+00:36:31.240 --> 00:36:35.760
+same computer to prune the model without
+
+00:36:33.160 --> 00:36:38.400
+having to use Adam or to
+
+00:36:35.760 --> 00:36:40.319
+compute gradients um and so I think the
+
+00:36:38.400 --> 00:36:42.640
+idea here is really clever they
+
+00:36:40.319 --> 00:36:45.200
+basically randomly mask out all the
+
+00:36:42.640 --> 00:36:48.520
+different modules in the network so they
+
+00:36:45.200 --> 00:36:50.160
+create like 100 or a thousand
+
+00:36:48.520 --> 00:36:52.839
+variants of this model with different
+
+00:36:50.160 --> 00:36:55.240
+masks turned off then they measure the
+
+00:36:52.839 --> 00:36:58.119
+performance of those perturbed
+
+00:36:55.240 --> 00:37:01.040
+models and then they learn a regression
+
+00:36:58.119 --> 00:37:03.119
+of how much does each module affect
+
+00:37:01.040 --> 00:37:05.880
+the performance of the full
+
+00:37:03.119 --> 00:37:08.560
+system and then uh you can use
+
+00:37:05.880 --> 00:37:11.560
+the regression weights of this
+
+00:37:08.560 --> 00:37:13.000
+learned regressor to
+
+00:37:11.560 --> 00:37:14.960
+figure out which modules you can
+
+00:37:13.000 --> 00:37:17.599
+actually turn off without impacting the
+performance too much
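+
+[Editor's note: a sketch of the mask-and-regress idea as described
+here. `evaluate` is a hypothetical stand-in for running your eval set
+with the given modules masked out and returning a score.]
+
+```python
+import numpy as np
+
+def module_importance(evaluate, n_modules, n_samples=200, keep_prob=0.8,
+                      rng=np.random.default_rng(0)):
+    # Sample random binary masks (1 = module kept), score each
+    # perturbed model, then regress score on the masks.
+    masks = (rng.random((n_samples, n_modules)) < keep_prob).astype(float)
+    scores = np.array([evaluate(m) for m in masks])
+    X = np.hstack([masks, np.ones((n_samples, 1))])  # add a bias column
+    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
+    return coef[:-1]  # per-module contribution; prune the smallest ones
+
+# Toy check: module 2 matters most, and the regression recovers that.
+true_w = np.array([0.1, 0.3, 1.5, 0.05])
+print(module_importance(lambda m: float(m @ true_w), 4).round(2))
+```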
+00:37:18.880 --> 00:37:27.880
+okay um any questions
+
+00:37:24.040 --> 00:37:29.720
+here just doing like a validation run on
+
+00:37:27.880 --> 00:37:33.680
+these pruned models
+
+00:37:29.720 --> 00:37:33.680
+yeah like randomly pruned
+
+00:37:38.640 --> 00:37:44.000
+models what happens to the matrix
+
+00:37:46.000 --> 00:37:50.960
+multiplication yeah yeah that's right
+
+00:37:48.280 --> 00:37:52.640
+it depends yeah I
+
+00:37:50.960 --> 00:37:54.560
+think for self-attention heads or
+
+00:37:52.640 --> 00:37:55.880
+for feed-forward layers you
+
+00:37:54.560 --> 00:37:58.599
+can think of it as just multiplying by
+
+00:37:55.880 --> 00:37:58.599
+the identity
+
+00:38:04.000 --> 00:38:08.280
+um and next I'm going to move on to
+
+00:38:05.720 --> 00:38:10.359
+distillation um so
+
+00:38:08.280 --> 00:38:12.280
+uh if there's any questions about
+
+00:38:10.359 --> 00:38:13.800
+anything I've covered in pruning um you
+
+00:38:12.280 --> 00:38:14.960
+can ask now and I know I was moving
+
+00:38:13.800 --> 00:38:17.640
+pretty fast
+
+00:38:14.960 --> 00:38:19.760
+so I'm happy to talk what's the point of
+
+00:38:17.640 --> 00:38:23.520
+learning a regression like does it
+
+00:38:19.760 --> 00:38:27.240
+generalize to other models at
+
+00:38:23.520 --> 00:38:30.400
+all yeah the idea is that the best
+
+00:38:27.240 --> 00:38:33.359
+way to do this would be to consider all
+
+00:38:30.400 --> 00:38:35.920
+possible choices of masks like turn off
+
+00:38:33.359 --> 00:38:37.880
+every combination of modules and then
+
+00:38:35.920 --> 00:38:39.599
+measure the performance then you know
+
+00:38:37.880 --> 00:38:43.319
+exactly which combination is
+
+00:38:39.599 --> 00:38:45.040
+the best but if you have um tens of
+
+00:38:43.319 --> 00:38:48.560
+layers and tens of components in each
+
+00:38:45.040 --> 00:38:50.119
+layer you'll have like millions of
+
+00:38:48.560 --> 00:38:53.079
+different sub-models you want to try
+
+00:38:50.119 --> 00:38:58.040
+out so this lets you just sample from
+
+00:38:53.079 --> 00:38:59.960
+that combinatorial space and then uh
+
+00:38:58.040 --> 00:39:02.200
+predict the interaction between
+
+00:38:59.960 --> 00:39:04.520
+the modules based
+
+00:39:02.200 --> 00:39:07.359
+on what you have seen like a slightly more
+
+00:39:04.520 --> 00:39:09.520
+optimized version of some kind of random search yeah I
+
+00:39:07.359 --> 00:39:12.119
+think that's right
+
+00:39:09.520 --> 00:39:14.319
+yeah I think you can make an analogy for
+
+00:39:12.119 --> 00:39:16.280
+this to hyperparameter selection where
+
+00:39:14.319 --> 00:39:18.960
+you can do random search but now people
+
+00:39:16.280 --> 00:39:21.359
+do much fancier things like Bayesian
+
+00:39:18.960 --> 00:39:23.839
+optimization uh which are trying
+
+00:39:21.359 --> 00:39:25.839
+to explore the space in a more
+
+00:39:23.839 --> 00:39:29.319
+structured way and so I think that this
+
+00:39:25.839 --> 00:39:29.319
+is similar in
+
+00:39:33.680 --> 00:39:39.200
+spirit are you like randomly just
+
+00:39:35.960 --> 00:39:41.319
+choosing like for example the green
+
+00:39:39.200 --> 00:39:43.520
+one there is it just like you're
+
+00:39:41.319 --> 00:39:47.000
+retaining 80% of
+
+00:39:43.520 --> 00:39:51.079
+them you're retaining 20% in the green
+
+00:39:47.000 --> 00:39:51.079
+line so you're throwing away 80%
--> 00:39:58.640 +um so in the like what this green line + +00:39:57.079 --> 00:40:01.599 +exactly means and I did not go into + +00:39:58.640 --> 00:40:04.960 +detail here is that you first train a + +00:40:01.599 --> 00:40:07.079 +full model uh with all of its parameters + +00:40:04.960 --> 00:40:08.720 +and then you apply pruning methods kind + +00:40:07.079 --> 00:40:11.960 +of like magnitude pruning I showed + +00:40:08.720 --> 00:40:14.000 +before um to find like a good sub + +00:40:11.960 --> 00:40:16.079 +Network that is still pretty effective + +00:40:14.000 --> 00:40:18.720 +after training but then they do + +00:40:16.079 --> 00:40:21.240 +something weird which is then they then + +00:40:18.720 --> 00:40:22.160 +restore that subnetwork to its initial + +00:40:21.240 --> 00:40:24.359 +random + +00:40:22.160 --> 00:40:25.359 +initialization so in the beginning you + +00:40:24.359 --> 00:40:28.359 +had + +00:40:25.359 --> 00:40:28.359 +um + +00:40:30.040 --> 00:40:32.920 +in the beginning you + +00:40:49.079 --> 00:40:54.520 +had so you you train this model and you + +00:40:51.760 --> 00:40:57.880 +get you initially initialize it randomly + +00:40:54.520 --> 00:41:00.880 +so you might have um just random values + +00:40:57.880 --> 00:41:04.280 +at each parameter um you then train the + +00:41:00.880 --> 00:41:06.119 +model as it is so these parameters are + +00:41:04.280 --> 00:41:07.359 +all changed and they're they become they + +00:41:06.119 --> 00:41:10.560 +serve the + +00:41:07.359 --> 00:41:13.119 +task then they identify a good sub + +00:41:10.560 --> 00:41:15.720 +Network here so you might throw away + +00:41:13.119 --> 00:41:17.200 +this node you might throw away this node + +00:41:15.720 --> 00:41:19.960 +maybe this + +00:41:17.200 --> 00:41:22.640 +node and then you say okay what was the + +00:41:19.960 --> 00:41:24.359 +initial initialization at these exact + +00:41:22.640 --> 00:41:26.599 +parameters which I've since learned I've + +00:41:24.359 --> 00:41:28.640 +since learned better parameters but what + +00:41:26.599 --> 00:41:30.000 +was the initial value here at the start + +00:41:28.640 --> 00:41:32.400 +of the optimization + +00:41:30.000 --> 00:41:36.160 +process they then go back to that to + +00:41:32.400 --> 00:41:39.200 +those initial values retrain the model + +00:41:36.160 --> 00:41:42.359 +on this task and it becomes even better + +00:41:39.200 --> 00:41:44.599 +um which I think it's uh there's been a + +00:41:42.359 --> 00:41:47.079 +lot of like theory about this since this + +00:41:44.599 --> 00:41:50.640 +paper came out it was kind of like a + +00:41:47.079 --> 00:41:52.960 +influen a very influential paper I think + +00:41:50.640 --> 00:41:54.480 +the to me this shows at the very least + +00:41:52.960 --> 00:41:57.520 +the importance of initialization and how + +00:41:54.480 --> 00:42:00.400 +much like Randomness there is in + +00:41:57.520 --> 00:42:03.359 +uh in your network training and how you + +00:42:00.400 --> 00:42:03.359 +can kind of take advantage of + +00:42:09.440 --> 00:42:13.920 +that yeah that related to drop + +00:42:20.960 --> 00:42:25.160 +off so I think you could you can see + +00:42:23.280 --> 00:42:29.839 +Dropout as + +00:42:25.160 --> 00:42:32.480 +a version of pruning where at each step + +00:42:29.839 --> 00:42:34.520 +of optimization you perform a random + +00:42:32.480 --> 00:42:37.000 +pruning you like randomly prune your + +00:42:34.520 --> 00:42:40.680 +network at each for each update and then + +00:42:37.000 --> 
+00:42:09.440 --> 00:42:13.920
+yeah is that related to
+
+00:42:20.960 --> 00:42:25.160
+dropout so I think you can see
+
+00:42:23.280 --> 00:42:29.839
+dropout as
+
+00:42:25.160 --> 00:42:32.480
+a version of pruning where at each step
+
+00:42:29.839 --> 00:42:34.520
+of optimization you perform a random
+
+00:42:32.480 --> 00:42:37.000
+pruning you like randomly prune your
+
+00:42:34.520 --> 00:42:40.680
+network for each update and then
+
+00:42:37.000 --> 00:42:42.760
+you um then update the
+
+00:42:40.680 --> 00:42:45.599
+parameters that have not been dropped
+
+00:42:42.760 --> 00:42:48.720
+out but then in the next step you have a
+
+00:42:45.599 --> 00:42:50.640
+totally different pruned network um so
+
+00:42:48.720 --> 00:42:52.839
+here you're doing pruning once and then
+
+00:42:50.640 --> 00:42:55.480
+you're training this fixed pruned
+
+00:42:52.839 --> 00:42:56.720
+network for all the iterations whereas
+
+00:42:55.480 --> 00:42:58.960
+in dropout you're
+
+00:42:56.720 --> 00:43:01.599
+doing a random pruning each
+
+00:42:58.960 --> 00:43:04.119
+time and they serve different purposes
+
+00:43:01.599 --> 00:43:06.359
+that's the main difference pruning um
+
+00:43:04.119 --> 00:43:08.680
+is primarily for reducing the size of
+
+00:43:06.359 --> 00:43:11.520
+your model whereas dropout is primarily
+
+00:43:08.680 --> 00:43:14.400
+for regularizing your model to avoid
+
+00:43:11.520 --> 00:43:17.040
+overfitting to individual
+
+00:43:14.400 --> 00:43:17.040
+um
+
+00:43:18.559 --> 00:43:24.480
+weights like to avoid individual weights
+
+00:43:21.040 --> 00:43:24.480
+having too much correlation with the
+
+00:43:25.079 --> 00:43:28.079
+label
+
+00:43:35.800 --> 00:43:41.680
+okay so I'll move on now to um the final
+
+00:43:38.359 --> 00:43:41.680
+piece here which is
+
+00:43:46.119 --> 00:43:50.400
+uh yeah so the last slide I forgot to
+
+00:43:48.480 --> 00:43:51.920
+show here which I think is
+
+00:43:50.400 --> 00:43:55.200
+kind of a summarization of everything
+
+00:43:51.920 --> 00:43:57.319
+I've taught so far about pruning is that
+
+00:43:55.200 --> 00:44:00.640
+Wanda which is the unstructured pruning
+
+00:43:57.319 --> 00:44:04.240
+method that I showed earlier um even
+
+00:44:00.640 --> 00:44:06.720
+though it achieves the same number of
+
+00:44:04.240 --> 00:44:09.599
+parameters as a structured pruning
+
+00:44:06.720 --> 00:44:13.960
+method which is Bonsai the one uh
+
+00:44:09.599 --> 00:44:16.760
+from Graham uh it actually achieves much
+
+00:44:13.960 --> 00:44:22.200
+less of a speedup potentially even a
+
+00:44:16.760 --> 00:44:23.720
+negative speedup um uh whereas a
+
+00:44:22.200 --> 00:44:25.480
+structured pruning
+
+00:44:23.720 --> 00:44:27.839
+method with the same number of
+
+00:44:25.480 --> 00:44:31.079
+parameters can be a lot faster like a
+
+00:44:27.839 --> 00:44:32.359
+50% speedup is huge so um I think that
+
+00:44:31.079 --> 00:44:35.319
+this shows that unstructured
+
+00:44:32.359 --> 00:44:39.079
+pruning can be pretty uh
+
+00:44:35.319 --> 00:44:39.079
+ineffective if it's done naively
+
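(The speedup point is easy to verify for yourself: zeroing half the entries of a weight matrix leaves a dense matmul exactly as expensive, while actually removing rows makes it smaller. A toy timing sketch, assuming nothing beyond NumPy:)

```python
# Why unstructured sparsity alone doesn't speed up dense kernels.
import time
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4096, 4096)).astype(np.float32)
x = rng.standard_normal((4096, 256)).astype(np.float32)

W_unstructured = W * (rng.random(W.shape) > 0.5)  # 50% zeros, same shape
W_structured = W[::2, :]                           # drop half the rows outright

def bench(f, n=20):
    t0 = time.perf_counter()
    for _ in range(n):
        f()
    return (time.perf_counter() - t0) / n

print("dense       ", bench(lambda: W @ x))
print("unstructured", bench(lambda: W_unstructured @ x))  # ~same as dense
print("structured  ", bench(lambda: W_structured @ x))    # ~2x faster
```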
+00:44:40.520 --> 00:44:46.640
+okay so now I'll move on to the
+
+00:44:42.640 --> 00:44:46.640
+final piece here which is
+
+00:44:48.079 --> 00:44:54.079
+distillation so in distillation the core
+
+00:44:50.800 --> 00:44:56.599
+idea here is that you're um training one
+
+00:44:54.079 --> 00:44:58.880
+model to replicate the behavior of
+
+00:44:56.599 --> 00:44:58.880
+another
+
+00:44:59.839 --> 00:45:03.800
+model and uh this is pretty
+
+00:45:02.480 --> 00:45:07.400
+fundamentally different from the other
+
+00:45:03.800 --> 00:45:09.760
+two methods we've talked about so far uh in
+
+00:45:07.400 --> 00:45:12.040
+distillation you're probably changing
+
+00:45:09.760 --> 00:45:14.000
+every parameter in your model you might
+
+00:45:12.040 --> 00:45:16.599
+even have a totally different
+
+00:45:14.000 --> 00:45:19.680
+architecture uh in the other two methods
+
+00:45:16.599 --> 00:45:21.599
+um in quantization you are kind of
+
+00:45:19.680 --> 00:45:23.559
+not changing any of your parameters up
+
+00:45:21.599 --> 00:45:25.559
+to a certain amount of precision and in
+
+00:45:23.559 --> 00:45:28.119
+pruning you were keeping a subset of your
+
+00:45:25.559 --> 00:45:29.880
+parameters completely fixed whereas
+
+00:45:28.119 --> 00:45:32.599
+in distillation you're changing
+
+00:45:29.880 --> 00:45:36.440
+everything uh but hopefully doing it in
+
+00:45:32.599 --> 00:45:36.440
+a way that requires many fewer
+
+00:45:39.839 --> 00:45:45.359
+parameters and uh distillation is
+
+00:45:42.599 --> 00:45:47.400
+related to a really cool idea from
+
+00:45:45.359 --> 00:45:48.400
+more classic machine learning called
+
+00:45:47.400 --> 00:45:50.839
+weak
+
+00:45:48.400 --> 00:45:53.319
+supervision which is the idea that if
+
+00:45:50.839 --> 00:45:55.640
+you have unlabeled text or it could be
+
+00:45:53.319 --> 00:45:56.880
+images or whatever data you
+
+00:45:55.640 --> 00:45:59.920
+want to
+
+00:45:56.880 --> 00:46:02.200
+process you can produce things that
+
+00:45:59.920 --> 00:46:04.200
+you could use like
+
+00:46:02.200 --> 00:46:07.240
+labels but that maybe were not actually
+
+00:46:04.200 --> 00:46:09.440
+written by humans um and then you can
+
+00:46:07.240 --> 00:46:11.640
+train on these as if they were labels
+
+00:46:09.440 --> 00:46:14.839
+and actually get pretty good
+
+00:46:11.640 --> 00:46:17.680
+performance uh so
+
+00:46:14.839 --> 00:46:19.520
+um one of a few really famous
+
+00:46:17.680 --> 00:46:23.240
+examples of this is self-training
+
+00:46:19.520 --> 00:46:26.400
+where you initialize a model um
+
+00:46:23.240 --> 00:46:28.240
+maybe with a handful of points like
+
+00:46:26.400 --> 00:46:30.680
+three or five
+
+00:46:28.240 --> 00:46:32.359
+examples you train a classifier on that
+
+00:46:30.680 --> 00:46:33.839
+very small number of points which is
+
+00:46:32.359 --> 00:46:36.720
+going to be really bad because it's not
+
+00:46:33.839 --> 00:46:39.440
+enough data to learn from then you have that
+
+00:46:36.720 --> 00:46:40.960
+model make its own predictions which are
+
+00:46:39.440 --> 00:46:44.559
+probably pretty bad on a bunch of
+
+00:46:40.960 --> 00:46:46.800
+unlabeled text you use those pseudo
+
+00:46:44.559 --> 00:46:48.680
+labels to update the model again and you
+
+00:46:46.800 --> 00:46:50.319
+can do this iteratively so you're
+
+00:46:48.680 --> 00:46:52.559
+basically using a model to produce its
+
+00:46:50.319 --> 00:46:55.800
+own training data to train itself and
+
+00:46:52.559 --> 00:46:58.319
+you do this over and over um and this
+
+00:46:55.800 --> 00:47:01.079
+is a pretty classic method at
+
+00:46:58.319 --> 00:47:02.319
+this point it's like 30 years old um and
+
+00:47:01.079 --> 00:47:05.200
+that's self-training
+
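(A minimal self-training loop, as a sketch: the features, labels, and confidence threshold here are placeholders, and scikit-learn is just one convenient way to write it.)

```python
# Self-training sketch: bootstrap a classifier from a handful of labels
# by iteratively pseudo-labeling unlabeled data. Keeping only confident
# pseudo-labels is a common trick, not a requirement of the method.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_seed, y_seed, X_unlabeled, rounds=5, threshold=0.9):
    X_train, y_train = X_seed, y_seed
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        probs = clf.predict_proba(X_unlabeled)
        confident = probs.max(axis=1) >= threshold
        # the model's own predictions become new (pseudo) training labels
        X_train = np.vstack([X_seed, X_unlabeled[confident]])
        y_train = np.concatenate([y_seed, probs[confident].argmax(axis=1)])
    return clf
```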
+00:47:02.319 --> 00:47:08.000
+um and then there are a few others
+
+00:47:05.200 --> 00:47:09.960
+that I won't go into uh just not to be
+
+00:47:08.000 --> 00:47:12.520
+too dense but um that are related to
+
+00:47:09.960 --> 00:47:15.520
+this uh and pseudo labels
+
+00:47:12.520 --> 00:47:17.720
+are also used um when
+
+00:47:15.520 --> 00:47:19.400
+let's say you don't have the ability
+
+00:47:17.720 --> 00:47:21.680
+to annotate thousands of examples but
+
+00:47:19.400 --> 00:47:24.400
+you can write a basic rule so you
+
+00:47:21.680 --> 00:47:27.160
+might say if you see a movie review that
+
+00:47:24.400 --> 00:47:29.920
+says awesome it's positive and if
+
+00:47:27.160 --> 00:47:32.240
+it says I hated it it's negative
+
+00:47:29.920 --> 00:47:34.240
+this is um probably not a good rule to
+
+00:47:32.240 --> 00:47:36.240
+use as your actual classifier because
+
+00:47:34.240 --> 00:47:38.559
+as Graham showed earlier this is
+
+00:47:36.240 --> 00:47:41.160
+really brittle and it requires a lot of
+
+00:47:38.559 --> 00:47:43.000
+work but you can use these rules to
+
+00:47:41.160 --> 00:47:45.559
+construct pseudo labels that you then
+
+00:47:43.000 --> 00:47:48.040
+train an actual full vocabulary
+
+00:47:45.559 --> 00:47:49.480
+model on um and if you have enough of
+
+00:47:48.040 --> 00:47:52.720
+these pseudo labels from enough of these
+
+00:47:49.480 --> 00:47:54.400
+rules um you can actually get pretty far
+
+00:47:52.720 --> 00:47:56.240
+and the idea I just described
+
+00:47:54.400 --> 00:47:57.720
+is
+
+00:47:56.240 --> 00:48:00.680
+there's a startup called Snorkel that
+
+00:47:57.720 --> 00:48:02.760
+does that um and they have a bunch of
+
+00:48:00.680 --> 00:48:05.800
+papers about this idea as
+
+00:48:02.760 --> 00:48:08.400
+well so uh yeah
+
+00:48:05.800 --> 00:48:10.559
+so I'm mentioning weak supervision
+
+00:48:08.400 --> 00:48:13.200
+because to me this forms the basis of
+
+00:48:10.559 --> 00:48:15.960
+knowledge distillation um in knowledge
+
+00:48:13.200 --> 00:48:18.760
+distillation you train a small model to
+
+00:48:15.960 --> 00:48:21.400
+just replicate the predictions of a big
+
+00:48:18.760 --> 00:48:25.480
+model so the big model is producing
+
+00:48:21.400 --> 00:48:27.240
+pseudo labels on unlabeled text uh and
+
+00:48:25.480 --> 00:48:29.839
+then that becomes the target for your
+
+00:48:27.240 --> 00:48:31.359
+small model to match uh the one
+
+00:48:29.839 --> 00:48:32.880
+requirement here that I think is really
+
+00:48:31.359 --> 00:48:35.720
+important to note is that you do need
+
+00:48:32.880 --> 00:48:38.480
+unlabeled text that matches what
+
+00:48:35.720 --> 00:48:41.040
+you expect as input so let's say you're
+
+00:48:38.480 --> 00:48:44.720
+doing movie review classification you
+
+00:48:41.040 --> 00:48:46.319
+definitely would need to somehow find um
+
+00:48:44.720 --> 00:48:47.960
+thousands of movie reviews that look
+
+00:48:46.319 --> 00:48:50.960
+like the kinds that your model is going
+
+00:48:47.960 --> 00:48:53.520
+to expect uh and
+
+00:48:50.960 --> 00:48:57.599
+most of these methods require
+
+00:48:53.520 --> 00:48:57.599
+that to work
+
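(A sketch of that rule-based pseudo-labeling idea, in the Snorkel "labeling function" style; the specific rules and the majority-vote aggregation are deliberately simplistic toy choices.)

```python
# Rule-based pseudo-labeling sketch: each labeling function votes
# POSITIVE, NEGATIVE, or ABSTAIN, and majority-vote pseudo-labels are
# kept to train a real, full-vocabulary model on.
POSITIVE, NEGATIVE, ABSTAIN = 1, 0, -1

def lf_awesome(review): return POSITIVE if "awesome" in review.lower() else ABSTAIN
def lf_hated(review):   return NEGATIVE if "i hated it" in review.lower() else ABSTAIN
def lf_stars(review):   return POSITIVE if "10/10" in review else ABSTAIN

LFS = [lf_awesome, lf_hated, lf_stars]

def pseudo_label(reviews):
    labeled = []
    for r in reviews:
        votes = [lf(r) for lf in LFS if lf(r) != ABSTAIN]
        if votes:  # keep only examples at least one rule fired on
            labeled.append((r, max(set(votes), key=votes.count)))
    return labeled  # (text, pseudo-label) pairs for supervised training
```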
+00:49:03.920 --> 00:49:08.040
+um okay so there are broadly two kinds of
+
+00:49:06.640 --> 00:49:10.760
+ways you can train knowledge
+
+00:49:08.040 --> 00:49:12.480
+distillation um the first is the
+
+00:49:10.760 --> 00:49:14.640
+most obvious which is called hard
+
+00:49:12.480 --> 00:49:16.280
+targets where you take your unlabeled
+
+00:49:14.640 --> 00:49:19.160
+text you produce a label from your
+
+00:49:16.280 --> 00:49:20.960
+teacher model and then you use the
+
+00:49:19.160 --> 00:49:24.000
+teacher's
+
+00:49:20.960 --> 00:49:26.880
+prediction as the target for
+
+00:49:24.000 --> 00:49:28.240
+your model so you might say uh Llama 70
+
+00:49:26.880 --> 00:49:30.240
+billion predicted positive for this
+
+00:49:28.240 --> 00:49:31.559
+review therefore I'm going to say
+
+00:49:30.240 --> 00:49:33.760
+the label here is
+
+00:49:31.559 --> 00:49:37.760
+positive this is really easy it's
+
+00:49:33.760 --> 00:49:39.520
+convenient it's very intuitive um but
+
+00:49:37.760 --> 00:49:42.799
+another type of distillation that's even
+
+00:49:39.520 --> 00:49:45.799
+more effective uh pretty consistently is
+
+00:49:42.799 --> 00:49:48.839
+called soft target distillation which
+
+00:49:45.799 --> 00:49:50.559
+is that instead of doing
+
+00:49:48.839 --> 00:49:52.839
+a supervised learning objective where
+
+00:49:50.559 --> 00:49:54.119
+you're just trying to match the label
+
+00:49:52.839 --> 00:49:56.160
+predicted by your
+
+00:49:54.119 --> 00:49:58.160
+teacher you instead want your
+
+00:49:56.160 --> 00:50:01.280
+student model to produce probabilities
+
+00:49:58.160 --> 00:50:03.119
+over the full distribution of labels
+
+00:50:01.280 --> 00:50:03.920
+that match the teacher's distribution
+
+00:50:03.119 --> 00:50:07.480
+over
+
+00:50:03.920 --> 00:50:10.160
+labels um so here the Llama 70
+
+00:50:07.480 --> 00:50:11.960
+billion teacher has predicted
+
+00:50:10.160 --> 00:50:13.960
+probabilities over three different
+
+00:50:11.960 --> 00:50:15.599
+labels and then we want the student
+
+00:50:13.960 --> 00:50:17.680
+model to match those
+
+00:50:15.599 --> 00:50:21.440
+probabilities I think a cool thing here
+
+00:50:17.680 --> 00:50:23.079
+is that this is usually not possible
+
+00:50:21.440 --> 00:50:25.119
+with supervised learning when you have
+
+00:50:23.079 --> 00:50:26.799
+an annotator they usually just give you
+
+00:50:25.119 --> 00:50:28.200
+one answer they don't tell you how
+
+00:50:26.799 --> 00:50:32.000
+likely it is that they were wrong about
+
+00:50:28.200 --> 00:50:35.079
+that answer uh and they don't tell you
+
+00:50:32.000 --> 00:50:36.640
+what the next best answer was um but
+
+00:50:35.079 --> 00:50:38.880
+with a neural network that's
+
+00:50:36.640 --> 00:50:42.280
+teaching you you can ask that you have a
+
+00:50:38.880 --> 00:50:44.200
+lot more flexibility um and then this
+
+00:50:42.280 --> 00:50:46.200
+also changes how it's optimized so
+
+00:50:44.200 --> 00:50:48.760
+instead of optimizing for the
+
+00:50:46.200 --> 00:50:50.400
+probability of the correct answer you
+
+00:50:48.760 --> 00:50:52.599
+can basically optimize for the
+
+00:50:50.400 --> 00:50:57.440
+difference between the distributions
+
+00:50:52.599 --> 00:50:57.440
+over the answers
+
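(In loss terms, "optimize for the difference between the distributions" is usually a KL divergence between temperature-softened teacher and student outputs. A minimal PyTorch sketch; the temperature-squared scaling follows the original Hinton et al. formulation, everything else is generic.)

```python
# Soft-target distillation loss sketch: KL divergence between
# temperature-softened teacher and student distributions.
# `student_logits` / `teacher_logits` are [batch, num_labels];
# T > 1 smooths the distributions so "next best answer" info survives.
import torch
import torch.nn.functional as F

def soft_target_loss(student_logits, teacher_logits, T=2.0):
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # The T**2 factor keeps gradient magnitudes comparable across
    # temperatures (as in Hinton et al., 2015).
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T**2
```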
+00:51:00.280 --> 00:51:06.280
+um and as we can see here in this paper
+
+00:51:04.000 --> 00:51:08.960
+by Geoff Hinton and some others um they
+
+00:51:06.280 --> 00:51:11.559
+used this method for speech uh I think it
+
+00:51:08.960 --> 00:51:15.440
+was speech recognition and they showed
+
+00:51:11.559 --> 00:51:18.079
+that uh this baseline here is
+
+00:51:15.440 --> 00:51:21.319
+um they took a training set and then
+
+00:51:18.079 --> 00:51:23.240
+threw away the labels and pseudo-labeled
+
+00:51:21.319 --> 00:51:26.280
+the inputs the input
+
+00:51:23.240 --> 00:51:27.400
+speech with another model
+
+00:51:26.280 --> 00:51:28.960
+uh and then they showed that if you're
+
+00:51:27.400 --> 00:51:30.920
+using hard
+
+00:51:28.960 --> 00:51:33.079
+targets and you use the full
+
+00:51:30.920 --> 00:51:35.480
+training set's inputs you can get pretty
+
+00:51:33.079 --> 00:51:37.280
+far but if you instead don't have that
+
+00:51:35.480 --> 00:51:39.799
+much unlabeled speech and you use a
+
+00:51:37.280 --> 00:51:41.960
+small amount of speech using soft
+
+00:51:39.799 --> 00:51:44.960
+targets is way way more
+
+00:51:41.960 --> 00:51:44.960
+effective
+
+00:51:45.880 --> 00:51:54.200
+uh okay uh I'll just jump ahead to uh I
+
+00:51:50.520 --> 00:51:57.200
+think a result that is pretty
+
+00:51:54.200 --> 00:51:58.680
+shocking that it works um so this came
+
+00:51:57.200 --> 00:52:01.680
+from uh Zack Lipton and some other
+
+00:51:58.680 --> 00:52:03.839
+people a few years ago um and uh the
+
+00:52:01.680 --> 00:52:07.240
+idea here is that you can take a model
+
+00:52:03.839 --> 00:52:08.960
+that is trained with supervised learning
+
+00:52:07.240 --> 00:52:11.960
+and here they trained it on an image
+
+00:52:08.960 --> 00:52:15.359
+classification task then you can
+
+00:52:11.960 --> 00:52:17.960
+repeatedly distill it to itself using
+
+00:52:15.359 --> 00:52:20.160
+soft targets so you take a bunch of
+
+00:52:17.960 --> 00:52:22.680
+images and then predict the distribution
+
+00:52:20.160 --> 00:52:24.760
+over the labels of those images and then
+
+00:52:22.680 --> 00:52:26.280
+train the model to basically match its
+
+00:52:24.760 --> 00:52:28.599
+own
+
+00:52:26.280 --> 00:52:31.240
+predictions
+
+00:52:28.599 --> 00:52:34.440
+um you train the model with a soft target
+
+00:52:31.240 --> 00:52:38.119
+distillation objective uh using itself
+
+00:52:34.440 --> 00:52:39.480
+as a teacher and uh it's still kind of
+
+00:52:38.119 --> 00:52:41.960
+bizarre to me that this works but they
+
+00:52:39.480 --> 00:52:45.079
+show that this pretty consistently
+
+00:52:41.960 --> 00:52:46.880
+improves the performance of a model so I
+
+00:52:45.079 --> 00:52:49.799
+think that to me the intuition here
+
+00:52:46.880 --> 00:52:51.040
+is that this um soft target objective
+
+00:52:49.799 --> 00:52:54.079
+which is different from what you would
+
+00:52:51.040 --> 00:52:56.319
+train on using
+
+00:52:54.079 --> 00:52:58.280
+supervised learning um is a
+
+00:52:56.319 --> 00:53:00.440
+different objective that is somehow
+
+00:52:58.280 --> 00:53:02.000
+conveying more information to your model
+
+00:53:00.440 --> 00:53:06.520
+it's conveying uncertainties about
+
+00:53:02.000 --> 00:53:09.280
+the labels and um it's a
+
+00:53:06.520 --> 00:53:11.640
+richer knowledge interface
+
+00:53:09.280 --> 00:53:13.799
+between the teacher and the student than
+
+00:53:11.640 --> 00:53:15.400
+just giving a single answer and
+
+00:53:13.799 --> 00:53:17.920
+this rich interface of knowledge
+
+00:53:15.400 --> 00:53:21.240
+can be really effective to me that's
+
+00:53:17.920 --> 00:53:22.520
+the takeaway from these results um
+
+00:53:21.240 --> 00:53:24.160
+but yeah check out this paper if you
+
+00:53:22.520 --> 00:53:26.319
+think this is
+
+00:53:24.160 --> 00:53:28.200
+cool
+
+00:53:26.319 --> 00:53:30.319
+okay so
+
+00:53:28.200 --> 00:53:33.520
+uh any questions on what I've talked
+
+00:53:30.319 --> 00:53:33.520
+about so far in distillation
+
+00:53:34.079 --> 00:53:37.079
+here
+
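(A sketch of that born-again / self-distillation loop: generation k+1 is a fresh copy of the architecture trained on generation k's soft targets. `make_model`, `images`, and the training details are placeholders, and `soft_target_loss` is the KL loss sketched earlier.)

```python
# Born-again-style self-distillation sketch: the student of each round
# becomes the teacher of the next.
import torch

def born_again(make_model, images, generations=3, steps=1000, T=2.0):
    teacher = make_model()          # generation 0: assumed already trained
    for _ in range(generations):
        student = make_model()      # fresh random initialization
        opt = torch.optim.Adam(student.parameters())
        for step in range(steps):
            batch = images[step % len(images)]
            with torch.no_grad():
                t_logits = teacher(batch)   # teacher's soft labels
            loss = soft_target_loss(student(batch), t_logits, T=T)
            opt.zero_grad(); loss.backward(); opt.step()
        teacher = student           # student becomes the next teacher
    return teacher
```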
+00:53:41.240 --> 00:53:44.920
+[inaudible question: isn't this very similar to
+
+00:53:47.000 --> 00:53:50.920
+dropout
+
+00:53:52.079 --> 00:53:57.160
+like dropout as an ensemble]
+
+00:54:04.960 --> 00:54:10.240
+oh the ensembling yeah yeah yeah I think
+
+00:54:07.240 --> 00:54:12.680
+that's right yeah
+
+00:54:10.240 --> 00:54:15.920
+yeah if you think of dropout as
+
+00:54:12.680 --> 00:54:17.280
+a way to perturb the model so that we
+
+00:54:15.920 --> 00:54:19.599
+have different perturbed versions of a
+
+00:54:17.280 --> 00:54:22.240
+model here they instead produce
+
+00:54:19.599 --> 00:54:23.960
+these perturbed versions of the model by
+
+00:54:22.240 --> 00:54:26.640
+distilling the previous version to
+
+00:54:23.960 --> 00:54:29.680
+itself um and then under that view
+
+00:54:26.640 --> 00:54:31.760
+they're both just ensembles of
+
+00:54:29.680 --> 00:54:33.760
+random perturbations of the same model
+
+00:54:31.760 --> 00:54:36.960
+yeah I think that's exactly
+
+00:54:33.760 --> 00:54:42.480
+right I also have a question about
+
+00:54:36.960 --> 00:54:46.440
+this so if the model is not good initially
+
+00:54:42.480 --> 00:54:50.240
+then the soft labels produced also have
+
+00:54:46.440 --> 00:54:52.359
+a low quality how can you train the
+
+00:54:50.240 --> 00:54:57.079
+model subsequently using the low-quality
+
+00:54:52.359 --> 00:54:57.079
+soft labels
+
+00:54:58.280 --> 00:55:03.079
+yeah I think that this method would
+
+00:55:00.280 --> 00:55:06.640
+require a pretty strong initial model
+
+00:55:03.079 --> 00:55:08.599
+to make this work and then given a
+
+00:55:06.640 --> 00:55:10.839
+decent enough initial model it can
+
+00:55:08.599 --> 00:55:12.480
+make it better that's like my
+
+00:55:10.839 --> 00:55:14.599
+intuition on
+
+00:55:12.480 --> 00:55:16.359
+that in the most extreme case if you
+
+00:55:14.599 --> 00:55:18.359
+start off with a random model
+
+00:55:16.359 --> 00:55:21.920
+that produces random outputs you'll
+
+00:55:18.359 --> 00:55:21.920
+probably never improve
+
+00:55:24.000 --> 00:55:27.000
+performance
+
+00:55:34.480 --> 00:55:41.440
+um so I don't have a clear answer
+
+00:55:38.440 --> 00:55:43.160
+but I think that um I would say it
+
+00:55:41.440 --> 00:55:45.760
+probably just has to be a little better
+
+00:55:43.160 --> 00:55:48.640
+than random chance which is probably a
+
+00:55:45.760 --> 00:55:54.000
+surprising answer uh there's a previous
+
+00:55:48.640 --> 00:55:57.000
+work um let me see if I can find it
+
+00:55:54.000 --> 00:55:57.000
+uh
+
+00:56:05.240 --> 00:56:10.000
+yeah so this paper I recently read
+
+00:56:07.160 --> 00:56:12.599
+and it was pretty shocking to me they
+
+00:56:10.000 --> 00:56:12.599
+take
+
+00:56:16.280 --> 00:56:23.880
+um they take models uh sorry they
+
+00:56:21.799 --> 00:56:25.640
+uh they focus on I think image
+
+00:56:23.880 --> 00:56:28.280
+classification here and take an initial
+
+00:56:25.640 --> 00:56:30.839
+data set and they randomly flip the
+
+00:56:28.280 --> 00:56:33.160
+label for a certain percentage of
+
+00:56:30.839 --> 00:56:35.599
+examples so they provide the
+
+00:56:33.160 --> 00:56:38.559
+wrong label in the training data for some
+
+00:56:35.599 --> 00:56:39.920
+percentage of examples and they find
+
+00:56:38.559 --> 00:56:44.200
+that
+
+00:56:39.920 --> 00:56:47.079
+you can replace 99% of labels with
+
+00:56:44.200 --> 00:56:50.400
+the wrong label and still learn
+
+00:56:47.079 --> 00:56:52.240
+something useful um which I think
+
+00:56:50.400 --> 00:56:55.520
+is really weird and doesn't make
+
+00:56:52.240 --> 00:56:57.520
+sense
+
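(That label-noise setup is easy to reproduce in miniature; a sketch of the corruption step, using uniform random relabeling, which, as a student notes just below, is an important detail of that experiment.)

```python
# Uniform label-noise sketch: corrupt a fraction of labels by sampling
# replacements uniformly at random, as in the label-noise robustness
# experiments described here. `labels` is an integer array of class ids.
import numpy as np

def corrupt_labels(labels, num_classes, noise_frac=0.99, seed=0):
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    flip = rng.random(len(labels)) < noise_frac
    # Uniform noise: each corrupted example gets a random class, so the
    # signal from the remaining clean labels stays unbiased on average.
    labels[flip] = rng.integers(0, num_classes, size=flip.sum())
    return labels
```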
+00:56:55.520 --> 00:57:00.079
+uh and they kind of show that I think
+the idea is that this only works
+
+00:56:57.520 --> 00:57:03.280
+for really deep networks um and by the
+
+00:57:00.079 --> 00:57:05.599
+way these models I believe were not
+
+00:57:03.280 --> 00:57:09.839
+pre-trained so they were only trained on
+
+00:57:05.599 --> 00:57:12.799
+this terrible data um but
+
+00:57:09.839 --> 00:57:14.640
+um so
+
+00:57:12.799 --> 00:57:15.520
+if I'm generalizing that finding
+
+00:57:14.640 --> 00:57:18.319
+here
+
+00:57:15.520 --> 00:57:20.400
+um I think that this
+
+00:57:18.319 --> 00:57:22.079
+suggests that neural nets are very
+
+00:57:20.400 --> 00:57:24.359
+robust to label noise and if you think
+
+00:57:22.079 --> 00:57:26.760
+of this kind of method as using a
+
+00:57:24.359 --> 00:57:29.559
+noisy label from a teacher um you can
+
+00:57:26.760 --> 00:57:31.920
+probably get pretty far with data that
+
+00:57:29.559 --> 00:57:34.559
+is mostly noise but still better
+
+00:57:31.920 --> 00:57:34.559
+than pure
+
+00:57:38.079 --> 00:57:44.680
+noise more of a comment rather
+
+00:57:41.359 --> 00:57:47.400
+than a question on the similarity between this and
+
+00:57:44.680 --> 00:57:51.880
+dropout I think around
+
+00:57:47.400 --> 00:57:54.880
+2016 or 17 Yarin Gal and his collaborators
+
+00:57:51.880 --> 00:57:56.559
+published a lot in this area actually
+
+00:57:54.880 --> 00:57:59.920
+before this paper
+
+00:57:56.559 --> 00:58:02.760
+so if anyone is interested in
+
+00:57:59.920 --> 00:58:05.559
+this area the keywords would be
+
+00:58:02.760 --> 00:58:09.240
+aleatoric uncertainty epistemic uncertainty
+
+00:58:05.559 --> 00:58:12.920
+or model uncertainty and
+
+00:58:09.240 --> 00:58:16.000
+Monte Carlo dropout
+
+00:58:12.920 --> 00:58:19.039
+approximation so there's that and also
+
+00:58:16.000 --> 00:58:21.039
+on the label noise side I think the
+
+00:58:19.039 --> 00:58:23.240
+important premise in the previous paper
+
+00:58:21.039 --> 00:58:26.680
+that you mentioned was giving uniform
+
+00:58:23.240 --> 00:58:30.280
+noise if we start giving
+
+00:58:26.680 --> 00:58:32.319
+specific biased noise the
+
+00:58:30.280 --> 00:58:36.240
+neural network tends to become
+
+00:58:32.319 --> 00:58:38.160
+very biased toward that specific noise but
+
+00:58:36.240 --> 00:58:41.240
+the existing computer vision label noise
+
+00:58:38.160 --> 00:58:43.880
+literature often goes up to label
+
+00:58:41.240 --> 00:58:47.039
+noise of 90% or something like that
+
+00:58:43.880 --> 00:58:49.000
+already so even though those cases are
+
+00:58:47.039 --> 00:58:51.640
+somewhat
+
+00:58:49.000 --> 00:58:55.599
+synthetic I think it's pretty
+
+00:58:51.640 --> 00:58:57.880
+interesting to see those cases but still
+
+00:58:55.599 --> 00:59:00.680
+they're only injecting a uniform
+
+00:58:57.880 --> 00:59:03.400
+distribution for the label noise so that
+
+00:59:00.680 --> 00:59:04.960
+theoretically speaking the model will
+
+00:59:03.400 --> 00:59:07.079
+with enough
+
+00:59:04.960 --> 00:59:09.480
+iterations still be biased
+
+00:59:07.079 --> 00:59:09.480
+towards the
+
+00:59:11.760 --> 00:59:16.079
+true labels yeah that's uh thanks for the
+
+00:59:13.920 --> 00:59:19.240
+clarification so um yeah for the
+
+00:59:16.079 --> 00:59:20.559
+recording uh the student pointed out
+that in the previous paper an
+
+00:59:19.240 --> 00:59:25.520
+important detail is that the noise in
+
+00:59:22.440 --> 00:59:28.799
+the labels was uniformly sampled and
+
+00:59:25.520 --> 00:59:30.400
+that if you instead use um non-uniform
+
+00:59:28.799 --> 00:59:32.799
+random noise it can actually have a
+
+00:59:30.400 --> 00:59:35.599
+major impact on the ability of a deep
+
+00:59:32.799 --> 00:59:36.640
+network to learn the input-label mapping
+
+00:59:35.599 --> 00:59:40.920
+um yeah I think that's a really good
+
+00:59:36.640 --> 00:59:40.920
+point thanks um okay so
+
+00:59:42.440 --> 00:59:48.920
+um so uh distillation was originally
+
+00:59:46.079 --> 00:59:52.799
+designed for um when you had a
+
+00:59:48.920 --> 00:59:55.160
+single label per input but in text we
+
+00:59:52.799 --> 00:59:58.520
+often have sequences maybe we want to
+
+00:59:55.160 --> 01:00:01.079
+generate uh a sentence and so how do we
+
+00:59:58.520 --> 01:00:04.000
+extend distillation to this sequence
+
+01:00:01.079 --> 01:00:06.839
+generation setting uh and so there are
+
+01:00:04.000 --> 01:00:10.839
+kind of two obvious ways really the
+
+01:00:06.839 --> 01:00:13.319
+first is that you want to
+
+01:00:10.839 --> 01:00:15.839
+um match the distribution of
+
+01:00:13.319 --> 01:00:17.440
+words that the teacher suggested at each
+
+01:00:15.839 --> 01:00:22.200
+point in your generation
+
+01:00:17.440 --> 01:00:25.039
+process um so uh given a prefix like um
+
+01:00:22.200 --> 01:00:26.880
+this movie is blank you then see the
+
+01:00:25.039 --> 01:00:28.319
+teacher's distribution over the words and
+
+01:00:26.880 --> 01:00:29.880
+you try to replicate that in your
+
+01:00:28.319 --> 01:00:33.079
+student
+
+01:00:29.880 --> 01:00:36.000
+model uh but the problem
+
+01:00:33.079 --> 01:00:38.920
+I think here is that
+
+01:00:36.000 --> 01:00:41.680
+as you keep generating
+
+01:00:38.920 --> 01:00:43.839
+text
+
+01:00:41.680 --> 01:00:45.920
+uh I think this is related to an idea
+
+01:00:43.839 --> 01:00:48.119
+called exposure bias which is that uh as
+
+01:00:45.920 --> 01:00:50.920
+you generate text um the teacher and the
+
+01:00:48.119 --> 01:00:53.160
+student might diverge dramatically
+
+01:00:50.920 --> 01:00:54.839
+the teacher might be generating
+
+01:00:53.160 --> 01:00:57.440
+consistent text and it
+
+01:00:54.839 --> 01:00:58.920
+starts to look very different from what
+
+01:00:57.440 --> 01:01:00.240
+the student could have possibly
+
+01:00:58.920 --> 01:01:03.240
+generated so the second idea here is
+
+01:01:00.240 --> 01:01:05.000
+sequence-level distillation where you
+
+01:01:03.240 --> 01:01:08.079
+instead just um generate a hard target
+
+01:01:05.000 --> 01:01:10.480
+from the teacher so you use
+
+01:01:08.079 --> 01:01:12.599
+soft targets at
+
+01:01:10.480 --> 01:01:14.240
+the word level and then at the
+
+01:01:12.599 --> 01:01:17.400
+sequence level you
+
+01:01:14.240 --> 01:01:19.640
+generate a full sentence from the teacher
+
+01:01:17.400 --> 01:01:20.960
+and you just want to maximize the
+
+01:01:19.640 --> 01:01:22.920
+probability of that
+
+01:01:20.960 --> 01:01:25.839
+pseudo-labeled gold sentence
+
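(A sketch of the two objectives side by side: word-level soft targets on teacher logits, plus a sequence-level hard target on a teacher-generated output. `teacher.generate` and the tensor shapes here are assumptions for illustration, not a specific library's API.)

```python
# Sequence-level knowledge distillation sketch. Assumes autoregressive
# student/teacher that map token ids [batch, seq] -> logits
# [batch, seq, vocab], plus a `teacher.generate(prompts)` returning
# a fully decoded sequence of token ids (the hard target).
import torch
import torch.nn.functional as F

def word_level_kd(student, teacher, token_ids, T=1.0):
    with torch.no_grad():
        t_logits = teacher(token_ids)
    s_logprobs = F.log_softmax(student(token_ids) / T, dim=-1)
    t_probs = F.softmax(t_logits / T, dim=-1)
    return F.kl_div(s_logprobs, t_probs, reduction="batchmean")  # per-step soft targets

def sequence_level_kd(student, teacher, prompts):
    with torch.no_grad():
        pseudo_gold = teacher.generate(prompts)   # teacher's full output
    logits = student(pseudo_gold[:, :-1])
    return F.cross_entropy(                       # plain MLE on the
        logits.reshape(-1, logits.size(-1)),      # pseudo-labeled sentence
        pseudo_gold[:, 1:].reshape(-1))
```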
+01:01:25.839 --> 01:01:29.000
+and uh they show that if you combine
+
+01:01:27.599 --> 01:01:30.160
+these two objectives together it's
+
+01:01:29.000 --> 01:01:32.280
+really
+
+01:01:30.160 --> 01:01:33.799
+effective so I think that this
+
+01:01:32.280 --> 01:01:36.520
+seems like the right way to
+
+01:01:33.799 --> 01:01:39.039
+do distillation for generating
+
+01:01:36.520 --> 01:01:42.039
+sequences of
+
+01:01:39.039 --> 01:01:42.039
+text
+
+01:01:44.760 --> 01:01:51.400
+um and uh one really popular
+
+01:01:48.319 --> 01:01:54.000
+distilled model in NLP that I use all
+
+01:01:51.400 --> 01:01:56.480
+the time is called DistilBERT it
+
+01:01:54.000 --> 01:01:57.920
+basically is just BERT um and at the
+
+01:01:56.480 --> 01:02:00.720
+time BERT came out it was considered to
+
+01:01:57.920 --> 01:02:04.760
+be really big um which is kind of
+
+01:02:00.720 --> 01:02:07.240
+comedic now but uh uh the idea
+
+01:02:04.760 --> 01:02:09.920
+here was can we reduce the size of
+
+01:02:07.240 --> 01:02:12.559
+BERT by half and get the same
+
+01:02:09.920 --> 01:02:15.520
+performance so they do this with a
+
+01:02:12.559 --> 01:02:17.680
+couple of tricks first they just
+
+01:02:15.520 --> 01:02:19.279
+took every other layer of BERT so if
+
+01:02:17.680 --> 01:02:21.960
+you had a 12-layer BERT model they
+
+01:02:19.279 --> 01:02:25.240
+took six layers um and they initialized
+
+01:02:21.960 --> 01:02:26.920
+each layer from one of the layers of the
+
+01:02:25.240 --> 01:02:27.799
+initial BERT model so it's not a
+
+01:02:26.920 --> 01:02:32.319
+random
+
+01:02:27.799 --> 01:02:34.520
+initialization um then they did uh
+
+01:02:32.319 --> 01:02:37.279
+effectively soft target distillation
+
+01:02:34.520 --> 01:02:40.559
+which was effective they also tried in
+
+01:02:37.279 --> 01:02:42.440
+their paper um combining
+
+01:02:40.559 --> 01:02:45.400
+soft target distillation with
+
+01:02:42.440 --> 01:02:47.799
+a real supervised
+
+01:02:45.400 --> 01:02:51.440
+objective from language modeling so
+
+01:02:47.799 --> 01:02:53.119
+they masked tokens of text and they
+
+01:02:51.440 --> 01:02:55.279
+tried to train on both what was
+
+01:02:53.119 --> 01:02:57.440
+actually behind the mask and also
+
+01:02:55.279 --> 01:03:00.279
+what the teacher would have predicted
+
+01:02:57.440 --> 01:03:02.799
+for that mask and they found
+
+01:03:00.279 --> 01:03:04.720
+surprisingly I think that the
+
+01:03:02.799 --> 01:03:06.520
+supervised objective doesn't really help
+
+01:03:04.720 --> 01:03:08.640
+much at all so if you have a good
+
+01:03:06.520 --> 01:03:11.680
+teacher that's probably enough for
+
+01:03:08.640 --> 01:03:14.279
+distillation um and then they did
+
+01:03:11.680 --> 01:03:16.039
+something else to make sure that the
+
+01:03:14.279 --> 01:03:18.319
+embedding space had a similar
+
+01:03:16.039 --> 01:03:21.760
+geometry in the small model and the big
+
+01:03:18.319 --> 01:03:23.599
+model um and the main finding here
+
+01:03:21.760 --> 01:03:26.559
+is that you can do this and get a model
+
+01:03:23.599 --> 01:03:28.880
+that is pretty much just as good or very
+
+01:03:26.559 --> 01:03:30.880
+close to it on most tasks as the big
+
+01:03:28.880 --> 01:03:33.599
+BERT model and DistilBERT is
+
+01:03:30.880 --> 01:03:36.079
+super popular people use it all the time
+
+01:03:33.599 --> 01:03:40.640
+uh okay
+
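(The layer-selection trick is simple to express; a sketch assuming a generic 12-layer transformer stored as a list of layer modules, not the actual DistilBERT training code.)

```python
# DistilBERT-style student initialization sketch: build a 6-layer student
# by copying the embeddings and every other layer from a trained 12-layer
# teacher. `model.embeddings` / `model.layers` are assumptions about how
# the model object is organized, not a specific library's attributes.
def init_student_from_teacher(teacher, student):
    student.embeddings.load_state_dict(teacher.embeddings.state_dict())
    for i, layer in enumerate(student.layers):                      # 6 layers
        layer.load_state_dict(teacher.layers[2 * i].state_dict())   # take 0,2,4,...
    return student  # now train with the soft-target distillation loss
```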
+01:03:36.079 --> 01:03:44.640
+so now I'm going to go a little bit
+
+01:03:40.640 --> 01:03:46.480
+beyond this initial motivation of
+
+01:03:44.640 --> 01:03:48.440
+efficiency and talk about how
+
+01:03:46.480 --> 01:03:51.279
+distillation can be used to actually
+
+01:03:48.440 --> 01:03:52.799
+do things that you cannot do otherwise
+
+01:03:51.279 --> 01:03:54.359
+like unlocking capabilities and
+
+01:03:52.799 --> 01:03:57.440
+performance that are pretty much
+
+01:03:54.359 --> 01:03:58.760
+impossible using traditional learning um
+
+01:03:57.440 --> 01:04:01.799
+before I do that any questions about
+
+01:03:58.760 --> 01:04:01.799
+distillation before I move
+
+01:04:05.760 --> 01:04:13.160
+on how do you define the architecture of the
+
+01:04:10.079 --> 01:04:15.720
+student if you want to distill some given model how
+
+01:04:13.160 --> 01:04:18.960
+would you choose
+
+01:04:15.720 --> 01:04:20.559
+yours um I think you have a lot of
+
+01:04:18.960 --> 01:04:23.200
+flexibility in distillation unlike
+
+01:04:20.559 --> 01:04:26.200
+something like pruning
+
+01:04:23.200 --> 01:04:26.200
+um
+
+01:04:28.880 --> 01:04:32.200
+I think I've seen work that suggests
+
+01:04:30.520 --> 01:04:34.279
+that distillation is most effective when
+
+01:04:32.200 --> 01:04:36.400
+your student and teacher have
+
+01:04:34.279 --> 01:04:38.559
+similar architectures for
+
+01:04:36.400 --> 01:04:41.319
+example if your teacher is an auto-
+
+01:04:38.559 --> 01:04:43.079
+regressive model like a GPT-2
+
+01:04:41.319 --> 01:04:45.760
+or 3 you might want your student to
+
+01:04:43.079 --> 01:04:47.079
+be autoregressive too but um my
+
+01:04:45.760 --> 01:04:48.400
+intuition is that there's a lot of
+
+01:04:47.079 --> 01:04:49.839
+flexibility here and especially if
+
+01:04:48.400 --> 01:04:51.880
+you're doing hard target distillation
+
+01:04:49.839 --> 01:04:54.279
+where you're generating sequences you
+
+01:04:51.880 --> 01:04:56.680
+could just treat these as labels and
+
+01:04:54.279 --> 01:05:00.200
+then you could train any student model
+
+01:04:56.680 --> 01:05:00.200
+um so I don't think it's that
+
+01:05:14.880 --> 01:05:20.359
+constrained okay so um I think that the
+
+01:05:18.200 --> 01:05:22.720
+first thing I'll talk about here in
+
+01:05:20.359 --> 01:05:24.960
+this sort of new-age distillation world
+
+01:05:22.720 --> 01:05:26.720
+is self-instruct which Graham already
+
+01:05:24.960 --> 01:05:29.799
+talked about two weeks ago um so I'll
+
+01:05:26.720 --> 01:05:32.240
+just touch on this the idea here is that
+
+01:05:29.799 --> 01:05:35.079
+they're doing self-distillation sort
+
+01:05:32.240 --> 01:05:38.000
+of like this paper where they're taking
+
+01:05:35.079 --> 01:05:39.880
+a model making it generate data and then
+
+01:05:38.000 --> 01:05:42.079
+training that same model on that data
+
+01:05:39.880 --> 01:05:44.359
+that's the basic idea
+
+01:05:42.079 --> 01:05:45.760
+um but here they're doing something very
+
+01:05:44.359 --> 01:05:47.160
+specific where they take a vanilla
+
+01:05:45.760 --> 01:05:49.440
+language model that's just trained to
+
+01:05:47.160 --> 01:05:51.279
+generate text and they're trying to
+
+01:05:49.440 --> 01:05:54.079
+teach it to follow instructions using
+
+01:05:51.279 --> 01:05:55.640
+instruction fine-tuning and the way they
+
+01:05:54.079 --> 01:05:58.640
+accomplish this is by having this
+
+01:05:55.640 --> 01:06:01.640
+vanilla language model first
+
+01:05:58.640 --> 01:06:05.760
+generate instructions arbitrarily like
+
+01:06:01.640 --> 01:06:08.760
+um write a
poem about dogs and then
+
+01:06:05.760 --> 01:06:11.359
+produce responses to those instructions
+
+01:06:08.760 --> 01:06:13.720
+like a poem about dogs and then training
+
+01:06:11.359 --> 01:06:15.400
+that same model to now imitate its own
+
+01:06:13.720 --> 01:06:20.200
+behavior
+
+01:06:15.400 --> 01:06:21.279
+um and uh they use some tricks that
+
+01:06:20.200 --> 01:06:23.520
+make this
+
+01:06:21.279 --> 01:06:28.200
+work and I think one of the key
+
+01:06:23.520 --> 01:06:28.200
+tricks that um I'll zoom in on
+
+01:06:28.559 --> 01:06:33.839
+here is that when you're doing data set
+
+01:06:31.240 --> 01:06:35.920
+generation uh the most obvious thing to
+
+01:06:33.839 --> 01:06:38.520
+do is you first generate the inputs then
+
+01:06:35.920 --> 01:06:40.480
+you pseudo-label your outputs but
+
+01:06:38.520 --> 01:06:42.400
+the issue here is that the quality of
+
+01:06:40.480 --> 01:06:45.400
+your labels is only as good as
+
+01:06:42.400 --> 01:06:49.039
+your teacher is so if I
+
+01:06:45.400 --> 01:06:51.599
+um if I first generate a text and then I
+
+01:06:49.039 --> 01:06:54.760
+generate the class that I think
+
+01:06:51.599 --> 01:06:56.920
+corresponds to that text if this class
+
+01:06:54.760 --> 01:06:58.920
+label is really bad and maybe in
+
+01:06:56.920 --> 01:07:02.039
+very systematic ways as was
+
+01:06:58.920 --> 01:07:03.039
+mentioned earlier then uh the created data
+
+01:07:02.039 --> 01:07:05.480
+will be really bad and you're not going
+
+01:07:03.039 --> 01:07:06.920
+to be able to learn anything useful but
+
+01:07:05.480 --> 01:07:10.480
+when you're generating data you don't
+
+01:07:06.920 --> 01:07:12.960
+need to actually do this linear process
+
+01:07:10.480 --> 01:07:16.240
+you can instead first generate the
+
+01:07:12.960 --> 01:07:17.960
+class and then generate inputs conditioned
+
+01:07:16.240 --> 01:07:20.160
+on that class so this is kind of
+
+01:07:17.960 --> 01:07:21.359
+doing things backwards and you can't do
+
+01:07:20.160 --> 01:07:23.160
+this when you're doing real prediction
+
+01:07:21.359 --> 01:07:24.880
+because you don't know the class
+
+01:07:23.160 --> 01:07:27.920
+like this is not useful
+
+01:07:24.880 --> 01:07:29.760
+in practice when you're doing prediction
+
+01:07:27.920 --> 01:07:33.079
+but for generating data you don't need
+
+01:07:29.760 --> 01:07:34.880
+to do things linearly and so um I think
+
+01:07:33.079 --> 01:07:36.799
+that this idea to me is really
+
+01:07:34.880 --> 01:07:39.119
+important in data
+
+01:07:36.799 --> 01:07:42.520
+generation that you can decompose your
+
+01:07:39.119 --> 01:07:44.839
+task into different patterns or orders
+
+01:07:42.520 --> 01:07:47.680
+and then generate your data from
+
+01:07:44.839 --> 01:07:49.680
+the ground up that way um and hopefully
+
+01:07:47.680 --> 01:07:52.920
+this way by reducing a hard problem
+
+01:07:49.680 --> 01:07:55.880
+to an easy problem uh you can do a lot
+
+01:07:52.920 --> 01:07:59.279
+better
+
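(A sketch of that "generate the label first" trick for building a classification dataset; the `llm` function is a hypothetical text-completion call, not a specific API.)

```python
# Label-first synthetic data sketch: instead of generating text and then
# pseudo-labeling it (where a weak teacher produces noisy labels), sample
# the class first and ask the model to write an input conditioned on it.
# `llm(prompt)` is a stand-in for any text-generation call.
import random

CLASSES = ["positive", "negative"]

def generate_dataset(llm, n=1000):
    data = []
    for _ in range(n):
        label = random.choice(CLASSES)             # easy direction: pick y first
        review = llm(f"Write a {label} movie review:")
        data.append((review, label))               # then train x -> y on this
    return data
```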
+01:07:55.880 --> 01:08:02.119
+um this is related to one other
+
+01:07:59.279 --> 01:08:04.599
+paper that I did not put in the
+
+01:08:02.119 --> 01:08:06.119
+um in the
+
+01:08:04.599 --> 01:08:08.599
+slides and in this paper they
+
+01:08:06.119 --> 01:08:12.160
+call this idea task
+
+01:08:08.599 --> 01:08:12.160
+asymmetry where
+
+01:08:18.440 --> 01:08:24.560
+uh if you have a task of going from X to
+
+01:08:21.719 --> 01:08:27.640
+Y that is
+really hard but going from Y
+to X is easy then you can just start
+
+01:08:27.640 --> 01:08:32.839
+with a bunch of Y's generate synthetic
+
+01:08:30.120 --> 01:08:35.239
+X's and because this direction is easy
+
+01:08:32.839 --> 01:08:37.279
+you can probably do pretty well at this
+
+01:08:35.239 --> 01:08:39.239
+and then you can now flip the data again
+
+01:08:37.279 --> 01:08:41.920
+and train your model to
+
+01:08:39.239 --> 01:08:44.600
+generate Y from X but you have a lot of
+
+01:08:41.920 --> 01:08:46.799
+data that is pretty good and then
+
+01:08:44.600 --> 01:08:49.960
+uh you can do really surprisingly
+
+01:08:46.799 --> 01:08:51.799
+well using this strategy uh so in
+
+01:08:49.960 --> 01:08:54.799
+this paper they were doing information
+
+01:08:51.799 --> 01:08:56.679
+extraction where um you're given
+
+01:08:54.799 --> 01:09:00.520
+sentences and you wanted to extract
+
+01:08:56.679 --> 01:09:03.040
+triples so here you had what film
+
+01:09:00.520 --> 01:09:05.359
+it is and what's the location of that film
+
+01:09:03.040 --> 01:09:07.640
+and so doing this uh
+
+01:09:05.359 --> 01:09:10.279
+sentence to information extraction is
+
+01:09:07.640 --> 01:09:11.560
+pretty hard but it's pretty easy to
+
+01:09:10.279 --> 01:09:14.120
+be given a bunch of entities and
+
+01:09:11.560 --> 01:09:15.759
+generate a sentence about those entities
+
+01:09:14.120 --> 01:09:18.440
+that's pretty trivial to do I think
+
+01:09:15.759 --> 01:09:19.920
+with large language models so they
+
+01:09:18.440 --> 01:09:22.520
+went backwards and they just took a
+
+01:09:19.920 --> 01:09:25.440
+bunch of triples generated text
+
+01:09:22.520 --> 01:09:28.679
+synthetically and then flipped the
+
+01:09:25.440 --> 01:09:30.600
+order of the labels and inputs um and
+
+01:09:28.679 --> 01:09:33.640
+then what they found is that in terms of
+
+01:09:30.600 --> 01:09:36.759
+the performance here
+
+01:09:33.640 --> 01:09:39.400
+um it's
+
+01:09:36.759 --> 01:09:41.520
+like twice as good as the previous best
+
+01:09:39.400 --> 01:09:45.120
+model here which was already really good
+
+01:09:41.520 --> 01:09:47.679
+for the time um so I think that this is
+
+01:09:45.120 --> 01:09:49.759
+sort of an idea that I want to pass
+
+01:09:47.679 --> 01:09:52.560
+on that I think is really nice uh
+
+01:09:49.759 --> 01:09:56.560
+that was touched on by self-
+
+01:09:52.560 --> 01:10:00.760
+instruct
+
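(The same trick in the information-extraction direction, as a sketch; again `llm` is a hypothetical generation call and the triple format is illustrative.)

```python
# Task-asymmetry sketch for information extraction: sentence -> triples is
# hard, but triples -> sentence is easy for an LLM. Generate in the easy
# direction, then flip (input, output) to get training data for the hard one.
def build_ie_dataset(llm, triples):
    dataset = []
    for (subj, relation, obj) in triples:   # e.g. ("Inception", "director", "Nolan")
        sentence = llm(
            f"Write one sentence stating that the {relation} "
            f"of {subj} is {obj}.")
        # flip: the generated sentence becomes the input, the triple the target
        dataset.append((sentence, (subj, relation, obj)))
    return dataset
```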
+01:09:56.560 --> 01:10:02.440
+okay um and then uh going a little further with
+
+01:10:00.760 --> 01:10:04.920
+this idea of using distillation to do
+
+01:10:02.440 --> 01:10:06.480
+things that you couldn't do before um is
+
+01:10:04.920 --> 01:10:08.920
+some work that I did with Graham
+
+01:10:06.480 --> 01:10:12.320
+this past year um which is called
+
+01:10:08.920 --> 01:10:14.560
+Prompt2Model and the idea here is
+
+01:10:12.320 --> 01:10:17.040
+let's forget that distillation is
+
+01:10:14.560 --> 01:10:20.760
+anything but just a data
+
+01:10:17.040 --> 01:10:24.080
+generator and now um distillation is one
+
+01:10:20.760 --> 01:10:25.280
+way to get training data for your model
+
+01:10:24.080 --> 01:10:27.000
+but there might be other ways to get
+
+01:10:25.280 --> 01:10:30.960
+data as well that we are leaving on
+
+01:10:27.000 --> 01:10:33.080
+the table so I think the key idea
+
+01:10:30.960 --> 01:10:35.840
+here is can we
+
+01:10:33.080 --> 01:10:37.960
+combine retrieved data existing data
+
+01:10:35.840 --> 01:10:40.159
+that exists on the internet with
+
+01:10:37.960 --> 01:10:42.960
+data generated from an LLM can we put these
+
+01:10:40.159 --> 01:10:46.440
+two things together and uh do even
+
+01:10:42.960 --> 01:10:50.480
+better so uh in this paper
+
+01:10:46.440 --> 01:10:51.920
+we uh ask the user to specify their task
+
+01:10:50.480 --> 01:10:55.159
+in a prompt kind of like what you use
+
+01:10:51.920 --> 01:10:57.560
+for GPT-3 and uh they can give a
+
+01:10:55.159 --> 01:11:00.920
+couple of examples if they want and then
+
+01:10:57.560 --> 01:11:02.640
+given this prompt we first retrieve
+
+01:11:00.920 --> 01:11:05.000
+existing data sets that might be
+
+01:11:02.640 --> 01:11:06.920
+relevant to that prompt so we had a
+
+01:11:05.000 --> 01:11:09.280
+method for data set retrieval in a
+
+01:11:06.920 --> 01:11:12.480
+previous paper that just uses text to
+
+01:11:09.280 --> 01:11:17.159
+find similar data sets so if I say
+
+01:11:12.480 --> 01:11:19.440
+um answer biomedical questions
+
+01:11:17.159 --> 01:11:24.159
+for cancer doctors it might find
+
+01:11:19.440 --> 01:11:26.960
+the BioASQ data set uh and then we take
+
+01:11:24.159 --> 01:11:29.840
+that retrieved data set which is likely
+
+01:11:26.960 --> 01:11:31.600
+to be high quality but may not match the
+
+01:11:29.840 --> 01:11:32.840
+task that the user actually cares about
+
+01:11:31.600 --> 01:11:35.320
+it might be a little bit different
+
+01:11:32.840 --> 01:11:37.880
+from what the user actually wants we
+
+01:11:35.320 --> 01:11:40.679
+then complement this retrieved data set
+
+01:11:37.880 --> 01:11:43.159
+with data generated by a
+
+01:11:40.679 --> 01:11:45.719
+language model which is potentially
+
+01:11:43.159 --> 01:11:48.199
+not that high quality but is much more
+
+01:11:45.719 --> 01:11:51.120
+likely to match the user's
+
+01:11:48.199 --> 01:11:52.840
+intentions uh we then did one other
+
+01:11:51.120 --> 01:11:54.920
+thing that's not that important which is
+
+01:11:52.840 --> 01:11:57.159
+um retrieving a pre-trained model as
+
+01:11:54.920 --> 01:11:58.840
+well like maybe there's a pre-trained
+
+01:11:57.159 --> 01:12:00.639
+model that is in your domain that you
+
+01:11:58.840 --> 01:12:02.719
+can actually benefit
+
+01:12:00.639 --> 01:12:05.440
+from then we just put all these things
+
+01:12:02.719 --> 01:12:08.480
+together fine-tune this small model on
+
+01:12:05.440 --> 01:12:10.639
+your generated and retrieved data sets and
+
+01:12:08.480 --> 01:12:12.880
+then um I think the really cool thing
+
+01:12:10.639 --> 01:12:17.440
+here was that we were able to obtain
+
+01:12:12.880 --> 01:12:20.400
+small models that often outperform
+
+01:12:17.440 --> 01:12:22.920
+GPT-3 even though GPT-3 was the model used
+
+01:12:20.400 --> 01:12:24.280
+to generate data so we were beating
+
+01:12:22.920 --> 01:12:27.880
+the teacher
+
+01:12:24.280 --> 01:12:30.000
+uh by leveraging distillation
+
+01:12:27.880 --> 01:12:33.199
+but also taking advantage of existing
+
+01:12:30.000 --> 01:12:33.199
+data sets that were available on the
+
+01:12:33.560 --> 01:12:37.800
+internet
+
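(As a sketch, that pipeline boils down to a few composable steps; every function name below is a placeholder for a component described above, not the actual released code.)

```python
# Prompt2Model-style pipeline sketch: combine retrieved and generated
# data, pick an in-domain base model, and fine-tune a small model.
def prompt_to_model(prompt, examples, retrieve_dataset, generate_dataset,
                    retrieve_pretrained, finetune):
    retrieved = retrieve_dataset(prompt)             # high quality, maybe off-task
    generated = generate_dataset(prompt, examples)   # noisier, but on-task
    base_model = retrieve_pretrained(prompt)         # small in-domain model
    return finetune(base_model, retrieved + generated)
```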
+01:12:36.520 --> 01:12:40.800
+so I think that uh generally
+this is a direction I'm really excited
+
+01:12:37.800 --> 01:12:43.480
+about distillation for the purpose of um
+
+01:12:40.800 --> 01:12:45.880
+advancing model capabilities
+
+01:12:43.480 --> 01:12:48.800
+um and
+
+01:12:45.880 --> 01:12:49.639
+uh I think that this kind of came at a
+
+01:12:48.800 --> 01:12:52.199
+time
+
+01:12:49.639 --> 01:12:53.719
+when distillation was becoming really
+
+01:12:52.199 --> 01:12:56.239
+popular but now it's often used by
+
+01:12:53.719 --> 01:12:58.280
+a different name which is called
+
+01:12:56.239 --> 01:12:59.760
+synthetic data generation it's
+
+01:12:58.280 --> 01:13:00.639
+effectively the same thing as hard
+
+01:12:59.760 --> 01:13:03.440
+target
+
+01:13:00.639 --> 01:13:04.800
+distillation uh but this is
+
+01:13:03.440 --> 01:13:07.199
+probably one of the hottest research
+
+01:13:04.800 --> 01:13:09.199
+topics in NLP right now um and just
+
+01:13:07.199 --> 01:13:13.560
+last week I saw this paper on the
+
+01:13:09.199 --> 01:13:15.480
+internet um that provides a sort of
+
+01:13:13.560 --> 01:13:17.719
+PyTorch-like toolkit for doing
+
+01:13:15.480 --> 01:13:20.840
+distillation so they define different
+
+01:13:17.719 --> 01:13:23.560
+primitive operations um like
+
+01:13:20.840 --> 01:13:28.120
+generating examples from a prompt or from RAG
+
+01:13:23.560 --> 01:13:31.639
+um doing retrieval uh doing filtering
+
+01:13:28.120 --> 01:13:34.239
+and ranking of examples or judging
+
+01:13:31.639 --> 01:13:37.880
+your generated
+
+01:13:34.239 --> 01:13:39.920
+examples using another LLM and uh they
+
+01:13:37.880 --> 01:13:41.360
+also integrate model training into this
+
+01:13:39.920 --> 01:13:43.880
+loop but I think that this is a
+
+01:13:41.360 --> 01:13:45.360
+really exciting direction in terms of
+
+01:13:43.880 --> 01:13:49.600
+making data set generation something
+
+01:13:45.360 --> 01:13:51.880
+that can be uh very mature and
+
+01:13:49.600 --> 01:13:55.320
+managed like a real engineering problem
+
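(In the spirit of that toolkit, a dataset-generation loop might compose like the sketch below; the primitives here are generic stand-ins, not the actual library's API.)

```python
# Synthetic-data "primitives" sketch: generate candidates from a prompt,
# score them with an LLM-as-judge, filter and rank, then train on the
# survivors. All four callables are hypothetical placeholders.
def synthesize_and_train(prompt, generate, judge, train, n=1000, min_score=0.7):
    examples = [generate(prompt) for _ in range(n)]   # generate from a prompt
    scored = [(judge(ex), ex) for ex in examples]     # LLM-as-judge scoring
    kept = [ex for s, ex in sorted(scored, key=lambda t: t[0], reverse=True)
            if s >= min_score]                        # filter + rank
    return train(kept)                                # training in the loop
```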
"https://www.youtube.com/watch?v=NX0l1M0NWPM", + "title": "CMU Advanced NLP 2024 (12) Reinforcement Learning" +} \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (12) Reinforcement Learning/transcript.srt b/CMU Advanced NLP 2024 (12) Reinforcement Learning/transcript.srt new file mode 100644 index 0000000000000000000000000000000000000000..072d7de294994321ef45abdf99e320272be091a7 --- /dev/null +++ b/CMU Advanced NLP 2024 (12) Reinforcement Learning/transcript.srt @@ -0,0 +1,7063 @@ +1 +00:00:00,840 --> 00:00:05,920 +okay so uh let's get started um today + +2 +00:00:04,200 --> 00:00:08,000 +I'm going to be talking about learning + +3 +00:00:05,920 --> 00:00:09,480 +from Human feedback I wrote + +4 +00:00:08,000 --> 00:00:12,160 +reinforcement learning from Human + +5 +00:00:09,480 --> 00:00:14,519 +feedback because that's what um you know + +6 +00:00:12,160 --> 00:00:15,759 +a lot of people talk about nowadays but + +7 +00:00:14,519 --> 00:00:18,880 +actually there's other methods of + +8 +00:00:15,759 --> 00:00:21,840 +learning from Human feedback so first + +9 +00:00:18,880 --> 00:00:24,760 +I'm going to be talking about the ways + +10 +00:00:21,840 --> 00:00:27,920 +we can get uh human feedback for the + +11 +00:00:24,760 --> 00:00:31,039 +generations of models and mostly focus + +12 +00:00:27,920 --> 00:00:32,960 +on generation tasks because is um + +13 +00:00:31,039 --> 00:00:35,800 +generation tasks are harder than like + +14 +00:00:32,960 --> 00:00:38,559 +classification tasks that we uh we deal + +15 +00:00:35,800 --> 00:00:40,000 +with normally so I'll spend a fair + +16 +00:00:38,559 --> 00:00:42,239 +amount of time talking about how we do + +17 +00:00:40,000 --> 00:00:45,760 +that and then after I talk about how we + +18 +00:00:42,239 --> 00:00:48,360 +do that we'll move into um how we + +19 +00:00:45,760 --> 00:00:51,160 +actually learn from that + +20 +00:00:48,360 --> 00:00:53,399 +signal so normally what we've done up + +21 +00:00:51,160 --> 00:00:56,399 +until this point is maximum likelihood + +22 +00:00:53,399 --> 00:00:58,199 +training uh this is just an overview + +23 +00:00:56,399 --> 00:00:59,559 +slide so we what we want to do is we + +24 +00:00:58,199 --> 00:01:00,760 +want to maximize the likelihood of + +25 +00:00:59,559 --> 00:01:03,280 +predicting the next word and the + +26 +00:01:00,760 --> 00:01:05,960 +reference given the previous words uh + +27 +00:01:03,280 --> 00:01:08,119 +which gives us the loss of the output + +28 +00:01:05,960 --> 00:01:09,799 +given the input uh where you know the + +29 +00:01:08,119 --> 00:01:13,960 +input can be the prompt the output can + +30 +00:01:09,799 --> 00:01:16,080 +be the answer to uh the output but + +31 +00:01:13,960 --> 00:01:18,360 +there's uh lots of problems with + +32 +00:01:16,080 --> 00:01:20,439 +learning from Maximum likelihood and I'm + +33 +00:01:18,360 --> 00:01:22,079 +going to give three examples here I + +34 +00:01:20,439 --> 00:01:24,159 +think all of these are actually real + +35 +00:01:22,079 --> 00:01:26,880 +problems uh that we need to be worried + +36 +00:01:24,159 --> 00:01:30,240 +about so the first one is that some + +37 +00:01:26,880 --> 00:01:32,439 +mistakes are worse than others so um in + +38 +00:01:30,240 --> 00:01:33,560 +the end we want good outputs and some + +39 +00:01:32,439 --> 00:01:36,520 +mistaken + +40 +00:01:33,560 --> 00:01:38,200 +predictions uh can be a bigger problem + +41 +00:01:36,520 --> 00:01:42,680 +for the output being + +42 +00:01:38,200 --> 00:01:46,000 
+good so to give an example uh let's say + +43 +00:01:42,680 --> 00:01:47,600 +what we actually wanted from like a + +44 +00:01:46,000 --> 00:01:49,320 +speech recognition system or a + +45 +00:01:47,600 --> 00:01:54,040 +translation system or something like + +46 +00:01:49,320 --> 00:01:54,040 +that is uh please send this package to + +47 +00:01:54,280 --> 00:01:58,920 +Pittsburgh if I write please send a + +48 +00:01:56,880 --> 00:02:01,560 +package to Pittsburgh then this is not a + +49 +00:01:58,920 --> 00:02:03,560 +huge problem + +50 +00:02:01,560 --> 00:02:06,479 +if I write uh please send this package + +51 +00:02:03,560 --> 00:02:07,719 +to Tokyo then that might be a big + +52 +00:02:06,479 --> 00:02:09,640 +problem because the package you wanted + +53 +00:02:07,719 --> 00:02:12,760 +to come to Pittsburgh goes to Tokyo + +54 +00:02:09,640 --> 00:02:13,680 +instead and uh you might not want that + +55 +00:02:12,760 --> 00:02:16,080 +to + +56 +00:02:13,680 --> 00:02:18,000 +happen you might also have it say + +57 +00:02:16,080 --> 00:02:20,400 +bleeping send this package to Pittsburgh + +58 +00:02:18,000 --> 00:02:22,200 +instead of pleas um and that would be a + +59 +00:02:20,400 --> 00:02:24,200 +problem in a customer service system + +60 +00:02:22,200 --> 00:02:28,400 +right because your customer would uh + +61 +00:02:24,200 --> 00:02:28,400 +leave and never come back + +62 +00:02:28,840 --> 00:02:32,040 +so + +63 +00:02:30,360 --> 00:02:33,720 +determiner like this is not going to + +64 +00:02:32,040 --> 00:02:35,640 +cause a huge issue U messing up other + +65 +00:02:33,720 --> 00:02:37,519 +things is going to cause a larger + +66 +00:02:35,640 --> 00:02:39,519 +issue but from the point of view of + +67 +00:02:37,519 --> 00:02:42,680 +Maximum likelihood all of these are just + +68 +00:02:39,519 --> 00:02:44,560 +tokens and messing up one token is the + +69 +00:02:42,680 --> 00:02:47,519 +same as messing up another token so + +70 +00:02:44,560 --> 00:02:50,040 +that's uh you know an + +71 +00:02:47,519 --> 00:02:52,080 +issue another problem is that the gold + +72 +00:02:50,040 --> 00:02:54,640 +standard and maximum likelihood + +73 +00:02:52,080 --> 00:02:57,480 +estimation can be bad it can be like not + +74 +00:02:54,640 --> 00:02:59,239 +what you want and uh corpa are full of + +75 +00:02:57,480 --> 00:03:02,400 +outputs that we wouldn't want a language + +76 +00:02:59,239 --> 00:03:05,400 +model producing so for example uh toxic + +77 +00:03:02,400 --> 00:03:07,799 +comments on Reddit uh + +78 +00:03:05,400 --> 00:03:09,959 +disinformation um another thing that a + +79 +00:03:07,799 --> 00:03:13,000 +lot of people don't think about uh quite + +80 +00:03:09,959 --> 00:03:15,640 +as much is a lot of the data online is + +81 +00:03:13,000 --> 00:03:17,680 +uh from is automatically generated + +82 +00:03:15,640 --> 00:03:19,720 +nowadays for example from machine + +83 +00:03:17,680 --> 00:03:24,080 +translation a lot of the translations + +84 +00:03:19,720 --> 00:03:25,720 +online are from uh 2016 Google translate + +85 +00:03:24,080 --> 00:03:27,560 +uh when Google translate was a lot less + +86 +00:03:25,720 --> 00:03:29,120 +good than it is now and so you have like + +87 +00:03:27,560 --> 00:03:31,760 +poor quality translations that were + +88 +00:03:29,120 --> 00:03:31,760 +automatically + +89 +00:03:33,040 --> 00:03:37,959 +a final problem is uh something that's + +90 +00:03:35,280 --> 00:03:40,360 +called exposure bias and exposure bias + +91 +00:03:37,959 --> 
00:03:44,000 +basically what it means is mle training + +92 +00:03:40,360 --> 00:03:46,000 +doesn't consider um the necessarity the + +93 +00:03:44,000 --> 00:03:48,599 +necessity for generation and it relies + +94 +00:03:46,000 --> 00:03:51,360 +on gold standard context so if we go + +95 +00:03:48,599 --> 00:03:54,159 +back to the mle equation when we're + +96 +00:03:51,360 --> 00:03:57,200 +calculating mle this y less than T is + +97 +00:03:54,159 --> 00:03:59,200 +always correct it's always a good output + +98 +00:03:57,200 --> 00:04:01,439 +and so what the model does is it learns + +99 +00:03:59,200 --> 00:04:04,280 +to over rely on good + +100 +00:04:01,439 --> 00:04:06,079 +outputs and one example of a problem + +101 +00:04:04,280 --> 00:04:08,360 +that this causes is models tend to + +102 +00:04:06,079 --> 00:04:10,560 +repeat themselves over and over again + +103 +00:04:08,360 --> 00:04:12,319 +for example um when you use some + +104 +00:04:10,560 --> 00:04:15,079 +generation algorithms and the reason why + +105 +00:04:12,319 --> 00:04:18,519 +this happens is because in a gold + +106 +00:04:15,079 --> 00:04:22,079 +standard output if a word has appeared + +107 +00:04:18,519 --> 00:04:25,840 +previously that word is more likely to + +108 +00:04:22,079 --> 00:04:28,560 +happen next so like if you say um like I + +109 +00:04:25,840 --> 00:04:29,759 +am going um I am going to Pittsburgh + +110 +00:04:28,560 --> 00:04:31,880 +you're much more likely to say + +111 +00:04:29,759 --> 00:04:33,000 +Pittsburgh again in the future because + +112 +00:04:31,880 --> 00:04:35,720 +you're talking about Pittsburgh + +113 +00:04:33,000 --> 00:04:37,400 +topically as coherent so what you get is + +114 +00:04:35,720 --> 00:04:38,639 +you get mle trained models saying I'm + +115 +00:04:37,400 --> 00:04:40,160 +going to Pittsburgh I am going to + +116 +00:04:38,639 --> 00:04:41,680 +Pittsburgh I am going to Pittsburgh I + +117 +00:04:40,160 --> 00:04:45,280 +going to Pittsburgh you've probably seen + +118 +00:04:41,680 --> 00:04:47,320 +this before uh at some point and so um + +119 +00:04:45,280 --> 00:04:49,320 +exposure bias is basically that the + +120 +00:04:47,320 --> 00:04:51,039 +model has never been exposed to mistakes + +121 +00:04:49,320 --> 00:04:55,240 +in the past and so it can't deal with + +122 +00:04:51,039 --> 00:04:56,840 +them so what this does is um if you have + +123 +00:04:55,240 --> 00:04:58,560 +an alternative training algorithm you + +124 +00:04:56,840 --> 00:05:02,120 +can fix this by generating a whole bunch + +125 +00:04:58,560 --> 00:05:04,880 +of outputs uh down like scoring some of + +126 +00:05:02,120 --> 00:05:06,880 +them poorly and penalizing the model for + +127 +00:05:04,880 --> 00:05:09,960 +uh generating po outputs and so that can + +128 +00:05:06,880 --> 00:05:09,960 +fix these problems as + +129 +00:05:10,800 --> 00:05:18,440 +well uh any questions about this all + +130 +00:05:15,199 --> 00:05:20,800 +good Okay cool so now I'd like to get + +131 +00:05:18,440 --> 00:05:23,919 +into how we measure how good an output + +132 +00:05:20,800 --> 00:05:26,360 +is and there's different ways of doing + +133 +00:05:23,919 --> 00:05:30,319 +this um the first one is objective + +134 +00:05:26,360 --> 00:05:32,680 +assessment so for some uh tasks or for + +135 +00:05:30,319 --> 00:05:35,400 +many tasks there's kind of objectively a + +136 +00:05:32,680 --> 00:05:37,280 +correct answer there's also human + +137 +00:05:35,400 --> 00:05:40,360 +subjective annotations so you can 
ask + +138 +00:05:37,280 --> 00:05:42,919 +humans to do annotation for you there's + +139 +00:05:40,360 --> 00:05:45,400 +machine prediction of human + +140 +00:05:42,919 --> 00:05:48,319 +preferences and there's also use in + +141 +00:05:45,400 --> 00:05:50,840 +another system in a downstream + +142 +00:05:48,319 --> 00:05:52,960 +task so the way objective assessment + +143 +00:05:50,840 --> 00:05:54,919 +works is you have an annotated correct + +144 +00:05:52,960 --> 00:05:57,080 +answer in match against this so like if + +145 +00:05:54,919 --> 00:06:00,600 +you're solving math problems uh + +146 +00:05:57,080 --> 00:06:02,560 +answering objective questions and and + +147 +00:06:00,600 --> 00:06:04,280 +you know you can pick any arbitrary + +148 +00:06:02,560 --> 00:06:06,840 +example you can pick your classification + +149 +00:06:04,280 --> 00:06:09,800 +example from uh like your text + +150 +00:06:06,840 --> 00:06:11,880 +classification tasks an even clearer + +151 +00:06:09,800 --> 00:06:13,880 +example is if you have math problems + +152 +00:06:11,880 --> 00:06:15,639 +there's kind of objectively one answer + +153 +00:06:13,880 --> 00:06:18,080 +to any math problem and there's no other + +154 +00:06:15,639 --> 00:06:19,680 +answer that could be correct so this + +155 +00:06:18,080 --> 00:06:21,160 +makes your life easy if you're handling + +156 +00:06:19,680 --> 00:06:22,560 +this type of problem but of course + +157 +00:06:21,160 --> 00:06:24,120 +there's many other types of problems we + +158 +00:06:22,560 --> 00:06:26,039 +want to handle that don't have objective + +159 +00:06:24,120 --> 00:06:29,039 +answers like + +160 +00:06:26,039 --> 00:06:31,440 +this so let's say we're handling a gener + +161 +00:06:29,039 --> 00:06:34,680 +a generation task where we don't have an + +162 +00:06:31,440 --> 00:06:36,360 +objective answer um in this Cas kind of + +163 +00:06:34,680 --> 00:06:39,440 +one of our gold standards is human + +164 +00:06:36,360 --> 00:06:42,360 +evaluation so we might have a source + +165 +00:06:39,440 --> 00:06:44,919 +input like a prompt or an input text for + +166 +00:06:42,360 --> 00:06:47,240 +machine translation we have one or + +167 +00:06:44,919 --> 00:06:49,960 +several hypotheses and we ask a human + +168 +00:06:47,240 --> 00:06:53,280 +annotator to basically give uh a score + +169 +00:06:49,960 --> 00:06:55,759 +for them or do some sort of other + +170 +00:06:53,280 --> 00:06:59,759 +annotation and the different varieties + +171 +00:06:55,759 --> 00:07:03,080 +of annotation that we can give are um + +172 +00:06:59,759 --> 00:07:04,599 +something called direct assessment so uh + +173 +00:07:03,080 --> 00:07:06,599 +direct assessment is a term that comes + +174 +00:07:04,599 --> 00:07:09,280 +from machine translation uh so you might + +175 +00:07:06,599 --> 00:07:11,039 +not see it used uh lots of other places + +176 +00:07:09,280 --> 00:07:13,120 +but it's basically just give a score + +177 +00:07:11,039 --> 00:07:15,759 +directly to how good the output is so + +178 +00:07:13,120 --> 00:07:17,199 +you can say like if you say please send + +179 +00:07:15,759 --> 00:07:18,960 +this translation is please send this + +180 +00:07:17,199 --> 00:07:21,759 +package to Tokyo we give it a score of + +181 +00:07:18,960 --> 00:07:24,360 +two out of 10 or something like + +182 +00:07:21,759 --> 00:07:28,000 +this + +183 +00:07:24,360 --> 00:07:30,840 +so the the question here is like what + +184 +00:07:28,000 --> 00:07:32,400 +does like let's say I gave a score of + 
+185 +00:07:30,840 --> 00:07:34,520 +two out of 10 for please send this + +186 +00:07:32,400 --> 00:07:37,680 +package to Tokyo what score should I + +187 +00:07:34,520 --> 00:07:40,240 +give for please send a package to Tokyo + +188 +00:07:37,680 --> 00:07:42,360 +anyone have any ideas the the correct + +189 +00:07:40,240 --> 00:07:46,520 +answer is please send this package to + +190 +00:07:42,360 --> 00:07:48,000 +take out of eight out of 10 yeah but you + +191 +00:07:46,520 --> 00:07:50,440 +might disagree on that right it's kind + +192 +00:07:48,000 --> 00:07:52,159 +of like subjective um one of the + +193 +00:07:50,440 --> 00:07:54,039 +difficulties of direct assessment is + +194 +00:07:52,159 --> 00:07:55,520 +giving a number like this is pretty + +195 +00:07:54,039 --> 00:07:57,800 +difficult if you don't have a very clear + +196 +00:07:55,520 --> 00:07:59,720 +rubric and very skilled annotators and + +197 +00:07:57,800 --> 00:08:02,879 +it's hard to get consistency between + +198 +00:07:59,720 --> 00:08:04,400 +people when you do this so the advantage + +199 +00:08:02,879 --> 00:08:05,599 +is it kind of gives you an idea of how + +200 +00:08:04,400 --> 00:08:07,520 +good things are overall but the + +201 +00:08:05,599 --> 00:08:09,280 +disadvantage is it's more difficult to + +202 +00:08:07,520 --> 00:08:11,319 +annotate and get + +203 +00:08:09,280 --> 00:08:13,159 +consistency um another thing that I + +204 +00:08:11,319 --> 00:08:15,319 +should point out is often scores are + +205 +00:08:13,159 --> 00:08:18,680 +assigned separately based on desirable + +206 +00:08:15,319 --> 00:08:20,960 +traits so um we don't necessarily just + +207 +00:08:18,680 --> 00:08:23,479 +say how good is it we say how fluent is + +208 +00:08:20,960 --> 00:08:26,120 +it like is it fluent uh + +209 +00:08:23,479 --> 00:08:28,159 +English in Translation there's a concept + +210 +00:08:26,120 --> 00:08:30,720 +called adequacy which is how well does + +211 +00:08:28,159 --> 00:08:34,599 +the output reflect the input + +212 +00:08:30,720 --> 00:08:36,519 +semantics um and if you're assessing + +213 +00:08:34,599 --> 00:08:38,440 +translation systems actually it's common + +214 +00:08:36,519 --> 00:08:40,519 +to assess fluency without even looking + +215 +00:08:38,440 --> 00:08:43,200 +at the input because then you can just + +216 +00:08:40,519 --> 00:08:44,880 +say how fluent is it but for adequacy + +217 +00:08:43,200 --> 00:08:46,320 +you definitely need to understand the + +218 +00:08:44,880 --> 00:08:49,600 +input so you need to be a bilingual + +219 +00:08:46,320 --> 00:08:54,680 +speaker to be able to assess + +220 +00:08:49,600 --> 00:08:57,560 +that um factuality um and so factuality + +221 +00:08:54,680 --> 00:09:00,160 +is tricky um it can either be factuality + +222 +00:08:57,560 --> 00:09:03,880 +grounded in a particular input text in + +223 +00:09:00,160 --> 00:09:05,600 +which case um the facts would have to be + +224 +00:09:03,880 --> 00:09:07,680 +you know things that were said in the + +225 +00:09:05,600 --> 00:09:09,399 +input or it can be just kind of is the + +226 +00:09:07,680 --> 00:09:11,120 +statement factual in general in which + +227 +00:09:09,399 --> 00:09:13,720 +case you need to go online you need to + +228 +00:09:11,120 --> 00:09:16,480 +search for things and like uh check + +229 +00:09:13,720 --> 00:09:18,480 +whether the statement is factual or not + +230 +00:09:16,480 --> 00:09:20,480 +um other things are like coherence does + +231 +00:09:18,480 --> 00:09:21,480 +the output 
fit coherently within the + +232 +00:09:20,480 --> 00:09:23,680 +larger + +233 +00:09:21,480 --> 00:09:25,680 +discs um and there's many many other + +234 +00:09:23,680 --> 00:09:28,120 +ones of these this is also task + +235 +00:09:25,680 --> 00:09:29,760 +dependent so like the things you will + +236 +00:09:28,120 --> 00:09:31,000 +evaluate for machine transl are + +237 +00:09:29,760 --> 00:09:32,880 +different than the ones you would do for + +238 +00:09:31,000 --> 00:09:35,760 +dialog which are different than the ones + +239 +00:09:32,880 --> 00:09:38,200 +you would do for a general purpose + +240 +00:09:35,760 --> 00:09:41,279 +chatot uh which is different kind things + +241 +00:09:38,200 --> 00:09:44,120 +you would do for um summarization for + +242 +00:09:41,279 --> 00:09:46,320 +example so if you're interested in doing + +243 +00:09:44,120 --> 00:09:47,519 +something like this uh then I definitely + +244 +00:09:46,320 --> 00:09:48,800 +encourage you to look at what other + +245 +00:09:47,519 --> 00:09:51,399 +people have done for the tasks you're + +246 +00:09:48,800 --> 00:09:53,079 +interested in uh previously and uh find + +247 +00:09:51,399 --> 00:09:54,880 +out the different types of traits that + +248 +00:09:53,079 --> 00:09:58,320 +did + +249 +00:09:54,880 --> 00:10:00,760 +last uh any any questions about this + +250 +00:09:58,320 --> 00:10:03,079 +also + +251 +00:10:00,760 --> 00:10:06,920 +okay the next type of feedback is + +252 +00:10:03,079 --> 00:10:09,839 +preference ratings um and so this is uh + +253 +00:10:06,920 --> 00:10:12,600 +basically what you do is you have two or + +254 +00:10:09,839 --> 00:10:14,240 +more outputs from different models or + +255 +00:10:12,600 --> 00:10:16,440 +different Generations from an individual + +256 +00:10:14,240 --> 00:10:18,839 +model and you ask a human which one is + +257 +00:10:16,440 --> 00:10:22,320 +better like is one better than the other + +258 +00:10:18,839 --> 00:10:23,839 +or are they tied and so in this case um + +259 +00:10:22,320 --> 00:10:26,320 +you might have please send this package + +260 +00:10:23,839 --> 00:10:28,880 +to Tokyo please send a package to + +261 +00:10:26,320 --> 00:10:31,040 +Tokyo we might disagree on how like good + +262 +00:10:28,880 --> 00:10:33,959 +or bad each of them are but I think most + +263 +00:10:31,040 --> 00:10:35,959 +people would agree that this one is like + +264 +00:10:33,959 --> 00:10:37,480 +despite the fact that it got this wrong + +265 +00:10:35,959 --> 00:10:40,160 +the second one is better than the first + +266 +00:10:37,480 --> 00:10:42,240 +one so this is a little bit of an easier + +267 +00:10:40,160 --> 00:10:45,040 +task it's easier to uh get people to + +268 +00:10:42,240 --> 00:10:46,839 +annotate these things + +269 +00:10:45,040 --> 00:10:50,519 +consistently however it has the + +270 +00:10:46,839 --> 00:10:52,839 +disadvantage that you can't really tell + +271 +00:10:50,519 --> 00:10:55,360 +uh whether systems are really good or + +272 +00:10:52,839 --> 00:10:57,200 +really bad so let's say you have a bunch + +273 +00:10:55,360 --> 00:11:00,279 +of really bad systems that you're + +274 +00:10:57,200 --> 00:11:01,839 +comparing with each other um you might + +275 +00:11:00,279 --> 00:11:03,680 +find that one is better than the other + +276 +00:11:01,839 --> 00:11:06,000 +but that still doesn't mean it's ready + +277 +00:11:03,680 --> 00:11:07,399 +to be deployed or if you have a bunch of + +278 +00:11:06,000 --> 00:11:11,040 +really good systems they're all + +279 
+00:11:07,399 --> 00:11:13,000 +basically you know very very similar to + +280 +00:11:11,040 --> 00:11:14,399 +another but one is like slightly more + +281 +00:11:13,000 --> 00:11:18,639 +fluent than the other you might still + +282 +00:11:14,399 --> 00:11:20,680 +get a similar result um and so that also + +283 +00:11:18,639 --> 00:11:22,760 +makes it uh you know a little bit + +284 +00:11:20,680 --> 00:11:24,880 +difficult to use practically in some + +285 +00:11:22,760 --> 00:11:27,040 +ways I didn't put it on the slide but + +286 +00:11:24,880 --> 00:11:30,680 +there's another way you can kind of get + +287 +00:11:27,040 --> 00:11:33,920 +the best of both worlds um which is a + +288 +00:11:30,680 --> 00:11:35,560 +side by side assessment and side by-side + +289 +00:11:33,920 --> 00:11:38,440 +assessment basically what you would do + +290 +00:11:35,560 --> 00:11:40,560 +is you would say um please send this + +291 +00:11:38,440 --> 00:11:43,399 +package to Tokyo please send a package + +292 +00:11:40,560 --> 00:11:47,279 +to Pittsburgh give each of them a direct + +293 +00:11:43,399 --> 00:11:48,839 +score um but you can use decimal places + +294 +00:11:47,279 --> 00:11:51,120 +and you can't use the same score for all + +295 +00:11:48,839 --> 00:11:55,920 +of them and so it's + +296 +00:11:51,120 --> 00:11:57,480 +like five 500 and 4.99 out of five or + +297 +00:11:55,920 --> 00:11:59,519 +something like that like you like one + +298 +00:11:57,480 --> 00:12:02,639 +slightly better than the other or or + +299 +00:11:59,519 --> 00:12:04,480 +something like that um so there are ways + +300 +00:12:02,639 --> 00:12:07,240 +to kind of get Best of Both Worlds if + +301 +00:12:04,480 --> 00:12:11,720 +you're interested in doing + +302 +00:12:07,240 --> 00:12:11,720 +that um + +303 +00:12:14,920 --> 00:12:20,519 +so one problem one other problem with + +304 +00:12:18,279 --> 00:12:22,519 +preference rankings is that there's a + +305 +00:12:20,519 --> 00:12:24,440 +limited number of things that humans can + +306 +00:12:22,519 --> 00:12:28,160 +compare before they get really + +307 +00:12:24,440 --> 00:12:32,360 +overwhelmed so if you say I + +308 +00:12:28,160 --> 00:12:35,560 +want like I want to + +309 +00:12:32,360 --> 00:12:36,920 +rate 15 systems or 20 systems with + +310 +00:12:35,560 --> 00:12:39,120 +respect to how good they are with + +311 +00:12:36,920 --> 00:12:40,639 +respect to each other it's going to be + +312 +00:12:39,120 --> 00:12:43,680 +impossible for humans to come up with a + +313 +00:12:40,639 --> 00:12:46,959 +good preference ranking between them and + +314 +00:12:43,680 --> 00:12:49,480 +so the typical way around this um which + +315 +00:12:46,959 --> 00:12:52,360 +is also used in uh things like the + +316 +00:12:49,480 --> 00:12:55,440 +chatbot Arena by lmis and other things + +317 +00:12:52,360 --> 00:12:58,720 +like this is to use uh something like an + +318 +00:12:55,440 --> 00:13:00,959 +ELO or true skill ranking and what these + +319 +00:12:58,720 --> 00:13:03,079 +are is these are things that were + +320 +00:13:00,959 --> 00:13:05,760 +created for the ranking of like chess + +321 +00:13:03,079 --> 00:13:09,160 +players or video game players or other + +322 +00:13:05,760 --> 00:13:11,720 +things where they like b battle against + +323 +00:13:09,160 --> 00:13:13,920 +each other in multiple matches uh + +324 +00:13:11,720 --> 00:13:16,440 +pair-wise and then you put all of the + +325 +00:13:13,920 --> 00:13:18,399 +wins and losses into these ranking + +326 +00:13:16,440 
--> 00:13:20,600
+algorithms and they give you a score
+
+327
+00:13:18,399 --> 00:13:22,920
+about how good each of
+
+328
+00:13:20,600 --> 00:13:27,079
+the players are so if you do something
+
+329
+00:13:22,920 --> 00:13:29,480
+like this you can um get basically a
+
+330
+00:13:27,079 --> 00:13:32,120
+ranking of systems despite the fact that you
+
+331
+00:13:29,480 --> 00:13:35,240
+only did pairwise assessments so these
+
+332
+00:13:32,120 --> 00:13:35,240
+are also a good thing to know
+
+333
+00:13:37,399 --> 00:13:43,839
+about a final variety of human feedback
+
+334
+00:13:40,600 --> 00:13:45,320
+uh that we create is error annotation
+
+335
+00:13:43,839 --> 00:13:47,519
+and this can be useful for a number of
+
+336
+00:13:45,320 --> 00:13:49,839
+reasons um but basically the way it
+
+337
+00:13:47,519 --> 00:13:53,839
+works is you annotate individual errors
+
+338
+00:13:49,839 --> 00:13:55,639
+within the outputs and um oh one thing I
+
+339
+00:13:53,839 --> 00:13:58,120
+should mention is that um I'm giving a
+
+340
+00:13:55,639 --> 00:14:00,880
+lot of examples from machine translation
+
+341
+00:13:58,120 --> 00:14:02,800
+um I feel like machine translation has
+
+342
+00:14:00,880 --> 00:14:04,519
+been doing evaluation of generated
+
+343
+00:14:02,800 --> 00:14:07,600
+outputs for a lot longer than a lot of
+
+344
+00:14:04,519 --> 00:14:09,000
+other uh fields of NLP have and
+
+345
+00:14:07,600 --> 00:14:11,800
+therefore their methodology is more
+
+346
+00:14:09,000 --> 00:14:13,480
+developed than a lot of other fields um
+
+347
+00:14:11,800 --> 00:14:16,199
+but a lot of these things can also be
+
+348
+00:14:13,480 --> 00:14:18,079
+applied to uh other tasks as
+
+349
+00:14:16,199 --> 00:14:19,079
+well but anyway getting back to this
+
+350
+00:14:18,079 --> 00:14:20,680
+there's something for machine
+
+351
+00:14:19,079 --> 00:14:23,639
+translation called multidimensional
+
+352
+00:14:20,680 --> 00:14:26,240
+quality metrics and the multidimensional
+
+353
+00:14:23,639 --> 00:14:29,160
+quality metrics basically what they do
+
+354
+00:14:26,240 --> 00:14:32,199
+is they annotate spans in the output
+
+355
+00:14:29,160 --> 00:14:34,800
+where each span in the output is given a
+
+356
+00:14:32,199 --> 00:14:38,079
+severity ranking of the error and it's
+
+357
+00:14:34,800 --> 00:14:40,199
+given a type of the error and there's
+
+358
+00:14:38,079 --> 00:14:42,600
+about eight different types of errors
+
+359
+00:14:40,199 --> 00:14:44,839
+so for example this
+
+360
+00:14:42,600 --> 00:14:47,399
+violates linguistic conventions by
+
+361
+00:14:44,839 --> 00:14:49,880
+using the word this instead of
+
+362
+00:14:47,399 --> 00:14:51,639
+the word here
+
+363
+00:14:49,880 --> 00:14:55,079
+and then this is an accuracy error
+
+364
+00:14:51,639 --> 00:14:57,839
+because it's not accurately
+
+365
+00:14:55,079 --> 00:15:01,720
+conveying the source meaning and then this error
+
+366
+00:14:57,839 --> 00:15:04,600
+is minor uh this error is major um and
+
+367
+00:15:01,720 --> 00:15:06,399
+then there's also like severe
+
+368
+00:15:04,600 --> 00:15:07,440
+versus major but minor and major is a
+
+369
+00:15:06,399 --> 00:15:09,680
+more important
+
+370
+00:15:07,440 --> 00:15:11,839
+distinction um so the advantage of this
+
+371
+00:15:09,680 --> 00:15:14,279
+is a couple-fold number one it gives you
+
+372
+00:15:11,839 --> 00:15:16,440
+more fine-grained
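Since these rating systems come up a lot, here is a toy sketch of an Elo-style update rule applied to pairwise system preferences of the kind the chatbot arena setup described above collects; the system names, match outcomes, and K-factor are invented for illustration, and this is not the exact algorithm any particular leaderboard uses.

```python
from collections import defaultdict

K = 16  # update step size (hypothetical choice)
rating = defaultdict(lambda: 1000.0)

def update(winner, loser):
    # expected probability that `winner` beats `loser` given current ratings
    exp_win = 1.0 / (1 + 10 ** ((rating[loser] - rating[winner]) / 400))
    rating[winner] += K * (1 - exp_win)
    rating[loser] -= K * (1 - exp_win)

# pairwise human judgments: (preferred system, dispreferred system)
for w, l in [("sysA", "sysB"), ("sysA", "sysC"), ("sysB", "sysC"),
             ("sysA", "sysB"), ("sysC", "sysB")]:
    update(w, l)

print(sorted(rating.items(), key=lambda kv: -kv[1]))  # global ranking
```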
feedback uh in that + +373 +00:15:14,279 --> 00:15:19,199 +you can say okay this system has a lot + +374 +00:15:16,440 --> 00:15:22,199 +of uh accuracy errors this system has a + +375 +00:15:19,199 --> 00:15:24,880 +lot of linguistic conventions errors um + +376 +00:15:22,199 --> 00:15:28,600 +it also can be more consistent because + +377 +00:15:24,880 --> 00:15:29,839 +if you just say to people which output + +378 +00:15:28,600 --> 00:15:31,800 +is better + +379 +00:15:29,839 --> 00:15:34,560 +or what is the score of this output + +380 +00:15:31,800 --> 00:15:36,360 +people have trouble deciding about that + +381 +00:15:34,560 --> 00:15:39,560 +because it's a more subjective + +382 +00:15:36,360 --> 00:15:41,680 +evaluation but if I say is this word + +383 +00:15:39,560 --> 00:15:43,000 +correct it's a little bit easier for + +384 +00:15:41,680 --> 00:15:44,759 +people to do so you can get more + +385 +00:15:43,000 --> 00:15:46,920 +consistent annotations + +386 +00:15:44,759 --> 00:15:49,720 +here the problem with this is this can + +387 +00:15:46,920 --> 00:15:50,839 +be very time consuming so um you know + +388 +00:15:49,720 --> 00:15:52,480 +obviously you need to go through and + +389 +00:15:50,839 --> 00:15:56,440 +annotate every single error if it's for + +390 +00:15:52,480 --> 00:15:56,440 +a long outputs or something your + +391 +00:15:56,959 --> 00:16:03,519 +problem so anyway these are just three + +392 +00:15:59,800 --> 00:16:05,680 +uh ways of collecting human feedback um + +393 +00:16:03,519 --> 00:16:08,639 +and then there's an alternative which is + +394 +00:16:05,680 --> 00:16:10,079 +automatic evaluation of outputs and um + +395 +00:16:08,639 --> 00:16:14,399 +there's a bunch of different ways we can + +396 +00:16:10,079 --> 00:16:16,800 +do this the basic idea here is we have a + +397 +00:16:14,399 --> 00:16:20,199 +source um we have a couple + +398 +00:16:16,800 --> 00:16:22,800 +hypotheses and uh we have an automatic + +399 +00:16:20,199 --> 00:16:26,000 +system that generates outputs uh like + +400 +00:16:22,800 --> 00:16:28,279 +scores and we optionally have a + +401 +00:16:26,000 --> 00:16:30,839 +reference output so the reference output + +402 +00:16:28,279 --> 00:16:33,519 +is a human created gold standard output + +403 +00:16:30,839 --> 00:16:35,120 +with respect to how good that um uh with + +404 +00:16:33,519 --> 00:16:38,240 +respect to like what the output should + +405 +00:16:35,120 --> 00:16:38,240 +be in an ideal + +406 +00:16:38,279 --> 00:16:47,079 +case and basically the goal of automatic + +407 +00:16:43,199 --> 00:16:50,199 +evaluation is to + +408 +00:16:47,079 --> 00:16:52,839 +predict human preferences or to predict + +409 +00:16:50,199 --> 00:16:56,240 +what the human scores would be um + +410 +00:16:52,839 --> 00:16:58,600 +because still at this point um we mostly + +411 +00:16:56,240 --> 00:16:59,480 +view what humans think of the output to + +412 +00:16:58,600 --> 00:17:01,680 +be + +413 +00:16:59,480 --> 00:17:03,280 +uh kind of the + +414 +00:17:01,680 --> 00:17:06,199 +standard + +415 +00:17:03,280 --> 00:17:08,439 +and this is called a variety of things + +416 +00:17:06,199 --> 00:17:10,600 +depending on what field you're in um in + +417 +00:17:08,439 --> 00:17:12,559 +machine translation and summarization + +418 +00:17:10,600 --> 00:17:13,520 +it's called automatic evaluation also a + +419 +00:17:12,559 --> 00:17:16,520 +lot in + +420 +00:17:13,520 --> 00:17:18,400 +dialogue um if you're talking about + +421 +00:17:16,520 --> 00:17:21,000 
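As a rough illustration of how the span annotations described above might be turned into a single system score, here is a hypothetical MQM-style aggregation; the severity weights below (minor = 1, major = 5) follow common MQM practice rather than numbers stated in the lecture.

```python
# each annotated error span carries an error category and a severity
SEVERITY_WEIGHT = {"minor": 1, "major": 5}

errors = [
    {"span": "this", "category": "accuracy", "severity": "major"},
    {"span": "uh", "category": "linguistic-conventions", "severity": "minor"},
]

# higher total penalty means a worse output
penalty = sum(SEVERITY_WEIGHT[e["severity"]] for e in errors)
print(f"MQM penalty: {penalty}")
```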
+people from reinforcement learning or
+
+422
+00:17:18,400 --> 00:17:24,600
+other things um or chatbots or things
+
+423
+00:17:21,000 --> 00:17:28,240
+like that uh or like
+
+424
+00:17:24,600 --> 00:17:31,280
+AGI or whatever um a lot of people call
+
+425
+00:17:28,240 --> 00:17:32,520
+it a reward model um because that
+
+426
+00:17:31,280 --> 00:17:34,480
+specifically comes from the point of
+
+427
+00:17:32,520 --> 00:17:36,440
+view of like learning from this feedback
+
+428
+00:17:34,480 --> 00:17:37,960
+but essentially they're the same thing
+
+429
+00:17:36,440 --> 00:17:41,080
+uh from my point of view they're trying
+
+430
+00:17:37,960 --> 00:17:42,520
+to predict how good an output is and how
+
+431
+00:17:41,080 --> 00:17:44,240
+much you should reward the model for
+
+432
+00:17:42,520 --> 00:17:46,559
+producing that
+
+433
+00:17:44,240 --> 00:17:48,679
+output
+
+434
+00:17:46,559 --> 00:17:50,520
+um so there's a bunch of different
+
+435
+00:17:48,679 --> 00:17:51,720
+methods to do this I'm not going to
+
+436
+00:17:50,520 --> 00:17:53,799
+cover all of them I'm just going to
+
+437
+00:17:51,720 --> 00:17:55,240
+cover three paradigms for doing this so
+
+438
+00:17:53,799 --> 00:17:57,880
+you know where to look further if you're
+
+439
+00:17:55,240 --> 00:18:00,039
+interested in doing these things um the
+
+440
+00:17:57,880 --> 00:18:02,400
+first one is embedding-based
+
+441
+00:18:00,039 --> 00:18:04,679
+evaluation and the way embedding-based
+
+442
+00:18:02,400 --> 00:18:06,600
+evaluation works is usually it's an
+
+443
+00:18:04,679 --> 00:18:11,400
+unsupervised calculation based on
+
+444
+00:18:06,600 --> 00:18:14,880
+embedding similarity between um
+
+445
+00:18:11,400 --> 00:18:18,080
+the output that the model generated and
+
+446
+00:18:14,880 --> 00:18:20,840
+a reference output that uh you have
+
+447
+00:18:18,080 --> 00:18:23,400
+created so sorry this is very small but
+
+448
+00:18:20,840 --> 00:18:25,559
+we have a reference here that says the
+
+449
+00:18:23,400 --> 00:18:27,640
+weather is cold today and we have a
+
+450
+00:18:25,559 --> 00:18:30,240
+candidate that says it is freezing today
+
+451
+00:18:27,640 --> 00:18:33,000
+so this is probably you know like a good
+
+452
+00:18:30,240 --> 00:18:35,480
+um a reasonably good
+
+453
+00:18:33,000 --> 00:18:37,640
+output and we run this through some
+
+454
+00:18:35,480 --> 00:18:39,120
+embedding model uh this method is called
+
+455
+00:18:37,640 --> 00:18:40,679
+BERTScore and so of course you can run it
+
+456
+00:18:39,120 --> 00:18:42,240
+through BERT but basically it can be any
+
+457
+00:18:40,679 --> 00:18:43,799
+embedding model that gives you an embedding
+
+458
+00:18:42,240 --> 00:18:46,200
+for each token in the
+
+459
+00:18:43,799 --> 00:18:49,720
+sequence and so there are five tokens in
+
+460
+00:18:46,200 --> 00:18:51,960
+this sequence four tokens in this
+
+461
+00:18:49,720 --> 00:18:54,799
+sequence you get five tokens and then
+
+462
+00:18:51,960 --> 00:18:57,400
+four sorry five embeddings and then four
+
+463
+00:18:54,799 --> 00:18:59,880
+embeddings you calculate pairwise cosine
+
+464
+00:18:57,400 --> 00:19:03,480
+similarity between all of them and this
+
+465
+00:18:59,880 --> 00:19:06,480
+gives you a cosine
+
+466
+00:19:03,480 --> 00:19:09,120
+similarity matrix and then you take the
+
+467
+00:19:06,480 --> 00:19:11,280
+argmax or you take the maximum
+
+468
+00:19:09,120 --> 00:19:15,799
+similarity along either the
+
+469
+00:19:09,120 --> 00:19:15,799
+rows or the
+
+470
+00:19:11,280 --> 00:19:19,559
+columns and here the rows correspond
+
+471
+00:19:15,799 --> 00:19:22,400
+to tokens in the reference and because
+
+472
+00:19:19,559 --> 00:19:24,039
+the rows correspond to tokens in the
+
+473
+00:19:22,400 --> 00:19:26,960
+reference
+
+474
+00:19:24,039 --> 00:19:28,320
+how well you find something that is
+
+475
+00:19:26,960 --> 00:19:31,679
+similar to each of the tokens in the
+
+476
+00:19:28,320 --> 00:19:34,000
+reference is like a recall-based method
+
+477
+00:19:31,679 --> 00:19:35,919
+because it's saying how many tokens in
+
+478
+00:19:34,000 --> 00:19:39,520
+the reference have a good match in the
+
+479
+00:19:35,919 --> 00:19:41,120
+output and then if you look at the
+
+480
+00:19:39,520 --> 00:19:42,799
+columns if you look at the max in the
+
+481
+00:19:41,120 --> 00:19:44,960
+columns this is like a precision-based
+
+482
+00:19:42,799 --> 00:19:47,000
+metric because it's saying how many of
+
+483
+00:19:44,960 --> 00:19:49,360
+the things in the output
+
+484
+00:19:47,000 --> 00:19:51,240
+have a similar match in the reference so
+
+485
+00:19:49,360 --> 00:19:54,480
+basically you can calculate recall and
+
+486
+00:19:51,240 --> 00:19:56,200
+precision over all of the tokens and
+
+487
+00:19:54,480 --> 00:20:00,200
+then feed this into something that looks
+
+488
+00:19:56,200 --> 00:20:02,400
+like F-measure and you can also use TF-IDF
+
+489
+00:20:00,200 --> 00:20:06,000
+weighting um like what I talked about in
+
+490
+00:20:02,400 --> 00:20:07,799
+the RAG lecture uh to upweight low
+
+491
+00:20:06,000 --> 00:20:09,520
+frequency words because low frequency
+
+492
+00:20:07,799 --> 00:20:11,440
+words tend to be more content words and
+
+493
+00:20:09,520 --> 00:20:13,120
+going back to my example you know if you
+
+494
+00:20:11,440 --> 00:20:14,280
+make a mistake from Pittsburgh to Tokyo
+
+495
+00:20:13,120 --> 00:20:17,880
+that's going to be more painful than
+
+496
+00:20:14,280 --> 00:20:21,000
+making a mistake from this to a um so
+
+497
+00:20:17,880 --> 00:20:22,520
+actually if you were paying
+
+498
+00:20:21,000 --> 00:20:25,480
+close attention to the RAG lecture this
+
+499
+00:20:22,520 --> 00:20:27,360
+looks really similar to the ColBERT um
+
+500
+00:20:25,480 --> 00:20:29,559
+the ColBERT retrieval objective that I
+
+501
+00:20:27,360 --> 00:20:30,960
+talked about in the RAG lecture um I don't
+
+502
+00:20:29,559 --> 00:20:32,840
+think it's a coincidence they both came
+
+503
+00:20:30,960 --> 00:20:34,360
+out around the same time uh so people
+
+504
+00:20:32,840 --> 00:20:36,360
+were thinking about the same thing but
+
+505
+00:20:34,360 --> 00:20:37,600
+um this is one method that's pretty
+
+506
+00:20:36,360 --> 00:20:40,200
+widely
+
+507
+00:20:37,600 --> 00:20:43,480
+used the BERTScore codebase is also
+
+508
+00:20:40,200 --> 00:20:45,440
+really nice and easy to use so um if
+
+509
+00:20:43,480 --> 00:20:47,640
+you want to try it out feel free to take
+
+510
+00:20:45,440 --> 00:20:47,640
+a look
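A small numerical sketch of that recall/precision/F-measure computation, with random vectors standing in for real BERT token embeddings (the actual BERTScore library also handles tokenization, the TF-IDF-style weighting mentioned above, and more):

```python
import numpy as np

ref = np.random.randn(5, 768)   # 5 reference-token embeddings (stand-ins)
cand = np.random.randn(4, 768)  # 4 candidate-token embeddings (stand-ins)

# normalize rows so dot products are cosine similarities
ref /= np.linalg.norm(ref, axis=1, keepdims=True)
cand /= np.linalg.norm(cand, axis=1, keepdims=True)
sim = ref @ cand.T              # (5, 4) token-token similarity matrix

recall = sim.max(axis=1).mean()     # best match for each reference token
precision = sim.max(axis=0).mean()  # best match for each candidate token
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```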
+511
+00:20:48,159 --> 00:20:53,840
+cool um the next one I'd like to
+
+512
+00:20:51,600 --> 00:20:56,080
+talk about is regression-based
+
+513
+00:20:53,840 --> 00:20:58,760
+evaluation and the way this works is
+
+514
+00:20:56,080 --> 00:21:02,600
+this is usually used in a supervised
+
+515
+00:20:58,760 --> 00:21:04,320
+setting so uh what you have to
+
+516
+00:21:02,600 --> 00:21:07,600
+do is you have to collect a whole
+
+517
+00:21:04,320 --> 00:21:09,799
+bunch of like actual human
+
+518
+00:21:07,600 --> 00:21:12,440
+judgments and
+
+519
+00:21:09,799 --> 00:21:15,000
+usually these judgments can either be
+
+520
+00:21:12,440 --> 00:21:16,960
+direct assessment uh where you actually
+
+521
+00:21:15,000 --> 00:21:19,120
+have a score or they can be pairwise
+
+522
+00:21:16,960 --> 00:21:20,840
+judgments and then if you have direct
+
+523
+00:21:19,120 --> 00:21:23,640
+assessment you use a regression-based
+
+524
+00:21:20,840 --> 00:21:26,039
+loss like uh mean squared error if
+
+525
+00:21:23,640 --> 00:21:27,520
+you have pairwise uh you use a ranking-
+
+526
+00:21:26,039 --> 00:21:29,039
+based loss that tries to upweight the
+
+527
+00:21:27,520 --> 00:21:31,360
+ones that are higher scoring and downweight
+
+528
+00:21:29,039 --> 00:21:33,200
+the ones that are lower scoring one
+
+529
+00:21:31,360 --> 00:21:35,720
+typical example of this is COMET which
+
+530
+00:21:33,200 --> 00:21:37,200
+is or has been at least for a very long
+
+531
+00:21:35,720 --> 00:21:39,880
+time the state of the art in machine
+
+532
+00:21:37,200 --> 00:21:41,279
+translation evaluation and the reason
+
+533
+00:21:39,880 --> 00:21:43,440
+why it works so well is because we have
+
+534
+00:21:41,279 --> 00:21:44,720
+a bunch of evaluations for machine
+
+535
+00:21:43,440 --> 00:21:46,080
+translation they've been doing
+
+536
+00:21:44,720 --> 00:21:47,600
+evaluation of machine translation
+
+537
+00:21:46,080 --> 00:21:50,480
+systems for years and you can use that
+
+538
+00:21:47,600 --> 00:21:52,720
+as lots of supervised training data so
+
+539
+00:21:50,480 --> 00:21:54,640
+basically you just take um this
+
+540
+00:21:52,720 --> 00:21:56,440
+evaluation data you have human
+
+541
+00:21:54,640 --> 00:21:59,080
+annotations you have the output
+
+542
+00:21:56,440 --> 00:22:00,320
+according to a model like COMET um you
+
+543
+00:21:59,080 --> 00:22:02,679
+calculate the difference between them
+
+544
+00:22:00,320 --> 00:22:05,640
+and you update the model
+
+545
+00:22:02,679 --> 00:22:07,080
+parameters um this is great
+
+546
+00:22:05,640 --> 00:22:08,520
+if you have lots of training data the
+
+547
+00:22:07,080 --> 00:22:10,640
+problem with this is for a lot of tasks
+
+548
+00:22:08,520 --> 00:22:12,360
+we don't have lots of training data so
+
+549
+00:22:10,640 --> 00:22:14,720
+um you know training these is a little
+
+550
+00:22:12,360 --> 00:22:14,720
+bit less
+
+551
+00:22:15,400 --> 00:22:22,919
+feasible and now recently uh what we
+
+552
+00:22:19,600 --> 00:22:25,279
+have been moving into is QA-based
+
+553
+00:22:22,919 --> 00:22:27,120
+evaluation which is basically where we
+
+554
+00:22:25,279 --> 00:22:30,760
+ask a language model how good the output
+
+555
+00:22:27,120 --> 00:22:32,279
+is and so uh GEMBA is one of
+
+556
+00:22:30,760 --> 00:22:34,559
+the early examples of this for machine
+
+557
+00:22:32,279 --> 00:22:37,320
+translation evaluation uh where they
+
+558
+00:22:34,559 --> 00:22:39,840
+basically just ask GPT-4 to score
+
+559
+00:22:37,320 --> 00:22:41,600
+the following translation from source
+
+560
+00:22:39,840 --> 00:22:44,000
+language to target language with respect
+
+561
+00:22:41,600 --> 00:22:47,080
+to the human reference um on a
+
+562
+00:22:44,000 --> 00:22:49,200
+continuous scale from 0 to 100 uh where
+
+563
+00:22:47,080 --> 00:22:51,320
+a score of zero means no meaning
+
+564
+00:22:49,200 --> 00:22:54,039
+preserved and a score of 100 means
+
+565
+00:22:51,320 --> 00:22:56,880
+perfect meaning and grammar uh you feed
+
+566
+00:22:54,039 --> 00:22:58,760
+in the source um you feed in the
+
+567
+00:22:56,880 --> 00:23:01,000
+human reference optionally if you have a
+
+568
+00:22:58,760 --> 00:23:03,320
+human reference and then you feed in the
+
+569
+00:23:01,000 --> 00:23:06,760
+target um and you get a score
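Concretely, a GEMBA-style scoring prompt might look roughly like the following; the wording here is paraphrased from the description above rather than quoted from the paper, and the call to an actual language model is left out.

```python
# hypothetical prompt template in the spirit of GEMBA's direct assessment
prompt = (
    "Score the following translation from {src_lang} to {tgt_lang} "
    "with respect to the human reference on a continuous scale from 0 "
    "to 100, where a score of 0 means no meaning preserved and a score "
    "of 100 means perfect meaning and grammar.\n"
    "{src_lang} source: {source}\n"
    "{tgt_lang} human reference: {reference}\n"
    "{tgt_lang} translation: {hypothesis}\n"
    "Score:"
)
print(prompt.format(src_lang="English", tgt_lang="Japanese",
                    source="Please send this package to Tokyo.",
                    reference="...", hypothesis="..."))
```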
+570
+00:23:03,320 --> 00:23:09,919
+and um so this works pretty
+
+571
+00:23:06,760 --> 00:23:12,720
+well this can give you uh better results
+
+572
+00:23:09,919 --> 00:23:15,159
+um especially if you have a
+
+573
+00:23:12,720 --> 00:23:16,960
+strong language model the problem is
+
+574
+00:23:15,159 --> 00:23:18,279
+it's very unpredictable whether this is
+
+575
+00:23:16,960 --> 00:23:20,120
+going to work well and it's very
+
+576
+00:23:18,279 --> 00:23:23,039
+dependent on the prompt that you're
+
+577
+00:23:20,120 --> 00:23:25,279
+using so um right now a lot of people
+
+578
+00:23:23,039 --> 00:23:27,279
+are using GPT-4 without actually
+
+579
+00:23:25,279 --> 00:23:29,039
+validating whether it does a good job at
+
+580
+00:23:27,279 --> 00:23:33,080
+evaluation and
+
+581
+00:23:29,039 --> 00:23:34,919
+the results are all across the
+
+582
+00:23:33,080 --> 00:23:36,880
+board it can be anywhere from very very
+
+583
+00:23:34,919 --> 00:23:38,640
+good to very very bad at evaluating
+
+584
+00:23:36,880 --> 00:23:41,320
+particular tasks so I would be at least
+
+585
+00:23:38,640 --> 00:23:43,559
+a little bit suspicious of whether GPT-4
+
+586
+00:23:41,320 --> 00:23:45,679
+is doing a good job evaluating for your
+
+587
+00:23:43,559 --> 00:23:49,320
+task especially more complex
+
+588
+00:23:45,679 --> 00:23:51,960
+tasks um I would especially be
+
+589
+00:23:49,320 --> 00:23:54,000
+suspicious if you're doing any of
+
+590
+00:23:51,960 --> 00:23:56,760
+the two following things number one if
+
+591
+00:23:54,000 --> 00:23:59,880
+you're comparing GPT-4 or any model
+
+592
+00:23:56,760 --> 00:24:02,400
+against itself and another model because
+
+593
+00:23:59,880 --> 00:24:05,200
+GPT-4 really likes
+
+594
+00:24:02,400 --> 00:24:06,880
+GPT-4 it really likes its own outputs and
+
+595
+00:24:05,200 --> 00:24:08,120
+there are papers uh sorry I don't
+
+596
+00:24:06,880 --> 00:24:09,679
+actually have the references here but I
+
+597
+00:24:08,120 --> 00:24:11,200
+can follow up if people are interested
+
+598
+00:24:09,679 --> 00:24:13,080
+but there are papers that demonstrate
+
+599
+00:24:11,200 --> 00:24:15,799
+that GPT-4 likes you know its own
+
+600
+00:24:13,080 --> 00:24:19,200
+outputs more than others also if you're
+
+601
+00:24:15,799 --> 00:24:22,120
+explicitly optimizing the outputs using
+
+602
+00:24:19,200 --> 00:24:24,640
+RLHF um there is something called
+
+603
+00:24:22,120 --> 00:24:27,120
+Goodhart's Law which is basically anytime
+
+604
+00:24:24,640 --> 00:24:29,520
+you uh start optimizing towards a metric
+
+605
+00:24:27,120 --> 00:24:32,559
+it becomes a bad metric and that also
+
+606
+00:24:29,520 --> 00:24:35,000
+happens for GPT-4-based evaluations so if
+
+607
+00:24:32,559 --> 00:24:37,200
+you start optimizing for GPT-4-based
+
+608
+00:24:35,000 --> 00:24:38,960
+evaluations especially for reference-
+
+609
+00:24:37,200 --> 00:24:41,679
+less metrics that don't use a reference
+
+610
+00:24:38,960 --> 00:24:44,840
+output then
you start basically + +611 +00:24:41,679 --> 00:24:47,440 +exploiting the metric + +612 +00:24:44,840 --> 00:24:49,840 +um another thing that you can do with QA + +613 +00:24:47,440 --> 00:24:53,279 +based evaluation is ask about fine grade + +614 +00:24:49,840 --> 00:24:54,919 +mistakes and so this is a paper by um uh + +615 +00:24:53,279 --> 00:24:56,480 +Patrick Fernandez who's a student who's + +616 +00:24:54,919 --> 00:25:02,080 +working with me and basically what we + +617 +00:24:56,480 --> 00:25:05,240 +did is we asked the model to um not give + +618 +00:25:02,080 --> 00:25:07,360 +a particular score but actually identify + +619 +00:25:05,240 --> 00:25:08,880 +the mistakes in the output and when we + +620 +00:25:07,360 --> 00:25:10,559 +asked it to identify the mistakes in the + +621 +00:25:08,880 --> 00:25:13,720 +output we found that this gave more + +622 +00:25:10,559 --> 00:25:17,320 +consistent uh results so kind of + +623 +00:25:13,720 --> 00:25:18,840 +interestingly we ask humans to identify + +624 +00:25:17,320 --> 00:25:21,120 +individual mistakes and the output that + +625 +00:25:18,840 --> 00:25:24,240 +gives humans more consistent results + +626 +00:25:21,120 --> 00:25:25,559 +it's the same thing for gp4 so um that + +627 +00:25:24,240 --> 00:25:27,320 +that's another paper you can look at if + +628 +00:25:25,559 --> 00:25:29,640 +you're + +629 +00:25:27,320 --> 00:25:32,679 +interested + +630 +00:25:29,640 --> 00:25:38,000 +cool um so I I mentioned that you could + +631 +00:25:32,679 --> 00:25:38,000 +or could not uh trust uh yeah sorry go + +632 +00:25:44,679 --> 00:25:51,279 +ahead uh correct so yeah B basically + +633 +00:25:47,360 --> 00:25:53,279 +just what you do is you have the source + +634 +00:25:51,279 --> 00:25:54,960 +um ideally you'll also have a reference + +635 +00:25:53,279 --> 00:25:57,840 +output that was created by skilled + +636 +00:25:54,960 --> 00:25:59,720 +humans and then you put in the Target + +637 +00:25:57,840 --> 00:26:02,279 +you know output basically you have the + +638 +00:25:59,720 --> 00:26:08,000 +input ideally a reference output created + +639 +00:26:02,279 --> 00:26:08,000 +by Good by skilled humans and uh like + +640 +00:26:15,159 --> 00:26:20,240 +hypothesis yeah I + +641 +00:26:17,919 --> 00:26:24,559 +mean it's a good question and I don't + +642 +00:26:20,240 --> 00:26:26,919 +know if we actually have a a very clear + +643 +00:26:24,559 --> 00:26:31,399 +empirical like evidence of why this is + +644 +00:26:26,919 --> 00:26:33,320 +the case but my hypothesis about this is + +645 +00:26:31,399 --> 00:26:36,159 +yes we kind of would expect models to be + +646 +00:26:33,320 --> 00:26:38,200 +more biased towards their own outputs + +647 +00:26:36,159 --> 00:26:40,919 +and the reason why is because + +648 +00:26:38,200 --> 00:26:43,080 +essentially you know models + +649 +00:26:40,919 --> 00:26:44,279 +are within their embeddings they're + +650 +00:26:43,080 --> 00:26:45,760 +encoding when they're in a high + +651 +00:26:44,279 --> 00:26:47,600 +probability part of the space and when + +652 +00:26:45,760 --> 00:26:50,200 +they're in a low probability part of the + +653 +00:26:47,600 --> 00:26:51,120 +space and like the high probability part + +654 +00:26:50,200 --> 00:26:54,600 +of the + +655 +00:26:51,120 --> 00:26:56,200 +space is going to be the high + +656 +00:26:54,600 --> 00:26:58,600 +probability part of the space is going + +657 +00:26:56,200 --> 00:27:02,559 +to be associated with good outputs + +658 +00:26:58,600 --> 
00:27:07,000 +because like when + +659 +00:27:02,559 --> 00:27:08,600 +models are more sure of their outputs + +660 +00:27:07,000 --> 00:27:11,960 +they're more likely to be + +661 +00:27:08,600 --> 00:27:13,520 +good just because that indicates that + +662 +00:27:11,960 --> 00:27:15,240 +like they're closer to the training data + +663 +00:27:13,520 --> 00:27:17,760 +that it had and other things like that + +664 +00:27:15,240 --> 00:27:21,600 +so model probabilities are associated + +665 +00:27:17,760 --> 00:27:23,760 +with outputs uh with uh with good + +666 +00:27:21,600 --> 00:27:26,600 +outputs but just + +667 +00:27:23,760 --> 00:27:29,440 +correla separately from + +668 +00:27:26,600 --> 00:27:32,120 +that I believe a model can identify when + +669 +00:27:29,440 --> 00:27:33,320 +it's in a high probability segment of + +670 +00:27:32,120 --> 00:27:35,799 +the space and when it's in a low + +671 +00:27:33,320 --> 00:27:39,399 +probability segment of the space and + +672 +00:27:35,799 --> 00:27:39,399 +because of that I expect + +673 +00:27:39,519 --> 00:27:45,519 +that I like there are segments of the + +674 +00:27:43,240 --> 00:27:47,120 +embedding space where it's more likely + +675 +00:27:45,519 --> 00:27:48,360 +to answer yes about something being good + +676 +00:27:47,120 --> 00:27:50,960 +or not and those are going to be + +677 +00:27:48,360 --> 00:27:54,760 +associated with high uh like high + +678 +00:27:50,960 --> 00:27:56,159 +probability outbreaks as well and also + +679 +00:27:54,760 --> 00:27:57,760 +models are more likely to generate + +680 +00:27:56,159 --> 00:28:00,240 +outputs that are high probability + +681 +00:27:57,760 --> 00:28:02,320 +according into their model by definition + +682 +00:28:00,240 --> 00:28:03,880 +so all three of those effects together + +683 +00:28:02,320 --> 00:28:05,640 +would basically go into a model being + +684 +00:28:03,880 --> 00:28:09,120 +bios supports its own outputs compared + +685 +00:28:05,640 --> 00:28:11,559 +to that puts in another model but um + +686 +00:28:09,120 --> 00:28:13,279 +yeah this is a very handwavy explanation + +687 +00:28:11,559 --> 00:28:15,519 +but like putting the two the three + +688 +00:28:13,279 --> 00:28:18,600 +together models output high probability + +689 +00:28:15,519 --> 00:28:20,880 +things from their own probability Space + +690 +00:28:18,600 --> 00:28:23,440 +by definition + +691 +00:28:20,880 --> 00:28:25,760 +um things that are high probability are + +692 +00:28:23,440 --> 00:28:27,519 +associated with being good uh just + +693 +00:28:25,760 --> 00:28:29,279 +because otherwise a model would be + +694 +00:28:27,519 --> 00:28:31,840 +outputting garbage + +695 +00:28:29,279 --> 00:28:33,840 +and um the final thing which is more + +696 +00:28:31,840 --> 00:28:35,679 +tenuous is if the model is in a high + +697 +00:28:33,840 --> 00:28:37,919 +probability segment of the space it's + +698 +00:28:35,679 --> 00:28:39,760 +more likely to Output yes according to a + +699 +00:28:37,919 --> 00:28:41,480 +question of it being good and I I think + +700 +00:28:39,760 --> 00:28:44,360 +that's probably true but I'm not 100% + +701 +00:28:41,480 --> 00:28:44,360 +sure about the the + +702 +00:28:45,559 --> 00:28:51,039 +fin um maybe maybe someone wants to + +703 +00:28:49,000 --> 00:28:52,840 +examinate examine that as a final + +704 +00:28:51,039 --> 00:28:54,200 +project it seems like a interesting + +705 +00:28:52,840 --> 00:28:57,080 +interesting + +706 +00:28:54,200 --> 00:29:00,039 +question um cool uh were 
there any other
+
+707
+00:28:57,080 --> 00:29:00,039
+questions about these methods
+
+708
+00:29:00,159 --> 00:29:07,120
+here um okay so when I say like an
+
+709
+00:29:03,960 --> 00:29:11,080
+evaluation metric is good or not what do
+
+710
+00:29:07,120 --> 00:29:13,200
+I mean by this being good or not um or a
+
+711
+00:29:11,080 --> 00:29:16,880
+reward model or whatever else and
+
+712
+00:29:13,200 --> 00:29:18,440
+basically the um the way we typically do
+
+713
+00:29:16,880 --> 00:29:19,840
+this is by doing something called meta-
+
+714
+00:29:18,440 --> 00:29:22,440
+evaluation so it's called meta-
+
+715
+00:29:19,840 --> 00:29:25,799
+evaluation because it's evaluation of
+
+716
+00:29:22,440 --> 00:29:29,279
+evaluation and uh the way we do this is
+
+717
+00:29:25,799 --> 00:29:32,519
+we have human uh scores and we have
+
+718
+00:29:29,279 --> 00:29:34,760
+automatic scores and we usually
+
+719
+00:29:32,519 --> 00:29:38,640
+calculate some sort of correlation
+
+720
+00:29:34,760 --> 00:29:41,000
+between the scores so um typical ones
+
+721
+00:29:38,640 --> 00:29:46,440
+are correlations like Pearson's
+
+722
+00:29:41,000 --> 00:29:48,799
+correlation or Kendall's Tau and uh so
+
+723
+00:29:46,440 --> 00:29:51,200
+the more associated the automatic scores
+
+724
+00:29:48,799 --> 00:29:53,960
+are with the human scores the higher
+
+725
+00:29:51,200 --> 00:29:55,159
+these correlations are going to be um
+
+726
+00:29:53,960 --> 00:29:57,559
+there's other things that you can
+
+727
+00:29:55,159 --> 00:30:00,080
+calculate so if you're trying to figure
+
+728
+00:29:57,559 --> 00:30:01,640
+out whether a model um matches human
+
+729
+00:30:00,080 --> 00:30:04,279
+pairwise preferences you can just
+
+730
+00:30:01,640 --> 00:30:06,440
+calculate accuracy so I didn't put that
+
+731
+00:30:04,279 --> 00:30:08,080
+on the slide
+
+732
+00:30:06,440 --> 00:30:10,880
+here but you can just calculate accuracy
+
+733
+00:30:08,080 --> 00:30:13,120
+of pairwise preferences um you can also
+
+734
+00:30:10,880 --> 00:30:15,360
+calculate the absolute error between
+
+735
+00:30:13,120 --> 00:30:19,320
+the judgments if you want to know uh
+
+736
+00:30:15,360 --> 00:30:21,720
+whether the absolute error matches so um
+
+737
+00:30:19,320 --> 00:30:24,159
+these are good things to do if you
+
+738
+00:30:21,720 --> 00:30:25,600
+want to use an evaluation metric but you
+
+739
+00:30:24,159 --> 00:30:27,200
+aren't sure whether it's good or not I
+
+740
+00:30:25,600 --> 00:30:29,640
+would check to see whether the authors
+
+741
+00:30:27,200 --> 00:30:32,000
+have done this sort of meta-evaluation
+
+742
+00:30:29,640 --> 00:30:33,760
+if they haven't be a little bit
+
+743
+00:30:32,000 --> 00:30:36,960
+suspicious if they have be a little bit
+
+744
+00:30:33,760 --> 00:30:39,799
+less suspicious but um
+
+745
+00:30:36,960 --> 00:30:42,960
+yeah how do people do this typically uh
+
+746
+00:30:39,799 --> 00:30:45,640
+usually they use data sets like
+
+747
+00:30:42,960 --> 00:30:49,440
+the WMT
+
+748
+00:30:45,640 --> 00:30:53,960
+shared tasks um or
+
+749
+00:30:49,440 --> 00:30:57,679
+uh like SummEval um but there's also
+
+750
+00:30:53,960 --> 00:30:59,960
+lots of
+
+751
+00:30:57,679 --> 00:31:01,639
+other data sets but in order to do
+
+752
+00:30:59,960 --> 00:31:05,639
+this reliably you need a fairly large
+
+753
+00:31:01,639 -->
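As a sketch, assuming SciPy and some invented scores, meta-evaluating a metric against human judgments could look like this:

```python
from scipy.stats import pearsonr, kendalltau

# human and automatic scores over the same five outputs (made-up numbers)
human = [0.9, 0.4, 0.7, 0.2, 0.8]
metric = [0.85, 0.5, 0.65, 0.1, 0.9]

print("Pearson:", pearsonr(human, metric)[0])
print("Kendall's Tau:", kendalltau(human, metric)[0])
```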
00:31:05,639 +data set so it's one thing to be aware + +754 +00:31:07,080 --> 00:31:10,760 +of + +755 +00:31:08,720 --> 00:31:14,200 +cool + +756 +00:31:10,760 --> 00:31:16,360 +um then the final thing um all of the + +757 +00:31:14,200 --> 00:31:17,919 +automatic evaluation methods that I + +758 +00:31:16,360 --> 00:31:20,240 +talked about now are trying to match + +759 +00:31:17,919 --> 00:31:22,679 +human preferences but that's not the + +760 +00:31:20,240 --> 00:31:24,960 +only thing that you necessarily want to + +761 +00:31:22,679 --> 00:31:28,440 +do the final thing that you might want + +762 +00:31:24,960 --> 00:31:30,840 +to do is uh use the model outputs in a + +763 +00:31:28,440 --> 00:31:34,200 +downstream system and see whether they + +764 +00:31:30,840 --> 00:31:36,399 +are effective for that so there's two + +765 +00:31:34,200 --> 00:31:39,080 +concepts of intrinsic evaluation and + +766 +00:31:36,399 --> 00:31:41,720 +extrinsic evaluation so intrinsic + +767 +00:31:39,080 --> 00:31:44,159 +evaluation um evaluates the quality of + +768 +00:31:41,720 --> 00:31:45,720 +the output itself and so that would be + +769 +00:31:44,159 --> 00:31:48,639 +like asking a human directly about how + +770 +00:31:45,720 --> 00:31:50,720 +good is this output extrinsic evaluation + +771 +00:31:48,639 --> 00:31:53,679 +is evaluating output quality by its + +772 +00:31:50,720 --> 00:31:57,000 +utility um and so just to give one + +773 +00:31:53,679 --> 00:31:58,360 +example um if you can evaluate large + +774 +00:31:57,000 --> 00:32:00,200 +language model summary + +775 +00:31:58,360 --> 00:32:04,200 +through question answering + +776 +00:32:00,200 --> 00:32:05,880 +accuracy um and so you can take the + +777 +00:32:04,200 --> 00:32:07,399 +output of an llm and feed it through a + +778 +00:32:05,880 --> 00:32:09,600 +question answering model and see whether + +779 +00:32:07,399 --> 00:32:12,399 +you're able to answer questions based on + +780 +00:32:09,600 --> 00:32:15,799 +this and that kind of gives you a better + +781 +00:32:12,399 --> 00:32:18,279 +idea of whether the summary require uh + +782 +00:32:15,799 --> 00:32:20,120 +incorporates requisite information but + +783 +00:32:18,279 --> 00:32:22,120 +if you think about anything an llm can + +784 +00:32:20,120 --> 00:32:23,760 +be used for usually it's part of a + +785 +00:32:22,120 --> 00:32:26,679 +bigger system so you can evaluate it as + +786 +00:32:23,760 --> 00:32:28,399 +a part of that bigger system um the + +787 +00:32:26,679 --> 00:32:30,639 +problem with this is it's a very + +788 +00:32:28,399 --> 00:32:33,960 +indirect way of assessing things so like + +789 +00:32:30,639 --> 00:32:36,080 +let's say your QA model is just bad uh + +790 +00:32:33,960 --> 00:32:38,480 +how can you disentangle the effect of + +791 +00:32:36,080 --> 00:32:41,679 +the L summary versus the QA model that's + +792 +00:32:38,480 --> 00:32:44,120 +not a trivial thing to do so ideally + +793 +00:32:41,679 --> 00:32:47,000 +like a combination of these two is + +794 +00:32:44,120 --> 00:32:47,000 +practically the best way + +795 +00:32:48,039 --> 00:32:52,200 +go cool so + +796 +00:32:56,039 --> 00:32:59,960 +yeah yeah it wouldn't necessar + +797 +00:32:58,360 --> 00:33:05,679 +say it's harder to do it might even be + +798 +00:32:59,960 --> 00:33:05,679 +easier to do um which is like let's + +799 +00:33:06,679 --> 00:33:11,720 +say Let me let me see if I can come up + +800 +00:33:09,360 --> 00:33:11,720 +with + +801 +00:33:12,639 --> 00:33:17,600 +example what 
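A hypothetical sketch of the extrinsic, downstream-task idea described above, judging a summarizer by whether a QA system can answer questions from its summaries; `summarize`, `answer`, and the data are placeholders, not real APIs.

```python
def extrinsic_qa_accuracy(documents, qa_pairs, summarize, answer):
    # score the summarizer only through downstream QA accuracy
    correct = 0
    for doc, (question, gold) in zip(documents, qa_pairs):
        summary = summarize(doc)                   # system under evaluation
        correct += (answer(summary, question) == gold)
    return correct / len(documents)
```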
let's + +802 +00:33:15,000 --> 00:33:19,670 +say you + +803 +00:33:17,600 --> 00:33:22,979 +are trying + +804 +00:33:19,670 --> 00:33:22,979 +[Music] + +805 +00:33:24,639 --> 00:33:29,760 +to let's say you're trying to + +806 +00:33:30,559 --> 00:33:33,559 +guess + +807 +00:33:39,000 --> 00:33:45,399 +whether let's say you're trying to guess + +808 +00:33:42,399 --> 00:33:46,559 +whether a someone will be hired at a + +809 +00:33:45,399 --> 00:33:52,039 +company or + +810 +00:33:46,559 --> 00:33:53,880 +not based on an llm generated summary of + +811 +00:33:52,039 --> 00:33:58,880 +their qualifications for a position or + +812 +00:33:53,880 --> 00:34:01,799 +something like that um and + +813 +00:33:58,880 --> 00:34:03,080 +you what actually maybe this is not a + +814 +00:34:01,799 --> 00:34:04,720 +great example because whether you should + +815 +00:34:03,080 --> 00:34:06,960 +be doing this ethically is a little bit + +816 +00:34:04,720 --> 00:34:08,159 +unclear but let's say you were doing + +817 +00:34:06,960 --> 00:34:09,560 +let's say you were doing something like + +818 +00:34:08,159 --> 00:34:11,520 +that just because it's one example I can + +819 +00:34:09,560 --> 00:34:14,320 +think of right now whether they will get + +820 +00:34:11,520 --> 00:34:16,320 +hired or not is um is clear because you + +821 +00:34:14,320 --> 00:34:19,399 +have a objective answer right whether + +822 +00:34:16,320 --> 00:34:21,480 +they were hired or not um or maybe maybe + +823 +00:34:19,399 --> 00:34:23,800 +another example would be like let's say + +824 +00:34:21,480 --> 00:34:26,320 +um let's say you want to predict the + +825 +00:34:23,800 --> 00:34:29,599 +diagnosis in a medical application based + +826 +00:34:26,320 --> 00:34:32,960 +on an llm generated some of somebody's + +827 +00:34:29,599 --> 00:34:35,919 +uh you know LM generated summary of + +828 +00:34:32,960 --> 00:34:38,480 +somebody's you know past medical history + +829 +00:34:35,919 --> 00:34:40,839 +and all this stuff and here you want the + +830 +00:34:38,480 --> 00:34:43,440 +llm generated summary you definitely + +831 +00:34:40,839 --> 00:34:44,879 +want the summary because the summary is + +832 +00:34:43,440 --> 00:34:47,560 +going to be viewed by a doctor who will + +833 +00:34:44,879 --> 00:34:49,359 +make the final decision but you also + +834 +00:34:47,560 --> 00:34:50,760 +have information about the diagnoses of + +835 +00:34:49,359 --> 00:34:52,399 +all the people in your medical system + +836 +00:34:50,760 --> 00:34:54,560 +later because you know they went through + +837 +00:34:52,399 --> 00:34:56,480 +your medical system for years and you + +838 +00:34:54,560 --> 00:34:58,200 +know later like through lots of tests + +839 +00:34:56,480 --> 00:35:00,800 +and stuff uh whether how they were + +840 +00:34:58,200 --> 00:35:02,320 +diagnosed so you generate an LM based + +841 +00:35:00,800 --> 00:35:05,000 +summary and then you predict the + +842 +00:35:02,320 --> 00:35:06,599 +diagnosis from the summary so there the + +843 +00:35:05,000 --> 00:35:08,040 +evaluation of the diagnosis is very + +844 +00:35:06,599 --> 00:35:11,480 +clear because you kind of have a gold + +845 +00:35:08,040 --> 00:35:12,599 +standard answer um but the EV intrinsic + +846 +00:35:11,480 --> 00:35:14,839 +evaluation of whether it's a good + +847 +00:35:12,599 --> 00:35:16,839 +summary or not is not as clear because + +848 +00:35:14,839 --> 00:35:19,400 +you'd have pass do whether it's good and + +849 +00:35:16,839 --> 00:35:21,079 +understandable summary 
so the extrinsic + +850 +00:35:19,400 --> 00:35:24,920 +evaluation might be easier because it's + +851 +00:35:21,079 --> 00:35:26,480 +clearer um so there are cases like that + +852 +00:35:24,920 --> 00:35:30,720 +um the problem is you would have to have + +853 +00:35:26,480 --> 00:35:33,800 +that data in order to do that um yeah do + +854 +00:35:30,720 --> 00:35:38,240 +like evaluation yeah I was just + +855 +00:35:33,800 --> 00:35:40,800 +wondering typically the + +856 +00:35:38,240 --> 00:35:42,880 +like like how do you accomodate the + +857 +00:35:40,800 --> 00:35:47,160 +diversity oh yeah that's a great that's + +858 +00:35:42,880 --> 00:35:50,240 +a great question um so how do you how do + +859 +00:35:47,160 --> 00:35:50,240 +you get these scores + +860 +00:35:50,720 --> 00:35:55,800 +here there's a number of different + +861 +00:35:53,200 --> 00:35:59,160 +things in the WMT shared tasks what they + +862 +00:35:55,800 --> 00:36:00,280 +did is they did + +863 +00:35:59,160 --> 00:36:03,200 +the first thing they do is they + +864 +00:36:00,280 --> 00:36:06,319 +normalize by annotator and what they do + +865 +00:36:03,200 --> 00:36:10,400 +is they basically take the zcore or Z + +866 +00:36:06,319 --> 00:36:12,240 +score of the um of the human annotator's + +867 +00:36:10,400 --> 00:36:14,880 +actual scores because some people are + +868 +00:36:12,240 --> 00:36:16,400 +more harsh than other people and so what + +869 +00:36:14,880 --> 00:36:20,680 +that means is you basically normalize to + +870 +00:36:16,400 --> 00:36:22,119 +have zero mean in unit variance um and + +871 +00:36:20,680 --> 00:36:24,119 +then after they've normalized to zero + +872 +00:36:22,119 --> 00:36:29,560 +mean and unit variance then I think they + +873 +00:36:24,119 --> 00:36:29,560 +average together different humans so um + +874 +00:36:30,160 --> 00:36:36,520 +then for how do you deal with the fact + +875 +00:36:33,680 --> 00:36:38,040 +that humans disagree on things and I + +876 +00:36:36,520 --> 00:36:39,480 +think it's pretty varied I don't know if + +877 +00:36:38,040 --> 00:36:42,160 +there's any gold standard way of doing + +878 +00:36:39,480 --> 00:36:43,839 +it but sometimes you just average + +879 +00:36:42,160 --> 00:36:46,359 +sometimes you throw away examples where + +880 +00:36:43,839 --> 00:36:47,960 +humans disagree a lot um because like + +881 +00:36:46,359 --> 00:36:50,200 +you can't get the humans to agree how + +882 +00:36:47,960 --> 00:36:53,319 +could you expect how could you expect a + +883 +00:36:50,200 --> 00:36:55,119 +machine to do well um so I think it it's + +884 +00:36:53,319 --> 00:36:59,200 +a little bit test + +885 +00:36:55,119 --> 00:37:01,560 +defending yeah so for + +886 +00:36:59,200 --> 00:37:04,560 +generation inin + +887 +00:37:01,560 --> 00:37:06,280 +andin yeah so for code generation that's + +888 +00:37:04,560 --> 00:37:08,200 +I I I love this example because I've + +889 +00:37:06,280 --> 00:37:09,960 +worked on code generation a lot of + +890 +00:37:08,200 --> 00:37:12,680 +people only think about extrinsic + +891 +00:37:09,960 --> 00:37:14,400 +evaluation of code Generation Um or I + +892 +00:37:12,680 --> 00:37:16,160 +don't know if it's extrinsic but only + +893 +00:37:14,400 --> 00:37:19,160 +think about execution based evaluation + +894 +00:37:16,160 --> 00:37:20,520 +of code generation which is like you + +895 +00:37:19,160 --> 00:37:22,400 +execute the code you see whether it + +896 +00:37:20,520 --> 00:37:25,040 +passs unit tests and other things like + +897 
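A tiny sketch of the per-annotator z-score normalization described a little earlier in this Q&A, with invented raw scores: each annotator's scores are shifted and scaled to zero mean and unit variance before averaging across annotators.

```python
import numpy as np

# raw direct-assessment scores from two annotators (made-up numbers)
scores = {"annotator_a": [60, 70, 80], "annotator_b": [20, 40, 30]}

# z-score per annotator: zero mean, unit variance
z = {a: (np.array(s) - np.mean(s)) / np.std(s) for a, s in scores.items()}

final = np.mean([z[a] for a in z], axis=0)  # average the normalized scores
print(final)
```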
+00:37:22,400 --> 00:37:26,839 +this but in reality actually there's a + +898 +00:37:25,040 --> 00:37:28,599 +lot of other important things for code + +899 +00:37:26,839 --> 00:37:30,560 +like readability and other stuff like + +900 +00:37:28,599 --> 00:37:32,160 +that and you should be evaluating those + +901 +00:37:30,560 --> 00:37:34,920 +things but I think a lot of people like + +902 +00:37:32,160 --> 00:37:36,520 +kind of ignore that so um there there + +903 +00:37:34,920 --> 00:37:38,880 +are a few Pap that do that but most of + +904 +00:37:36,520 --> 00:37:41,000 +the time people just execute the Cod + +905 +00:37:38,880 --> 00:37:45,520 +process + +906 +00:37:41,000 --> 00:37:47,760 +un cool okay um so yeah moving on to the + +907 +00:37:45,520 --> 00:37:51,160 +learning part so now I'd like to talk + +908 +00:37:47,760 --> 00:37:55,280 +about uh learning and the first thing + +909 +00:37:51,160 --> 00:37:59,480 +I'll cover is error and risk and so + +910 +00:37:55,280 --> 00:38:02,280 +basically um the way we calculate air is + +911 +00:37:59,480 --> 00:38:03,119 +we generate an output and we calculate + +912 +00:38:02,280 --> 00:38:07,680 +its + +913 +00:38:03,119 --> 00:38:09,480 +Badness um and so generating the output + +914 +00:38:07,680 --> 00:38:13,160 +could be argmax it could be sampling it + +915 +00:38:09,480 --> 00:38:15,800 +could be anything else like that um and + +916 +00:38:13,160 --> 00:38:18,640 +we calculate its Badness uh which is one + +917 +00:38:15,800 --> 00:38:21,040 +minus in which could be like how bad is + +918 +00:38:18,640 --> 00:38:22,720 +the output uh if you're you have a + +919 +00:38:21,040 --> 00:38:24,760 +Badness measure or it could be one minus + +920 +00:38:22,720 --> 00:38:28,400 +the evaluation Square to calculate its + +921 +00:38:24,760 --> 00:38:30,160 +Badness and this is defined as error + +922 +00:38:28,400 --> 00:38:31,440 +and generally what you want to do is you + +923 +00:38:30,160 --> 00:38:33,520 +want to minimize + +924 +00:38:31,440 --> 00:38:36,800 +error + +925 +00:38:33,520 --> 00:38:39,400 +um because in the end you're going to be + +926 +00:38:36,800 --> 00:38:42,359 +deploying A system that just outputs you + +927 +00:38:39,400 --> 00:38:46,079 +know one thing and uh you're going to + +928 +00:38:42,359 --> 00:38:49,800 +want that to be as good a thing as + +929 +00:38:46,079 --> 00:38:53,000 +possible um but the problem with this is + +930 +00:38:49,800 --> 00:38:56,400 +there's no easy way to actually optimize + +931 +00:38:53,000 --> 00:38:59,079 +this value in especially in a text + +932 +00:38:56,400 --> 00:39:01,800 +generation sty setting but even in the + +933 +00:38:59,079 --> 00:39:06,839 +classification setting we can't easily + +934 +00:39:01,800 --> 00:39:06,839 +maximize err because um if you look at + +935 +00:39:09,040 --> 00:39:14,200 +the if you look at the surface of air uh + +936 +00:39:12,760 --> 00:39:15,960 +at some point you're going to have a + +937 +00:39:14,200 --> 00:39:18,319 +non-differentiable part when you take + +938 +00:39:15,960 --> 00:39:21,119 +the argmax and or when you do sampling + +939 +00:39:18,319 --> 00:39:23,319 +or anything like that so um you're not + +940 +00:39:21,119 --> 00:39:27,119 +going to be able to do gradient based + +941 +00:39:23,319 --> 00:39:29,200 +optimization so what we do normally is + +942 +00:39:27,119 --> 00:39:33,400 +um + +943 +00:39:29,200 --> 00:39:37,000 +we instead calculate something uh called + +944 +00:39:33,400 --> 00:39:38,560 +risk and what 
+
+941
+00:39:23,319 --> 00:39:29,200
+so what we do normally is
+
+942
+00:39:27,119 --> 00:39:33,400
+um
+
+943
+00:39:29,200 --> 00:39:37,000
+we instead calculate something uh called
+
+944
+00:39:33,400 --> 00:39:38,560
+risk and what risk looks like is uh we
+
+945
+00:39:37,000 --> 00:39:40,599
+talked a little bit about minimum Bayes
+
+946
+00:39:38,560 --> 00:39:43,520
+risk for decoding but this is for uh
+
+947
+00:39:40,599 --> 00:39:46,160
+training time and what it looks like is
+
+948
+00:39:43,520 --> 00:39:49,040
+it's essentially the expected error of the
+
+949
+00:39:46,160 --> 00:39:52,359
+output and the expected error of the
+
+950
+00:39:49,040 --> 00:39:54,760
+output um includes a probability in the
+
+951
+00:39:52,359 --> 00:39:58,240
+objective function here and that
+
+952
+00:39:54,760 --> 00:40:01,079
+probability uh is differentiable basically
+
+953
+00:39:58,240 --> 00:40:02,319
+so we can um uh we can easily do
+
+954
+00:40:01,079 --> 00:40:05,720
+gradient based
+
+955
+00:40:02,319 --> 00:40:09,119
+optimization through it um the problem
+
+956
+00:40:05,720 --> 00:40:12,200
+with this is it's differentiable but for
+
+957
+00:40:09,119 --> 00:40:17,160
+text generation for example the sum is
+
+958
+00:40:12,200 --> 00:40:20,319
+intractable because we have a combinatorially
+
+959
+00:40:17,160 --> 00:40:23,880
+large number of potential outputs um
+
+960
+00:40:20,319 --> 00:40:25,520
+because you know if this is we've talked
+
+961
+00:40:23,880 --> 00:40:28,720
+about this before but if this is like
+
+962
+00:40:25,520 --> 00:40:30,680
+length you know 50 and we have a 30,000
+
+963
+00:40:28,720 --> 00:40:32,839
+vocabulary that's 30,000 to the 50
+
+964
+00:40:30,680 --> 00:40:34,599
+possibilities we can't take a sum over
+
+965
+00:40:32,839 --> 00:40:36,359
+that many
+
+966
+00:40:34,599 --> 00:40:38,400
+possibilities
+
+967
+00:40:36,359 --> 00:40:42,680
+um
+
+968
+00:40:38,400 --> 00:40:45,839
+so minimum risk training uh tries to
+
+969
+00:40:42,680 --> 00:40:48,440
+minimize risk reinforcement learning
+
+970
+00:40:45,839 --> 00:40:50,040
+also many of the models especially
+
+971
+00:40:48,440 --> 00:40:53,599
+policy gradient models are trying to
+
+972
+00:40:50,040 --> 00:40:55,240
+minimize risk as well so um but the
+
+973
+00:40:53,599 --> 00:40:58,040
+reason why I wanted to talk about risk
+
+974
+00:40:55,240 --> 00:41:00,440
+first is because this is very simple to
+
+975
+00:40:58,040 --> 00:41:01,640
+get to from the uh the point of view of
+
+976
+00:41:00,440 --> 00:41:06,560
+like all the things that we've studied
+
+977
+00:41:01,640 --> 00:41:06,560
+so so I think it's worth talking about
+
+978
+00:41:06,760 --> 00:41:11,800
+that
+
+979
+00:41:08,319 --> 00:41:15,520
+um one other thing that I should mention
+
+980
+00:41:11,800 --> 00:41:18,400
+about is
+
+981
+00:41:15,520 --> 00:41:23,079
+um or no sorry I'll I'll talk about that
+
+982
+00:41:18,400 --> 00:41:26,880
+later so when we want to optimize risk
+
+983
+00:41:23,079 --> 00:41:30,560
+um what we do is we sample in order to
+
+984
+00:41:26,880 --> 00:41:35,520
+make this tractable so a very simple way to
+
+985
+00:41:30,560 --> 00:41:37,640
+minimize risk is instead of um instead
+
+986
+00:41:35,520 --> 00:41:39,359
+of summing over all of the possible
+
+987
+00:41:37,640 --> 00:41:42,760
+outputs we sum over a small number of
+
+988
+00:41:39,359 --> 00:41:46,079
+possible outputs and we upweight uh and
+
+989
+00:41:42,760 --> 00:41:47,359
+we uh sorry normalize uh to make this
+
+990
+00:41:46,079 --> 00:41:51,200
+all add up to
+
+991
+00:41:47,359 --> 00:41:52,839
+one and so this normalizer here is
+
+992
+00:41:51,200 --> 00:41:55,319
+basically the sum over all of the
+
+993
+00:41:52,839 --> 00:41:58,599
+probabilities that we have uh on the top
+
+994
+00:41:55,319 --> 00:42:02,119
+part here and these samples can be
+
+995
+00:41:58,599 --> 00:42:05,480
+created either using sampling or n-best
+
+996
+00:42:02,119 --> 00:42:07,040
+search now from the
+
+997
+00:42:05,480 --> 00:42:11,040
+point of view of doing this sort of
+
+998
+00:42:07,040 --> 00:42:13,960
+minimum risk training the kind of
+
+999
+00:42:11,040 --> 00:42:16,880
+correct way of doing this is sampling
+
+1000
+00:42:13,960 --> 00:42:19,880
+using ancestral sampling uh like we
+
+1001
+00:42:16,880 --> 00:42:23,079
+talked about before and um minimizing
+
+1002
+00:42:19,880 --> 00:42:25,839
+the risk based on the samples but
+
+1003
+00:42:23,079 --> 00:42:28,480
+the problem with that is um as many of
+
+1004
+00:42:25,839 --> 00:42:31,440
+you also might have seen when you were
+
+1005
+00:42:28,480 --> 00:42:33,599
+sampling from your language model uh
+
+1006
+00:42:31,440 --> 00:42:35,160
+from assignment one if you sample with
+
+1007
+00:42:33,599 --> 00:42:38,040
+temperature one it gives you a lot of
+
+1008
+00:42:35,160 --> 00:42:40,720
+like not very good outputs right and so
+
+1009
+00:42:38,040 --> 00:42:43,400
+if you're sampling with temperature one
+
+1010
+00:42:40,720 --> 00:42:45,000
+um you'll be exploring a a very large
+
+1011
+00:42:43,400 --> 00:42:47,880
+part of the space that actually isn't
+
+1012
+00:42:45,000 --> 00:42:49,720
+very good and so because of this uh some
+
+1013
+00:42:47,880 --> 00:42:51,480
+other alternatives that you can use is
+
+1014
+00:42:49,720 --> 00:42:53,400
+you can just do n-best search to find the
+
+1015
+00:42:51,480 --> 00:42:55,280
+best outputs or you can sample with a
+
+1016
+00:42:53,400 --> 00:42:58,079
+temperature that's not one or something
+
+1017
+00:42:55,280 --> 00:43:00,240
+like that and basically create uh you
+
+1018
+00:42:58,079 --> 00:43:02,520
+know a list of possible hypotheses and
+
+1019
+00:43:00,240 --> 00:43:04,079
+then normalize over them so that's another
+
+1020
+00:43:02,520 --> 00:43:06,240
+option and very often not using
+
+1021
+00:43:04,079 --> 00:43:11,200
+temperature one is a better
+
+1022
+00:43:06,240 --> 00:43:15,280
+way um if you're sampling with not
+
+1023
+00:43:11,200 --> 00:43:18,640
+temperature one and you are um
+
+1024
+00:43:15,280 --> 00:43:20,920
+potentially getting multiple outputs you
+
+1025
+00:43:18,640 --> 00:43:23,400
+should try to de-duplicate or sample
+
+1026
+00:43:20,920 --> 00:43:25,480
+without replacement because if you get
+
+1027
+00:43:23,400 --> 00:43:27,559
+multiple outputs here it messes up your
+
+1028
+00:43:25,480 --> 00:43:30,680
+equations if you basically uh have the
+
+1029
+00:43:27,559 --> 00:43:30,680
+same one in there multiple times
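
[Editor's note: a minimal sketch of the sampled minimum risk training objective just described. `model.sequence_log_prob` is an assumed helper returning a differentiable sequence-level log-probability; it is not a real library API.]

```python
import torch

def minimum_risk_loss(model, x, candidates, error_fn):
    """Sampled approximation of risk = E_{y~p(y|x)}[err(y)].

    candidates: a de-duplicated list of outputs, e.g. from ancestral
    sampling (ideally not at temperature one) or n-best search.
    error_fn: maps an output to a badness score, e.g. 1 - eval_score(y).
    """
    logps = torch.stack([model.sequence_log_prob(x, y) for y in candidates])
    # Renormalize over the sampled subset so probabilities sum to one;
    # this softmax plays the role of the normalizer mentioned above.
    q = torch.softmax(logps, dim=0)
    errors = torch.tensor([error_fn(y) for y in candidates], dtype=torch.float32)
    # Differentiable expected error over the subset: minimize with SGD.
    return (q * errors).sum()
```
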
+
+1030
+00:43:32,160 --> 00:43:37,800
+cool so so this is a really simple
+
+1031
+00:43:35,880 --> 00:43:40,079
+example of how you can do minimum risk
+
+1032
+00:43:37,800 --> 00:43:42,119
+training but now I want to get into uh
+
+1033
+00:43:40,079 --> 00:43:44,640
+like reinforcement learning which is the
+
+1034
+00:43:42,119 --> 00:43:48,119
+framing that most um
+
+1035
+00:43:44,640 --> 00:43:50,760
+modern works follow uh one
+
+1036
+00:43:48,119 --> 00:43:52,559
+thing I should mention is there are
+
+1037
+00:43:50,760 --> 00:43:55,240
+actually other alternatives to learning
+
+1038
+00:43:52,559 --> 00:43:57,359
+from uh human feedback including like
+
+1039
+00:43:55,240 --> 00:43:59,359
+margin-based losses and
+
+1040
+00:43:57,359 --> 00:44:00,960
+other stuff like that but most people
+
+1041
+00:43:59,359 --> 00:44:03,440
+nowadays use reinforcement learning so
+
+1042
+00:44:00,960 --> 00:44:06,359
+I'm only going to cover that
+
+1043
+00:44:03,440 --> 00:44:08,440
+here so what is reinforcement learning
+
+1044
+00:44:06,359 --> 00:44:11,000
+um reinforcement learning is
+
+1045
+00:44:08,440 --> 00:44:14,559
+learning where we have an environment uh
+
+1046
+00:44:11,000 --> 00:44:16,079
+X uh the ability to take actions A and get a
+
+1047
+00:44:14,559 --> 00:44:20,160
+delayed reward
+
+1048
+00:44:16,079 --> 00:44:21,880
+R and um there's a really nice example
+
+1049
+00:44:20,160 --> 00:44:24,400
+uh if you're not familiar with the
+
+1050
+00:44:21,880 --> 00:44:27,480
+basics of policy gradient by Andrej
+
+1051
+00:44:24,400 --> 00:44:28,800
+Karpathy which I linked in the um in the
+
+1052
+00:44:27,480 --> 00:44:29,680
+recommended reading so you can take a
+
+1053
+00:44:28,800 --> 00:44:34,680
+look at
+
+1054
+00:44:29,680 --> 00:44:37,240
+that um and that gives an
+
+1055
+00:44:34,680 --> 00:44:39,440
+example of Pong uh where you're playing
+
+1056
+00:44:37,240 --> 00:44:42,640
+the game Pong where X is your observed
+
+1057
+00:44:39,440 --> 00:44:45,640
+image A is up or down and R is the win
+
+1058
+00:44:42,640 --> 00:44:47,480
+or loss at the end of the game uh does
+
+1059
+00:44:45,640 --> 00:44:50,559
+anyone have an idea about uh what this
+
+1060
+00:44:47,480 --> 00:44:52,119
+looks like for any arbitrary NLP task
+
+1061
+00:44:50,559 --> 00:44:56,520
+that we might want to do reinforcement
+
+1062
+00:44:52,119 --> 00:44:59,040
+learning for so what what is X what is A
+
+1063
+00:44:56,520 --> 00:44:59,040
+and what is
+
+1064
+00:45:00,040 --> 00:45:04,680
+R pick your favorite uh your favorite
+
+1065
+00:45:06,920 --> 00:45:09,920
+task
+
+1066
+00:45:10,960 --> 00:45:18,400
+anybody
+
+1067
+00:45:12,520 --> 00:45:18,400
+yeah so what what's X first
+
+1068
+00:45:19,680 --> 00:45:28,720
+yeah X is what you've generated and A is the
+
+1069
+00:45:24,440 --> 00:45:29,720
+next token R could be like the button like whether or
+
+1070
+00:45:28,720 --> 00:45:32,520
+not
+
+1071
+00:45:29,720 --> 00:45:35,240
+you liked it okay yeah I I think this is very
+
+1072
+00:45:32,520 --> 00:45:37,119
+close just to repeat it it's like X is
+
+1073
+00:45:35,240 --> 00:45:39,599
+what you've generated so far A is the
+
+1074
+00:45:37,119 --> 00:45:41,559
+next token and R is the button that the
+
+1075
+00:45:39,599 --> 00:45:45,400
+user clicks about whether it's good or
+
+1076
+00:45:41,559 --> 00:45:46,920
+not um I think that's reasonably good
+
+1077
+00:45:45,400 --> 00:45:48,760
+although I don't know if we'd expect
+
+1078
+00:45:46,920 --> 00:45:52,960
+them to click the button every token we
+
+1079
+00:45:48,760 --> 00:45:54,880
+generate right so um it might be that X
+
+1080
+00:45:52,960 --> 00:45:57,880
+is the conversational history up till
+
+1081
+00:45:54,880 --> 00:46:02,319
+this point um
+
+1082
+00:45:57,880 --> 00:46:04,280
+A could be a next token generation and
+
+1083
+00:46:02,319 --> 00:46:06,520
+then R is a reward we get at an
+
+1084
+00:46:04,280 --> 00:46:08,280
+arbitrary time point it might not be
+
+1085
+00:46:06,520 --> 00:46:09,960
+like immediately after generating the
+
+1086
+00:46:08,280 --> 00:46:12,040
+next token but it might be later and
+
+1087
+00:46:09,960 --> 00:46:13,480
+that's actually really really important
+
+1088
+00:46:12,040 --> 00:46:15,040
+from the point of view of reinforcement
+
+1089
+00:46:13,480 --> 00:46:19,599
+learning and I'll I'll talk about that
+
+1090
+00:46:15,040 --> 00:46:23,040
+in a second um anyone have an idea from
+
+1091
+00:46:19,599 --> 00:46:24,960
+I don't know uh code generation or
+
+1092
+00:46:23,040 --> 00:46:28,119
+translation or some other
+
+1093
+00:46:24,960 --> 00:46:31,160
+things code generation maybe X is the
+
+1094
+00:46:28,119 --> 00:46:33,040
+compiler or like the grading script and then
+
+1095
+00:46:31,160 --> 00:46:37,000
+A
+
+1096
+00:46:33,040 --> 00:46:42,520
+is the actual code that you write and the reward
+
+1097
+00:46:37,000 --> 00:46:44,839
+is yep um so X could be the compiler
+
+1098
+00:46:42,520 --> 00:46:47,559
+it's probably the compiler and all of
+
+1099
+00:46:44,839 --> 00:46:50,200
+the surrounding code context like what
+
+1100
+00:46:47,559 --> 00:46:52,520
+what is the natural language input and
+
+1101
+00:46:50,200 --> 00:46:53,960
+it's also um you know what is the
+
+1102
+00:46:52,520 --> 00:46:57,280
+project that you're you're working on
+
+1103
+00:46:53,960 --> 00:47:00,079
+and stuff like that um A I think
+
+1104
+00:46:57,280 --> 00:47:02,800
+typically we would treat each token in
+
+1105
+00:47:00,079 --> 00:47:04,160
+the code to be an action um and then R
+
+1106
+00:47:02,800 --> 00:47:06,599
+would be the reward after a long
+
+1107
+00:47:04,160 --> 00:47:08,640
+sequence of actions um and it could be
+
+1108
+00:47:06,599 --> 00:47:11,119
+the reward from the compiler it could be
+
+1109
+00:47:08,640 --> 00:47:13,160
+the reward from a code readability model
+
+1110
+00:47:11,119 --> 00:47:15,720
+it could be the reward from
+
+1111
+00:47:13,160 --> 00:47:17,079
+execution speed and stuff like that so
+
+1112
+00:47:15,720 --> 00:47:18,839
+like one of the interesting things about
+
+1113
+00:47:17,079 --> 00:47:22,640
+R is you can be really creative about
+
+1114
+00:47:18,839 --> 00:47:25,400
+how you form R um which is not easy to
+
+1115
+00:47:22,640 --> 00:47:27,319
+do uh if you're just doing maximum
+
+1116
+00:47:25,400 --> 00:47:29,240
+likelihood so you can come up with
+
+1117
+00:47:27,319 --> 00:47:32,920
+an R that really matches with like what
+
+1118
+00:47:29,240 --> 00:47:36,559
+you want um what you want in an output
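
[Editor's note: a toy sketch of composing a creative reward for generated code along the lines just discussed (compiler signal, test results, execution speed). This is illustrative only; `tests` is assumed to be a list of callables over the executed namespace, and executing untrusted code like this should only be done in a sandbox.]

```python
import time

def code_reward(source: str, tests) -> float:
    try:
        compiled = compile(source, "<generated>", "exec")
    except SyntaxError:
        return -1.0                      # fails to compile: strong negative reward
    namespace: dict = {}
    start = time.perf_counter()
    exec(compiled, namespace)            # run the candidate code (unsafe outside a sandbox!)
    elapsed = time.perf_counter() - start
    passed = sum(bool(t(namespace)) for t in tests)
    reward = passed / max(len(tests), 1) # fraction of tests passed, in [0, 1]
    return reward - 0.01 * elapsed       # small penalty for slow code
```
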
+
+1119
+00:47:32,920 --> 00:47:40,079
+so why reinforcement learning in NLP um
+
+1120
+00:47:36,559 --> 00:47:42,599
+and I think there's basically three um
+
+1121
+00:47:40,079 --> 00:47:44,240
+three answers the first one is you have
+
+1122
+00:47:42,599 --> 00:47:49,000
+a typical reinforcement learning
+
+1123
+00:47:44,240 --> 00:47:51,119
+scenario um where you have a dialogue
+
+1124
+00:47:49,000 --> 00:47:52,720
+where you get lots of responses and then
+
+1125
+00:47:51,119 --> 00:47:54,559
+you get a reward at the end so the
+
+1126
+00:47:52,720 --> 00:47:57,359
+thumbs up and thumbs down from humans is
+
+1127
+00:47:54,559 --> 00:47:59,839
+a very typical example of
+
+1128
+00:47:57,359 --> 00:48:02,800
+uh reinforcement learning because you
+
+1129
+00:47:59,839 --> 00:48:05,000
+get a delayed reward uh at some point in
+
+1130
+00:48:02,800 --> 00:48:07,599
+the dialogue when a human presses up or
+
+1131
+00:48:05,000 --> 00:48:09,280
+down um another like actually more
+
+1132
+00:48:07,599 --> 00:48:11,680
+technical scenario where reinforcement
+
+1133
+00:48:09,280 --> 00:48:14,960
+learning has been used um for a long
+
+1134
+00:48:11,680 --> 00:48:17,400
+time is call centers so we've had
+
+1135
+00:48:14,960 --> 00:48:20,680
+dialogue systems for call centers and
+
+1136
+00:48:17,400 --> 00:48:23,160
+then if you complete a ticket purchase
+
+1137
+00:48:20,680 --> 00:48:24,839
+um or you resolve a ticket
+
+1138
+00:48:23,160 --> 00:48:27,480
+without ever having to go to a human
+
+1139
+00:48:24,839 --> 00:48:30,800
+operator you get a really big reward
+
+1140
+00:48:27,480 --> 00:48:33,640
+if you have to go to the human operator
+
+1141
+00:48:30,800 --> 00:48:36,400
+you get maybe a smaller reward and if
+
+1142
+00:48:33,640 --> 00:48:39,200
+the person yells at you and hangs up
+
+1143
+00:48:36,400 --> 00:48:41,640
+then you get a really negative reward so
+
+1144
+00:48:39,200 --> 00:48:43,040
+um this is kind of the typical example
+
+1145
+00:48:41,640 --> 00:48:45,599
+reinforcement learning has been used for
+
+1146
+00:48:43,040 --> 00:48:48,520
+a long time there another example is if
+
+1147
+00:48:45,599 --> 00:48:53,280
+you have like latent variables uh chains
+
+1148
+00:48:48,520 --> 00:48:55,799
+of thought where um you decide the
+
+1149
+00:48:53,280 --> 00:48:58,839
+latent variable and then get a reward um
+
+1150
+00:48:55,799 --> 00:49:02,799
+you get a reward based on how those
+
+1151
+00:48:58,839 --> 00:49:03,920
+latent variables affect the output so um
+
+1152
+00:49:02,799 --> 00:49:07,200
+this
+
+1153
+00:49:03,920 --> 00:49:09,799
+is uh this is another example
+
+1154
+00:49:07,200 --> 00:49:12,599
+because the chain of thought itself
+
+1155
+00:49:09,799 --> 00:49:13,880
+might not actually be good you might
+
+1156
+00:49:12,599 --> 00:49:15,839
+have a bad chain of thought and still
+
+1157
+00:49:13,880 --> 00:49:17,760
+get the correct answer so you don't
+
+1158
+00:49:15,839 --> 00:49:19,640
+actually know for sure that a chain of
+
+1159
+00:49:17,760 --> 00:49:22,359
+thought that was automatically generated
+
+1160
+00:49:19,640 --> 00:49:24,799
+is good or not but um that so that kind
+
+1161
+00:49:22,359 --> 00:49:27,000
+of makes it a reinforcement learning
+
+1162
+00:49:24,799 --> 00:49:29,520
+problem and another thing is you might
+
+1163
+00:49:27,000 --> 00:49:32,520
+have a sequence level evaluation metric
+
+1164
+00:49:29,520 --> 00:49:34,240
+um so that you can't optimize the
+
+1165
+00:49:32,520 --> 00:49:36,839
+evaluation metric without uh first
+
+1166
+00:49:34,240 --> 00:49:38,480
+generating the whole like sequence so
+
+1167
+00:49:36,839 --> 00:49:40,880
+that would be any of the evaluation
+
+1168
+00:49:38,480 --> 00:49:42,400
+metrics that I talked about before so um
+
+1169
+00:49:40,880 --> 00:49:44,720
+these are three scenarios where you can
+
+1170
+00:49:42,400 --> 00:49:47,079
+use reinforcement
+
+1171
+00:49:44,720 --> 00:49:50,000
+learning so
+
+1172
+00:49:47,079 --> 00:49:51,400
+um I'm going to go through a few steps
+
+1173
+00:49:50,000 --> 00:49:54,640
+but like let's start again with our
+
+1174
+00:49:51,400 --> 00:49:57,359
+supervised MLE loss and uh that's just
+
+1175
+00:49:54,640 --> 00:50:01,799
+the log probability here um in the
+
+1176
+00:49:57,359 --> 00:50:04,160
+context of reinforcement learning this
+
+1177
+00:50:01,799 --> 00:50:07,079
+is also called imitation
+
+1178
+00:50:04,160 --> 00:50:08,880
+learning because um essentially you're
+
+1179
+00:50:07,079 --> 00:50:12,680
+learning how to perform actions by
+
+1180
+00:50:08,880 --> 00:50:14,559
+imitating a teacher um and imitation
+
+1181
+00:50:12,680 --> 00:50:15,960
+learning is not just supervised MLE
+
+1182
+00:50:14,559 --> 00:50:18,440
+there's also other varieties of
+
+1183
+00:50:15,960 --> 00:50:21,440
+imitation learning but um this is one
+
+1184
+00:50:18,440 --> 00:50:21,440
+variety of imitation learning
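
[Editor's note: the supervised MLE loss just referred to, written out as a one-line sketch for contrast with the reward-weighted variants below. `model.log_prob` is an assumed helper, not a real API.]

```python
def mle_loss(model, x, y_gold):
    # Supervised MLE viewed as imitation learning: minimize the negative
    # log-likelihood of the gold-standard ("teacher") output.
    return -model.log_prob(x, y_gold)
```
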
+
+1185
+00:50:22,520 --> 00:50:27,640
+the next thing I'd like to talk
+
+1186
+00:50:24,599 --> 00:50:30,079
+about is self-training and basically
+
+1187
+00:50:27,640 --> 00:50:31,760
+self-training the idea is that you
+
+1188
+00:50:30,079 --> 00:50:33,720
+sample or argmax according to the
+
+1189
+00:50:31,760 --> 00:50:36,119
+current model so you have your current
+
+1190
+00:50:33,720 --> 00:50:38,000
+model and you get a sample from it and
+
+1191
+00:50:36,119 --> 00:50:41,520
+then you use the sample or samples to
+
+1192
+00:50:38,000 --> 00:50:43,680
+maximize likelihood so um basically
+
+1193
+00:50:41,520 --> 00:50:47,520
+instead of doing maximum likelihood with
+
+1194
+00:50:43,680 --> 00:50:49,520
+respect to a gold standard output
+
+1195
+00:50:47,520 --> 00:50:51,280
+you're doing it with respect to your own
+
+1196
+00:50:49,520 --> 00:50:55,280
+output
+
+1197
+00:50:51,280 --> 00:50:55,280
+so does this seem like a good
+
+1198
+00:50:55,640 --> 00:51:03,880
+idea I see a few people shaking heads um
+
+1199
+00:51:00,480 --> 00:51:03,880
+any ideas why this is not a good
+
+1200
+00:51:04,680 --> 00:51:07,680
+idea
+
+1201
+00:51:15,040 --> 00:51:20,599
+yeah yeah exactly so if you don't have
+
+1202
+00:51:17,720 --> 00:51:23,760
+any access to any notion of whether it's good
+
+1203
+00:51:20,599 --> 00:51:27,480
+um this will be optimizing towards good
+
+1204
+00:51:23,760 --> 00:51:28,839
+outputs and bad outputs right so um your
+
+1205
+00:51:27,480 --> 00:51:30,200
+model might be outputting bad outputs
+
+1206
+00:51:28,839 --> 00:51:32,839
+and you're just reinforcing the errors
+
+1207
+00:51:30,200 --> 00:51:35,160
+that the model already makes nonetheless like
+
+1208
+00:51:32,839 --> 00:51:37,799
+self-training actually improves your
+
+1209
+00:51:35,160 --> 00:51:39,680
+accuracy somewhat in some cases like for
+
+1210
+00:51:37,799 --> 00:51:43,040
+example if your
+
+1211
+00:51:39,680 --> 00:51:45,520
+model is right more often than not um
+
+1212
+00:51:43,040 --> 00:51:49,119
+basically optimizing towards the more
+
+1213
+00:51:45,520 --> 00:51:51,720
+often than not right outputs can actually
+
+1214
+00:51:49,119 --> 00:51:53,640
+um due to the implicit regularization
+
+1215
+00:51:51,720 --> 00:51:55,000
+that models have and early stopping and
+
+1216
+00:51:53,640 --> 00:51:56,559
+other things like that it can actually
+
+1217
+00:51:55,000 --> 00:51:59,280
+move you in the right direction and
+
+1218
+00:51:56,559 --> 00:52:01,559
+improve accuracy
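
[Editor's note: a minimal sketch of self-training as just described. `model.sample` and `model.log_prob` are assumed helpers; note there is no reward anywhere, which is exactly why this can reinforce the model's own mistakes.]

```python
def self_training_loss(model, x):
    y_hat = model.sample(x)          # pseudo-label drawn from the current model
    # Treat the model's own output as if it were gold standard and
    # maximize its likelihood.
    return -model.log_prob(x, y_hat)
```
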
+
+1219
+00:51:59,280 --> 00:52:05,000
+um
+
+1220
+00:52:01,559 --> 00:52:06,640
+so there are alternatives to this that
+
+1221
+00:52:05,000 --> 00:52:09,520
+further improve accuracy so like for
+
+1222
+00:52:06,640 --> 00:52:12,720
+example if you have multiple models and
+
+1223
+00:52:09,520 --> 00:52:16,200
+um you only generate sentences where the
+
+1224
+00:52:12,720 --> 00:52:17,760
+models agree then this can improve your
+
+1225
+00:52:16,200 --> 00:52:20,000
+uh overall accuracy
+
+1226
+00:52:17,760 --> 00:52:24,240
+further um this is called co-training
+
+1227
+00:52:20,000 --> 00:52:27,799
+it was actually uh created by uh
+
+1228
+00:52:24,240 --> 00:52:30,160
+people at CMU as well and another
+
+1229
+00:52:27,799 --> 00:52:32,280
+successful alternative uh is adding
+
+1230
+00:52:30,160 --> 00:52:34,920
+noise to the input to match the noise
+
+1231
+00:52:32,280 --> 00:52:38,760
+that you find in the output so if you uh
+
+1232
+00:52:34,920 --> 00:52:40,720
+add like word-based dropout or
+
+1233
+00:52:38,760 --> 00:52:44,000
+other things like that this can also
+
+1234
+00:52:40,720 --> 00:52:47,400
+help uh accommodate these things but
+
+1235
+00:52:44,000 --> 00:52:48,920
+anyway um so self-training is useful
+
+1236
+00:52:47,400 --> 00:52:50,480
+but there are better alternatives if you
+
+1237
+00:52:48,920 --> 00:52:54,079
+can get a reward
+
+1238
+00:52:50,480 --> 00:52:55,559
+function so um the simplest variety of
+
+1239
+00:52:54,079 --> 00:52:56,960
+this is something called policy gradient
+
+1240
+00:52:55,559 --> 00:52:59,720
+or REINFORCE
+
+1241
+00:52:56,960 --> 00:53:02,319
+um or more specifically REINFORCE and
+
+1242
+00:52:59,720 --> 00:53:06,280
+basically what this does is this adds a
+
+1243
+00:53:02,319 --> 00:53:08,359
+term that scales the loss by the reward
+
+1244
+00:53:06,280 --> 00:53:12,400
+so if you can get a reward for each
+
+1245
+00:53:08,359 --> 00:53:15,680
+output basically this
+
+1246
+00:53:12,400 --> 00:53:18,119
+um you uh instead of doing self-training
+
+1247
+00:53:15,680 --> 00:53:21,760
+entirely by itself you multiply it by a
+
+1248
+00:53:18,119 --> 00:53:23,119
+reward and this allows you to increase
+
+1249
+00:53:21,760 --> 00:53:24,640
+the likelihood of things that get a high
+
+1250
+00:53:23,119 --> 00:53:28,440
+reward and decrease the likelihood of things
+
+1251
+00:53:24,640 --> 00:53:28,440
+that get a low reward
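
[Editor's note: a minimal sketch of the policy gradient / REINFORCE idea just described, i.e. the self-training loss scaled by a reward. `model.sample` and `model.log_prob` are assumed helpers.]

```python
def reinforce_loss(model, x, reward_fn):
    y_hat = model.sample(x)
    r = reward_fn(x, y_hat)          # scalar reward; no gradient flows through it
    # High-reward samples get their likelihood pushed up, low- or
    # negative-reward samples get pushed down.
    return -r * model.log_prob(x, y_hat)
```
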
+
+1252
+00:53:29,680 --> 00:53:34,960
+so uh a brief quiz here under what
+
+1253
+00:53:32,440 --> 00:53:37,599
+conditions is this equivalent to
+
+1254
+00:53:34,960 --> 00:53:41,480
+MLE or essentially equivalent to maximum
+
+1255
+00:53:37,599 --> 00:53:43,079
+likelihood estimation and so like in order
+
+1256
+00:53:41,480 --> 00:53:45,480
+to make this quiz easier I'll go back to
+
+1257
+00:53:43,079 --> 00:53:47,720
+maximum likelihood estimation so it
+
+1258
+00:53:45,480 --> 00:53:50,359
+looked a bit like this um you calculated
+
+1259
+00:53:47,720 --> 00:53:53,440
+the log probability of the true output
+
+1260
+00:53:50,359 --> 00:53:55,440
+and now let me go uh to
+
+1261
+00:53:53,440 --> 00:53:56,960
+here any
+
+1262
+00:53:55,440 --> 00:54:00,119
+ideas
+
+1263
+00:53:56,960 --> 00:54:05,040
+yeah when your reward equals
+
+1264
+00:54:00,119 --> 00:54:05,040
+one sometimes and zero other times
+
+1265
+00:54:07,760 --> 00:54:10,960
+what any
+
+1266
+00:54:12,760 --> 00:54:17,520
+ideas what when when does your reward
+
+1267
+00:54:15,280 --> 00:54:19,640
+need to be equal to one in order to make
+
+1268
+00:54:17,520 --> 00:54:23,400
+this
+
+1269
+00:54:19,640 --> 00:54:23,400
+equation equivalent to this
+
+1270
+00:54:24,960 --> 00:54:31,680
+equation yeah when Y and Y hat are the
+
+1271
+00:54:27,319 --> 00:54:36,119
+same so um basically
+
+1272
+00:54:31,680 --> 00:54:38,880
+this objective is equivalent to the MLE
+
+1273
+00:54:36,119 --> 00:54:43,160
+objective when you're using a zero-one
+
+1274
+00:54:38,880 --> 00:54:44,480
+loss um or you're using an
+
+1275
+00:54:43,160 --> 00:54:46,359
+evaluation function that gives you a
+
+1276 +00:54:44,480 --> 00:54:50,920 +score of one when it's exact match and + +1277 +00:54:46,359 --> 00:54:51,720 +zero when it's not exact match so um but + +1278 +00:54:50,920 --> 00:54:54,480 +that + +1279 +00:54:51,720 --> 00:54:56,440 +also demonstrates that this can be more + +1280 +00:54:54,480 --> 00:54:58,400 +flexible because you can have other + +1281 +00:54:56,440 --> 00:55:00,160 +rewards that are not just one and zero + +1282 +00:54:58,400 --> 00:55:02,599 +for exact match but you can use things + +1283 +00:55:00,160 --> 00:55:05,359 +that give you partial credit you can use + +1284 +00:55:02,599 --> 00:55:06,880 +things that uplate multiple potential uh + +1285 +00:55:05,359 --> 00:55:08,880 +potentially correct outputs and other + +1286 +00:55:06,880 --> 00:55:13,400 +things like + +1287 +00:55:08,880 --> 00:55:17,160 +that so one problem with these methods + +1288 +00:55:13,400 --> 00:55:21,799 +is um how do we know which action led to + +1289 +00:55:17,160 --> 00:55:24,720 +the reward so the best scenario is after + +1290 +00:55:21,799 --> 00:55:26,359 +each action you get a reward so after + +1291 +00:55:24,720 --> 00:55:28,960 +each token that you generated you get + +1292 +00:55:26,359 --> 00:55:31,240 +get a thumbs up or thumbs down uh from + +1293 +00:55:28,960 --> 00:55:34,280 +the user about whether they like that + +1294 +00:55:31,240 --> 00:55:36,000 +token or not um and how much happier + +1295 +00:55:34,280 --> 00:55:37,720 +they are after you generated that token + +1296 +00:55:36,000 --> 00:55:42,400 +than they were before you generated that + +1297 +00:55:37,720 --> 00:55:44,200 +token um the problem with this is that + +1298 +00:55:42,400 --> 00:55:45,799 +that's completely infeasible right like + +1299 +00:55:44,200 --> 00:55:47,039 +every time after you use chat GPD you're + +1300 +00:55:45,799 --> 00:55:50,480 +not going to press thumbs up and thumbs + +1301 +00:55:47,039 --> 00:55:52,559 +down after each token so um in reality + +1302 +00:55:50,480 --> 00:55:55,559 +what we get is usually we get it at the + +1303 +00:55:52,559 --> 00:55:57,000 +end of uh roll out of many many + +1304 +00:55:55,559 --> 00:55:58,640 +different actions and we're not sure + +1305 +00:55:57,000 --> 00:55:59,720 +which action is responsible for giving + +1306 +00:55:58,640 --> 00:56:02,559 +us the + +1307 +00:55:59,720 --> 00:56:05,440 +reward and + +1308 +00:56:02,559 --> 00:56:08,000 +so there's a few typical ways of dealing + +1309 +00:56:05,440 --> 00:56:09,640 +with this um the most typical way of + +1310 +00:56:08,000 --> 00:56:13,359 +dealing with this right now is just not + +1311 +00:56:09,640 --> 00:56:15,440 +dealing with it um and just hoping that + +1312 +00:56:13,359 --> 00:56:17,200 +your optimization algorithm internally + +1313 +00:56:15,440 --> 00:56:21,480 +will be able to do credit + +1314 +00:56:17,200 --> 00:56:24,520 +assignment um and so what that entails + +1315 +00:56:21,480 --> 00:56:27,319 +is essentially you um give an equal + +1316 +00:56:24,520 --> 00:56:29,880 +reward for each token in the output + +1317 +00:56:27,319 --> 00:56:32,480 +other ways that you can deal with it are + +1318 +00:56:29,880 --> 00:56:35,640 +um you can assign decaying rewards from + +1319 +00:56:32,480 --> 00:56:37,559 +future events so like let's say let's + +1320 +00:56:35,640 --> 00:56:41,839 +say you're talking about a chat bot for + +1321 +00:56:37,559 --> 00:56:44,119 +example maybe this is the the most uh + +1322 +00:56:41,839 --> 00:56:46,599 +kind of intuitive 
way of thinking about + +1323 +00:56:44,119 --> 00:56:50,400 +it but you you have a chat bot you have + +1324 +00:56:46,599 --> 00:56:52,599 +like 20 chat turns and you have the user + +1325 +00:56:50,400 --> 00:56:55,640 +give a thumbs up or a thumbs down on the + +1326 +00:56:52,599 --> 00:56:58,920 +20th chat turn there you would assign a + +1327 +00:56:55,640 --> 00:57:01,440 +reward of um like let's say it gave a + +1328 +00:56:58,920 --> 00:57:03,640 +thumbs up there you would re assign a + +1329 +00:57:01,440 --> 00:57:06,559 +reward of one for the previous chat turn + +1330 +00:57:03,640 --> 00:57:09,839 +a reward of like 0.5 for the second to + +1331 +00:57:06,559 --> 00:57:11,720 +previous chat term a reward of 0.25 for + +1332 +00:57:09,839 --> 00:57:14,319 +the third to previous chat term to + +1333 +00:57:11,720 --> 00:57:16,160 +basically say yeah like the user is + +1334 +00:57:14,319 --> 00:57:18,240 +feeling good at the moment they gave the + +1335 +00:57:16,160 --> 00:57:20,359 +thumbs up and that's probably more + +1336 +00:57:18,240 --> 00:57:23,400 +likely due to the things that happened + +1337 +00:57:20,359 --> 00:57:23,400 +recently so + +1338 +00:57:23,559 --> 00:57:28,119 +yeah we have a + +1339 +00:57:26,680 --> 00:57:32,280 +like not + +1340 +00:57:28,119 --> 00:57:34,160 +learning so the reward model can be any + +1341 +00:57:32,280 --> 00:57:35,839 +of the methods that I talked about + +1342 +00:57:34,160 --> 00:57:37,480 +before so it can be human feedback + +1343 +00:57:35,839 --> 00:57:39,000 +directly like a thumbs up or a thumbs + +1344 +00:57:37,480 --> 00:57:42,200 +down it could also be from a reward + +1345 +00:57:39,000 --> 00:57:44,599 +model uh that was pre-trained you could + +1346 +00:57:42,200 --> 00:57:47,680 +also theoretically learn the reward + +1347 +00:57:44,599 --> 00:57:52,720 +model simultaneously but you'd have to + +1348 +00:57:47,680 --> 00:57:55,200 +simultaneously with the model itself um + +1349 +00:57:52,720 --> 00:57:57,280 +so yeah I'm going to talk a little bit + +1350 +00:57:55,200 --> 00:58:00,359 +about DP which kind of does that a + +1351 +00:57:57,280 --> 00:58:01,720 +little bit but um I I would basically + +1352 +00:58:00,359 --> 00:58:03,160 +say that wherever you're getting your + +1353 +00:58:01,720 --> 00:58:06,280 +reward is probably from one of the + +1354 +00:58:03,160 --> 00:58:06,280 +things I talked about earlier + +1355 +00:58:06,359 --> 00:58:14,960 +today cool any other + +1356 +00:58:09,319 --> 00:58:17,720 +questions okay um so that's the basic + +1357 +00:58:14,960 --> 00:58:20,640 +the basic idea the very simplest thing + +1358 +00:58:17,720 --> 00:58:23,359 +that you can do is you can just sample + +1359 +00:58:20,640 --> 00:58:26,079 +um optimize the subjective function this + +1360 +00:58:23,359 --> 00:58:28,359 +is dead easy you it's not hard to imp + +1361 +00:58:26,079 --> 00:58:30,799 +imp it all as long as you have some + +1362 +00:58:28,359 --> 00:58:32,760 +source of reward signal um but the + +1363 +00:58:30,799 --> 00:58:35,559 +problem is uh reinforcement learning can + +1364 +00:58:32,760 --> 00:58:38,599 +be very unstable and it's hard to get it + +1365 +00:58:35,559 --> 00:58:40,160 +to uh you know work properly if you uh + +1366 +00:58:38,599 --> 00:58:42,400 +don't do some additional tricks so I'd + +1367 +00:58:40,160 --> 00:58:45,720 +like to talk about this + +1368 +00:58:42,400 --> 00:58:45,720 +next oh yeah + +1369 +00:58:48,880 --> 00:58:51,880 +sir + +1370 +00:58:55,039 --> 
00:58:58,039 +yeah + +1371 +00:59:03,280 --> 00:59:08,960 +yeah the typical the typical way is you + +1372 +00:59:05,440 --> 00:59:12,960 +just have an exponential decay um so you + +1373 +00:59:08,960 --> 00:59:16,200 +you multiply each time by what 0.5 0. or + +1374 +00:59:12,960 --> 00:59:19,400 +something like that + +1375 +00:59:16,200 --> 00:59:19,400 +um from + +1376 +00:59:20,319 --> 00:59:27,720 +A6 um cool okay + +1377 +00:59:25,039 --> 00:59:30,720 +so + +1378 +00:59:27,720 --> 00:59:33,319 +and that's one option and sorry just to + +1379 +00:59:30,720 --> 00:59:35,760 +clarify the most common option nowadays + +1380 +00:59:33,319 --> 00:59:37,920 +um at least from the point of view of + +1381 +00:59:35,760 --> 00:59:39,839 +models is not to Decay it at all and + +1382 +00:59:37,920 --> 00:59:43,880 +just assign the same amount for each + +1383 +00:59:39,839 --> 00:59:45,319 +token um I'm not actually 100% sure what + +1384 +00:59:43,880 --> 00:59:47,319 +people are doing with respect to like + +1385 +00:59:45,319 --> 00:59:49,280 +long chat things I think probably + +1386 +00:59:47,319 --> 00:59:51,720 +they're only assigning it to the current + +1387 +00:59:49,280 --> 00:59:54,240 +like utterance and then not optimizing + +1388 +00:59:51,720 --> 00:59:57,240 +the previous utterances so like if they + +1389 +00:59:54,240 --> 00:59:59,039 +get a thumbs up or thumbs down signal um + +1390 +00:59:57,240 --> 01:00:00,720 +then they they would assign an + +1391 +00:59:59,039 --> 01:00:02,440 +equivalent reward for all of the tokens + +1392 +01:00:00,720 --> 01:00:04,640 +and the current utterance and zero + +1393 +01:00:02,440 --> 01:00:06,119 +reward for the previous ones but I'm not + +1394 +01:00:04,640 --> 01:00:08,480 +100% sure about that there might be + +1395 +01:00:06,119 --> 01:00:11,200 +other methods that people are + +1396 +01:00:08,480 --> 01:00:13,960 +using um + +1397 +01:00:11,200 --> 01:00:16,680 +cool so uh stabilizing reinforcement + +1398 +01:00:13,960 --> 01:00:18,520 +learning so um stabilizing reinforcement + +1399 +01:00:16,680 --> 01:00:21,839 +learning there's a lot of reasons why + +1400 +01:00:18,520 --> 01:00:23,880 +it's unstable um the first reason is + +1401 +01:00:21,839 --> 01:00:27,200 +you're sampling an individual output and + +1402 +01:00:23,880 --> 01:00:30,160 +calculating the um uh calculating based + +1403 +01:00:27,200 --> 01:00:32,039 +on the S individual sampled output and + +1404 +01:00:30,160 --> 01:00:33,440 +then there's an Infinity of other + +1405 +01:00:32,039 --> 01:00:36,480 +outputs that you could be optimizing + +1406 +01:00:33,440 --> 01:00:39,119 +over for mle this is not a problem + +1407 +01:00:36,480 --> 01:00:41,319 +because for mle you're always + +1408 +01:00:39,119 --> 01:00:45,359 +contrasting the gold standard output to + +1409 +01:00:41,319 --> 01:00:46,599 +all of the other outputs in the space um + +1410 +01:00:45,359 --> 01:00:48,280 +and you're saying I want to upweight the + +1411 +01:00:46,599 --> 01:00:51,200 +gold standard output and down we all of + +1412 +01:00:48,280 --> 01:00:53,039 +the other ones but for reinforcement + +1413 +01:00:51,200 --> 01:00:54,760 +learning you only have a single sampled + +1414 +01:00:53,039 --> 01:00:57,520 +output that output might be wrong and + +1415 +01:00:54,760 --> 01:00:59,359 +that's a source of inst ility this is + +1416 +01:00:57,520 --> 01:01:02,079 +particularly a problem when using bigger + +1417 +01:00:59,359 --> 01:01:05,960 +output spaces like all of the in 
the + +1418 +01:01:02,079 --> 01:01:07,920 +vocabul another problem is uh anytime + +1419 +01:01:05,960 --> 01:01:11,599 +you start using negative + +1420 +01:01:07,920 --> 01:01:15,160 +rewards um because if you start using + +1421 +01:01:11,599 --> 01:01:17,559 +negative rewards those rewards will be + +1422 +01:01:15,160 --> 01:01:19,520 +downweighting the probability of a + +1423 +01:01:17,559 --> 01:01:20,680 +particular output sequence and that + +1424 +01:01:19,520 --> 01:01:22,440 +might be a good idea maybe you're + +1425 +01:01:20,680 --> 01:01:24,319 +getting a toxic output or something like + +1426 +01:01:22,440 --> 01:01:25,960 +that and you want to down it but at the + +1427 +01:01:24,319 --> 01:01:28,280 +same time in addition to that toxic + +1428 +01:01:25,960 --> 01:01:30,000 +output there's like you know a + +1429 +01:01:28,280 --> 01:01:31,599 +combinatorial number of completely + +1430 +01:01:30,000 --> 01:01:33,880 +nonsense outputs that aren't even + +1431 +01:01:31,599 --> 01:01:36,599 +English and so basically you can start + +1432 +01:01:33,880 --> 01:01:38,920 +diverge from the N starting start to + +1433 +01:01:36,599 --> 01:01:40,799 +diverge from the natural like language + +1434 +01:01:38,920 --> 01:01:44,720 +modeling distribution that you have + +1435 +01:01:40,799 --> 01:01:49,079 +before so this is a big uh a big + +1436 +01:01:44,720 --> 01:01:51,880 +problem so a number of uh strategies can + +1437 +01:01:49,079 --> 01:01:53,880 +be used to stabilize the first one is + +1438 +01:01:51,880 --> 01:01:55,480 +this is completely obvious right now and + +1439 +01:01:53,880 --> 01:01:57,240 +nobody in their right mind would avoid + +1440 +01:01:55,480 --> 01:02:00,119 +doing this but the first one is + +1441 +01:01:57,240 --> 01:02:02,839 +pre-training with mle and so you start + +1442 +01:02:00,119 --> 01:02:04,920 +with a pre-trained model um and then + +1443 +01:02:02,839 --> 01:02:09,359 +switch over to RL after you finished + +1444 +01:02:04,920 --> 01:02:11,520 +pre-training the model um and so + +1445 +01:02:09,359 --> 01:02:13,279 +this makes a lot of sense if you're + +1446 +01:02:11,520 --> 01:02:14,960 +training a language model which I assume + +1447 +01:02:13,279 --> 01:02:17,039 +that almost everybody in this class is + +1448 +01:02:14,960 --> 01:02:20,279 +going to be doing but it does only work + +1449 +01:02:17,039 --> 01:02:22,720 +in scenarios where you can run mle and + +1450 +01:02:20,279 --> 01:02:24,359 +so it doesn't work if you're predicting + +1451 +01:02:22,720 --> 01:02:27,240 +like latent variables that aren't + +1452 +01:02:24,359 --> 01:02:28,760 +included in the original space + +1453 +01:02:27,240 --> 01:02:31,960 +um it + +1454 +01:02:28,760 --> 01:02:34,279 +also doesn't work in a setting where + +1455 +01:02:31,960 --> 01:02:36,640 +like you want to learn a + +1456 +01:02:34,279 --> 01:02:40,799 +chatbot you want to learn a chatbot for + +1457 +01:02:36,640 --> 01:02:44,200 +customer service for a + +1458 +01:02:40,799 --> 01:02:48,039 +company that + +1459 +01:02:44,200 --> 01:02:49,960 +has like for example a product catalog + +1460 +01:02:48,039 --> 01:02:53,559 +that the language model has never seen + +1461 +01:02:49,960 --> 01:02:56,000 +before and so if the language model has + +1462 +01:02:53,559 --> 01:02:57,359 +no information about the product catalog + +1463 +01:02:56,000 --> 01:02:59,920 +whatsoever you don't provide it through + +1464 +01:02:57,359 --> 01:03:02,440 +rag or something like that it's going to + 
+1465
+01:02:59,920 --> 01:03:04,039
+have to explore infinitely or not
+
+1466
+01:03:02,440 --> 01:03:05,599
+infinitely but it's going to have to
+
+1467
+01:03:04,039 --> 01:03:08,359
+explore too large of a space and you're
+
+1468
+01:03:05,599 --> 01:03:10,000
+never going to converge with um with
+
+1469
+01:03:08,359 --> 01:03:12,359
+your language modeling objectives so you
+
+1470
+01:03:10,000 --> 01:03:15,000
+need to basically be able to create at
+
+1471
+01:03:12,359 --> 01:03:16,079
+least some supervised training data to
+
+1472
+01:03:15,000 --> 01:03:19,279
+train with
+
+1473
+01:03:16,079 --> 01:03:20,720
+MLE um but assuming you can do that I'm
+
+1474
+01:03:19,279 --> 01:03:22,920
+assuming that almost everybody is going
+
+1475
+01:03:20,720 --> 01:03:26,400
+to do some sort of pre-training with
+
+1476
+01:03:22,920 --> 01:03:27,880
+MLE um the next step that people use uh
+
+1477
+01:03:26,400 --> 01:03:30,520
+in reinforcement learning that's really
+
+1478
+01:03:27,880 --> 01:03:34,319
+important to stabilize is regularization
+
+1479
+01:03:30,520 --> 01:03:35,880
+to an existing model so you have an
+
+1480
+01:03:34,319 --> 01:03:39,039
+existing model and you want to prevent
+
+1481
+01:03:35,880 --> 01:03:40,559
+it from getting too far away and the
+
+1482
+01:03:39,039 --> 01:03:42,279
+reason why you want to do this is like
+
+1483
+01:03:40,559 --> 01:03:45,720
+let's say you start assigning a negative
+
+1484
+01:03:42,279 --> 01:03:47,440
+reward to toxic utterances for example
+
+1485
+01:03:45,720 --> 01:03:49,200
+if your model stops being a language
+
+1486
+01:03:47,440 --> 01:03:51,920
+model whatsoever that's a bad idea so
+
+1487
+01:03:49,200 --> 01:03:53,400
+you want to keep it as a language model
+
+1488
+01:03:51,920 --> 01:03:55,599
+keep it close enough to still being a
+
+1489
+01:03:53,400 --> 01:03:57,559
+competent language model while you know
+
+1490
+01:03:55,599 --> 01:03:59,599
+like removing the toxic
+
+1491
+01:03:57,559 --> 01:04:03,039
+utterances so there's a number of
+
+1492
+01:03:59,599 --> 01:04:05,680
+methods that people use to do this um uh
+
+1493
+01:04:03,039 --> 01:04:08,359
+the most prominent ones are KL
+
+1494
+01:04:05,680 --> 01:04:10,279
+regularization uh well so the the first
+
+1495
+01:04:08,359 --> 01:04:13,119
+most prominent one is KL regularization
+
+1496
+01:04:10,279 --> 01:04:15,839
+and the way this works is basically in
+
+1497
+01:04:13,119 --> 01:04:19,400
+addition you have two
+
+1498
+01:04:15,839 --> 01:04:22,279
+terms the first term is a term that
+
+1499
+01:04:19,400 --> 01:04:25,760
+improves your reward so you have your
+
+1500
+01:04:22,279 --> 01:04:28,039
+old model where your old model is
+
+1501
+01:04:25,760 --> 01:04:31,279
+creating a
+
+1502
+01:04:28,039 --> 01:04:32,440
+probability uh it has a probability here
+
+1503
+01:04:31,279 --> 01:04:34,960
+and then you have the probability
+
+1504
+01:04:32,440 --> 01:04:38,160
+assigned by your new model and then you
+
+1505
+01:04:34,960 --> 01:04:41,200
+have your reward signal here and so this
+
+1506
+01:04:38,160 --> 01:04:43,599
+is basically improving the log odds or
+
+1507
+01:04:41,200 --> 01:04:46,960
+improving the odds of getting a good
+
+1508
+01:04:43,599 --> 01:04:49,720
+reward for high reward
+
+1509
+01:04:46,960 --> 01:04:52,920
+sequences separately from this you have
+
+1510
+01:04:49,720 --> 01:04:55,920
+this KL regularization term and this KL
+
+1511
+01:04:52,920 --> 01:04:58,119
+regularization term is keeping the
+
+1512
+01:04:55,920 --> 01:05:00,279
+scores of or it's keeping the
+
+1513
+01:04:58,119 --> 01:05:02,400
+probability distribution of your new
+
+1514
+01:05:00,279 --> 01:05:03,960
+model similar to the probability
+
+1515
+01:05:02,400 --> 01:05:09,200
+distribution of your old
+
+1516
+01:05:03,960 --> 01:05:11,359
+model and this beta parameter basically
+
+1517
+01:05:09,200 --> 01:05:15,240
+you can increase it or decrease it based
+
+1518
+01:05:11,359 --> 01:05:18,400
+on how similar you want to keep the um
+
+1519
+01:05:15,240 --> 01:05:18,400
+how similar you want to keep the model
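
[Editor's note: a minimal sketch of the two terms just described, here folded together in the common KL-shaped-reward form; `log_prob` is an assumed helper and the KL term is a single-sample estimate, so this is an illustration of the idea rather than the exact formula on the slide.]

```python
import torch

def kl_shaped_reinforce_loss(model, ref_model, x, y, reward, beta=0.1):
    logp_new = model.log_prob(x, y)
    with torch.no_grad():
        logp_old = ref_model.log_prob(x, y)   # frozen "old" model
    # One-sample estimate of KL(new || old); beta controls how close the
    # new model must stay to the old one.
    kl_estimate = (logp_new - logp_old).detach()
    shaped_reward = reward - beta * kl_estimate
    # REINFORCE-style update on the KL-penalized reward.
    return -shaped_reward * logp_new
```
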
+
+1520
+01:05:20,720 --> 01:05:24,640
+another method that people use is
+
+1521
+01:05:23,160 --> 01:05:29,279
+something called proximal policy
+
+1522
+01:05:24,640 --> 01:05:30,920
+optimization or PPO and this is a
+
+1523
+01:05:29,279 --> 01:05:33,920
+method that is based on
+
+1524
+01:05:30,920 --> 01:05:38,160
+clipping uh the
+
+1525
+01:05:33,920 --> 01:05:40,920
+outputs and we define uh this ratio
+
+1526
+01:05:38,160 --> 01:05:43,880
+here so this ratio is equivalent to this
+
+1527
+01:05:40,920 --> 01:05:46,160
+here so it's basically um kind of the
+
+1528
+01:05:43,880 --> 01:05:47,839
+amount that you're learning or the
+
+1529
+01:05:46,160 --> 01:05:51,720
+amount that the new model upweights
+
+1530
+01:05:47,839 --> 01:05:54,039
+high reward sequences and so here we
+
+1531
+01:05:51,720 --> 01:05:58,200
+have the same thing that we had
+
+1532
+01:05:54,039 --> 01:06:01,200
+above so it it looks like this but over
+
+1533
+01:05:58,200 --> 01:06:03,720
+here we have a clipped version of this
+
+1534
+01:06:01,200 --> 01:06:07,000
+where essentially what we do is we
+
+1535
+01:06:03,720 --> 01:06:07,000
+clip this
+
+1536
+01:06:21,119 --> 01:06:27,880
+ratio to be within uh a
+
+1537
+01:06:24,720 --> 01:06:32,160
+certain range of the original ratio and
+
+1538
+01:06:27,880 --> 01:06:37,880
+what this is doing is this is
+
+1539
+01:06:32,160 --> 01:06:41,400
+essentially forcing the model to um not
+
+1540
+01:06:37,880 --> 01:06:44,000
+reward large jumps in the space um
+
+1541
+01:06:41,400 --> 01:06:47,559
+because if you take the
+
+1542
+01:06:44,000 --> 01:06:49,160
+minimum and actually I'm I'm sorry I
+
+1543
+01:06:47,559 --> 01:06:50,720
+just realized I I might have done
+
+1544
+01:06:49,160 --> 01:06:52,520
+something confusing here because this is
+
+1545
+01:06:50,720 --> 01:06:53,960
+actually higher is better so this isn't
+
+1546
+01:06:52,520 --> 01:06:56,079
+really a loss function this is something
+
+1547
+01:06:53,960 --> 01:06:57,680
+you're attempting to maximize so
+
+1548
+01:06:56,079 --> 01:06:59,839
+in contrast to all of the other things I
+
+1549
+01:06:57,680 --> 01:07:01,680
+was talking about before um this is
+
+1550
+01:06:59,839 --> 01:07:04,400
+something where higher is better instead
+
+1551
+01:07:01,680 --> 01:07:07,599
+of lower is better but anyway basically
+
+1552
+01:07:04,400 --> 01:07:09,599
+by taking the minimum of this you're
+
+1553
+01:07:07,599 --> 01:07:11,960
+encouraging the model
+
+1554
+01:07:09,599 --> 01:07:16,279
+to
+
+1555
+01:07:11,960 --> 01:07:18,559
+uh keep examining the space where you
+
+1556
+01:07:16,279 --> 01:07:20,799
+don't diverge much from the original
+
+1557
+01:07:18,559 --> 01:07:22,920
+model and if the space where the
+
+1558
+01:07:20,799 --> 01:07:25,240
+original model was in is better than the
+
+1559
+01:07:22,920 --> 01:07:27,440
+new space that your model has moved into
+
+1560
+01:07:25,240 --> 01:07:30,920
+you move back towards the original model
+
+1561
+01:07:27,440 --> 01:07:33,000
+so basically like if you had um if you
+
+1562
+01:07:30,920 --> 01:07:34,960
+learned a model if you started learning
+
+1563
+01:07:33,000 --> 01:07:37,960
+a model that looked like it was
+
+1564
+01:07:34,960 --> 01:07:40,279
+optimizing uh your your reward but then
+
+1565
+01:07:37,960 --> 01:07:43,119
+suddenly the model went off the rails
+
+1566
+01:07:40,279 --> 01:07:45,000
+and um it starts generating completely
+
+1567
+01:07:43,119 --> 01:07:47,319
+nonsense outputs that get really bad
+
+1568
+01:07:45,000 --> 01:07:49,119
+reward this will push it back towards
+
+1569
+01:07:47,319 --> 01:07:50,920
+the original policy and that's the basic
+
+1570
+01:07:49,119 --> 01:07:54,279
+idea behind PPO
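
[Editor's note: a minimal sketch of the clipped PPO objective just described. As the lecture points out, higher is better here, unlike the losses above.]

```python
import torch

def ppo_clip_objective(logp_new, logp_old, advantage, eps=0.2):
    # The ratio measures how much the new model upweights the sampled
    # output relative to the old one.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the minimum removes the incentive for large jumps away from
    # the old policy, and pulls back when the jump made things worse.
    return torch.min(unclipped, clipped)
```
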
+
+1571
+01:07:50,920 --> 01:07:57,640
+um in terms of what I see people using
+
+1572
+01:07:54,279 --> 01:07:59,799
+um PPO was like really really popular for
+
+1573
+01:07:57,640 --> 01:08:01,880
+a while but I've started to see people
+
+1574
+01:07:59,799 --> 01:08:04,799
+use alternative strategies that use KL
+
+1575
+01:08:01,880 --> 01:08:06,880
+regularization so I don't I don't think
+
+1576
+01:08:04,799 --> 01:08:08,520
+either one of them is like particularly
+
+1577
+01:08:06,880 --> 01:08:10,039
+more popular than any of the others and
+
+1578
+01:08:08,520 --> 01:08:13,720
+this one's a little bit simpler
+
+1579
+01:08:10,039 --> 01:08:13,720
+conceptually so I like this
+
+1580
+01:08:14,880 --> 01:08:19,279
+one cool um any questions about
+
+1581
+01:08:20,359 --> 01:08:26,759
+this okay um and actually one thing I
+
+1582
+01:08:24,640 --> 01:08:29,679
+should mention is um all of these things
+
+1583
+01:08:26,759 --> 01:08:32,120
+are implemented uh in you know whatever
+
+1584
+01:08:29,679 --> 01:08:33,759
+libraries you use like Hugging Face TRL
+
+1585
+01:08:32,120 --> 01:08:35,679
+Transformer Reinforcement Learning as an
+
+1586
+01:08:33,759 --> 01:08:37,040
+example library all of these methods are
+
+1587
+01:08:35,679 --> 01:08:38,400
+implemented there so if you actually
+
+1588
+01:08:37,040 --> 01:08:40,600
+want to use these in practice that's a
+
+1589
+01:08:38,400 --> 01:08:40,600
+good
+
+1590
+01:08:40,839 --> 01:08:46,359
+place to look the next thing is adding a
+
+1591
+01:08:42,920 --> 01:08:48,679
+baseline and so the basic idea is that
+
+1592
+01:08:46,359 --> 01:08:52,199
+you have expectations about your
+
+1593
+01:08:48,679 --> 01:08:54,640
+reward for a particular sentence and um
+
+1594
+01:08:52,199 --> 01:08:56,560
+like let's say we wanted to uh translate
+
+1595
+01:08:54,640 --> 01:08:58,400
+a sentence and we have uh something like
+
+1596
+01:08:56,560 --> 01:09:01,279
+this is an easy sentence and buffalo
+
+1597
+01:08:58,400 --> 01:09:02,920
+buffalo buffalo which is a harder
+
+1598
+01:09:01,279 --> 01:09:07,799
+sentence to
+
+1599
+01:09:02,920 --> 01:09:09,679
+translate and so we have a reward um if
+
+1600
+01:09:07,799 --> 01:09:11,759
+if you're not familiar with this example
+
+1601
+01:09:09,679 --> 01:09:13,480
+you can search on Wikipedia for buffalo
+
+1602
+01:09:11,759 --> 01:09:16,759
+buffalo buffalo and you'll you'll find
+
+1603
+01:09:13,480 --> 01:09:19,520
+out what I'm talking about um but uh
+
+1604
+01:09:16,759 --> 01:09:21,440
+there's a reward uh and let's say you
+
+1605
+01:09:19,520 --> 01:09:24,359
+got a reward of 0.8 for the first one
+
+1606
+01:09:21,440 --> 01:09:29,679
+and a reward of 0.3 for the second
+
+1607
+01:09:24,359 --> 01:09:31,679
+one but the problem is if um the first
+
+1608
+01:09:29,679 --> 01:09:33,640
+one actually is really easy and the
+
+1609
+01:09:31,679 --> 01:09:36,120
+second one is really hard getting a
+
+1610
+01:09:33,640 --> 01:09:37,799
+reward of 0.8 for the first one for
+
+1611
+01:09:36,120 --> 01:09:40,080
+like a translation or something is
+
+1612
+01:09:37,799 --> 01:09:41,120
+actually bad right and a reward of 0.3
+
+1613
+01:09:40,080 --> 01:09:45,239
+is good because you're moving in the
+
+1614
+01:09:41,120 --> 01:09:49,359
+right direction and so you basically um
+
+1615
+01:09:45,239 --> 01:09:52,239
+you have uh the baseline uh minus reward
+
+1616
+01:09:49,359 --> 01:09:54,960
+or sorry reward minus baseline and this
+
+1617
+01:09:52,239 --> 01:09:56,520
+would give you a negative value for this
+
+1618
+01:09:54,960 --> 01:09:59,320
+first one a positive value for the
+
+1619
+01:09:56,520 --> 01:10:01,360
+second one and so the basic idea is can
+
+1620
+01:09:59,320 --> 01:10:04,400
+we predict a priori how difficult this
+
+1621
+01:10:01,360 --> 01:10:05,440
+example is and then uh adjust our reward
+
+1622
+01:10:04,400 --> 01:10:08,360
+based on
+
+1623
+01:10:05,440 --> 01:10:10,960
+that and
+
+1624
+01:10:08,360 --> 01:10:13,679
+so that's the basic idea you just have
+
+1625
+01:10:10,960 --> 01:10:15,560
+kind of like a baseline model um you
+
+1626
+01:10:13,679 --> 01:10:19,320
+have a baseline model that predicts this
+
+1627
+01:10:15,560 --> 01:10:19,320
+and uh you adjust uh
+
+1628
+01:10:19,760 --> 01:10:25,000
+appropriately um there's two major ways
+
+1629
+01:10:22,719 --> 01:10:27,600
+you can do this the first one um the
+
+1630
+01:10:25,000 --> 01:10:29,800
+baseline doesn't need to be anything special um
+
+1631
+01:10:27,600 --> 01:10:32,960
+the only hope is that it decreases the
+
+1632
+01:10:29,800 --> 01:10:35,960
+variance in your reward uh and makes
+
+1633
+01:10:32,960 --> 01:10:38,239
+learning more stable um there's two
+
+1634
+01:10:35,960 --> 01:10:40,159
+options that I see done pretty widely
+
+1635
+01:10:38,239 --> 01:10:43,000
+the first one is predicting the final
+
+1636
+01:10:40,159 --> 01:10:47,360
+reward um predicting the final reward
+
+1637
+01:10:43,000 --> 01:10:50,960
+using a model that doesn't look at
+
+1638
+01:10:47,360 --> 01:10:53,400
+all at the answer that you provided it
+
+1639
+01:10:50,960 --> 01:10:55,880
+only looks at the input or it only looks
+
+1640
+01:10:53,400 --> 01:10:58,840
+at the intermediate states of uh you
+
+1641
+01:10:55,880 --> 01:11:00,480
+know a model or something and so at the
+
+1642
+01:10:58,840 --> 01:11:03,280
+sentence level you can have one baseline
+
+1643
+01:11:00,480 --> 01:11:04,719
+per sentence um you can also do it at
+
+1644
+01:11:03,280 --> 01:11:10,560
+each decoder
+
+1645
+01:11:04,719 --> 01:11:11,640
+state and this is uh basically you can
+
+1646
+01:11:10,560 --> 01:11:13,040
+do this anytime you're doing
+
+1647
+01:11:11,640 --> 01:11:15,199
+reinforcement learning by just training
+
+1648
+01:11:13,040 --> 01:11:18,199
+a regression model that does this for
+
+1649
+01:11:15,199 --> 01:11:19,679
+you based on the rewards you get the
+
+1650
+01:11:18,199 --> 01:11:21,040
+important thing is the baseline is not
+
+1651
+01:11:19,679 --> 01:11:22,640
+allowed to use any of your actual
+
+1652
+01:11:21,040 --> 01:11:25,679
+predictions because once you start using
+
+1653
+01:11:22,640 --> 01:11:26,640
+the predictions then um your uh it's not
+
+1654
+01:11:25,679 --> 01:11:28,679
+a baseline
+
+1655
+01:11:26,640 --> 01:11:30,840
+another option which is
+
+1656
+01:11:28,679 --> 01:11:33,440
+relatively easy to implement but can
+
+1657
+01:11:30,840 --> 01:11:36,320
+still be effective is you calculate the
+
+1658
+01:11:33,440 --> 01:11:38,719
+mean of the rewards in a batch and so if
+
+1659
+01:11:36,320 --> 01:11:40,880
+you have a big batch of data and your
+
+1660
+01:11:38,719 --> 01:11:44,440
+average reward in the batch is like
+
+1661
+01:11:40,880 --> 01:11:46,480
+0.4 uh then you just subtract that 0.4
+
+1662
+01:11:44,440 --> 01:11:50,080
+uh and calculate your reward based on
+
+1663
+01:11:46,480 --> 01:11:50,080
+that so that's another option that you
+
+1664
+01:11:51,800 --> 01:11:57,800
+can use
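
[Editor's note: the batch-mean baseline just described, as a short sketch.]

```python
import torch

def batch_mean_advantages(rewards):
    # Subtract the average reward of the batch, so above-average outputs
    # get positive advantages and below-average ones negative.
    rewards = torch.as_tensor(rewards, dtype=torch.float32)
    return rewards - rewards.mean()

# e.g. batch_mean_advantages([0.8, 0.3, 0.1]) -> tensor([ 0.4000, -0.1000, -0.3000])
```
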
+
+1665
+01:11:53,639 --> 01:12:00,000
+um a kind of extreme example of this uh
+
+1666
+01:11:57,800 --> 01:12:01,199
+of creating a baseline is contrasting
+
+1667
+01:12:00,000 --> 01:12:03,639
+pairwise
+
+1668
+01:12:01,199 --> 01:12:05,880
+examples um or
+
+1669
+01:12:03,639 --> 01:12:08,280
+contrasting different outputs for the
+
+1670
+01:12:05,880 --> 01:12:12,040
+same input
+
+1671
+01:12:08,280 --> 01:12:13,920
+and you can easily learn uh directly
+
+1672
+01:12:12,040 --> 01:12:16,239
+from pairwise human
+
+1673
+01:12:13,920 --> 01:12:18,199
+preferences uh which can provide more
+
+1674
+01:12:16,239 --> 01:12:20,760
+stability because you know one is better
+
+1675
+01:12:18,199 --> 01:12:23,880
+than the other so you essentially can be
+
+1676
+01:12:20,760 --> 01:12:26,199
+sure that uh you're upweighting a better
+
+1677
+01:12:23,880 --> 01:12:29,560
+one and downweighting a worse one
+
+1678
+01:12:26,199 --> 01:12:31,400
+um this is the idea behind DPO which is
+
+1679
+01:12:29,560 --> 01:12:33,719
+a recently pretty popular method but
+
+1680
+01:12:31,400 --> 01:12:36,800
+there's also other previous methods that
+
+1681
+01:12:33,719 --> 01:12:40,199
+did similar things and the way DPO works
+
+1682
+01:12:36,800 --> 01:12:45,040
+is it basically calculates this ratio of
+
+1683
+01:12:40,199 --> 01:12:49,280
+uh the probability of the new uh the new
+
+1684
+01:12:45,040 --> 01:12:51,639
+model to the old model but it upweights this
+
+1685
+01:12:49,280 --> 01:12:53,639
+probability for a good output and it
+
+1686
+01:12:51,639 --> 01:12:56,280
+downweights this probability for a bad
+
+1687
+01:12:53,639 --> 01:12:57,679
+output and so
+
+1688
+01:12:56,280 --> 01:13:00,120
+here we have our better outputs over
+
+1689
+01:12:57,679 --> 01:13:02,040
+here here we have our worse outputs and
+
+1690
+01:13:00,120 --> 01:13:03,600
+you just it's basically learning to
+
+1691
+01:13:02,040 --> 01:13:05,639
+upweight the probability and downweight the
+
+1692
+01:13:03,600 --> 01:13:09,320
+probability
+
+1693
+01:13:05,639 --> 01:13:09,320
+accordingly so
+
+1694
+01:13:09,360 --> 01:13:15,040
+um you can notice that DPO is very
+
+1695
+01:13:12,280 --> 01:13:18,040
+similar to PPO um in that it's learning
+
+1696
+01:13:15,040 --> 01:13:19,679
+uh it's using these ratios but the
+
+1697
+01:13:18,040 --> 01:13:21,520
+disadvantage of this is you obviously
+
+1698
+01:13:19,679 --> 01:13:23,120
+require pairwise judgments and you can't
+
+1699
+01:13:21,520 --> 01:13:26,120
+learn a model if you don't have these
+
+1700
+01:13:23,120 --> 01:13:28,080
+pairwise judgments
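
[Editor's note: a minimal sketch of the DPO loss just described; `log_prob` is an assumed helper, and beta is the hyperparameter discussed in the question that follows.]

```python
import torch.nn.functional as F

def dpo_loss(model, ref_model, x, y_better, y_worse, beta=0.1):
    # Log of the new-to-old probability ratio for each of the two outputs.
    better = model.log_prob(x, y_better) - ref_model.log_prob(x, y_better)
    worse = model.log_prob(x, y_worse) - ref_model.log_prob(x, y_worse)
    # Push the preferred output's ratio up and the dispreferred one's down,
    # through a sigmoid of the beta-scaled margin.
    return -F.logsigmoid(beta * (better - worse))
```
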
+
+1701
+01:13:26,120 --> 01:13:30,760
+so the
+
+1702
+01:13:28,080 --> 01:13:33,159
+beta yeah so the beta term is
+
+1703
+01:13:30,760 --> 01:13:35,840
+basically a normalization term it's a
+
+1704
+01:13:33,159 --> 01:13:39,960
+hyperparameter um
+
+1705
+01:13:35,840 --> 01:13:41,840
+for DPO sorry I read the paper right
+
+1706
+01:13:39,960 --> 01:13:43,639
+when it came out and I don't remember if
+
+1707
+01:13:41,840 --> 01:13:45,600
+it's a direct derivation from the KL
+
+1708
+01:13:43,639 --> 01:13:47,960
+divergence term or not but I think it
+
+1709
+01:13:45,600 --> 01:13:49,800
+might be um I'd have to go back and look
+
+1710
+01:13:47,960 --> 01:13:50,480
+at the look at the paper but basically
+
+1711
+01:13:49,800 --> 01:13:53,600
+the
+
+1712
+01:13:50,480 --> 01:13:56,760
+larger this is the larger the
+
+1713
+01:13:53,600 --> 01:13:59,320
+gradient steps you'll be taking
+
+1714
+01:13:56,760 --> 01:14:00,639
+it also um like you'll notice there
+
+1715
+01:13:59,320 --> 01:14:03,400
+sorry I didn't mention this but you'll
+
+1716
+01:14:00,639 --> 01:14:06,120
+notice there's a sigmoid term here so
+
+1717
+01:14:03,400 --> 01:14:09,000
+the the
+
+1718
+01:14:06,120 --> 01:14:10,080
+beta the larger you increase the beta
+
+1719
+01:14:09,000 --> 01:14:13,239
+the
+
+1720
+01:14:10,080 --> 01:14:16,600
+more small differences in these
+
+1721
+01:14:13,239 --> 01:14:18,719
+values like it basically like stretches
+
+1722
+01:14:16,600 --> 01:14:22,280
+or shrinks the sigmoid with respect to
+
+1723
+01:14:18,719 --> 01:14:24,120
+how big the beta is so it will um it will
+
+1724
+01:14:22,280 --> 01:14:25,800
+affect how much like small differences
+
+1725
+01:14:24,120 --> 01:14:27,960
+in this will matter
+
+1726
+01:14:25,800 --> 01:14:30,120
+but I I think this was derived from the
+
+1727
+01:14:27,960 --> 01:14:31,760
+KL regularization term that we had
+
+1728
+01:14:30,120 --> 01:14:34,400
+previously in
+
+1729
+01:14:31,760 --> 01:14:40,520
+um in this slide here but I have to go
+
+1730
+01:14:34,400 --> 01:14:43,239
+back and double check unless somebody
+
+1731
+01:14:35,800 --> 01:14:43,239
+knows ah it is okay good yeah
+
+1732
+01:14:40,520 --> 01:14:45,000
+so I don't want to say wrong things but
+
+1733
+01:14:43,239 --> 01:14:48,239
+I also don't want
+
+1734
+01:14:45,000 --> 01:14:50,920
+to okay cool um and so then increasing
+
+1735
+01:14:48,239 --> 01:14:55,080
+batch size
+
+1736
+01:14:50,920 --> 01:14:57,360
+um another thing is um
+
+1737
+01:14:55,080 --> 01:14:58,440
+kind of necessarily reinforcement
+
+1738
+01:14:57,360 --> 01:14:59,920
+learning is going to have higher
+
+1739
+01:14:58,440 --> 01:15:01,400
+variance than maximum likelihood
+
+1740
+01:14:59,920 --> 01:15:04,199
+estimation just because we're doing
+
+1741
+01:15:01,400 --> 01:15:07,840
+sampling and other things like this and
+
+1742
+01:15:04,199 --> 01:15:09,440
+um so one very simple thing you can do
+
+1743
+01:15:07,840 --> 01:15:11,280
+is just increase the number of examples
+
+1744
+01:15:09,440 --> 01:15:13,679
+or rollouts that you do before an update
+
+1745
+01:15:11,280 --> 01:15:15,800
+to stabilize and so I I would definitely
+
+1746
+01:15:13,679 --> 01:15:17,480
+suggest that if you're seeing any
+
+1747
+01:15:15,800 --> 01:15:18,679
+instability after doing all of the tricks
+
+1748
+01:15:17,480 --> 01:15:20,400
+that I mentioned before that you
+
+1749
+01:15:18,679 --> 01:15:23,040
+increase your batch size and often that
+
+1750
+01:15:20,400 --> 01:15:25,480 +can just resolve your problems + +1751 +01:15:23,040 --> 01:15:28,760 +um another uh + +1752 +01:15:25,480 --> 01:15:30,560 +thing that people often do is um save + +1753 +01:15:28,760 --> 01:15:32,040 +many many previous rollouts because + +1754 +01:15:30,560 --> 01:15:34,199 +generally doing rollouts is more + +1755 +01:15:32,040 --> 01:15:37,840 +expensive doing rollouts and collecting + +1756 +01:15:34,199 --> 01:15:39,560 +rewards is more expensive and so um you + +1757 +01:15:37,840 --> 01:15:42,360 +can save the roll outs that you have + +1758 +01:15:39,560 --> 01:15:43,840 +done before and uh keep them around so + +1759 +01:15:42,360 --> 01:15:46,600 +you can update parameters with larger + +1760 +01:15:43,840 --> 01:15:50,800 +batches in a more efficient + +1761 +01:15:46,600 --> 01:15:53,120 +way cool so that's all I have uh I just + +1762 +01:15:50,800 --> 01:15:54,400 +realized we're exactly at time so uh I + +1763 +01:15:53,120 --> 01:15:56,440 +should finish up here but I'll be happy + +1764 +01:15:54,400 --> 01:15:59,440 +to take any + +1765 +01:15:56,440 --> 01:15:59,440 +for + +1766 +01:16:01,679 --> 01:16:04,679 +thanks \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (12) Reinforcement Learning/transcript.vtt b/CMU Advanced NLP 2024 (12) Reinforcement Learning/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..1befa334ccf5b66e0296562a586ba19478207d07 --- /dev/null +++ b/CMU Advanced NLP 2024 (12) Reinforcement Learning/transcript.vtt @@ -0,0 +1,5299 @@ +WEBVTT + +00:00:00.840 --> 00:00:05.920 +okay so uh let's get started um today + +00:00:04.200 --> 00:00:08.000 +I'm going to be talking about learning + +00:00:05.920 --> 00:00:09.480 +from Human feedback I wrote + +00:00:08.000 --> 00:00:12.160 +reinforcement learning from Human + +00:00:09.480 --> 00:00:14.519 +feedback because that's what um you know + +00:00:12.160 --> 00:00:15.759 +a lot of people talk about nowadays but + +00:00:14.519 --> 00:00:18.880 +actually there's other methods of + +00:00:15.759 --> 00:00:21.840 +learning from Human feedback so first + +00:00:18.880 --> 00:00:24.760 +I'm going to be talking about the ways + +00:00:21.840 --> 00:00:27.920 +we can get uh human feedback for the + +00:00:24.760 --> 00:00:31.039 +generations of models and mostly focus + +00:00:27.920 --> 00:00:32.960 +on generation tasks because is um + +00:00:31.039 --> 00:00:35.800 +generation tasks are harder than like + +00:00:32.960 --> 00:00:38.559 +classification tasks that we uh we deal + +00:00:35.800 --> 00:00:40.000 +with normally so I'll spend a fair + +00:00:38.559 --> 00:00:42.239 +amount of time talking about how we do + +00:00:40.000 --> 00:00:45.760 +that and then after I talk about how we + +00:00:42.239 --> 00:00:48.360 +do that we'll move into um how we + +00:00:45.760 --> 00:00:51.160 +actually learn from that + +00:00:48.360 --> 00:00:53.399 +signal so normally what we've done up + +00:00:51.160 --> 00:00:56.399 +until this point is maximum likelihood + +00:00:53.399 --> 00:00:58.199 +training uh this is just an overview + +00:00:56.399 --> 00:00:59.559 +slide so we what we want to do is we + +00:00:58.199 --> 00:01:00.760 +want to maximize the likelihood of + +00:00:59.559 --> 00:01:03.280 +predicting the next word and the + +00:01:00.760 --> 00:01:05.960 +reference given the previous words uh + +00:01:03.280 --> 00:01:08.119 +which gives us the loss of the output + +00:01:05.960 --> 00:01:09.799 +given the input uh where you 
know the + +00:01:08.119 --> 00:01:13.960 +input can be the prompt the output can + +00:01:09.799 --> 00:01:16.080 +be the answer to uh the output but + +00:01:13.960 --> 00:01:18.360 +there's uh lots of problems with + +00:01:16.080 --> 00:01:20.439 +learning from Maximum likelihood and I'm + +00:01:18.360 --> 00:01:22.079 +going to give three examples here I + +00:01:20.439 --> 00:01:24.159 +think all of these are actually real + +00:01:22.079 --> 00:01:26.880 +problems uh that we need to be worried + +00:01:24.159 --> 00:01:30.240 +about so the first one is that some + +00:01:26.880 --> 00:01:32.439 +mistakes are worse than others so um in + +00:01:30.240 --> 00:01:33.560 +the end we want good outputs and some + +00:01:32.439 --> 00:01:36.520 +mistaken + +00:01:33.560 --> 00:01:38.200 +predictions uh can be a bigger problem + +00:01:36.520 --> 00:01:42.680 +for the output being + +00:01:38.200 --> 00:01:46.000 +good so to give an example uh let's say + +00:01:42.680 --> 00:01:47.600 +what we actually wanted from like a + +00:01:46.000 --> 00:01:49.320 +speech recognition system or a + +00:01:47.600 --> 00:01:54.040 +translation system or something like + +00:01:49.320 --> 00:01:54.040 +that is uh please send this package to + +00:01:54.280 --> 00:01:58.920 +Pittsburgh if I write please send a + +00:01:56.880 --> 00:02:01.560 +package to Pittsburgh then this is not a + +00:01:58.920 --> 00:02:03.560 +huge problem + +00:02:01.560 --> 00:02:06.479 +if I write uh please send this package + +00:02:03.560 --> 00:02:07.719 +to Tokyo then that might be a big + +00:02:06.479 --> 00:02:09.640 +problem because the package you wanted + +00:02:07.719 --> 00:02:12.760 +to come to Pittsburgh goes to Tokyo + +00:02:09.640 --> 00:02:13.680 +instead and uh you might not want that + +00:02:12.760 --> 00:02:16.080 +to + +00:02:13.680 --> 00:02:18.000 +happen you might also have it say + +00:02:16.080 --> 00:02:20.400 +bleeping send this package to Pittsburgh + +00:02:18.000 --> 00:02:22.200 +instead of pleas um and that would be a + +00:02:20.400 --> 00:02:24.200 +problem in a customer service system + +00:02:22.200 --> 00:02:28.400 +right because your customer would uh + +00:02:24.200 --> 00:02:28.400 +leave and never come back + +00:02:28.840 --> 00:02:32.040 +so + +00:02:30.360 --> 00:02:33.720 +determiner like this is not going to + +00:02:32.040 --> 00:02:35.640 +cause a huge issue U messing up other + +00:02:33.720 --> 00:02:37.519 +things is going to cause a larger + +00:02:35.640 --> 00:02:39.519 +issue but from the point of view of + +00:02:37.519 --> 00:02:42.680 +Maximum likelihood all of these are just + +00:02:39.519 --> 00:02:44.560 +tokens and messing up one token is the + +00:02:42.680 --> 00:02:47.519 +same as messing up another token so + +00:02:44.560 --> 00:02:50.040 +that's uh you know an + +00:02:47.519 --> 00:02:52.080 +issue another problem is that the gold + +00:02:50.040 --> 00:02:54.640 +standard and maximum likelihood + +00:02:52.080 --> 00:02:57.480 +estimation can be bad it can be like not + +00:02:54.640 --> 00:02:59.239 +what you want and uh corpa are full of + +00:02:57.480 --> 00:03:02.400 +outputs that we wouldn't want a language + +00:02:59.239 --> 00:03:05.400 +model producing so for example uh toxic + +00:03:02.400 --> 00:03:07.799 +comments on Reddit uh + +00:03:05.400 --> 00:03:09.959 +disinformation um another thing that a + +00:03:07.799 --> 00:03:13.000 +lot of people don't think about uh quite + +00:03:09.959 --> 00:03:15.640 +as much is a lot of the data online 
is
+
+00:03:13.000 --> 00:03:17.680
+uh from is automatically generated
+
+00:03:15.640 --> 00:03:19.720
+nowadays for example from machine
+
+00:03:17.680 --> 00:03:24.080
+translation a lot of the translations
+
+00:03:19.720 --> 00:03:25.720
+online are from uh 2016 Google translate
+
+00:03:24.080 --> 00:03:27.560
+uh when Google translate was a lot less
+
+00:03:25.720 --> 00:03:29.120
+good than it is now and so you have like
+
+00:03:27.560 --> 00:03:31.760
+poor quality translations that were
+
+00:03:29.120 --> 00:03:31.760
+automatically generated
+
+00:03:33.040 --> 00:03:37.959
+a final problem is uh something that's
+
+00:03:35.280 --> 00:03:40.360
+called exposure bias and exposure bias
+
+00:03:37.959 --> 00:03:44.000
+basically what it means is MLE training
+
+00:03:40.360 --> 00:03:46.000
+doesn't consider um the necessity
+
+00:03:44.000 --> 00:03:48.599
+for generation and it relies
+
+00:03:46.000 --> 00:03:51.360
+on gold standard context so if we go
+
+00:03:48.599 --> 00:03:54.159
+back to the MLE equation when we're
+
+00:03:51.360 --> 00:03:57.200
+calculating MLE this y less than t is
+
+00:03:54.159 --> 00:03:59.200
+always correct it's always a good output
+
+00:03:57.200 --> 00:04:01.439
+and so what the model does is it learns
+
+00:03:59.200 --> 00:04:04.280
+to over-rely on good
+
+00:04:01.439 --> 00:04:06.079
+outputs and one example of a problem
+
+00:04:04.280 --> 00:04:08.360
+that this causes is models tend to
+
+00:04:06.079 --> 00:04:10.560
+repeat themselves over and over again
+
+00:04:08.360 --> 00:04:12.319
+for example um when you use some
+
+00:04:10.560 --> 00:04:15.079
+generation algorithms and the reason why
+
+00:04:12.319 --> 00:04:18.519
+this happens is because in a gold
+
+00:04:15.079 --> 00:04:22.079
+standard output if a word has appeared
+
+00:04:18.519 --> 00:04:25.840
+previously that word is more likely to
+
+00:04:22.079 --> 00:04:28.560
+happen next so like if you say um like I
+
+00:04:25.840 --> 00:04:29.759
+am going um I am going to Pittsburgh
+
+00:04:28.560 --> 00:04:31.880
+you're much more likely to say
+
+00:04:29.759 --> 00:04:33.000
+Pittsburgh again in the future because
+
+00:04:31.880 --> 00:04:35.720
+you're talking about Pittsburgh
+
+00:04:33.000 --> 00:04:37.400
+topically it's coherent so what you get is
+
+00:04:35.720 --> 00:04:38.639
+you get MLE-trained models saying I'm
+
+00:04:37.400 --> 00:04:40.160
+going to Pittsburgh I am going to
+
+00:04:38.639 --> 00:04:41.680
+Pittsburgh I am going to Pittsburgh I am
+
+00:04:40.160 --> 00:04:45.280
+going to Pittsburgh you've probably seen
+
+00:04:41.680 --> 00:04:47.320
+this before uh at some point and so um
+
+00:04:45.280 --> 00:04:49.320
+exposure bias is basically that the
+
+00:04:47.320 --> 00:04:51.039
+model has never been exposed to mistakes
+
+00:04:49.320 --> 00:04:55.240
+in the past and so it can't deal with
+
+00:04:51.039 --> 00:04:56.840
+them so what this does is um if you have
+
+00:04:55.240 --> 00:04:58.560
+an alternative training algorithm you
+
+00:04:56.840 --> 00:05:02.120
+can fix this by generating a whole bunch
+
+00:04:58.560 --> 00:05:04.880
+of outputs uh and like scoring some of
+
+00:05:02.120 --> 00:05:06.880
+them poorly and penalizing the model for
+
+00:05:04.880 --> 00:05:09.960
+uh generating poor outputs and so that can
+
+00:05:06.880 --> 00:05:09.960
+fix these problems as well
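+

As a schematic contrast between the two training signals discussed above (an illustration only, not the exact algorithms from the lecture; the `reward` and `baseline` arguments are assumptions):

```python
import torch

def mle_loss(gold_token_logprobs: torch.Tensor) -> torch.Tensor:
    # Maximum likelihood: every gold-standard token counts equally, so
    # getting "this" wrong costs as much as getting "Pittsburgh" wrong.
    return -gold_token_logprobs.sum()

def reward_weighted_loss(sampled_token_logprobs: torch.Tensor,
                         reward: float, baseline: float = 0.0) -> torch.Tensor:
    # The model scores its own sampled output, so it is exposed to its
    # own mistakes; outputs scored below the baseline are pushed down.
    return -(reward - baseline) * sampled_token_logprobs.sum()
```

+00:05:10.800 --> 00:05:18.440
+uh any questions about this all
+
+00:05:15.199 --> 00:05:20.800
+good Okay cool so now I'd like to get
+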
+00:05:18.440 --> 00:05:23.919 +into how we measure how good an output + +00:05:20.800 --> 00:05:26.360 +is and there's different ways of doing + +00:05:23.919 --> 00:05:30.319 +this um the first one is objective + +00:05:26.360 --> 00:05:32.680 +assessment so for some uh tasks or for + +00:05:30.319 --> 00:05:35.400 +many tasks there's kind of objectively a + +00:05:32.680 --> 00:05:37.280 +correct answer there's also human + +00:05:35.400 --> 00:05:40.360 +subjective annotations so you can ask + +00:05:37.280 --> 00:05:42.919 +humans to do annotation for you there's + +00:05:40.360 --> 00:05:45.400 +machine prediction of human + +00:05:42.919 --> 00:05:48.319 +preferences and there's also use in + +00:05:45.400 --> 00:05:50.840 +another system in a downstream + +00:05:48.319 --> 00:05:52.960 +task so the way objective assessment + +00:05:50.840 --> 00:05:54.919 +works is you have an annotated correct + +00:05:52.960 --> 00:05:57.080 +answer in match against this so like if + +00:05:54.919 --> 00:06:00.600 +you're solving math problems uh + +00:05:57.080 --> 00:06:02.560 +answering objective questions and and + +00:06:00.600 --> 00:06:04.280 +you know you can pick any arbitrary + +00:06:02.560 --> 00:06:06.840 +example you can pick your classification + +00:06:04.280 --> 00:06:09.800 +example from uh like your text + +00:06:06.840 --> 00:06:11.880 +classification tasks an even clearer + +00:06:09.800 --> 00:06:13.880 +example is if you have math problems + +00:06:11.880 --> 00:06:15.639 +there's kind of objectively one answer + +00:06:13.880 --> 00:06:18.080 +to any math problem and there's no other + +00:06:15.639 --> 00:06:19.680 +answer that could be correct so this + +00:06:18.080 --> 00:06:21.160 +makes your life easy if you're handling + +00:06:19.680 --> 00:06:22.560 +this type of problem but of course + +00:06:21.160 --> 00:06:24.120 +there's many other types of problems we + +00:06:22.560 --> 00:06:26.039 +want to handle that don't have objective + +00:06:24.120 --> 00:06:29.039 +answers like + +00:06:26.039 --> 00:06:31.440 +this so let's say we're handling a gener + +00:06:29.039 --> 00:06:34.680 +a generation task where we don't have an + +00:06:31.440 --> 00:06:36.360 +objective answer um in this Cas kind of + +00:06:34.680 --> 00:06:39.440 +one of our gold standards is human + +00:06:36.360 --> 00:06:42.360 +evaluation so we might have a source + +00:06:39.440 --> 00:06:44.919 +input like a prompt or an input text for + +00:06:42.360 --> 00:06:47.240 +machine translation we have one or + +00:06:44.919 --> 00:06:49.960 +several hypotheses and we ask a human + +00:06:47.240 --> 00:06:53.280 +annotator to basically give uh a score + +00:06:49.960 --> 00:06:55.759 +for them or do some sort of other + +00:06:53.280 --> 00:06:59.759 +annotation and the different varieties + +00:06:55.759 --> 00:07:03.080 +of annotation that we can give are um + +00:06:59.759 --> 00:07:04.599 +something called direct assessment so uh + +00:07:03.080 --> 00:07:06.599 +direct assessment is a term that comes + +00:07:04.599 --> 00:07:09.280 +from machine translation uh so you might + +00:07:06.599 --> 00:07:11.039 +not see it used uh lots of other places + +00:07:09.280 --> 00:07:13.120 +but it's basically just give a score + +00:07:11.039 --> 00:07:15.759 +directly to how good the output is so + +00:07:13.120 --> 00:07:17.199 +you can say like if you say please send + +00:07:15.759 --> 00:07:18.960 +this translation is please send this + +00:07:17.199 --> 00:07:21.759 +package to Tokyo we give it a 
score of + +00:07:18.960 --> 00:07:24.360 +two out of 10 or something like + +00:07:21.759 --> 00:07:28.000 +this + +00:07:24.360 --> 00:07:30.840 +so the the question here is like what + +00:07:28.000 --> 00:07:32.400 +does like let's say I gave a score of + +00:07:30.840 --> 00:07:34.520 +two out of 10 for please send this + +00:07:32.400 --> 00:07:37.680 +package to Tokyo what score should I + +00:07:34.520 --> 00:07:40.240 +give for please send a package to Tokyo + +00:07:37.680 --> 00:07:42.360 +anyone have any ideas the the correct + +00:07:40.240 --> 00:07:46.520 +answer is please send this package to + +00:07:42.360 --> 00:07:48.000 +take out of eight out of 10 yeah but you + +00:07:46.520 --> 00:07:50.440 +might disagree on that right it's kind + +00:07:48.000 --> 00:07:52.159 +of like subjective um one of the + +00:07:50.440 --> 00:07:54.039 +difficulties of direct assessment is + +00:07:52.159 --> 00:07:55.520 +giving a number like this is pretty + +00:07:54.039 --> 00:07:57.800 +difficult if you don't have a very clear + +00:07:55.520 --> 00:07:59.720 +rubric and very skilled annotators and + +00:07:57.800 --> 00:08:02.879 +it's hard to get consistency between + +00:07:59.720 --> 00:08:04.400 +people when you do this so the advantage + +00:08:02.879 --> 00:08:05.599 +is it kind of gives you an idea of how + +00:08:04.400 --> 00:08:07.520 +good things are overall but the + +00:08:05.599 --> 00:08:09.280 +disadvantage is it's more difficult to + +00:08:07.520 --> 00:08:11.319 +annotate and get + +00:08:09.280 --> 00:08:13.159 +consistency um another thing that I + +00:08:11.319 --> 00:08:15.319 +should point out is often scores are + +00:08:13.159 --> 00:08:18.680 +assigned separately based on desirable + +00:08:15.319 --> 00:08:20.960 +traits so um we don't necessarily just + +00:08:18.680 --> 00:08:23.479 +say how good is it we say how fluent is + +00:08:20.960 --> 00:08:26.120 +it like is it fluent uh + +00:08:23.479 --> 00:08:28.159 +English in Translation there's a concept + +00:08:26.120 --> 00:08:30.720 +called adequacy which is how well does + +00:08:28.159 --> 00:08:34.599 +the output reflect the input + +00:08:30.720 --> 00:08:36.519 +semantics um and if you're assessing + +00:08:34.599 --> 00:08:38.440 +translation systems actually it's common + +00:08:36.519 --> 00:08:40.519 +to assess fluency without even looking + +00:08:38.440 --> 00:08:43.200 +at the input because then you can just + +00:08:40.519 --> 00:08:44.880 +say how fluent is it but for adequacy + +00:08:43.200 --> 00:08:46.320 +you definitely need to understand the + +00:08:44.880 --> 00:08:49.600 +input so you need to be a bilingual + +00:08:46.320 --> 00:08:54.680 +speaker to be able to assess + +00:08:49.600 --> 00:08:57.560 +that um factuality um and so factuality + +00:08:54.680 --> 00:09:00.160 +is tricky um it can either be factuality + +00:08:57.560 --> 00:09:03.880 +grounded in a particular input text in + +00:09:00.160 --> 00:09:05.600 +which case um the facts would have to be + +00:09:03.880 --> 00:09:07.680 +you know things that were said in the + +00:09:05.600 --> 00:09:09.399 +input or it can be just kind of is the + +00:09:07.680 --> 00:09:11.120 +statement factual in general in which + +00:09:09.399 --> 00:09:13.720 +case you need to go online you need to + +00:09:11.120 --> 00:09:16.480 +search for things and like uh check + +00:09:13.720 --> 00:09:18.480 +whether the statement is factual or not + +00:09:16.480 --> 00:09:20.480 +um other things are like coherence does + +00:09:18.480 --> 
00:09:21.480 +the output fit coherently within the + +00:09:20.480 --> 00:09:23.680 +larger + +00:09:21.480 --> 00:09:25.680 +discs um and there's many many other + +00:09:23.680 --> 00:09:28.120 +ones of these this is also task + +00:09:25.680 --> 00:09:29.760 +dependent so like the things you will + +00:09:28.120 --> 00:09:31.000 +evaluate for machine transl are + +00:09:29.760 --> 00:09:32.880 +different than the ones you would do for + +00:09:31.000 --> 00:09:35.760 +dialog which are different than the ones + +00:09:32.880 --> 00:09:38.200 +you would do for a general purpose + +00:09:35.760 --> 00:09:41.279 +chatot uh which is different kind things + +00:09:38.200 --> 00:09:44.120 +you would do for um summarization for + +00:09:41.279 --> 00:09:46.320 +example so if you're interested in doing + +00:09:44.120 --> 00:09:47.519 +something like this uh then I definitely + +00:09:46.320 --> 00:09:48.800 +encourage you to look at what other + +00:09:47.519 --> 00:09:51.399 +people have done for the tasks you're + +00:09:48.800 --> 00:09:53.079 +interested in uh previously and uh find + +00:09:51.399 --> 00:09:54.880 +out the different types of traits that + +00:09:53.079 --> 00:09:58.320 +did + +00:09:54.880 --> 00:10:00.760 +last uh any any questions about this + +00:09:58.320 --> 00:10:03.079 +also + +00:10:00.760 --> 00:10:06.920 +okay the next type of feedback is + +00:10:03.079 --> 00:10:09.839 +preference ratings um and so this is uh + +00:10:06.920 --> 00:10:12.600 +basically what you do is you have two or + +00:10:09.839 --> 00:10:14.240 +more outputs from different models or + +00:10:12.600 --> 00:10:16.440 +different Generations from an individual + +00:10:14.240 --> 00:10:18.839 +model and you ask a human which one is + +00:10:16.440 --> 00:10:22.320 +better like is one better than the other + +00:10:18.839 --> 00:10:23.839 +or are they tied and so in this case um + +00:10:22.320 --> 00:10:26.320 +you might have please send this package + +00:10:23.839 --> 00:10:28.880 +to Tokyo please send a package to + +00:10:26.320 --> 00:10:31.040 +Tokyo we might disagree on how like good + +00:10:28.880 --> 00:10:33.959 +or bad each of them are but I think most + +00:10:31.040 --> 00:10:35.959 +people would agree that this one is like + +00:10:33.959 --> 00:10:37.480 +despite the fact that it got this wrong + +00:10:35.959 --> 00:10:40.160 +the second one is better than the first + +00:10:37.480 --> 00:10:42.240 +one so this is a little bit of an easier + +00:10:40.160 --> 00:10:45.040 +task it's easier to uh get people to + +00:10:42.240 --> 00:10:46.839 +annotate these things + +00:10:45.040 --> 00:10:50.519 +consistently however it has the + +00:10:46.839 --> 00:10:52.839 +disadvantage that you can't really tell + +00:10:50.519 --> 00:10:55.360 +uh whether systems are really good or + +00:10:52.839 --> 00:10:57.200 +really bad so let's say you have a bunch + +00:10:55.360 --> 00:11:00.279 +of really bad systems that you're + +00:10:57.200 --> 00:11:01.839 +comparing with each other um you might + +00:11:00.279 --> 00:11:03.680 +find that one is better than the other + +00:11:01.839 --> 00:11:06.000 +but that still doesn't mean it's ready + +00:11:03.680 --> 00:11:07.399 +to be deployed or if you have a bunch of + +00:11:06.000 --> 00:11:11.040 +really good systems they're all + +00:11:07.399 --> 00:11:13.000 +basically you know very very similar to + +00:11:11.040 --> 00:11:14.399 +another but one is like slightly more + +00:11:13.000 --> 00:11:18.639 +fluent than the other you might 
still
+
+00:11:14.399 --> 00:11:20.680
+get a similar result um and so that also
+
+00:11:18.639 --> 00:11:22.760
+makes it uh you know a little bit
+
+00:11:20.680 --> 00:11:24.880
+difficult to use practically in some
+
+00:11:22.760 --> 00:11:27.040
+ways I didn't put it on the slide but
+
+00:11:24.880 --> 00:11:30.680
+there's another way you can kind of get
+
+00:11:27.040 --> 00:11:33.920
+the best of both worlds um which is a
+
+00:11:30.680 --> 00:11:35.560
+side-by-side assessment and side-by-side
+
+00:11:33.920 --> 00:11:38.440
+assessment basically what you would do
+
+00:11:35.560 --> 00:11:40.560
+is you would say um please send this
+
+00:11:38.440 --> 00:11:43.399
+package to Tokyo please send a package
+
+00:11:40.560 --> 00:11:47.279
+to Pittsburgh give each of them a direct
+
+00:11:43.399 --> 00:11:48.839
+score um but you can use decimal places
+
+00:11:47.279 --> 00:11:51.120
+and you can't use the same score for all
+
+00:11:48.839 --> 00:11:55.920
+of them and so it's
+
+00:11:51.120 --> 00:11:57.480
+like 5.00 and 4.99 out of five or
+
+00:11:55.920 --> 00:11:59.519
+something like that like you like one
+
+00:11:57.480 --> 00:12:02.639
+slightly better than the other or or
+
+00:11:59.519 --> 00:12:04.480
+something like that um so there are ways
+
+00:12:02.639 --> 00:12:07.240
+to kind of get the best of both worlds if
+
+00:12:04.480 --> 00:12:11.720
+you're interested in doing
+
+00:12:07.240 --> 00:12:11.720
+that um
+
+00:12:14.920 --> 00:12:20.519
+so one problem one other problem with
+
+00:12:18.279 --> 00:12:22.519
+preference rankings is that there's a
+
+00:12:20.519 --> 00:12:24.440
+limited number of things that humans can
+
+00:12:22.519 --> 00:12:28.160
+compare before they get really
+
+00:12:24.440 --> 00:12:32.360
+overwhelmed so if you say I
+
+00:12:28.160 --> 00:12:35.560
+want like I want to
+
+00:12:32.360 --> 00:12:36.920
+rate 15 systems or 20 systems with
+
+00:12:35.560 --> 00:12:39.120
+respect to how good they are with
+
+00:12:36.920 --> 00:12:40.639
+respect to each other it's going to be
+
+00:12:39.120 --> 00:12:43.680
+impossible for humans to come up with a
+
+00:12:40.639 --> 00:12:46.959
+good preference ranking between them and
+
+00:12:43.680 --> 00:12:49.480
+so the typical way around this um which
+
+00:12:46.959 --> 00:12:52.360
+is also used in uh things like the
+
+00:12:49.480 --> 00:12:55.440
+Chatbot Arena by LMSYS and other things
+
+00:12:52.360 --> 00:12:58.720
+like this is to use uh something like an
+
+00:12:55.440 --> 00:13:00.959
+Elo or TrueSkill ranking and what these
+
+00:12:58.720 --> 00:13:03.079
+are is these are things that were
+
+00:13:00.959 --> 00:13:05.760
+created for the ranking of like chess
+
+00:13:03.079 --> 00:13:09.160
+players or video game players or other
+
+00:13:05.760 --> 00:13:11.720
+things where they like battle against
+
+00:13:09.160 --> 00:13:13.920
+each other in multiple matches uh
+
+00:13:11.720 --> 00:13:16.440
+pair-wise and then you put all of the
+
+00:13:13.920 --> 00:13:18.399
+wins and losses into these ranking
+
+00:13:16.440 --> 00:13:20.600
+algorithms and they give you a score
+
+00:13:18.399 --> 00:13:22.920
+about how good like each of the each of
+
+00:13:20.600 --> 00:13:27.079
+the players are
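+

A sketch of the chess-style Elo update that such leaderboards build on (TrueSkill is more involved); the K-factor of 32 is a conventional default, not something specified in the lecture:

```python
def elo_update(rating_a: float, rating_b: float, score_a: float,
               k: float = 32.0) -> tuple[float, float]:
    """One rating update after a pairwise battle between systems A and B.

    score_a is 1.0 if A was preferred, 0.0 if B was, and 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
    rating_a += k * (score_a - expected_a)
    rating_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return rating_a, rating_b
```

+00:13:22.920 --> 00:13:29.480
+so if you do something like this you can um get basically a
+
+00:13:27.079 --> 00:13:32.120
+ranking of systems despite the fact that you
+
+00:13:29.480 --> 00:13:35.240
+only did pairwise assessments so these
+
+00:13:32.120 --> 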
00:13:35.240 +are also a good thing to know + +00:13:37.399 --> 00:13:43.839 +about a final variety of human feedback + +00:13:40.600 --> 00:13:45.320 +uh that we create is uh air annotation + +00:13:43.839 --> 00:13:47.519 +and this can be useful for a number of + +00:13:45.320 --> 00:13:49.839 +reasons um but basically the way it + +00:13:47.519 --> 00:13:53.839 +works is you annotate individual errors + +00:13:49.839 --> 00:13:55.639 +within the outputs and um oh one thing I + +00:13:53.839 --> 00:13:58.120 +should mention is that um I'm giving a + +00:13:55.639 --> 00:14:00.880 +lot of examples from machine translation + +00:13:58.120 --> 00:14:02.800 +um I feel like machine translation has + +00:14:00.880 --> 00:14:04.519 +been doing evaluation of generated + +00:14:02.800 --> 00:14:07.600 +outputs for a lot longer than a lot of + +00:14:04.519 --> 00:14:09.000 +other uh fields of NLP have and + +00:14:07.600 --> 00:14:11.800 +therefore their methodology is more + +00:14:09.000 --> 00:14:13.480 +developed than a lot of other fields um + +00:14:11.800 --> 00:14:16.199 +but a lot of these things can also be + +00:14:13.480 --> 00:14:18.079 +applied to uh other uh other tasks as + +00:14:16.199 --> 00:14:19.079 +well but anyway getting back to this + +00:14:18.079 --> 00:14:20.680 +there's something for machine + +00:14:19.079 --> 00:14:23.639 +translation called multi-dimensional + +00:14:20.680 --> 00:14:26.240 +quality metrics and the multidimensional + +00:14:23.639 --> 00:14:29.160 +quality metrics basically what they do + +00:14:26.240 --> 00:14:32.199 +is they annotate spans in the output + +00:14:29.160 --> 00:14:34.800 +where each Span in the output is given a + +00:14:32.199 --> 00:14:38.079 +severity ranking of the error and it's + +00:14:34.800 --> 00:14:40.199 +given a type of the error and there's + +00:14:38.079 --> 00:14:42.600 +about eight different types of Errors + +00:14:40.199 --> 00:14:44.839 +like this doesn't violate or this + +00:14:42.600 --> 00:14:47.399 +violates linguistic conventions of using + +00:14:44.839 --> 00:14:49.880 +the word this instead of uh here by + +00:14:47.399 --> 00:14:51.639 +using the word uh instead of this here + +00:14:49.880 --> 00:14:55.079 +and then this is an accuracy error + +00:14:51.639 --> 00:14:57.839 +because it's not accurately con uh uh + +00:14:55.079 --> 00:15:01.720 +conveying the output and then this error + +00:14:57.839 --> 00:15:04.600 +is minor uh this error is Major um and + +00:15:01.720 --> 00:15:06.399 +then there's also like severe severe + +00:15:04.600 --> 00:15:07.440 +versus major but minor and major is a + +00:15:06.399 --> 00:15:09.680 +more important + +00:15:07.440 --> 00:15:11.839 +distinction um so the advantage of this + +00:15:09.680 --> 00:15:14.279 +is a couple fold number one it gives you + +00:15:11.839 --> 00:15:16.440 +more fine grained feedback uh in that + +00:15:14.279 --> 00:15:19.199 +you can say okay this system has a lot + +00:15:16.440 --> 00:15:22.199 +of uh accuracy errors this system has a + +00:15:19.199 --> 00:15:24.880 +lot of linguistic conventions errors um + +00:15:22.199 --> 00:15:28.600 +it also can be more consistent because + +00:15:24.880 --> 00:15:29.839 +if you just say to people which output + +00:15:28.600 --> 00:15:31.800 +is better + +00:15:29.839 --> 00:15:34.560 +or what is the score of this output + +00:15:31.800 --> 00:15:36.360 +people have trouble deciding about that + +00:15:34.560 --> 00:15:39.560 +because it's a more subjective + +00:15:36.360 --> 
00:15:41.680 +evaluation but if I say is this word + +00:15:39.560 --> 00:15:43.000 +correct it's a little bit easier for + +00:15:41.680 --> 00:15:44.759 +people to do so you can get more + +00:15:43.000 --> 00:15:46.920 +consistent annotations + +00:15:44.759 --> 00:15:49.720 +here the problem with this is this can + +00:15:46.920 --> 00:15:50.839 +be very time consuming so um you know + +00:15:49.720 --> 00:15:52.480 +obviously you need to go through and + +00:15:50.839 --> 00:15:56.440 +annotate every single error if it's for + +00:15:52.480 --> 00:15:56.440 +a long outputs or something your + +00:15:56.959 --> 00:16:03.519 +problem so anyway these are just three + +00:15:59.800 --> 00:16:05.680 +uh ways of collecting human feedback um + +00:16:03.519 --> 00:16:08.639 +and then there's an alternative which is + +00:16:05.680 --> 00:16:10.079 +automatic evaluation of outputs and um + +00:16:08.639 --> 00:16:14.399 +there's a bunch of different ways we can + +00:16:10.079 --> 00:16:16.800 +do this the basic idea here is we have a + +00:16:14.399 --> 00:16:20.199 +source um we have a couple + +00:16:16.800 --> 00:16:22.800 +hypotheses and uh we have an automatic + +00:16:20.199 --> 00:16:26.000 +system that generates outputs uh like + +00:16:22.800 --> 00:16:28.279 +scores and we optionally have a + +00:16:26.000 --> 00:16:30.839 +reference output so the reference output + +00:16:28.279 --> 00:16:33.519 +is a human created gold standard output + +00:16:30.839 --> 00:16:35.120 +with respect to how good that um uh with + +00:16:33.519 --> 00:16:38.240 +respect to like what the output should + +00:16:35.120 --> 00:16:38.240 +be in an ideal + +00:16:38.279 --> 00:16:47.079 +case and basically the goal of automatic + +00:16:43.199 --> 00:16:50.199 +evaluation is to + +00:16:47.079 --> 00:16:52.839 +predict human preferences or to predict + +00:16:50.199 --> 00:16:56.240 +what the human scores would be um + +00:16:52.839 --> 00:16:58.600 +because still at this point um we mostly + +00:16:56.240 --> 00:16:59.480 +view what humans think of the output to + +00:16:58.600 --> 00:17:01.680 +be + +00:16:59.480 --> 00:17:03.280 +uh kind of the + +00:17:01.680 --> 00:17:06.199 +standard + +00:17:03.280 --> 00:17:08.439 +and this is called a variety of things + +00:17:06.199 --> 00:17:10.600 +depending on what field you're in um in + +00:17:08.439 --> 00:17:12.559 +machine translation and summarization + +00:17:10.600 --> 00:17:13.520 +it's called automatic evaluation also a + +00:17:12.559 --> 00:17:16.520 +lot in + +00:17:13.520 --> 00:17:18.400 +dialogue um if you're talking about + +00:17:16.520 --> 00:17:21.000 +people from reinforcement learning or + +00:17:18.400 --> 00:17:24.600 +other things um or chat Bots or things + +00:17:21.000 --> 00:17:28.240 +like that uh a lot of people or uh like + +00:17:24.600 --> 00:17:31.280 +AGI or whatever um a lot of people call + +00:17:28.240 --> 00:17:32.520 +it uh word model um because that + +00:17:31.280 --> 00:17:34.480 +specifically comes from the point of + +00:17:32.520 --> 00:17:36.440 +view of like learning from this feedback + +00:17:34.480 --> 00:17:37.960 +but essentially they're the same thing + +00:17:36.440 --> 00:17:41.080 +uh from my point of view they're trying + +00:17:37.960 --> 00:17:42.520 +to predict how good an output is and how + +00:17:41.080 --> 00:17:44.240 +much you should reward the model for + +00:17:42.520 --> 00:17:46.559 +producing that + +00:17:44.240 --> 00:17:48.679 +output + +00:17:46.559 --> 00:17:50.520 +um so there's a 
bunch of different
+
+00:17:48.679 --> 00:17:51.720
+methods to do this I'm not going to
+
+00:17:50.520 --> 00:17:53.799
+cover all of them I'm just going to
+
+00:17:51.720 --> 00:17:55.240
+cover three paradigms for doing this so
+
+00:17:53.799 --> 00:17:57.880
+you know where to look further if you're
+
+00:17:55.240 --> 00:18:00.039
+interested in doing these things um the
+
+00:17:57.880 --> 00:18:02.400
+first one is embedding based
+
+00:18:00.039 --> 00:18:04.679
+evaluation and the way embedding based
+
+00:18:02.400 --> 00:18:06.600
+evaluation works is usually it's
+
+00:18:04.679 --> 00:18:11.400
+unsupervised calculation based on
+
+00:18:06.600 --> 00:18:14.880
+embedding similarity between um
+
+00:18:11.400 --> 00:18:18.080
+the output that the model generated and
+
+00:18:14.880 --> 00:18:20.840
+a reference output that uh you have
+
+00:18:18.080 --> 00:18:23.400
+created so sorry this is very small but
+
+00:18:20.840 --> 00:18:25.559
+we have a reference here that says the
+
+00:18:23.400 --> 00:18:27.640
+weather is cold today and we have a
+
+00:18:25.559 --> 00:18:30.240
+candidate that says it is freezing today
+
+00:18:27.640 --> 00:18:33.000
+so this is probably you know like a good
+
+00:18:30.240 --> 00:18:35.480
+um a reasonably good
+
+00:18:33.000 --> 00:18:37.640
+output and we run this through some
+
+00:18:35.480 --> 00:18:39.120
+embedding model uh this is called BERT-
+
+00:18:37.640 --> 00:18:40.679
+Score and so of course you can run it
+
+00:18:39.120 --> 00:18:42.240
+through BERT but basically it can be any
+
+00:18:40.679 --> 00:18:43.799
+embedding model that gives you an embedding
+
+00:18:42.240 --> 00:18:46.200
+for each token in the
+
+00:18:43.799 --> 00:18:47.640
+sequence and so there are five tokens in
+
+00:18:46.200 --> 00:18:49.720
+this sequence four tokens in this
+
+00:18:47.640 --> 00:18:51.960
+sequence you get five tokens and then
+
+00:18:49.720 --> 00:18:54.799
+four sorry five embeddings and then four
+
+00:18:51.960 --> 00:18:57.400
+embeddings you calculate pairwise cosine
+
+00:18:54.799 --> 00:18:59.880
+similarity between all of them and this
+
+00:18:57.400 --> 00:19:03.480
+gives you a cosine
+
+00:18:59.880 --> 00:19:06.480
+similarity matrix and then you take the
+
+00:19:03.480 --> 00:19:09.120
+argmax or you take the maximum
+
+00:19:06.480 --> 00:19:11.280
+similarity along either the
+
+00:19:09.120 --> 00:19:15.799
+rows or the
+
+00:19:11.280 --> 00:19:19.559
+columns and here the rows correspond
+
+00:19:15.799 --> 00:19:22.400
+to tokens in the reference and because
+
+00:19:19.559 --> 00:19:24.039
+the rows correspond to tokens in the
+
+00:19:22.400 --> 00:19:26.960
+reference
+
+00:19:24.039 --> 00:19:28.320
+how well you find something that is
+
+00:19:26.960 --> 00:19:31.679
+similar to each of the tokens in the
+
+00:19:28.320 --> 00:19:34.000
+reference is like a recall based method
+
+00:19:31.679 --> 00:19:35.919
+because it's saying how many tokens in
+
+00:19:34.000 --> 00:19:39.520
+the reference have a good match in the
+
+00:19:35.919 --> 00:19:41.120
+output and then if you look at the
+
+00:19:39.520 --> 00:19:42.799
+columns if you look at the max in the
+
+00:19:41.120 --> 00:19:44.960
+columns this is like a precision based
+
+00:19:42.799 --> 00:19:47.000
+metric because it's saying how many of
+
+00:19:44.960 --> 00:19:49.360
+the things in the output
+
+00:19:47.000 --> 00:19:51.240
+have a similar match in the reference so
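+

A rough sketch of the computation just described, with one embedding per token from any encoder; the real BERTScore adds refinements such as the tf-idf weighting mentioned below:

```python
import torch
import torch.nn.functional as F

def bertscore_f1(ref_emb: torch.Tensor, cand_emb: torch.Tensor) -> torch.Tensor:
    # ref_emb: (n_ref, d) reference token embeddings (rows of the matrix);
    # cand_emb: (n_cand, d) candidate token embeddings (columns).
    sim = F.normalize(ref_emb, dim=-1) @ F.normalize(cand_emb, dim=-1).T
    recall = sim.max(dim=1).values.mean()     # best match per reference token
    precision = sim.max(dim=0).values.mean()  # best match per candidate token
    return 2 * precision * recall / (precision + recall)
```

+00:19:49.360 --> 00:19:54.480
+basically you can 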
calculate recall and + +00:19:51.240 --> 00:19:56.200 +precision over all of the tokens and + +00:19:54.480 --> 00:20:00.200 +then feed this into something that looks + +00:19:56.200 --> 00:20:02.400 +like fmeasure and you can also use tfidf + +00:20:00.200 --> 00:20:06.000 +waiting um like what I talked about in + +00:20:02.400 --> 00:20:07.799 +the rag lecture uh to upweight low + +00:20:06.000 --> 00:20:09.520 +frequency words because low frequency + +00:20:07.799 --> 00:20:11.440 +words tend to be more content words and + +00:20:09.520 --> 00:20:13.120 +going back to my example you know if you + +00:20:11.440 --> 00:20:14.280 +make a mistake from Pittsburgh to Tokyo + +00:20:13.120 --> 00:20:17.880 +that's going to be more painful than + +00:20:14.280 --> 00:20:21.000 +making a mistake from this to um so + +00:20:17.880 --> 00:20:22.520 +actually if you'll uh if you were paying + +00:20:21.000 --> 00:20:25.480 +close attention to the rag lecture this + +00:20:22.520 --> 00:20:27.360 +looks really similar to the co bear um + +00:20:25.480 --> 00:20:29.559 +the co bear retrieval objective that I + +00:20:27.360 --> 00:20:30.960 +talked about in the r lecture um I don't + +00:20:29.559 --> 00:20:32.840 +think it's a coincidence they both came + +00:20:30.960 --> 00:20:34.360 +out around the same time uh so people + +00:20:32.840 --> 00:20:36.360 +were thinking about the same thing but + +00:20:34.360 --> 00:20:37.600 +um this is one method that's pretty + +00:20:36.360 --> 00:20:40.200 +widely + +00:20:37.600 --> 00:20:43.480 +use the bird Square code base is also + +00:20:40.200 --> 00:20:45.440 +really nice and easy to use so um if uh + +00:20:43.480 --> 00:20:47.640 +you want to try it out feel free to take + +00:20:45.440 --> 00:20:47.640 +a + +00:20:48.159 --> 00:20:53.840 +look cool um the next one I'd like to + +00:20:51.600 --> 00:20:56.080 +talk about is a regression based + +00:20:53.840 --> 00:20:58.760 +evaluation and the way this works is + +00:20:56.080 --> 00:21:02.600 +this is usually used in a supervised uh + +00:20:58.760 --> 00:21:04.320 +setting so uh the way what you have to + +00:21:02.600 --> 00:21:07.600 +do is you have to calculate a whole + +00:21:04.320 --> 00:21:09.799 +bunch of like actual human + +00:21:07.600 --> 00:21:12.440 +judgments and + +00:21:09.799 --> 00:21:15.000 +usually these judgments can either be + +00:21:12.440 --> 00:21:16.960 +direct assessment uh where you actually + +00:21:15.000 --> 00:21:19.120 +have a score or they can be pairwise + +00:21:16.960 --> 00:21:20.840 +judgments and then if you have direct + +00:21:19.120 --> 00:21:23.640 +assessment you use a regression based + +00:21:20.840 --> 00:21:26.039 +loss like uh minimum squared error if + +00:21:23.640 --> 00:21:27.520 +you have pairwise uh you use a ranking + +00:21:26.039 --> 00:21:29.039 +based loss that tries to upweight the + +00:21:27.520 --> 00:21:31.360 +ones that are higher scoring downward + +00:21:29.039 --> 00:21:33.200 +the ones that are lower scoring one + +00:21:31.360 --> 00:21:35.720 +typical example of this is Comet which + +00:21:33.200 --> 00:21:37.200 +is or has been at least for a very long + +00:21:35.720 --> 00:21:39.880 +time the state-of-the art and machine + +00:21:37.200 --> 00:21:41.279 +translation evaluation and the reason + +00:21:39.880 --> 00:21:43.440 +why it works so well is because we have + +00:21:41.279 --> 00:21:44.720 +a bunch of evaluations for machine + +00:21:43.440 --> 00:21:46.080 +translation they've been doing + +00:21:44.720 --> 
00:21:47.600
+evaluation of machine translation
+
+00:21:46.080 --> 00:21:50.480
+systems for years and you can use that
+
+00:21:47.600 --> 00:21:52.720
+as lots of supervised training data so
+
+00:21:50.480 --> 00:21:54.640
+basically you just take um these
+
+00:21:52.720 --> 00:21:56.440
+evaluation data you have human
+
+00:21:54.640 --> 00:21:59.080
+annotations you have the output
+
+00:21:56.440 --> 00:22:00.320
+according to a model like COMET um you
+
+00:21:59.080 --> 00:22:02.679
+calculate the difference between them
+
+00:22:00.320 --> 00:22:05.640
+and you update the model
+
+00:22:02.679 --> 00:22:07.080
+parameters um this is great
+
+00:22:05.640 --> 00:22:08.520
+if you have lots of training data the
+
+00:22:07.080 --> 00:22:10.640
+problem with this is for a lot of tasks
+
+00:22:08.520 --> 00:22:12.360
+we don't have lots of training data so
+
+00:22:10.640 --> 00:22:14.720
+um you know training these is a little
+
+00:22:12.360 --> 00:22:14.720
+bit less
+
+00:22:15.400 --> 00:22:22.919
+feasible and now recently uh what we
+
+00:22:19.600 --> 00:22:25.279
+have been moving into is a QA based
+
+00:22:22.919 --> 00:22:27.120
+evaluation which is basically where we
+
+00:22:25.279 --> 00:22:30.760
+ask a language model how good the output
+
+00:22:27.120 --> 00:22:32.279
+is and so uh GEMBA is one of
+
+00:22:30.760 --> 00:22:34.559
+the early examples of this for machine
+
+00:22:32.279 --> 00:22:37.320
+translation evaluation uh where they
+
+00:22:34.559 --> 00:22:39.840
+basically just ask GPT-4 to score
+
+00:22:37.320 --> 00:22:41.600
+the following translation from the source
+
+00:22:39.840 --> 00:22:44.000
+language to the target language with respect
+
+00:22:41.600 --> 00:22:47.080
+to the human reference um on a
+
+00:22:44.000 --> 00:22:49.200
+continuous scale from 0 to 100 uh where
+
+00:22:47.080 --> 00:22:51.320
+a score of zero means no meaning
+
+00:22:49.200 --> 00:22:54.039
+preserved and a score of 100 means
+
+00:22:51.320 --> 00:22:56.880
+perfect meaning and grammar uh you feed
+
+00:22:54.039 --> 00:22:58.760
+in the source um you feed in the
+
+00:22:56.880 --> 00:23:01.000
+human reference optionally if you have a
+
+00:22:58.760 --> 00:23:03.320
+human reference and then you feed in the
+
+00:23:01.000 --> 00:23:06.760
+target um and you get a
+
+00:23:03.320 --> 00:23:09.919
+score and um so this this works pretty
+
+00:23:06.760 --> 00:23:12.720
+well this can give you uh better results
+
+00:23:09.919 --> 00:23:15.159
+um especially if you have a
+
+00:23:12.720 --> 00:23:16.960
+strong language model the problem is
+
+00:23:15.159 --> 00:23:18.279
+it's very unpredictable whether this is
+
+00:23:16.960 --> 00:23:20.120
+going to work well and it's very
+
+00:23:18.279 --> 00:23:23.039
+dependent on the prompt that you're using
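+

The prompt just described looks roughly like the template below (a paraphrase for illustration, not necessarily the verbatim GEMBA prompt; the language names and function are placeholders):

```python
JUDGE_PROMPT = """Score the following translation from {src_lang} to {tgt_lang}
with respect to the human reference on a continuous scale from 0 to 100, where
a score of zero means "no meaning preserved" and a score of one hundred means
"perfect meaning and grammar".

{src_lang} source: "{source}"
{tgt_lang} human reference: "{reference}"
{tgt_lang} translation: "{hypothesis}"
Score:"""

def build_judge_prompt(source, reference, hypothesis,
                       src_lang="English", tgt_lang="German"):
    # The reference line can be dropped for reference-free scoring.
    return JUDGE_PROMPT.format(src_lang=src_lang, tgt_lang=tgt_lang,
                               source=source, reference=reference,
                               hypothesis=hypothesis)
```

+00:23:20.120 --> 00:23:25.279
+so um right now a lot of people
+
+00:23:23.039 --> 00:23:27.279
+are using GPT-4 without actually
+
+00:23:25.279 --> 00:23:29.039
+validating whether it does a good job at
+
+00:23:27.279 --> 00:23:33.080
+evaluation and
+
+00:23:29.039 --> 00:23:34.919
+the results are all across the
+
+00:23:33.080 --> 00:23:36.880
+board it can be anywhere from very very
+
+00:23:34.919 --> 00:23:38.640
+good to very very bad at evaluating
+
+00:23:36.880 --> 00:23:41.320
+particular tasks so I would be at least
+
+00:23:38.640 --> 00:23:43.559
+a little bit suspicious of whether GPT-4
+
+00:23:41.320 --> 00:23:45.679
+is doing a good job 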
evaluating for your + +00:23:43.559 --> 00:23:49.320 +task especially more complex + +00:23:45.679 --> 00:23:51.960 +tests um I would especially be + +00:23:49.320 --> 00:23:54.000 +suspicious if you're doing two uh any of + +00:23:51.960 --> 00:23:56.760 +the two following things number one if + +00:23:54.000 --> 00:23:59.880 +you're comparing gp4 or any model + +00:23:56.760 --> 00:24:02.400 +against itself in another model because + +00:23:59.880 --> 00:24:05.200 +gp4 really likes + +00:24:02.400 --> 00:24:06.880 +gp4 it really likes its own outputs and + +00:24:05.200 --> 00:24:08.120 +there are papers uh sorry I don't + +00:24:06.880 --> 00:24:09.679 +actually have the references here but I + +00:24:08.120 --> 00:24:11.200 +can follow up if people are interested + +00:24:09.679 --> 00:24:13.080 +but there are papers that demonstrate + +00:24:11.200 --> 00:24:15.799 +that gp4 likes it you know its own + +00:24:13.080 --> 00:24:19.200 +outputs more than others also if you're + +00:24:15.799 --> 00:24:22.120 +explicitly optimizing the outputs using + +00:24:19.200 --> 00:24:24.640 +rlf um there is something called good + +00:24:22.120 --> 00:24:27.120 +Hearts law which is basically anytime + +00:24:24.640 --> 00:24:29.520 +you uh start optimizing towards a metric + +00:24:27.120 --> 00:24:32.559 +it becomes a bad metric and that also + +00:24:29.520 --> 00:24:35.000 +happens for gp4 based evaluations so if + +00:24:32.559 --> 00:24:37.200 +you start optimizing for gp4 based + +00:24:35.000 --> 00:24:38.960 +evaluations especially for reference + +00:24:37.200 --> 00:24:41.679 +list metrics that don't use a reference + +00:24:38.960 --> 00:24:44.840 +output then um you start basically + +00:24:41.679 --> 00:24:47.440 +exploiting the metric + +00:24:44.840 --> 00:24:49.840 +um another thing that you can do with QA + +00:24:47.440 --> 00:24:53.279 +based evaluation is ask about fine grade + +00:24:49.840 --> 00:24:54.919 +mistakes and so this is a paper by um uh + +00:24:53.279 --> 00:24:56.480 +Patrick Fernandez who's a student who's + +00:24:54.919 --> 00:25:02.080 +working with me and basically what we + +00:24:56.480 --> 00:25:05.240 +did is we asked the model to um not give + +00:25:02.080 --> 00:25:07.360 +a particular score but actually identify + +00:25:05.240 --> 00:25:08.880 +the mistakes in the output and when we + +00:25:07.360 --> 00:25:10.559 +asked it to identify the mistakes in the + +00:25:08.880 --> 00:25:13.720 +output we found that this gave more + +00:25:10.559 --> 00:25:17.320 +consistent uh results so kind of + +00:25:13.720 --> 00:25:18.840 +interestingly we ask humans to identify + +00:25:17.320 --> 00:25:21.120 +individual mistakes and the output that + +00:25:18.840 --> 00:25:24.240 +gives humans more consistent results + +00:25:21.120 --> 00:25:25.559 +it's the same thing for gp4 so um that + +00:25:24.240 --> 00:25:27.320 +that's another paper you can look at if + +00:25:25.559 --> 00:25:29.640 +you're + +00:25:27.320 --> 00:25:32.679 +interested + +00:25:29.640 --> 00:25:38.000 +cool um so I I mentioned that you could + +00:25:32.679 --> 00:25:38.000 +or could not uh trust uh yeah sorry go + +00:25:44.679 --> 00:25:51.279 +ahead uh correct so yeah B basically + +00:25:47.360 --> 00:25:53.279 +just what you do is you have the source + +00:25:51.279 --> 00:25:54.960 +um ideally you'll also have a reference + +00:25:53.279 --> 00:25:57.840 +output that was created by skilled + +00:25:54.960 --> 00:25:59.720 +humans and then you put in the Target + +00:25:57.840 
--> 00:26:02.279 +you know output basically you have the + +00:25:59.720 --> 00:26:08.000 +input ideally a reference output created + +00:26:02.279 --> 00:26:08.000 +by Good by skilled humans and uh like + +00:26:15.159 --> 00:26:20.240 +hypothesis yeah I + +00:26:17.919 --> 00:26:24.559 +mean it's a good question and I don't + +00:26:20.240 --> 00:26:26.919 +know if we actually have a a very clear + +00:26:24.559 --> 00:26:31.399 +empirical like evidence of why this is + +00:26:26.919 --> 00:26:33.320 +the case but my hypothesis about this is + +00:26:31.399 --> 00:26:36.159 +yes we kind of would expect models to be + +00:26:33.320 --> 00:26:38.200 +more biased towards their own outputs + +00:26:36.159 --> 00:26:40.919 +and the reason why is because + +00:26:38.200 --> 00:26:43.080 +essentially you know models + +00:26:40.919 --> 00:26:44.279 +are within their embeddings they're + +00:26:43.080 --> 00:26:45.760 +encoding when they're in a high + +00:26:44.279 --> 00:26:47.600 +probability part of the space and when + +00:26:45.760 --> 00:26:50.200 +they're in a low probability part of the + +00:26:47.600 --> 00:26:51.120 +space and like the high probability part + +00:26:50.200 --> 00:26:54.600 +of the + +00:26:51.120 --> 00:26:56.200 +space is going to be the high + +00:26:54.600 --> 00:26:58.600 +probability part of the space is going + +00:26:56.200 --> 00:27:02.559 +to be associated with good outputs + +00:26:58.600 --> 00:27:07.000 +because like when + +00:27:02.559 --> 00:27:08.600 +models are more sure of their outputs + +00:27:07.000 --> 00:27:11.960 +they're more likely to be + +00:27:08.600 --> 00:27:13.520 +good just because that indicates that + +00:27:11.960 --> 00:27:15.240 +like they're closer to the training data + +00:27:13.520 --> 00:27:17.760 +that it had and other things like that + +00:27:15.240 --> 00:27:21.600 +so model probabilities are associated + +00:27:17.760 --> 00:27:23.760 +with outputs uh with uh with good + +00:27:21.600 --> 00:27:26.600 +outputs but just + +00:27:23.760 --> 00:27:29.440 +correla separately from + +00:27:26.600 --> 00:27:32.120 +that I believe a model can identify when + +00:27:29.440 --> 00:27:33.320 +it's in a high probability segment of + +00:27:32.120 --> 00:27:35.799 +the space and when it's in a low + +00:27:33.320 --> 00:27:39.399 +probability segment of the space and + +00:27:35.799 --> 00:27:39.399 +because of that I expect + +00:27:39.519 --> 00:27:45.519 +that I like there are segments of the + +00:27:43.240 --> 00:27:47.120 +embedding space where it's more likely + +00:27:45.519 --> 00:27:48.360 +to answer yes about something being good + +00:27:47.120 --> 00:27:50.960 +or not and those are going to be + +00:27:48.360 --> 00:27:54.760 +associated with high uh like high + +00:27:50.960 --> 00:27:56.159 +probability outbreaks as well and also + +00:27:54.760 --> 00:27:57.760 +models are more likely to generate + +00:27:56.159 --> 00:28:00.240 +outputs that are high probability + +00:27:57.760 --> 00:28:02.320 +according into their model by definition + +00:28:00.240 --> 00:28:03.880 +so all three of those effects together + +00:28:02.320 --> 00:28:05.640 +would basically go into a model being + +00:28:03.880 --> 00:28:09.120 +bios supports its own outputs compared + +00:28:05.640 --> 00:28:11.559 +to that puts in another model but um + +00:28:09.120 --> 00:28:13.279 +yeah this is a very handwavy explanation + +00:28:11.559 --> 00:28:15.519 +but like putting the two the three + +00:28:13.279 --> 00:28:18.600 +together models 
output high probability
+
+00:28:15.519 --> 00:28:20.880
+things from their own probability space
+
+00:28:18.600 --> 00:28:23.440
+by definition
+
+00:28:20.880 --> 00:28:25.760
+um things that are high probability are
+
+00:28:23.440 --> 00:28:27.519
+associated with being good uh just
+
+00:28:25.760 --> 00:28:29.279
+because otherwise a model would be
+
+00:28:27.519 --> 00:28:31.840
+outputting garbage
+
+00:28:29.279 --> 00:28:33.840
+and um the final thing which is more
+
+00:28:31.840 --> 00:28:35.679
+tenuous is if the model is in a high
+
+00:28:33.840 --> 00:28:37.919
+probability segment of the space it's
+
+00:28:35.679 --> 00:28:39.760
+more likely to output yes according to a
+
+00:28:37.919 --> 00:28:41.480
+question of it being good and I I think
+
+00:28:39.760 --> 00:28:44.360
+that's probably true but I'm not 100%
+
+00:28:41.480 --> 00:28:44.360
+sure about the
+
+00:28:45.559 --> 00:28:51.039
+final one um maybe maybe someone wants to
+
+00:28:49.000 --> 00:28:52.840
+examine that as a final
+
+00:28:51.039 --> 00:28:54.200
+project it seems like an
+
+00:28:52.840 --> 00:28:57.080
+interesting
+
+00:28:54.200 --> 00:29:00.039
+question um cool uh were there any other
+
+00:28:57.080 --> 00:29:00.039
+questions about these methods
+
+00:29:00.159 --> 00:29:07.120
+here um okay so when I say like an
+
+00:29:03.960 --> 00:29:11.080
+evaluation metric is good or not what do
+
+00:29:07.120 --> 00:29:13.200
+I mean by this being good or not um or a
+
+00:29:11.080 --> 00:29:16.880
+reward model or whatever else and
+
+00:29:13.200 --> 00:29:18.440
+basically the um the way we typically do
+
+00:29:16.880 --> 00:29:19.840
+this is by doing something called meta
+
+00:29:18.440 --> 00:29:22.440
+evaluation so it's called meta
+
+00:29:19.840 --> 00:29:25.799
+evaluation because it's evaluation of
+
+00:29:22.440 --> 00:29:29.279
+evaluation and uh the way we do this is
+
+00:29:25.799 --> 00:29:32.519
+we have human uh scores and we have
+
+00:29:29.279 --> 00:29:34.760
+automatic scores and we usually
+
+00:29:32.519 --> 00:29:38.640
+calculate some sort of correlation
+
+00:29:34.760 --> 00:29:41.000
+between the scores so um typical ones
+
+00:29:38.640 --> 00:29:46.440
+are correlations like Pearson's
+
+00:29:41.000 --> 00:29:48.799
+correlation or Kendall's tau and uh so
+
+00:29:46.440 --> 00:29:51.200
+the more associated the automatic scores
+
+00:29:48.799 --> 00:29:53.960
+are with the human scores the higher
+
+00:29:51.200 --> 00:29:55.159
+these correlations are going to be um
+
+00:29:53.960 --> 00:29:57.559
+there's other things that you can
+
+00:29:55.159 --> 00:30:00.080
+calculate so if you're trying to figure
+
+00:29:57.559 --> 00:30:01.640
+out whether a model um matches human
+
+00:30:00.080 --> 00:30:04.279
+pairwise preferences you can just
+
+00:30:01.640 --> 00:30:06.440
+calculate accuracy so I didn't put that
+
+00:30:04.279 --> 00:30:08.080
+on um I didn't put that on the slide
+
+00:30:06.440 --> 00:30:10.880
+here but you can just calculate accuracy
+
+00:30:08.080 --> 00:30:13.120
+of pairwise preferences um you can also
+
+00:30:10.880 --> 00:30:15.360
+calculate the absolute error between the
+
+00:30:13.120 --> 00:30:19.320
+the judgments if you want to know uh
+
+00:30:15.360 --> 00:30:21.720
+whether the absolute error matches so um
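+

A sketch of this kind of meta-evaluation with SciPy (the score lists are assumed to be aligned, one entry per system output):

```python
from scipy.stats import kendalltau, pearsonr

def meta_evaluate(human_scores, metric_scores):
    # How well do the metric's scores track the human judgments?
    pearson, _ = pearsonr(human_scores, metric_scores)
    tau, _ = kendalltau(human_scores, metric_scores)
    return {"pearson": pearson, "kendall_tau": tau}

def pairwise_accuracy(human_prefers_a, scores_a, scores_b):
    # Fraction of pairs where the metric prefers the same output the human did.
    hits = sum(h == (sa > sb)
               for h, sa, sb in zip(human_prefers_a, scores_a, scores_b))
    return hits / len(human_prefers_a)
```

+00:30:19.320 --> 00:30:24.159
+these are good things to do if you
+
+00:30:21.720 --> 00:30:25.600
+want to use an evaluation metric but you
+
+00:30:24.159 --> 00:30:27.200 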
+aren't sure whether it's good or not I + +00:30:25.600 --> 00:30:29.640 +would check to see whether the authors + +00:30:27.200 --> 00:30:32.000 +have done this sort of meta evaluation + +00:30:29.640 --> 00:30:33.760 +if they haven't be a little bit + +00:30:32.000 --> 00:30:36.960 +suspicious if they have be a little bit + +00:30:33.760 --> 00:30:39.799 +less suspicious but um + +00:30:36.960 --> 00:30:42.960 +yeah how do people do this typically uh + +00:30:39.799 --> 00:30:45.640 +usually they create uh data sets like + +00:30:42.960 --> 00:30:49.440 +the WM they use data sets like the WMT + +00:30:45.640 --> 00:30:53.960 +shared tasks um or + +00:30:49.440 --> 00:30:57.679 +uh uh like some evl um but there's also + +00:30:53.960 --> 00:30:59.960 +other ways to create um uh there's also + +00:30:57.679 --> 00:31:01.639 +Lots other data sets but in order to do + +00:30:59.960 --> 00:31:05.639 +this reliably you need a fairly large + +00:31:01.639 --> 00:31:05.639 +data set so it's one thing to be aware + +00:31:07.080 --> 00:31:10.760 +of + +00:31:08.720 --> 00:31:14.200 +cool + +00:31:10.760 --> 00:31:16.360 +um then the final thing um all of the + +00:31:14.200 --> 00:31:17.919 +automatic evaluation methods that I + +00:31:16.360 --> 00:31:20.240 +talked about now are trying to match + +00:31:17.919 --> 00:31:22.679 +human preferences but that's not the + +00:31:20.240 --> 00:31:24.960 +only thing that you necessarily want to + +00:31:22.679 --> 00:31:28.440 +do the final thing that you might want + +00:31:24.960 --> 00:31:30.840 +to do is uh use the model outputs in a + +00:31:28.440 --> 00:31:34.200 +downstream system and see whether they + +00:31:30.840 --> 00:31:36.399 +are effective for that so there's two + +00:31:34.200 --> 00:31:39.080 +concepts of intrinsic evaluation and + +00:31:36.399 --> 00:31:41.720 +extrinsic evaluation so intrinsic + +00:31:39.080 --> 00:31:44.159 +evaluation um evaluates the quality of + +00:31:41.720 --> 00:31:45.720 +the output itself and so that would be + +00:31:44.159 --> 00:31:48.639 +like asking a human directly about how + +00:31:45.720 --> 00:31:50.720 +good is this output extrinsic evaluation + +00:31:48.639 --> 00:31:53.679 +is evaluating output quality by its + +00:31:50.720 --> 00:31:57.000 +utility um and so just to give one + +00:31:53.679 --> 00:31:58.360 +example um if you can evaluate large + +00:31:57.000 --> 00:32:00.200 +language model summary + +00:31:58.360 --> 00:32:04.200 +through question answering + +00:32:00.200 --> 00:32:05.880 +accuracy um and so you can take the + +00:32:04.200 --> 00:32:07.399 +output of an llm and feed it through a + +00:32:05.880 --> 00:32:09.600 +question answering model and see whether + +00:32:07.399 --> 00:32:12.399 +you're able to answer questions based on + +00:32:09.600 --> 00:32:15.799 +this and that kind of gives you a better + +00:32:12.399 --> 00:32:18.279 +idea of whether the summary require uh + +00:32:15.799 --> 00:32:20.120 +incorporates requisite information but + +00:32:18.279 --> 00:32:22.120 +if you think about anything an llm can + +00:32:20.120 --> 00:32:23.760 +be used for usually it's part of a + +00:32:22.120 --> 00:32:26.679 +bigger system so you can evaluate it as + +00:32:23.760 --> 00:32:28.399 +a part of that bigger system um the + +00:32:26.679 --> 00:32:30.639 +problem with this is it's a very + +00:32:28.399 --> 00:32:33.960 +indirect way of assessing things so like + +00:32:30.639 --> 00:32:36.080 +let's say your QA model is just bad uh + +00:32:33.960 --> 
00:32:38.480 +how can you disentangle the effect of + +00:32:36.080 --> 00:32:41.679 +the L summary versus the QA model that's + +00:32:38.480 --> 00:32:44.120 +not a trivial thing to do so ideally + +00:32:41.679 --> 00:32:47.000 +like a combination of these two is + +00:32:44.120 --> 00:32:47.000 +practically the best way + +00:32:48.039 --> 00:32:52.200 +go cool so + +00:32:56.039 --> 00:32:59.960 +yeah yeah it wouldn't necessar + +00:32:58.360 --> 00:33:05.679 +say it's harder to do it might even be + +00:32:59.960 --> 00:33:05.679 +easier to do um which is like let's + +00:33:06.679 --> 00:33:11.720 +say Let me let me see if I can come up + +00:33:09.360 --> 00:33:11.720 +with + +00:33:12.639 --> 00:33:17.600 +example what let's + +00:33:15.000 --> 00:33:19.670 +say you + +00:33:17.600 --> 00:33:22.979 +are trying + +00:33:19.670 --> 00:33:22.979 +[Music] + +00:33:24.639 --> 00:33:29.760 +to let's say you're trying to + +00:33:30.559 --> 00:33:33.559 +guess + +00:33:39.000 --> 00:33:45.399 +whether let's say you're trying to guess + +00:33:42.399 --> 00:33:46.559 +whether a someone will be hired at a + +00:33:45.399 --> 00:33:52.039 +company or + +00:33:46.559 --> 00:33:53.880 +not based on an llm generated summary of + +00:33:52.039 --> 00:33:58.880 +their qualifications for a position or + +00:33:53.880 --> 00:34:01.799 +something like that um and + +00:33:58.880 --> 00:34:03.080 +you what actually maybe this is not a + +00:34:01.799 --> 00:34:04.720 +great example because whether you should + +00:34:03.080 --> 00:34:06.960 +be doing this ethically is a little bit + +00:34:04.720 --> 00:34:08.159 +unclear but let's say you were doing + +00:34:06.960 --> 00:34:09.560 +let's say you were doing something like + +00:34:08.159 --> 00:34:11.520 +that just because it's one example I can + +00:34:09.560 --> 00:34:14.320 +think of right now whether they will get + +00:34:11.520 --> 00:34:16.320 +hired or not is um is clear because you + +00:34:14.320 --> 00:34:19.399 +have a objective answer right whether + +00:34:16.320 --> 00:34:21.480 +they were hired or not um or maybe maybe + +00:34:19.399 --> 00:34:23.800 +another example would be like let's say + +00:34:21.480 --> 00:34:26.320 +um let's say you want to predict the + +00:34:23.800 --> 00:34:29.599 +diagnosis in a medical application based + +00:34:26.320 --> 00:34:32.960 +on an llm generated some of somebody's + +00:34:29.599 --> 00:34:35.919 +uh you know LM generated summary of + +00:34:32.960 --> 00:34:38.480 +somebody's you know past medical history + +00:34:35.919 --> 00:34:40.839 +and all this stuff and here you want the + +00:34:38.480 --> 00:34:43.440 +llm generated summary you definitely + +00:34:40.839 --> 00:34:44.879 +want the summary because the summary is + +00:34:43.440 --> 00:34:47.560 +going to be viewed by a doctor who will + +00:34:44.879 --> 00:34:49.359 +make the final decision but you also + +00:34:47.560 --> 00:34:50.760 +have information about the diagnoses of + +00:34:49.359 --> 00:34:52.399 +all the people in your medical system + +00:34:50.760 --> 00:34:54.560 +later because you know they went through + +00:34:52.399 --> 00:34:56.480 +your medical system for years and you + +00:34:54.560 --> 00:34:58.200 +know later like through lots of tests + +00:34:56.480 --> 00:35:00.800 +and stuff uh whether how they were + +00:34:58.200 --> 00:35:02.320 +diagnosed so you generate an LM based + +00:35:00.800 --> 00:35:05.000 +summary and then you predict the + +00:35:02.320 --> 00:35:06.599 +diagnosis from the summary so 
There, the evaluation of the diagnosis is very clear, because you have a gold-standard answer, but the intrinsic evaluation of whether it's a good summary or not is less clear, because you'd have to assess whether it's a good, understandable summary. So the extrinsic evaluation might be easier, because it's clearer. There are cases like that; the problem is that you would have to have that data in order to do it.

[A student asks how you accommodate the diversity of human annotators.] That's a great question: how do you get these scores? There are a number of different approaches. In the WMT shared tasks, the first thing they do is normalize by annotator: they take the z-score of each human annotator's actual scores, because some people are harsher than others. That means you normalize each annotator to have zero mean and unit variance, and after that, I think, they average together the different humans (there's a small sketch of this below).

Then, for how you deal with the fact that humans disagree on things: I think practice is pretty varied, and I don't know if there's any gold-standard way of doing it. Sometimes you just average; sometimes you throw away examples where humans disagree a lot, because if you can't get the humans to agree, how could you expect a machine to do well? So I think it's a little bit task-dependent.

[A student asks about intrinsic versus extrinsic evaluation for code generation.] For code generation: I love this example, because I've worked on code generation.
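On the annotator normalization just mentioned, here is a minimal sketch (my own illustration; the ratings are invented and NaN marks missing judgments) of z-scoring each annotator before averaging per item:

```python
import numpy as np

# Hypothetical ratings: rows = annotators, columns = items (e.g., translations).
ratings = np.array([
    [80.0, 60.0, 90.0, np.nan],   # a lenient annotator
    [40.0, 20.0, 55.0, 30.0],     # a harsh annotator
    [70.0, 50.0, 85.0, 65.0],
])

# Z-score per annotator: subtract that annotator's mean and divide by their
# std, so every annotator's scores have zero mean and unit variance.
mean = np.nanmean(ratings, axis=1, keepdims=True)
std = np.nanstd(ratings, axis=1, keepdims=True)
z = (ratings - mean) / std

# Average the normalized scores across annotators to get one score per item.
item_scores = np.nanmean(z, axis=0)
print(item_scores)
```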
A lot of people think only about extrinsic evaluation of code generation, or, I don't know if "extrinsic" is the right word, but only about execution-based evaluation: you execute the code and see whether it passes unit tests and things like that. But in reality there are a lot of other important properties of code, like readability, and you should be evaluating those too; I think a lot of people just ignore that. There are a few papers that do it, but most of the time people just execute the code.

Cool, okay. So, moving on to the learning part: now I'd like to talk about learning, and the first thing I'll cover is error and risk. Basically, the way we calculate error is that we generate an output and we calculate its badness. Generating the output could be argmax, it could be sampling, it could be anything else like that. Its badness could come from a badness measure directly, or it could be one minus the evaluation score. This quantity is defined as error, and generally what you want to do is minimize error, because in the end you're going to deploy a system that outputs one thing, and you want that thing to be as good as possible.

But the problem with this is that there's no easy way to optimize this value, especially in a text generation setting, and even in the classification setting we can't easily optimize error directly, because if you look at the error surface, at some point you have a non-differentiable part where you take the argmax, or where you do sampling, or anything like that, so you're not going to be able to do gradient-based optimization.
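Just to pin the definition down, here is the error computation in sketch form. The `model` and `eval_fn` objects are placeholders for your own components, not anything specific from the lecture:

```python
def error(model, eval_fn, x, y_ref):
    """Error = badness of the single output we commit to.

    `model.generate` (argmax decoding here, though it could be sampling)
    and `eval_fn` (any evaluation function returning a score in [0, 1],
    e.g. exact match or chrF) are assumed interfaces.
    """
    y_hat = model.generate(x)       # the non-differentiable step: argmax/sampling
    return 1.0 - eval_fn(y_hat, y_ref)

# Corpus-level error is just the average over a dataset:
# err = sum(error(model, eval_fn, x, y) for x, y in data) / len(data)
```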
So what we normally do instead is calculate something called risk. What risk looks like (we talked a little bit about minimum Bayes risk for decoding, but this is for training time) is essentially the expected error of the output. The expected error includes a probability in the objective function, and that probability is differentiable, so we can easily do gradient-based optimization through it. The problem is that, while it's differentiable, for text generation the sum is intractable, because we have a combinatorially large number of potential outputs. We've talked about this before, but if the length is, say, 50 and we have a 30,000-word vocabulary, that's 30,000 to the 50th possibilities; we can't take a sum over that many possibilities.

So minimum risk training tries to minimize risk, and reinforcement learning, many of those models, especially policy gradient methods, is trying to minimize risk as well. The reason I wanted to talk about risk first is that it's very simple to get to from the point of view of all the things we've studied so far.

One other thing I should mention about this is... no, sorry, I'll talk about that later. So, when we want to optimize risk, what we do is sample in order to make it tractable. A very simple way to minimize risk is, instead of summing over all of the possible outputs, to sum over a small number of possible outputs, and normalize to make everything add up to one. The normalizer here is basically the sum over all of the probabilities that appear in the numerator. And these samples can be created either using sampling or n-best search.
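Here is a minimal PyTorch-flavored sketch of that sampled approximation (my own simplification; `model.sample`, `model.logprob`, and `eval_fn` are assumed interfaces, not a real API). The estimated risk is the sum of Q(y|x) times error(y), where Q is the model probability renormalized over just the sampled subset:

```python
import torch

def mrt_loss(model, x, y_ref, eval_fn, k=8, temperature=0.8):
    # Draw k candidate outputs; the set de-duplicates repeated samples so
    # no hypothesis is counted twice in the renormalized sum.
    candidates = {tuple(model.sample(x, temperature=temperature)) for _ in range(k)}

    # Differentiable log-probabilities of each candidate under the model.
    logps = torch.stack([model.logprob(x, y) for y in candidates])
    q = torch.softmax(logps, dim=0)   # renormalize: P(y) / sum of sampled P(y)

    # Errors are constants (no gradient); the gradient flows through q.
    errors = torch.tensor([1.0 - eval_fn(list(y), y_ref) for y in candidates])
    return (q * errors).sum()         # expected error under q -> minimize
```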
From the point of view of doing this sort of minimum risk training, the theoretically correct way is sampling, using ancestral sampling like we talked about before, and minimizing the risk based on those samples. But the problem with that, as many of you may have seen when sampling from your language model in assignment one, is that sampling with temperature one gives you a lot of not-very-good outputs. So if you sample with temperature one, you'll be exploring a very large part of the space that actually isn't very good. Because of this, some alternatives you can use are: just do n-best search to find the best outputs, or sample with a temperature that isn't one, or something like that, and in either case create a list of possible hypotheses and then normalize over them. Very often, not using temperature one is the better way.

If you're sampling with a temperature other than one and potentially getting duplicate outputs, you should de-duplicate, or sample without replacement, because if you get the same output in there multiple times it messes up your equations.
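That candidate-building step might look like this (again a sketch; `model.sample` is an assumed decoder interface, and sampling-then-discarding-duplicates stands in for true sampling without replacement):

```python
def build_candidates(model, x, k=8, temperature=0.7, max_tries=100):
    """Collect up to k unique hypotheses by sampling at temperature < 1."""
    seen, candidates = set(), []
    for _ in range(max_tries):
        y = tuple(model.sample(x, temperature=temperature))
        if y not in seen:             # de-duplicate: a repeated hypothesis
            seen.add(y)               # would be double-counted in the risk sum
            candidates.append(y)
        if len(candidates) == k:
            break
    return candidates
```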
Cool. So that's a really simple example of how you can do minimum risk training, but now I want to get into reinforcement learning, which is the framing that most modern work on this follows. One thing I should mention is that there are actually other alternatives for learning from human feedback, including margin-based losses and other things like that, but most people nowadays use reinforcement learning, so I'm only going to cover that here.

So what is reinforcement learning? Reinforcement learning is learning where we have an environment X, the ability to take actions a, and a delayed reward R. There's a really nice introduction to the basics of policy gradient by Andrej Karpathy, which I linked in the recommended reading, so you can take a look at that. The example there is Pong: you're playing the game Pong, X is your observed image, a is "up" or "down", and R is the win/loss at the end of the game. Does anyone have an idea what this looks like for an arbitrary NLP task that we might want to do reinforcement learning for? What is X, what is a, and what is R? Pick your favorite task.

[A student suggests a chatbot: X is what you've generated so far, a is the next token, and R is the thumbs-up/down button the user clicks.] I think that's very close. Just to repeat it: X is what you've generated so far, a is the next token, and R is the button the user clicks about whether it's good or not. That's reasonably good, although I don't know that we'd expect them to click the button after every token we generate. So it might be that X is the conversational history up to this point, a could be generating the next token, and R is a reward we get at an arbitrary time point; it might not come immediately after generating the next token, it might come later, and that's actually really, really important from the point of view of reinforcement learning. I'll talk about that in a second.

Anyone have an idea for, I don't know, code generation, or translation, or some other task? [A student suggests that for code generation X is the compiler or the prompt, a is the actual code, and R is the result.] Yep.
X could be the compiler, probably the compiler plus all of the surrounding code context: what the natural-language specification is, what project you're working on, and things like that. For a, I think we'd typically treat each token in the code as an action. And R would be the reward after a long sequence of actions: it could be the reward from the compiler, it could be the reward from a code-readability model, it could be the reward from execution speed, and so on. One of the interesting things about R is that you can be really creative about how you form it, which is not easy to do if you're just doing maximum likelihood. So you can come up with an R that really matches what you want in an output.
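For instance, a composite reward for generated code might look like the sketch below. The 0.6/0.3/0.1 mix is entirely made up for illustration, and `unit_tests`, `readability_model`, and `timer` are placeholders for whatever harness you actually have:

```python
def code_reward(code: str, unit_tests, readability_model, timer) -> float:
    """Hypothetical composite reward: tests + readability + speed."""
    pass_rate = unit_tests.fraction_passing(code)    # execution-based signal
    readability = readability_model.score(code)      # assumed to be in [0, 1]
    speed = 1.0 / (1.0 + timer.seconds(code))        # faster -> closer to 1
    return 0.6 * pass_rate + 0.3 * readability + 0.1 * speed
```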
So, why reinforcement learning in NLP? I think there are basically three answers. The first is that you have a typical reinforcement learning scenario, like a dialogue where you produce lots of responses and then get a reward at the end. The thumbs-up and thumbs-down from humans is a very typical example of this, because you get a delayed reward at some point in the dialogue when a human presses up or down. Another, actually more technical, scenario where reinforcement learning has been used for a long time is call centers. We've had dialogue systems for call centers, and if you complete a ticket purchase, or resolve a ticket, without ever having to go to a human operator, you get a really big reward; if you have to hand off to the human operator, you get maybe a smaller reward; and if the person yells at you and hangs up, you get a really negative reward. So this is kind of the typical example where reinforcement learning has been used for a long time.

Another case is when you have latent variables, like chains of thought, where you decide the latent variable and then get a reward based on how those latent variables affect the output. The chain of thought itself might not actually be good: you might have a bad chain of thought and still get the correct answer, so you don't actually know for sure whether an automatically generated chain of thought is good or not, and that kind of makes it a reinforcement learning problem. And a third case is when you have a sequence-level evaluation metric, so you can't optimize the metric without first generating the whole sequence; that would be any of the evaluation metrics I talked about before. So those are three scenarios where you can use reinforcement learning.

I'm going to build this up in a few steps, but let's start again with our supervised MLE loss, which is just the log probability of the reference. In the context of reinforcement learning, this is also called imitation learning, because essentially you're learning how to perform actions by imitating a teacher. Imitation learning is not just supervised MLE; there are other varieties of imitation learning, but this is one of them.

The next thing I'd like to talk about is self-training. The idea behind self-training is that you sample, or take the argmax, according to the current model: you have your current model, you get a sample from it, and then you use the sample or samples to maximize likelihood. So basically, instead of doing maximum likelihood with respect to a gold-standard output, you're doing it with respect to your own output.
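In sketch form (my own minimal version, with `model.generate`, `model.logprob`, and the optimizer loop as assumed interfaces), one round of self-training looks like this:

```python
def self_train_step(model, unlabeled_inputs, optimizer):
    """Pseudo-label with the model itself, then do ordinary MLE on those labels.

    Argmax decoding is used here for the pseudo-labels; sampling also works.
    """
    pseudo_labeled = [(x, model.generate(x)) for x in unlabeled_inputs]

    for x, y_hat in pseudo_labeled:
        loss = -model.logprob(x, y_hat)   # MLE, but toward our own output
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```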
So, does this seem like a good idea? I see a few people shaking their heads. Any ideas why it's not a good idea? [A student points out that there's no signal about whether the outputs are any good.] Yeah, exactly. If you don't have access to any notion of whether an output is good, this will be optimizing towards good outputs and bad outputs alike: your model might be producing bad outputs, and you're just reinforcing the errors the model already makes. Nonetheless, self-training actually improves accuracy somewhat in some cases. For example, if your model is right more often than not, then optimizing towards those more-often-than-not-right outputs can, due to the implicit regularization that models have, early stopping, and other things like that, actually move you in the right direction and improve accuracy.

There are alternatives to this that further improve accuracy. For example, if you have multiple models and you only train on sentences where the models agree, this can improve your overall accuracy further; this is called co-training, and it was actually created by people at CMU as well. Another successful alternative is adding noise to the input to match the noise you find in the output: if you add word-based dropout or other things like that, this can also help accommodate these issues. But anyway, self-training is useful, but there are better alternatives if you can get a reward function.

The simplest variety of that is something called policy gradient, or, more specifically, REINFORCE. Basically, what this does is add a term that scales the loss by the reward. If you can get a reward for each output, then instead of doing self-training entirely by itself, you multiply the log-likelihood by the reward, and this allows you to increase the likelihood of things that get a high reward and decrease the likelihood of things that get a low reward.
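Here is what that looks like as a minimal sketch (placeholder `model` and `reward_fn` interfaces, single-sample version):

```python
def reinforce_loss(model, x, reward_fn):
    """Basic policy gradient / REINFORCE for one example.

    Identical to a self-training step except the log-likelihood of the
    sampled output is scaled by its reward, so high-reward samples get
    upweighted and low-reward samples get downweighted.
    """
    y_hat = model.sample(x)              # roll out one output
    r = reward_fn(y_hat)                 # scalar reward; no gradient through it
    return -r * model.logprob(x, y_hat)  # minimizing this raises P(high-reward y)
```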
So, a brief quiz: under what conditions is this equivalent, or essentially equivalent, to maximum likelihood estimation? To make the quiz easier, I'll go back to maximum likelihood estimation; it looked a bit like this: you calculated the log probability of the true output. And now let me go back here. Any ideas? [A student: when the reward equals one sometimes and zero other times.] When does the reward need to be equal to one in order to make this equation equivalent to that one? [A student: when y and y-hat are the same.] Yeah. So basically, this objective is equivalent to the MLE objective when you're using a zero-one loss, that is, an evaluation function that gives a score of one on an exact match and zero otherwise. But that also demonstrates that this can be more flexible, because you can have rewards that aren't just one and zero for exact match: you can use things that give partial credit, you can use things that upweight multiple potentially correct outputs, and other things like that.
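To make the quiz answer concrete (the partial-credit metric here is an arbitrary choice of mine, just for illustration): with the zero-one reward below, the REINFORCE term is non-zero only when the sample equals the reference, which recovers MLE in expectation; the partial-credit version is where the extra flexibility comes from.

```python
def zero_one_reward(y_hat, y_ref):
    # Exact match: with this reward, -r * logprob contributes only for
    # samples equal to the reference, recovering the MLE objective.
    return 1.0 if y_hat == y_ref else 0.0

def partial_credit_reward(y_hat, y_ref):
    # Token-overlap F1 as one (arbitrary) example of partial credit.
    overlap = len(set(y_hat) & set(y_ref))
    if overlap == 0:
        return 0.0
    p, r = overlap / len(set(y_hat)), overlap / len(set(y_ref))
    return 2 * p * r / (p + r)
```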
One problem with these methods is: how do we know which action led to the reward? The best scenario is that after each action you get a reward, so after each token you generate, you get a thumbs up or thumbs down from the user about whether they like that token, and how much happier they are after you generated that token than they were before. The problem is that this is completely infeasible: every time you use ChatGPT, you're not going to press thumbs up or thumbs down after each token. So in reality, what we usually get is a reward at the end of a rollout of many, many different actions, and we're not sure which action is responsible for giving us the reward.

There are a few typical ways of dealing with this. The most typical way right now is just not dealing with it, and hoping that your optimization algorithm will internally be able to do the credit assignment. What that entails is essentially giving an equal reward to each token in the output. Another way you can deal with it is to assign decaying rewards from future events. So, let's say you're talking about a chatbot, for example; maybe this is the most intuitive way of thinking about it. You have a chatbot, you have twenty chat turns, and the user gives a thumbs up or thumbs down on the twentieth turn. Say they gave a thumbs up there: you would assign a reward of one for the previous chat turn, a reward of about 0.5 for the second-to-previous chat turn, 0.25 for the third-to-previous, and so on, to basically say: the user is feeling good at the moment they gave the thumbs up, and that's probably more likely due to the things that happened recently.
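That turn-level decay scheme might look like this (the 0.5 decay factor is just the number from the example, and the indexing convention, most recent turn gets the full reward, is my own):

```python
def assign_turn_rewards(num_turns: int, feedback: float, decay: float = 0.5):
    """Spread a single end-of-dialogue reward over earlier turns.

    The most recent turn gets the full reward, the one before it gets
    reward*decay, then reward*decay**2, and so on backwards in time.
    """
    rewards = [0.0] * num_turns
    for steps_back in range(num_turns):
        rewards[num_turns - 1 - steps_back] = feedback * (decay ** steps_back)
    return rewards

print(assign_turn_rewards(4, feedback=1.0))  # [0.125, 0.25, 0.5, 1.0]
```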
[A student asks where the reward model comes from, and whether it is learned.] The reward model can be any of the methods I talked about before. It can be human feedback directly, like a thumbs up or thumbs down; it could be a pre-trained reward model; you could also theoretically learn the reward model simultaneously with the model itself. I'm going to talk a little bit about DPO, which kind of does that. But basically, wherever you're getting your reward, it's probably from one of the things I talked about earlier today.

Cool. Any other questions? Okay. So that's the basic idea. The very simplest thing you can do is just sample and optimize this objective function. It's dead easy; it's not hard to implement at all, as long as you have some source of reward signal. But the problem is that reinforcement learning can be very unstable, and it's hard to get it to work properly if you don't use some additional tricks, so I'd like to talk about those next.

[A student asks how the decay factor is chosen.] The typical way is to just use an exponential decay: you multiply each time by, what, 0.5 or something like that. And that's one option. Sorry, just to clarify: the most common option nowadays, at least from the point of view of these models, is not to decay at all and just assign the same amount to each token. I'm not actually 100% sure what people are doing with respect to long chats; I think they probably assign the reward only to the current utterance and don't optimize the previous utterances, so if they get a thumbs-up or thumbs-down signal, they assign an equal reward to all the tokens in the current utterance and zero reward to the previous ones. But I'm not 100% sure about that; there might be other methods people are using.

Cool. So, stabilizing reinforcement learning. There are a lot of reasons why it's unstable. The first reason is that you're sampling an individual output and computing your update based on that single sampled output, while there's an infinity of other outputs that you could be optimizing over.
For MLE this is not a problem, because with MLE you're always contrasting the gold-standard output against all of the other outputs in the space: you're saying "I want to upweight the gold-standard output and downweight all of the other ones." But with reinforcement learning you only have a single sampled output, and that output might be wrong; that's a source of instability, and it's particularly a problem when using bigger output spaces, like everything in the vocabulary. Another problem arises any time you start using negative rewards. Negative rewards downweight the probability of a particular output sequence, and that might be a good idea: maybe you're getting a toxic output and you want to push it down. But alongside that toxic output there's a combinatorial number of completely nonsensical outputs that aren't even English, and so you can start to diverge from the natural language-modeling distribution that you had before. So this is a big problem.

A number of strategies can be used to stabilize things. The first is completely obvious right now, and nobody in their right mind would avoid doing it: pre-training with MLE. You start with a pre-trained model and then switch over to RL after you've finished pre-training. This makes a lot of sense if you're training a language model, which I assume almost everybody in this class will be doing, but it only works in scenarios where you can run MLE. It doesn't work if you're predicting latent variables that aren't included in the original space.
It also doesn't work in a setting where, say, you want to learn a customer-service chatbot for a company that has, for example, a product catalog the language model has never seen before. If the language model has no information about the product catalog whatsoever, and you don't provide it through RAG or something like that, it's going to have to explore too large a space, and you're never going to converge with your language-modeling objectives. So you basically need to be able to create at least some supervised training data to train with MLE. But assuming you can do that, and I assume almost everybody will do some sort of pre-training with MLE, the next thing people use in reinforcement learning that's really important for stability is regularization towards an existing model: you have an existing model, and you want to prevent the new one from getting too far away from it. The reason you want to do this is that if, say, you start assigning a negative reward to toxic utterances, and your model stops being a language model whatsoever, that's a bad idea. You want to keep it close enough to still being a competent language model while removing the toxic utterances.

There are a number of methods people use to do this. The first and most prominent is KL regularization, and the way it works is basically that you have two terms. The first term improves your reward: you have your old model, which assigns one probability, you have the probability assigned by your new model, and you have your reward signal, and this term is basically improving the (log) odds of getting a good reward for high-reward sequences. Separately from that, you have the KL regularization term, which keeps the probability distribution of your new model similar to the probability distribution of your old model. And there's a beta parameter that you can increase or decrease based on how similar you want to keep the two models.
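In sketch form (per-sequence version, my own simplification), one common way to combine those two terms is to fold the KL penalty into the reward before taking a policy-gradient step. Here `logp_new` and `logp_old` are assumed to be scalar tensors holding the sampled sequence's log-probability under the new and the frozen old model:

```python
def kl_shaped_pg_loss(logp_new, logp_old, reward, beta=0.1):
    """REINFORCE with a KL penalty folded into the reward (one common recipe).

    The single-sample estimate of KL(new || old) is just the difference of
    log-probs; beta trades reward improvement against drift from the old model.
    """
    kl_est = (logp_new - logp_old).detach()   # penalize drift; no gradient here
    shaped_reward = reward - beta * kl_est    # treated as a constant coefficient
    return -shaped_reward * logp_new          # standard policy-gradient loss
```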
Another method people use is something called proximal policy optimization, or PPO, and this is a method based on clipping. We define a ratio, the probability of the output under the new model divided by its probability under the old model, which is basically the amount that the new model upweights high-reward sequences. One term is the same unclipped quantity as above, and the other is a clipped version, where essentially we clip the ratio to lie within a certain range. What this is doing is essentially preventing the model from being rewarded for large jumps in the space. And actually, sorry, I just realized I might have done something confusing here, because for this one higher is better: it isn't really a loss function, it's something you're attempting to maximize, in contrast to all of the other objectives I was talking about before. But anyway, by taking the minimum of the two terms, you're encouraging the model to keep exploring the region where it doesn't diverge much from the original model, and if the region the original model was in is better than the new region your model has moved into, you move back towards the original model.
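A minimal sketch of that clipped objective (per-sequence; `advantage` stands in for the reward or reward-minus-baseline term, and the log-probs are scalar tensors as before):

```python
import torch

def ppo_objective(logp_new, logp_old, advantage, eps=0.2):
    """PPO's clipped surrogate; higher is better, as noted above.

    ratio = pi_new / pi_old for the sampled output. Taking the min with the
    clipped ratio removes any incentive to move the policy more than eps
    away from the old one in the direction the advantage rewards.
    """
    ratio = torch.exp(logp_new - logp_old)               # pi_new / pi_old
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage
    return torch.minimum(unclipped, clipped)             # maximize this
```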
So basically, if you had started learning a model that looked like it was optimizing your reward, but then the model suddenly went off the rails and started generating completely nonsensical outputs that get really bad reward, this will push it back towards the original policy. That's the basic idea behind PPO. In terms of what I see people using: PPO was really, really popular for a while, but I've started to see people use alternative strategies based on KL regularization, so I don't think either one is particularly more popular than the other, and the KL one is a little bit simpler conceptually, so I like that one.

Cool. Any questions about this? Okay. And actually, one thing I should mention is that all of these things are implemented in whatever libraries you use; Hugging Face TRL (Transformer Reinforcement Learning) is an example library where all of these methods are implemented. So if you actually want to use these in practice, that's a good place to look.

The next thing is adding a baseline. The basic idea is that you have expectations about your reward for a particular sentence. Let's say we wanted to translate two sentences: "this is an easy sentence", and "buffalo buffalo buffalo", which is a harder sentence to translate. If you're not familiar with this example, you can search Wikipedia for "buffalo buffalo buffalo" and you'll find out what I'm talking about. And let's say you got a reward of 0.8 for the first one and a reward of 0.3 for the second one.
But the problem is: if the first one actually is really easy and the second one is really hard, then getting a reward of 0.8 on the easy one, for a translation or something, is actually bad, and a reward of 0.3 on the hard one is good, because you're moving in the right direction. So you compute reward minus baseline, and this would give you a negative value for the first one and a positive value for the second one. The basic idea is: can we predict, a priori, how difficult an example is, and then adjust our reward based on that? So you have kind of a baseline model that predicts this, and you adjust appropriately.

There are two major ways you can do this. The baseline doesn't need to be anything in particular; the only hope is that it decreases the variance of your reward and makes learning more stable. The first option I see done pretty widely is predicting the final reward using a model that doesn't look at the answer you provided at all: it only looks at the input, or at the intermediate states of the model or something. At the sentence level you can have one baseline per sentence, and you can also do it at each decoder state. You can do this any time you're doing reinforcement learning, just by training a regression model on the rewards you get. The important thing is that the baseline is not allowed to use any of your actual predictions, because once you start using the predictions, it's not a baseline anymore.

Another option, which is relatively easy to implement but can still be effective, is to calculate the mean of the rewards in a batch. If you have a big batch of data and your average reward in the batch is, say, 0.4, then you just subtract that 0.4 and calculate your reward relative to it.
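In code, that batch-mean variant is a one-liner (sketch with tensor rewards; the example values are made up):

```python
import torch

def advantages_from_batch(rewards: torch.Tensor) -> torch.Tensor:
    """Subtract the batch-mean baseline from each reward.

    Better-than-average outputs get positive advantage, worse-than-average
    ones get negative advantage, which reduces the variance of the update.
    """
    return rewards - rewards.mean()

print(advantages_from_batch(torch.tensor([0.9, 0.3, 0.1, 0.5])))
# tensor([ 0.4500, -0.1500, -0.3500,  0.0500])   (batch mean is 0.45)
```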
So that's another option you can use. A kind of extreme example of creating a baseline is contrasting pairwise examples, or contrasting different outputs for the same input. You can learn directly from pairwise human preferences, which can provide more stability, because you know one output is better than the other, so you can essentially be sure that you're upweighting the better one and downweighting the worse one. This is the idea behind DPO, which is a recently pretty popular method, though there are also previous methods that did similar things. The way DPO works is that it basically calculates the ratio of the probability of the new model to the old model, but it upweights that ratio for a good output and downweights it for a bad output: here we have our better outputs, there we have our worse outputs, and it's basically learning to upweight and downweight the probabilities accordingly.

You can notice that DPO is very similar to PPO in that it's using these same ratios. The disadvantage is that you obviously require pairwise judgments, and you can't learn a model like this if you don't have those pairwise judgments.

[A student asks about the beta term.] So the beta term is basically a normalization term; it's a hyperparameter. For DPO... sorry, I read the paper right when it came out, and I don't remember whether it's a direct derivation from the KL-divergence term or not, but I think it might be; I'd have to go back and look at the paper. But basically, the larger it is, the larger the gradient steps you'll be taking.
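The DPO loss in sketch form (per preference pair; the log-probs are scalar tensors for the chosen and rejected outputs under the policy being trained and under the frozen reference model):

```python
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO on one (chosen, rejected) pair.

    The loss pushes the policy's log-ratio (relative to the reference
    model) up for the preferred output y_w and down for the dispreferred
    output y_l, with beta scaling the difference inside the sigmoid.
    """
    ratio_w = logp_w - ref_logp_w      # log [pi(y_w|x) / pi_ref(y_w|x)]
    ratio_l = logp_l - ref_logp_l      # log [pi(y_l|x) / pi_ref(y_l|x)]
    return -F.logsigmoid(beta * (ratio_w - ratio_l))
```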
+01:13:59,320 --> 01:14:03,400 +sorry I didn't mention this but you'll + +01:14:00,639 --> 01:14:06,120 +notice there's a sigmoid term here so + +01:14:03,400 --> 01:14:09,000 +the the + +01:14:06,120 --> 01:14:10,080 +beta the larger you increase the beta + +01:14:09,000 --> 01:14:13,239 +the + +01:14:10,080 --> 01:14:16,600 +more small differences in these + +01:14:13,239 --> 01:14:18,719 +values like it basically like stretches + +01:14:16,600 --> 01:14:22,280 +or shrinks the sigmoid with respect to + +01:14:18,719 --> 01:14:24,120 +how big the beta is so it will um it will + +01:14:22,280 --> 01:14:25,800 +affect how much like small differences + +01:14:24,120 --> 01:14:27,960 +in this will affect + +01:14:25,800 --> 01:14:30,120 +but I I think this was derived from the + +01:14:27,960 --> 01:14:31,760 +KL regularization term that we had + +01:14:30,120 --> 01:14:34,400 +previously in + +01:14:31,760 --> 01:14:35,800 +um in this slide here but I have to go + +01:14:34,400 --> 01:14:40,520 +back and double check unless somebody + +01:14:35,800 --> 01:14:43,239 +knows it is okay good yeah + +01:14:40,520 --> 01:14:45,000 +so I don't want to say wrong things but + +01:14:43,239 --> 01:14:48,239 +I also don't want + +01:14:45,000 --> 01:14:50,920 +to okay cool um and so then increasing + +01:14:48,239 --> 01:14:55,080 +batch size + +01:14:50,920 --> 01:14:57,360 +um because each uh another thing is um + +01:14:55,080 --> 01:14:58,440 +kind of necessarily reinforcement + +01:14:57,360 --> 01:14:59,920 +learning is going to have higher + +01:14:58,440 --> 01:15:01,400 +variance than maximum likelihood + +01:14:59,920 --> 01:15:04,199 +estimation just because we're doing + +01:15:01,400 --> 01:15:07,840 +sampling and other things like this and + +01:15:04,199 --> 01:15:09,440 +um so one very simple thing you can do + +01:15:07,840 --> 01:15:11,280 +is just increase the number of examples + +01:15:09,440 --> 01:15:13,679 +or rollouts that you do before an update + +01:15:11,280 --> 01:15:15,800 +to stabilize and so I I would definitely + +01:15:13,679 --> 01:15:17,480 +suggest that if you're seeing any + +01:15:15,800 --> 01:15:18,679 +instability after doing all of the tricks + +01:15:17,480 --> 01:15:20,400 +that I mentioned before that you + +01:15:18,679 --> 01:15:23,040 +increase your batch size and often that + +01:15:20,400 --> 01:15:25,480 +can just resolve your problems + +01:15:23,040 --> 01:15:28,760 +um another uh + +01:15:25,480 --> 01:15:30,560 +thing that people often do is um save + +01:15:28,760 --> 01:15:32,040 +many many previous rollouts because + +01:15:30,560 --> 01:15:34,199 +generally doing rollouts is more + +01:15:32,040 --> 01:15:37,840 +expensive doing rollouts and collecting + +01:15:34,199 --> 01:15:39,560 +rewards is more expensive and so um you + +01:15:37,840 --> 01:15:42,360 +can save the rollouts that you have + +01:15:39,560 --> 01:15:43,840 +done before and uh keep them around so + +01:15:42,360 --> 01:15:46,600 +you can update parameters with larger + +01:15:43,840 --> 01:15:50,800 +batches in a more efficient + +01:15:46,600 --> 01:15:53,120 +way cool so that's all I have uh I just + +01:15:50,800 --> 01:15:54,400 +realized we're exactly at time so uh I + +01:15:53,120 --> 01:15:56,440 +should finish up here but I'll be happy + +01:15:54,400 --> 01:15:59,440 +to take any questions + +01:15:56,440 --> 01:15:59,440 +for + +01:16:01,679 --> 01:16:04,679 +thanks diff --git a/CMU Advanced NLP 2024 (13) Debugging and Interpretation/CMU Advanced NLP 2024 (13) Debugging and Interpretation.mp4 b/CMU
Advanced NLP 2024 (13) Debugging and Interpretation/CMU Advanced NLP 2024 (13) Debugging and Interpretation.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..78c6db2bcc70055b98caffbc4aa20763a148ea50 --- /dev/null +++ b/CMU Advanced NLP 2024 (13) Debugging and Interpretation/CMU Advanced NLP 2024 (13) Debugging and Interpretation.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7de94ea1f378595ae1d957526a4f6aa5a2c75c49db3b40a36e1f0e5ab2a17152 +size 82237142 diff --git a/CMU Advanced NLP 2024 (13) Debugging and Interpretation/metadata.json b/CMU Advanced NLP 2024 (13) Debugging and Interpretation/metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..7b9b8285df705fb15772e3cfcc3ac20c5bd19ea8 --- /dev/null +++ b/CMU Advanced NLP 2024 (13) Debugging and Interpretation/metadata.json @@ -0,0 +1,4 @@ +{ + "url": "https://www.youtube.com/watch?v=c4UwOq2J9mQ", + "title": "CMU Advanced NLP 2024 (13) Debugging and Interpretation" +} \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (13) Debugging and Interpretation/transcript.srt b/CMU Advanced NLP 2024 (13) Debugging and Interpretation/transcript.srt new file mode 100644 index 0000000000000000000000000000000000000000..21ab977d0c3b2a5536f4a75619e40c5fd5a42211 --- /dev/null +++ b/CMU Advanced NLP 2024 (13) Debugging and Interpretation/transcript.srt @@ -0,0 +1,7307 @@ +1 +00:00:00,919 --> 00:00:05,879 +so in my slides here I'm going to talk + +2 +00:00:03,760 --> 00:00:10,040 +about debugging and understanding NLP + +3 +00:00:05,879 --> 00:00:12,400 +models and this is how to tell uh when + +4 +00:00:10,040 --> 00:00:14,759 +for example both your implementations + +5 +00:00:12,400 --> 00:00:17,320 +are wrong and uh for example your + +6 +00:00:14,759 --> 00:00:19,000 +underlying assumptions are wrong or your + +7 +00:00:17,320 --> 00:00:21,240 +model is failing on particular segments + +8 +00:00:19,000 --> 00:00:23,439 +of data or stuff like that so going to + +9 +00:00:21,240 --> 00:00:26,160 +go through uh a variety of things that + +10 +00:00:23,439 --> 00:00:29,000 +can go wrong with your experiments + +11 +00:00:26,160 --> 00:00:31,679 +basically so a typical situation is + +12 +00:00:29,000 --> 00:00:33,399 +you've implemented some NLP system you + +13 +00:00:31,679 --> 00:00:35,840 +know based on neural networks of course + +14 +00:00:33,399 --> 00:00:36,920 +because that's what we use nowadays um + +15 +00:00:35,840 --> 00:00:40,000 +and you've looked at the code it + +16 +00:00:36,920 --> 00:00:42,000 +basically looks okay um but it has low + +17 +00:00:40,000 --> 00:00:44,559 +accuracy or it makes incomprehensible + +18 +00:00:42,000 --> 00:00:45,680 +errors and you would like to uh fix + +19 +00:00:44,559 --> 00:00:47,440 +these or you'd like to improve the + +20 +00:00:45,680 --> 00:00:49,120 +accuracy or something like that and so + +21 +00:00:47,440 --> 00:00:52,000 +what do I + +22 +00:00:49,120 --> 00:00:53,680 +do and I think there's three dimensions + +23 +00:00:52,000 --> 00:00:56,239 +of how you can understand your model and + +24 +00:00:53,680 --> 00:00:57,960 +your Model Behavior um the first one is + +25 +00:00:56,239 --> 00:01:00,199 +debugging the implementation so it's + +26 +00:00:57,960 --> 00:01:03,760 +identifying problems that you have when + +27 +00:01:00,199 --> 00:01:05,880 +you uh implemented something uh second + +28 +00:01:03,760 --> 00:01:07,759 +thing is actionable evaluation so + +29 +00:01:05,880 --> 00:01:09,799 
+identifying typical error cases and how + +30 +00:01:07,759 --> 00:01:11,840 +you what you can do to fix them and + +31 +00:01:09,799 --> 00:01:13,720 +finally uh interpreting predictions or + +32 +00:01:11,840 --> 00:01:18,080 +interpreting what's happening inside the + +33 +00:01:13,720 --> 00:01:19,920 +model and uh this can maybe give you a + +34 +00:01:18,080 --> 00:01:21,520 +deeper idea about what's happening in + +35 +00:01:19,920 --> 00:01:22,720 +happening in individual cases and + +36 +00:01:21,520 --> 00:01:25,240 +there's a lot of reasons why you might + +37 +00:01:22,720 --> 00:01:27,920 +want to do that uh both like to make + +38 +00:01:25,240 --> 00:01:30,280 +your models better and also for example + +39 +00:01:27,920 --> 00:01:31,840 +if you want to be sure that your system + +40 +00:01:30,280 --> 00:01:34,840 +isn't doing something illegal like + +41 +00:01:31,840 --> 00:01:36,439 +discriminating against people uh due to + +42 +00:01:34,840 --> 00:01:38,680 +protected attributes or other things + +43 +00:01:36,439 --> 00:01:41,399 +like that so um there's a number of + +44 +00:01:38,680 --> 00:01:42,920 +reasons why you'd want to do that so I'm + +45 +00:01:41,399 --> 00:01:44,399 +going to talk about the first two and + +46 +00:01:42,920 --> 00:01:48,840 +Nishant is mainly going to talk about + +47 +00:01:44,399 --> 00:01:52,000 +the second one so uh going right into + +48 +00:01:48,840 --> 00:01:55,159 +it so in neural network models uh + +49 +00:01:52,000 --> 00:01:58,880 +debugging is really important because + +50 +00:01:55,159 --> 00:02:00,920 +they're opaque they're unpredictable and + +51 +00:01:58,880 --> 00:02:03,119 +uh if you make little mistakes they can + +52 +00:02:00,920 --> 00:02:05,439 +cause big problems with your + +53 +00:02:03,119 --> 00:02:07,399 +output and another thing is that + +54 +00:02:05,439 --> 00:02:09,640 +everything is a hyperparameter including + +55 +00:02:07,399 --> 00:02:11,239 +your network size your model variations + +56 +00:02:09,640 --> 00:02:14,440 +your batch size your batching strategy your + +57 +00:02:11,239 --> 00:02:18,120 +optimizer and your learning rate + +58 +00:02:14,440 --> 00:02:19,560 +and finally unlike kind of more + +59 +00:02:18,120 --> 00:02:21,200 +traditional machine learning methods + +60 +00:02:19,560 --> 00:02:23,000 +like logistic regression or support + +61 +00:02:21,200 --> 00:02:25,160 +vector machines or something like that + +62 +00:02:23,000 --> 00:02:27,879 +you might that you might have studied in + +63 +00:02:25,160 --> 00:02:30,160 +your machine learning class um + +64 +00:02:27,879 --> 00:02:32,599 +stochastic optimization has no guarantee + +65 +00:02:30,160 --> 00:02:34,239 +about convergence um your loss might go + +66 +00:02:32,599 --> 00:02:35,720 +down then it might go up and there might + +67 +00:02:34,239 --> 00:02:38,120 +be absolutely nothing wrong with your + +68 +00:02:35,720 --> 00:02:40,200 +training or it might be you know a + +69 +00:02:38,120 --> 00:02:42,319 +serious problem so that's another issue + +70 +00:02:40,200 --> 00:02:45,440 +you need to deal + +71 +00:02:42,319 --> 00:02:48,800 +with so first I'd like to go into + +72 +00:02:45,440 --> 00:02:51,400 +possible causes of problems with your + +73 +00:02:48,800 --> 00:02:53,440 +implementation and I'm going to break + +74 +00:02:51,400 --> 00:02:55,040 +them down into a typology and based on + +75 +00:02:53,440 --> 00:02:57,040 +what part of the typology you're running + +76 +00:02:55,040 --> 00:02:59,200 +into problems with
you will need to fix + +77 +00:02:57,040 --> 00:03:00,800 +them in different ways so your first + +78 +00:02:59,200 --> 00:03:02,599 +goal when you're experiencing the + +79 +00:03:00,800 --> 00:03:04,720 +problem is identifying why you're + +80 +00:03:02,599 --> 00:03:06,400 +experiencing the problem uh because that + +81 +00:03:04,720 --> 00:03:08,760 +will lead you to a + +82 +00:03:06,400 --> 00:03:10,440 +solution so for training time problems + +83 +00:03:08,760 --> 00:03:12,560 +there's a bunch of uh things that could + +84 +00:03:10,440 --> 00:03:14,360 +be wrong uh the first is a lack of model + +85 +00:03:12,560 --> 00:03:16,280 +capacity so your model is not able to + +86 +00:03:14,360 --> 00:03:18,599 +model the phenomena that you want to + +87 +00:03:16,280 --> 00:03:20,000 +model in the first place um you could + +88 +00:03:18,599 --> 00:03:22,080 +have a poor training + +89 +00:03:20,000 --> 00:03:24,920 +algorithm uh you could just have a bug + +90 +00:03:22,080 --> 00:03:27,080 +in your code at training time another + +91 +00:03:24,920 --> 00:03:29,319 +thing is uh test time problems and these + +92 +00:03:27,080 --> 00:03:30,599 +can include a disconnect between what + +93 +00:03:29,319 --> 00:03:33,040 +you're doing at training time and what + +94 +00:03:30,599 --> 00:03:35,640 +you're testing at testing time uh + +95 +00:03:33,040 --> 00:03:37,959 +failure of search + +96 +00:03:35,640 --> 00:03:39,920 +algorithms and another thing you want to + +97 +00:03:37,959 --> 00:03:41,360 +deal with is overfitting so you're + +98 +00:03:39,920 --> 00:03:44,319 +actually doing well on the training set + +99 +00:03:41,360 --> 00:03:48,360 +but you're doing poorly on the test + +100 +00:03:44,319 --> 00:03:50,400 +Set uh finally you could have um optimiz + +101 +00:03:48,360 --> 00:03:52,640 +a mismatch between the function you're + +102 +00:03:50,400 --> 00:03:54,920 +optimizing at evaluation time and uh + +103 +00:03:52,640 --> 00:03:56,519 +what you're actually evaluating sorry + +104 +00:03:54,920 --> 00:03:58,079 +the fun the function that you're + +105 +00:03:56,519 --> 00:04:01,079 +optimizing at training time and what + +106 +00:03:58,079 --> 00:04:03,720 +you're actually evaluating at test time + +107 +00:04:01,079 --> 00:04:05,280 +and my my best piece of advice for + +108 +00:04:03,720 --> 00:04:07,959 +figuring out why things are going wrong + +109 +00:04:05,280 --> 00:04:11,040 +is don't uh try to do all of them at + +110 +00:04:07,959 --> 00:04:12,560 +once and rather uh start from the top + +111 +00:04:11,040 --> 00:04:15,239 +and work it down because the ones at the + +112 +00:04:12,560 --> 00:04:17,600 +top are often easier to uh diagnose and + +113 +00:04:15,239 --> 00:04:20,680 +the ones at the + +114 +00:04:17,600 --> 00:04:23,000 +bottom so looking at how you can debug + +115 +00:04:20,680 --> 00:04:25,919 +systems at training time uh there's a + +116 +00:04:23,000 --> 00:04:27,360 +number of ways you can do this uh but + +117 +00:04:25,919 --> 00:04:30,039 +the most important thing for training + +118 +00:04:27,360 --> 00:04:33,479 +time uh issues is looking at the loss + +119 +00:04:30,039 --> 00:04:36,759 +function calculated on the training set + +120 +00:04:33,479 --> 00:04:38,960 +and what I mean by this is don't look uh + +121 +00:04:36,759 --> 00:04:41,240 +we talked about how we can't optimize + +122 +00:04:38,960 --> 00:04:45,039 +error or accuracy easily so instead we + +123 +00:04:41,240 --> 00:04:47,120 +optimize likelihood um and so you might + 
+124 +00:04:45,039 --> 00:04:49,080 +want to look at accuracy to see whether + +125 +00:04:47,120 --> 00:04:50,759 +your model is working well but I would + +126 +00:04:49,080 --> 00:04:53,039 +urge you first to look at your + +127 +00:04:50,759 --> 00:04:55,080 +likelihood or your loss function on the + +128 +00:04:53,039 --> 00:04:57,000 +training set instead of your accuracy on + +129 +00:04:55,080 --> 00:04:58,479 +the test set for example to diagnose + +130 +00:04:57,000 --> 00:05:00,600 +these variety of + +131 +00:04:58,479 --> 00:05:02,919 +problems and the sorts of things you + +132 +00:05:00,600 --> 00:05:05,840 +want to look at are um is the loss + +133 +00:05:02,919 --> 00:05:10,639 +function going down so is it you know + +134 +00:05:05,840 --> 00:05:14,199 +converging into a good place + +135 +00:05:10,639 --> 00:05:16,280 +um in general if this is your your + +136 +00:05:14,199 --> 00:05:18,600 +loss um the first thing you should know + +137 +00:05:16,280 --> 00:05:20,440 +is like what is a good loss uh in most + +138 +00:05:18,600 --> 00:05:22,280 +cases a good loss is zero like log + +139 +00:05:20,440 --> 00:05:26,280 +likelihood the best loss you can achieve + +140 +00:05:22,280 --> 00:05:28,639 +is zero so you have zero down here um + +141 +00:05:26,280 --> 00:05:31,639 +something + +142 +00:05:28,639 --> 00:05:31,639 +like + +143 +00:05:31,919 --> 00:05:36,680 +this is uh essentially a good loss + +144 +00:05:38,080 --> 00:05:43,120 +function something like that uh + +145 +00:05:41,360 --> 00:05:45,120 +especially if this is a relatively High + +146 +00:05:43,120 --> 00:05:47,759 +number is usually a bad loss + +147 +00:05:45,120 --> 00:05:50,319 +function + +148 +00:05:47,759 --> 00:05:52,680 +um something like that on your training + +149 +00:05:50,319 --> 00:05:54,240 +set is a very bad loss function uh + +150 +00:05:52,680 --> 00:05:55,840 +something something is going seriously + +151 +00:05:54,240 --> 00:05:57,960 +wrong if you see this on your Dev set + +152 +00:05:55,840 --> 00:05:59,800 +that could be or your test set that + +153 +00:05:57,960 --> 00:06:01,199 +could be uh overfitting but but if + +154 +00:05:59,800 --> 00:06:03,440 +you're seeing that on your training set + +155 +00:06:01,199 --> 00:06:05,759 +that's usually symptomatic of a problem + +156 +00:06:03,440 --> 00:06:09,160 +so uh these are uh things that you + +157 +00:06:05,759 --> 00:06:10,960 +should be uh knowing um is it going down + +158 +00:06:09,160 --> 00:06:13,520 +basically to zero if you run training + +159 +00:06:10,960 --> 00:06:16,000 +long enough um for many epochs over your + +160 +00:06:13,520 --> 00:06:17,479 +training data so if it's not going down + +161 +00:06:16,000 --> 00:06:20,599 +to zero and it's sticking up here then + +162 +00:06:17,479 --> 00:06:20,599 +that's also an + +163 +00:06:21,120 --> 00:06:25,759 +issue and um if it's not going down to + +164 +00:06:23,840 --> 00:06:27,919 +close to zero on whatever training set + +165 +00:06:25,759 --> 00:06:30,199 +you're training on um let's say you make + +166 +00:06:27,919 --> 00:06:31,840 +your training set extremely small + +167 +00:06:30,199 --> 00:06:33,319 +uh at least in that case it should go + +168 +00:06:31,840 --> 00:06:34,960 +down to zero otherwise you might have a + +169 +00:06:33,319 --> 00:06:37,199 +serious problem in your + +170 +00:06:34,960 --> 00:06:39,240 +implementation so these are good things + +171 +00:06:37,199 --> 00:06:41,960 +to check first when you're training a + +172 +00:06:39,240 --> 
00:06:45,199 +model um and there's a number of reasons + +173 +00:06:41,960 --> 00:06:47,759 +why this might not be helping or why + +174 +00:06:45,199 --> 00:06:50,880 +this might not be happening so um your + +175 +00:06:47,759 --> 00:06:53,120 +Mo model might be too weak and so in + +176 +00:06:50,880 --> 00:06:55,440 +general larger models tend to perform + +177 +00:06:53,120 --> 00:06:58,000 +better uh especially if you're using a + +178 +00:06:55,440 --> 00:06:59,800 +pre-trained model and um this is just an + +179 +00:06:58,000 --> 00:07:03,800 +example from the T5 paper where they + +180 +00:06:59,800 --> 00:07:06,680 +scale up the T5 model um from a + +181 +00:07:03,800 --> 00:07:09,319 +relatively small model to what at the + +182 +00:07:06,680 --> 00:07:12,199 +time was a very large model of 11 + +183 +00:07:09,319 --> 00:07:14,360 +billion parameters now this is you know + +184 +00:07:12,199 --> 00:07:17,479 +a moderately sized model or maybe even + +185 +00:07:14,360 --> 00:07:20,879 +small model by some standards but anyway + +186 +00:07:17,479 --> 00:07:23,800 +you can see that it uh in continues to + +187 +00:07:20,879 --> 00:07:26,479 +increase one really interesting + +188 +00:07:23,800 --> 00:07:30,080 +phenomenon is uh that actually larger + +189 +00:07:26,479 --> 00:07:33,879 +models can learn faster or at least with + +190 +00:07:30,080 --> 00:07:36,680 +fewer steps than uh smaller + +191 +00:07:33,879 --> 00:07:40,199 +models and so this + +192 +00:07:36,680 --> 00:07:42,240 +is an interesting example this paper uh + +193 +00:07:40,199 --> 00:07:43,919 +on neural scaling was it's a very + +194 +00:07:42,240 --> 00:07:48,000 +influential paper but basically what + +195 +00:07:43,919 --> 00:07:51,000 +they show is the darker purple ones are + +196 +00:07:48,000 --> 00:07:54,599 +smaller models the yellow ones are + +197 +00:07:51,000 --> 00:07:57,159 +bigger models and what you can see here + +198 +00:07:54,599 --> 00:07:59,639 +is the purple model and on the left side + +199 +00:07:57,159 --> 00:08:02,120 +they have the number of tokens processed + +200 +00:07:59,639 --> 00:08:05,759 +the right side they have the number of + +201 +00:08:02,120 --> 00:08:08,159 +uh compute or the amount of compute um + +202 +00:08:05,759 --> 00:08:10,080 +and so what you can see is if you just + +203 +00:08:08,159 --> 00:08:12,240 +look at the number of tokens processed + +204 +00:08:10,080 --> 00:08:14,280 +the larger the model the faster it + +205 +00:08:12,240 --> 00:08:17,720 +converges which + +206 +00:08:14,280 --> 00:08:21,400 +is maybe a little bit surprising maybe a + +207 +00:08:17,720 --> 00:08:22,680 +little bit you or maybe uh like some + +208 +00:08:21,400 --> 00:08:24,879 +people have the intuition that this + +209 +00:08:22,680 --> 00:08:26,440 +should be the case but when I first saw + +210 +00:08:24,879 --> 00:08:27,759 +this I found it a little bit surprising + +211 +00:08:26,440 --> 00:08:29,000 +because I thought it would be so large + +212 +00:08:27,759 --> 00:08:29,960 +and noisy that the model would have + +213 +00:08:29,000 --> 00:08:32,320 +trouble fit + +214 +00:08:29,960 --> 00:08:34,200 +you know fitting the data as quickly but + +215 +00:08:32,320 --> 00:08:36,200 +there's actually a good reason for this + +216 +00:08:34,200 --> 00:08:37,240 +does anyone have a guess about why this + +217 +00:08:36,200 --> 00:08:39,719 +is + +218 +00:08:37,240 --> 00:08:41,240 +thee we've talked a little bit about the + +219 +00:08:39,719 --> 00:08:44,120 +underlying 
phenomena for this in + +220 +00:08:41,240 --> 00:08:48,360 +previous classes so you might be able to + +221 +00:08:44,120 --> 00:08:48,360 +think back to some of the things you + +222 +00:08:50,480 --> 00:08:56,040 +yeah yeah so um just to repeat there's a + +223 +00:08:54,160 --> 00:08:57,720 +lot of different parameters so it can + +224 +00:08:56,040 --> 00:08:59,880 +try to converge along a lot of different + +225 +00:08:57,720 --> 00:09:01,920 +dimensions so if we think back to the + +226 +00:08:59,880 --> 00:09:04,079 +like model pruning class and other stuff + +227 +00:09:01,920 --> 00:09:06,640 +like that um part of the reason why we + +228 +00:09:04,079 --> 00:09:08,000 +can prune large models so efficiently is + +229 +00:09:06,640 --> 00:09:10,200 +because only like a small number of the + +230 +00:09:08,000 --> 00:09:12,440 +parameters are actually useful and so if + +231 +00:09:10,200 --> 00:09:15,120 +you start out with a much larger model + +232 +00:09:12,440 --> 00:09:17,720 +it's more likely to have useful subsets + +233 +00:09:15,120 --> 00:09:20,320 +of the parameters basically um which is + +234 +00:09:17,720 --> 00:09:21,560 +called the lottery ticket hypothesis uh + +235 +00:09:20,320 --> 00:09:23,839 +there there's a famous paper called the + +236 +00:09:21,560 --> 00:09:27,560 +lottery ticket hypothesis examines this + +237 +00:09:23,839 --> 00:09:29,680 +phenomenon so um one one interesting + +238 +00:09:27,560 --> 00:09:32,160 +thing is you can see that even if you + +239 +00:09:29,680 --> 00:09:35,640 +scale up the compute even if you measure + +240 +00:09:32,160 --> 00:09:37,640 +based on compute the uh larger models + +241 +00:09:35,640 --> 00:09:38,959 +eventually surpass the smaller models in + +242 +00:09:37,640 --> 00:09:41,920 +terms of how efficient they are at + +243 +00:09:38,959 --> 00:09:44,680 +modeling the data and that's just + +244 +00:09:41,920 --> 00:09:46,760 +because models tend to learn well for a + +245 +00:09:44,680 --> 00:09:49,560 +while and then they basically reach + +246 +00:09:46,760 --> 00:09:51,760 +their capacity and stop learning well or + +247 +00:09:49,560 --> 00:09:53,680 +they start learning very slowly and once + +248 +00:09:51,760 --> 00:09:57,120 +you get to that point the larger models + +249 +00:09:53,680 --> 00:09:58,800 +work better so there's a kind of + +250 +00:09:57,120 --> 00:10:00,640 +counterintuitive thing that if you want + +251 +00:09:58,800 --> 00:10:04,160 +to train faster you actually can train a + +252 +00:10:00,640 --> 00:10:06,839 +larger model and uh that will that will + +253 +00:10:04,160 --> 00:10:08,000 +uh get you to a good solution at some + +254 +00:10:06,839 --> 00:10:09,640 +point that will get you to a good + +255 +00:10:08,000 --> 00:10:11,120 +solution faster than a smaller model + +256 +00:10:09,640 --> 00:10:15,200 +would you know of course you need memory + +257 +00:10:11,120 --> 00:10:15,200 +and stuff but why are looking + +258 +00:10:20,040 --> 00:10:26,920 +at so this is test loss training loss + +259 +00:10:22,760 --> 00:10:30,680 +also looks like this um I think on + +260 +00:10:26,920 --> 00:10:34,360 +this particular + +261 +00:10:30,680 --> 00:10:37,519 +on this particular paper they never + +262 +00:10:34,360 --> 00:10:39,399 +repeated data and if you never repeat + +263 +00:10:37,519 --> 00:10:42,560 +data actually your training loss looks + +264 +00:10:39,399 --> 00:10:44,680 +very similar to your test loss because + +265 +00:10:42,560 --> 00:10:46,079 +it if you like actually 
if you can + +00:10:44,680 --> 00:10:48,760 +assume your training data set and your + +00:10:46,079 --> 00:10:50,880 +test data set are um uh identically + +00:10:48,760 --> 00:10:52,279 +distributed your training loss on new + +00:10:50,880 --> 00:10:54,600 +training data should be exactly the same + +00:10:52,279 --> 00:10:55,959 +as your test loss so I think that's + +00:10:54,600 --> 00:10:57,760 +basically why they were justified in + +00:10:55,959 --> 00:11:01,000 +doing that good but they probably did + +00:10:57,760 --> 00:11:03,639 +test loss to like squash the concern + +00:11:01,000 --> 00:11:05,839 +that this was overfitting or + +00:11:03,639 --> 00:11:09,200 +something but good + +00:11:05,839 --> 00:11:11,279 +question um cool so the these are are + +00:11:09,200 --> 00:11:13,000 +good things to know um so basically if + +00:11:11,279 --> 00:11:14,839 +you see your model doing something like + +00:11:13,000 --> 00:11:16,279 +this um plateauing out maybe your + +00:11:14,839 --> 00:11:18,680 +model's too small and you need to train a + +00:11:16,279 --> 00:11:20,920 +bigger one + +00:11:18,680 --> 00:11:22,200 +basically another uh piece of trouble + +00:11:20,920 --> 00:11:26,800 +that you can have is trouble with + +00:11:22,200 --> 00:11:29,519 +optimization and basically um you should + +00:11:26,800 --> 00:11:31,600 +check your optimizer um usually people + +00:11:29,519 --> 00:11:35,639 +are using Adam variants nowadays like + +00:11:31,600 --> 00:11:37,839 +Adam or AdamW so just use that um + +00:11:35,639 --> 00:11:39,639 +learning rate uh so make sure that the + +00:11:37,839 --> 00:11:41,160 +learning rate you're using is standard + +00:11:39,639 --> 00:11:43,399 +for kind of the model size that you're + +00:11:41,160 --> 00:11:44,920 +using and the best way to do this is uh + +00:11:43,399 --> 00:11:46,000 +look at previous papers and see what + +00:11:44,920 --> 00:11:50,160 +they're + +00:11:46,000 --> 00:11:51,680 +using um initialization most people + +00:11:50,160 --> 00:11:53,440 +nowadays will not be training from + +00:11:51,680 --> 00:11:55,440 +scratch but if you are training from + +00:11:53,440 --> 00:11:58,040 +scratch how you initialize your model is + +00:11:55,440 --> 00:11:59,399 +really important and normally the way + +00:11:58,040 --> 00:12:03,320 +you do this is you do this with some + +00:11:59,399 --> 00:12:05,079 +sort of uniform random noise and uh + +00:12:03,320 --> 00:12:06,959 +specifically you can pick the uniform + +00:12:05,079 --> 00:12:08,800 +random noise in intelligent ways based + +00:12:06,959 --> 00:12:12,240 +on the the data size which I'll talk + +00:12:08,800 --> 00:12:13,920 +about in a second um also mini batching + +00:12:12,240 --> 00:12:15,639 +um are you using sufficiently large + +00:12:13,920 --> 00:12:17,480 +batches of data if you're using small + +00:12:15,639 --> 00:12:18,720 +batches of data you might have too much + +00:12:17,480 --> 00:12:21,279 +noise in your training and it might + +00:12:18,720 --> 00:12:23,839 +diverge so uh these are things you need + +00:12:21,279 --> 00:12:23,839 +to think about as + +00:12:25,279 --> 00:12:30,560 +well
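As a minimal sketch of those three knobs (optimizer, learning rate, size-aware uniform initialization), under the assumption that the intelligent choice of uniform noise meant here is a Glorot/Xavier-style bound scaled by layer size; the learning rate below is a placeholder, and as the lecture says you should copy it from prior work at your model scale:

```python
import math
import torch
from torch import nn

def init_uniform_by_size(layer: nn.Linear) -> None:
    # Xavier/Glorot-style choice: the uniform range shrinks as the layer grows
    bound = math.sqrt(6.0 / (layer.in_features + layer.out_features))
    nn.init.uniform_(layer.weight, -bound, bound)
    nn.init.zeros_(layer.bias)

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
for module in model.modules():
    if isinstance(module, nn.Linear):
        init_uniform_by_size(module)

# an Adam variant, with a learning rate taken from comparable previous papers
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
```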
+00:12:27,519 --> 00:12:35,000 +cool um so these are training time + +00:12:30,560 --> 00:12:37,320 +things um the next thing is debugging at + +00:12:35,000 --> 00:12:37,320 +test + +00:12:38,160 --> 00:12:43,839 +time and this is particularly important + +00:12:41,240 --> 00:12:47,320 +if you're doing any sort + +00:12:43,839 --> 00:12:48,880 +of like I guess a lot of this has kind + +00:12:47,320 --> 00:12:51,360 +of been commoditized and it's + +00:12:48,880 --> 00:12:52,560 +implemented in hugging face and stuff + +00:12:51,360 --> 00:12:55,120 +like that and as long as you're using + +00:12:52,560 --> 00:12:57,279 +the standard implementations you're less + +00:12:55,120 --> 00:12:59,000 +likely to run into these bugs but if you + +00:12:57,279 --> 00:13:00,519 +are implementing anything on your own + +00:12:59,000 --> 00:13:03,040 +this is actually really tricky and you + +00:13:00,519 --> 00:13:07,880 +can easily make mistakes so uh it's + +00:13:03,040 --> 00:13:08,959 +important to to know about it so um what + +00:13:07,880 --> 00:13:10,680 +one of the reasons why you can have + +00:13:08,959 --> 00:13:12,240 +training and test disconnects especially + +00:13:10,680 --> 00:13:14,399 +if you're doing something like text + +00:13:12,240 --> 00:13:15,959 +generation is that usually your loss + +00:13:14,399 --> 00:13:17,720 +calculation and prediction functions + +00:13:15,959 --> 00:13:20,480 +will be implemented in different + +00:13:17,720 --> 00:13:23,360 +functions and like anything in software + +00:13:20,480 --> 00:13:25,440 +engineering um this can be a source of + +00:13:23,360 --> 00:13:26,760 +bugs duplicated source code can be a + +00:13:25,440 --> 00:13:28,440 +source of bugs because you might + +00:13:26,760 --> 00:13:30,199 +implement one thing in one place in one + +00:13:28,440 --> 00:13:33,000 +way another thing in another place in + +00:13:30,199 --> 00:13:35,560 +another way so this is no exception to + +00:13:33,000 --> 00:13:37,399 +that um it's especially true for + +00:13:35,560 --> 00:13:39,000 +structured prediction models so anything + +00:13:37,399 --> 00:13:40,399 +where you're not just making a single + +00:13:39,000 --> 00:13:42,079 +prediction but you're making multiple + +00:13:40,399 --> 00:13:43,839 +predictions in a row so you need to be a + +00:13:42,079 --> 00:13:46,959 +little bit careful about + +00:13:43,839 --> 00:13:49,880 +that um another thing that you need to + +00:13:46,959 --> 00:13:51,079 +pay attention to is often uh + +00:13:49,880 --> 00:13:52,680 +especially if you're doing your own + +00:13:51,079 --> 00:13:55,880 +implementation loss calculation is + +00:13:52,680 --> 00:13:59,800 +mini batched and generation is not or in + +00:13:55,880 --> 00:14:02,199 +highly optimized versions of um of + +00:13:59,800 --> 00:14:03,880 +inference you might be doing inference + +00:14:02,199 --> 00:14:05,360 +with dynamic batching and stuff like + +00:14:03,880 --> 00:14:06,720 +that and it might become complicated you + +00:14:05,360 --> 00:14:09,800 +might make + +00:14:06,720 --> 00:14:12,160 +mistakes um so how do + +00:14:09,800 --> 00:14:15,839 +we make sure that we're not making any + +00:14:12,160 --> 00:14:18,560 +mistakes here um there's a really simple + +00:14:15,839 --> 00:14:21,199 +way to debug any sort of mini batched + +00:14:18,560 --> 00:14:24,199 +loss calculation because normally when + +00:14:21,199 --> 00:14:27,000
+we mini batch loss calculations we're + +00:14:24,199 --> 00:14:31,079 +simultaneously calculating uh the loss + +00:14:27,000 --> 00:14:35,600 +for like uh four four or eight or + +00:14:31,079 --> 00:14:37,560 +whatever sequences at a time and so you + +00:14:35,600 --> 00:14:40,279 +can calculate the loss with a large + +00:14:37,560 --> 00:14:42,000 +batch size like 32 and then calculate + +00:14:40,279 --> 00:14:44,920 +the loss for each uh sentence + +00:14:42,000 --> 00:14:47,720 +individually and sum them together and + +00:14:44,920 --> 00:14:49,480 +these uh values should be the same and + +00:14:47,720 --> 00:14:52,160 +this can help make sure that you don't + +00:14:49,480 --> 00:14:55,120 +have any you know issues with your + +00:14:52,160 --> 00:14:57,959 +padding or your masking or other things + +00:14:55,120 --> 00:14:59,800 +like this um so this is particularly + +00:14:57,959 --> 00:15:01,959 +important if you're not just using out + +00:14:59,800 --> 00:15:04,240 +of the box things so you have a slightly + +00:15:01,959 --> 00:15:06,240 +unusually structured model with like + +00:15:04,240 --> 00:15:08,880 +hierarchical encoding or anything like + +00:15:06,240 --> 00:15:11,680 +that you need to be really careful about + +00:15:08,880 --> 00:15:15,440 +that um you can even create unit tests + +00:15:11,680 --> 00:15:17,399 +that test this so like um in machine + +00:15:15,440 --> 00:15:18,959 +learning code we don't write unit tests + +00:15:17,399 --> 00:15:20,160 +or especially neural network based + +00:15:18,959 --> 00:15:22,440 +machine learning code we don't write + +00:15:20,160 --> 00:15:24,160 +unit tests that often because it's kind + +00:15:22,440 --> 00:15:26,279 +of hard to do there's lots of randomness + +00:15:24,160 --> 00:15:27,959 +and other stuff like that um but this is + +00:15:26,279 --> 00:15:30,959 +one thing that you can easily test and + +00:15:27,959 --> 00:15:30,959 +and make sure that you don't have these mistakes
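A sketch of that unit test, assuming a hypothetical model.loss(...) that returns the summed loss over the sentences it is given; real interfaces will differ, and a small float tolerance is needed because batched and per-sentence computation can round differently:

```python
import torch

def test_batched_loss_matches_individual(model, batch):
    # summed loss over the whole mini-batch at once
    batched = model.loss(batch, reduction="sum")
    # the same sentences scored one at a time, then summed
    individual = sum(model.loss([example], reduction="sum") for example in batch)
    # padding or masking bugs usually show up as a mismatch here
    assert torch.allclose(batched, individual, atol=1e-4)
```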
+00:15:32,440 --> 00:15:39,319 +um any sort of uh generation + +00:15:36,480 --> 00:15:43,199 +algorithm uh so when you're generating + +00:15:39,319 --> 00:15:44,639 +or decoding um you can make sure that + +00:15:43,199 --> 00:15:47,639 +your decoding code is getting the same + +00:15:44,639 --> 00:15:50,040 +score as when you calculate the loss and + +00:15:47,639 --> 00:15:52,959 +an easy way to do this is you call the + +00:15:50,040 --> 00:15:54,759 +decoding function to generate an output + +00:15:52,959 --> 00:15:57,399 +and normally when you're doing any sort + +00:15:54,759 --> 00:15:59,480 +of search or sampling or something like + +00:15:57,399 --> 00:16:02,120 +that during the search or sampling + +00:15:59,480 --> 00:16:05,000 +you're calculating the logits or the log + +00:16:02,120 --> 00:16:07,399 +probabilities of each step that you + +00:16:05,000 --> 00:16:09,120 +sample so you keep track of that during + +00:16:07,399 --> 00:16:12,279 +your sampling + +00:16:09,120 --> 00:16:14,319 +algorithm and then after that you call + +00:16:12,279 --> 00:16:16,800 +the loss function on the generated + +00:16:14,319 --> 00:16:18,639 +output and you calculate the loss + +00:16:16,800 --> 00:16:20,360 +according to the loss function and the + +00:16:18,639 --> 00:16:22,240 +score of these two things should be the + +00:16:20,360 --> 00:16:26,440 +same uh + +00:16:22,240 --> 00:16:26,440 +so um you know you do your + +00:16:27,920 --> 00:16:35,279 +generate and that gives you an + +00:16:32,000 --> 00:16:35,279 +output and + +00:16:35,600 --> 00:16:42,360 +score and then you do um + +00:16:39,319 --> 00:16:45,839 +loss on the + +00:16:42,360 --> 00:16:49,040 +output and that gives you the score + +00:16:45,839 --> 00:16:53,079 +two and then you just compare these two + +00:16:49,040 --> 00:16:56,360 +things together and this can uh in in my + +00:16:53,079 --> 00:17:01,120 +experience this has allowed me to find + +00:16:56,360 --> 00:17:03,240 +the majority of the bugs in um these two + +00:17:01,120 --> 00:17:04,679 +things um have allowed me to find the + +00:17:03,240 --> 00:17:06,600 +majority of the bugs whenever I was + +00:17:04,679 --> 00:17:09,199 +doing any sort of like complex thing + +00:17:06,600 --> 00:17:11,880 +with respect to generation or models and + +00:17:09,199 --> 00:17:13,360 +stuff like that so um it's a very common + +00:17:11,880 --> 00:17:15,439 +place for bugs even if you're pretty + +00:17:13,360 --> 00:17:17,280 +familiar with models so I I would highly + +00:17:15,439 --> 00:17:19,760 +recommend + +00:17:17,280 --> 00:17:21,319 +that um this is particularly bad when + +00:17:19,760 --> 00:17:25,559 +you're doing something like a search + +00:17:21,319 --> 00:17:28,400 +algorithm like beam search um and + +00:17:25,559 --> 00:17:30,400 +so beam search uh as you know from the + +00:17:28,400 --> 00:17:34,200 +generation class instead of picking one + +00:17:30,400 --> 00:17:37,080 +high probability uh you know word in + +00:17:34,200 --> 00:17:40,160 +your next step you maintain several + +00:17:37,080 --> 00:17:41,960 +paths and one way that you can fix this + +00:17:40,160 --> 00:17:44,320 +is as you make search better the model + +00:17:41,960 --> 00:17:45,760 +score should get better so the log + +00:17:44,320 --> 00:17:48,240 +likelihood of the output should get + +00:17:45,760 --> 00:17:50,280 +better almost all of the time so you can + +00:17:48,240 --> 00:17:51,840 +search with varying beam sizes and make + +00:17:50,280 --> 00:17:55,280 +sure that you get a better overall model + +00:17:51,840 --> 00:17:57,559 +score at the end so um and you can even + +00:17:55,280 --> 00:17:59,320 +create a unit test testing this as well + +00:17:57,559 --> 00:18:01,000 +I don't think that that many people will + +00:17:59,320 --> 00:18:02,480 +be reimplementing beam search so you + +00:18:01,000 --> 00:18:04,120 +might not need to worry about that too + +00:18:02,480 --> 00:18:05,679 +much but in case you are doing anything + +00:18:04,120 --> 00:18:08,159 +with respect to search algorithms it's a + +00:18:05,679 --> 00:18:08,159 +good thing to + +00:18:08,880 --> 00:18:15,159 +know
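Both checks just described also fit into small unit tests. This is a sketch under an assumed interface where model.generate(...) returns an output plus the score tracked during search or sampling, and model.loss(...) returns the model score of a given output; adapt the names and signs to your own codebase:

```python
def test_generate_score_matches_loss(model, source):
    output, generation_score = model.generate(source)
    loss_score = model.loss(source, output)   # score of the same output
    assert abs(generation_score - loss_score) < 1e-4

def test_bigger_beam_scores_at_least_as_well(model, source, sizes=(1, 2, 4, 8)):
    scores = [model.generate(source, beam_size=k)[1] for k in sizes]
    # a larger beam should almost always find an output the model scores higher
    assert all(later >= earlier - 1e-6
               for earlier, later in zip(scores, scores[1:]))
```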
+00:18:10,480 --> 00:18:15,159 +cool um any questions about these two so + +00:18:16,919 --> 00:18:24,159 +far no okay um so the second the next + +00:18:22,600 --> 00:18:25,400 +thing I want to talk about this is + +00:18:24,159 --> 00:18:27,840 +something that people think about a + +00:18:25,400 --> 00:18:29,560 +little bit less uh but it's actually + +00:18:27,840 --> 00:18:31,280 +something really important to know + +456 +00:18:29,400 --> 00:18:34,280 +because it will affect you it will + +00:18:31,280 --> 00:18:35,799 +affect everybody uh to some extent it + +00:18:34,280 --> 00:18:40,760 +will affect you to a greater or lesser + +00:18:35,799 --> 00:18:41,520 +extent depending on um what uh type of + +00:18:40,760 --> 00:18:44,480 +you + +00:18:41,520 --> 00:18:46,799 +know system you're building but it will + +00:18:44,480 --> 00:18:48,760 +definitely affect everybody and that's + +00:18:46,799 --> 00:18:50,960 +the mismatch between the the function + +00:18:48,760 --> 00:18:53,440 +that you're optimizing at training time + +00:18:50,960 --> 00:18:55,240 +and the evaluation metric that you're + +00:18:53,440 --> 00:18:58,000 +evaluating and + +00:18:55,240 --> 00:18:59,679 +so uh like as I said in the + +00:18:58,000 --> 00:19:01,679 +reinforcement learning class it's very + +00:18:59,679 --> 00:19:03,640 +common to optimize for maximum + +00:19:01,679 --> 00:19:06,039 +likelihood for training uh but there's + +00:19:03,640 --> 00:19:07,840 +all kinds of problems with this you know + +00:19:06,039 --> 00:19:09,640 +um with respect to the mistake it not + +00:19:07,840 --> 00:19:11,640 +being sensitive to mistakes it not being + +00:19:09,640 --> 00:19:14,799 +sensitive to your generation + +00:19:11,640 --> 00:19:16,520 +algorithm um but even though your + +00:19:14,799 --> 00:19:19,880 +likelihood is getting better accuracy + +00:19:16,520 --> 00:19:22,799 +can get worse and this is a super simple + +00:19:19,880 --> 00:19:25,080 +example with uh image classification on + +00:19:22,799 --> 00:19:27,919 +MNIST and I I ran this experiment with + +00:19:25,080 --> 00:19:30,880 +like 10 lines of pytorch code or + +00:19:27,919 --> 00:19:36,840 +something like this uh maybe more like + +00:19:30,880 --> 00:19:40,080 +40 lines of PyTorch um and so here um on the + +00:19:36,840 --> 00:19:43,120 +left side we have the loss on the + +00:19:40,080 --> 00:19:46,600 +training set and the test set or the dev + +00:19:43,120 --> 00:19:48,559 +set and here we have accuracy on the + +00:19:46,600 --> 00:19:50,799 +training set and the test + +00:19:48,559 --> 00:19:55,000 +set + +00:19:50,799 --> 00:19:56,159 +and so oops I showed you the answer so I + +00:19:55,000 --> 00:19:58,799 +was going to do a quiz but I + +00:19:56,159 --> 00:20:00,559 +accidentally showed you the answer um + +00:19:58,799 --> 00:20:04,440 +but the problem here is basically + +00:20:00,559 --> 00:20:06,320 +because um the the loss you're + +00:20:04,440 --> 00:20:09,400 +calculating the likelihood of the + +00:20:06,320 --> 00:20:11,120 +correct answer and the likelihood of the + +00:20:09,400 --> 00:20:12,440 +correct answer is the probability of + +00:20:11,120 --> 00:20:15,000 +getting the correct + +00:20:12,440 --> 00:20:17,240 +answer the accuracy is the number of + +00:20:15,000 --> 00:20:20,280 +times you're getting the correct answer + +00:20:17,240 --> 00:20:23,799 +so as you train a model to get more and + +00:20:20,280 --> 00:20:25,440 +more confident it gets better it gets + +00:20:23,799 --> 00:20:27,840 +better and better at getting more + +00:20:25,440 --> 00:20:30,039 +answers correct but it also gets more + +00:20:27,840 --> 00:20:33,360 +and more confident in its answers and so + +00:20:30,039 --> 00:20:36,200 +if the you know there's any example that + +00:20:33,360 --> 00:20:37,840 +it's really bad at um it might get very + +00:20:36,200 --> 00:20:42,320 +confident in + +00:20:37,840 --> 00:20:44,760 +that answer that bad answer and the log + +00:20:42,320 --> 00:20:47,320 +likelihood of that answer will go up or + +00:20:44,760 --> 00:20:49,679 +sorry the log likelihood will go down so + +00:20:47,320 --> 00:20:54,360 +the negative log likelihood will go up + +00:20:49,679 --> 00:20:56,720 +which is the loss so basically + +00:20:54,360 --> 00:20:59,559 +um the + +00:20:56,720 --> 00:21:01,039 +uh the loss that you're calculating and + +00:20:59,559 --> 00:21:03,840 +the thing that you care about in the end + +00:21:01,039 --> 00:21:07,120 +accuracy can be decorrelated
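A tiny worked example of that decorrelation, with invented numbers for illustration: take a binary classifier evaluated on three examples, where the listed values are the probabilities it assigns to the correct class. Accuracy only counts argmax hits, while the loss also punishes confident mistakes, so the loss can get much worse while accuracy stays flat:

```python
import math

def nll(probs):
    # negative log likelihood of the correct class, summed over examples
    return -sum(math.log(p) for p in probs)

early = [0.6, 0.6, 0.4]     # 2/3 accuracy, NLL ~= 1.94
late  = [0.99, 0.99, 0.01]  # still 2/3 accuracy, NLL ~= 4.63
print(nll(early), nll(late))
```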
+00:21:03,840 --> 00:21:09,520 +um so there's also an interesting + +00:21:07,120 --> 00:21:12,080 +example um in text generation and this + +00:21:09,520 --> 00:21:14,000 +is part of the reason why uh we have all + +00:21:12,080 --> 00:21:15,880 +these other text generation algorithms + +00:21:14,000 --> 00:21:20,080 +like nucleus sampling or top-k + +00:21:15,880 --> 00:21:23,039 +sampling or other things like this is um + +00:21:20,080 --> 00:21:25,080 +actually in a maximum likelihood trained + +00:21:23,039 --> 00:21:27,799 +model better + +00:21:25,080 --> 00:21:29,559 +search uh in in other words finding a + +00:21:27,799 --> 00:21:32,159 +better model score + +00:21:29,559 --> 00:21:36,120 +doesn't necessarily give you a better + +00:21:32,159 --> 00:21:37,840 +generation result and this is an example + +00:21:36,120 --> 00:21:39,080 +uh from machine translation from a + +00:21:37,840 --> 00:21:41,880 +really long time + +00:21:39,080 --> 00:21:44,000 +ago uh but you know it still persists + +00:21:41,880 --> 00:21:47,520 +today which is they did beam search with + +00:21:44,000 --> 00:21:53,600 +a larger and larger beam + +00:21:47,520 --> 00:21:56,640 +and the the best beam for finding um + +00:21:53,600 --> 00:21:59,640 +the best scoring output basically was + +00:21:56,640 --> 00:22:01,600 +four and then the accuracy goes down and + +00:21:59,640 --> 00:22:05,559 +down and down as they find a better + +00:22:01,600 --> 00:22:07,200 +output and does anyone remember when we + +00:22:05,559 --> 00:22:09,679 +talked about the generation class where + +00:22:07,200 --> 00:22:09,679 +this comes + +00:22:10,120 --> 00:22:15,000 +from I don't know how explicitly we said + +00:22:12,960 --> 00:22:18,600 +we mentioned it in the generation class + +00:22:15,000 --> 00:22:20,360 +but basically the problem is um maximum + +00:22:18,600 --> 00:22:22,559 +likelihood trained models like shorter + +00:22:20,360 --> 00:22:25,240 +outputs generally because if as we make + +00:22:22,559 --> 00:22:27,760 +the output longer uh the probability of + +00:22:25,240 --> 00:22:29,679 +the longer outputs goes down so as you + +00:22:27,760 --> 00:22:32,039 +improve the beam it will start + +00:22:29,679 --> 00:22:34,799 +generating shorter and shorter outputs + +00:22:32,039 --> 00:22:36,480 +and because of that the score goes down + +00:22:34,799 --> 00:22:39,039 +because BLEU score doesn't like outputs + +00:22:36,480 --> 00:22:41,520 +that are too short essentially
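The hack described next, rescoring beam hypotheses by their average rather than total log likelihood so longer outputs are not penalized just for having more tokens, can be sketched in one hypothetical helper (alpha is an assumed knob; 1.0 gives the plain per-token average):

```python
def length_normalized_score(logprob_sum, length, alpha=1.0):
    # average per-token log likelihood instead of the raw sum
    return logprob_sum / (length ** alpha)
```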
so there + +552 +00:22:39,039 --> 00:22:44,039 +are um there are hacks around this for + +00:22:41,520 --> 00:22:46,200 +beam search where essentially what you + +00:22:44,039 --> 00:22:48,559 +do is you uh take the average log + +00:22:46,200 --> 00:22:51,159 +likelihood of each token instead of the + +00:22:48,559 --> 00:22:52,760 +overall log likelihood of the whole output um + +00:22:51,159 --> 00:22:54,679 +and that improves a little bit but still + +00:22:52,760 --> 00:22:59,720 +you can see as you search more the the + +00:22:54,679 --> 00:23:01,440 +accuracy goes down so um so that's the + +00:22:59,720 --> 00:23:04,039 +the general idea + +00:23:01,440 --> 00:23:08,760 +here there's a bunch of ways you can fix + +00:23:04,039 --> 00:23:10,600 +this um the most principled way is to + +00:23:08,760 --> 00:23:12,760 +use a method like reinforcement learning + +00:23:10,600 --> 00:23:14,120 +or something uh some sort of you know + +00:23:12,760 --> 00:23:15,520 +structured training algorithm that + +00:23:14,120 --> 00:23:17,159 +allows you to train your models so that + +00:23:15,520 --> 00:23:20,159 +you don't get these bad + +00:23:17,159 --> 00:23:22,159 +outputs um another way that's much + +00:23:20,159 --> 00:23:25,640 +easier is to do early stopping with the + +00:23:22,159 --> 00:23:30,480 +evaluation metric as opposed to um early + +00:23:25,640 --> 00:23:32,840 +stopping with the loss and by doing this + +00:23:30,480 --> 00:23:34,520 +you would stop here so you would stop + +00:23:32,840 --> 00:23:37,159 +where you get the highest evaluation + +00:23:34,520 --> 00:23:42,600 +metric uh that you care about instead of + +00:23:37,159 --> 00:23:44,400 +stopping here uh so that's um that's one + +00:23:42,600 --> 00:23:46,600 +way you can fix this + +00:23:44,400 --> 00:23:49,760 +problem does anyone have an idea about + +00:23:46,600 --> 00:23:49,760 +why this might be a bad + +00:23:49,840 --> 00:23:57,159 +idea why might it be a bad idea to stop + +00:23:52,480 --> 00:23:57,159 +here instead of stopping here for + +00:23:57,440 --> 00:24:00,440 +example + +00:24:05,320 --> 00:24:10,200 +yeah it's kind of overfitting it's + +00:24:07,760 --> 00:24:13,640 +overfitting in a particular way um but + +00:24:10,200 --> 00:24:16,000 +remember here this is still the accuracy + +00:24:13,640 --> 00:24:18,400 +on the dev set so we're not overfitting + +00:24:16,000 --> 00:24:20,080 +so much that the dev accuracy is going + +00:24:18,400 --> 00:24:24,279 +down that would be a different variety + +00:24:20,080 --> 00:24:27,360 +of overfitting but any any + +00:24:24,279 --> 00:24:29,799 +ideas go for it we don't want to be too + +00:24:27,360 --> 00:24:31,600 +confident yeah exactly we don't want it + +00:24:29,799 --> 00:24:32,880 +to be too confident in its wrong answers + +00:24:31,600 --> 00:24:35,279 +and we talked about + +00:24:32,880 --> 00:24:38,000 +calibration um where calibration is + +00:24:35,279 --> 00:24:40,039 +basically like how accurate are the + +00:24:38,000 --> 00:24:41,480 +probability estimates so this model over + +00:24:40,039 --> 00:24:43,600 +here is going to be really poorly + +00:24:41,480 --> 00:24:45,159 +calibrated it's going to be very + +00:24:43,600 --> 00:24:45,159 +confident regardless of whether it's + +correct or not and that could be a + +00:24:46,240 --> 00:24:50,840 +problem in downstream uh downstream tasks
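A sketch of that metric-based early stopping, with hypothetical train_one_epoch and evaluate_dev_metric helpers; the point is only that checkpoint selection uses the evaluation metric you actually care about (accuracy, BLEU, etc.) rather than the dev loss, with the calibration caveat just discussed kept in mind:

```python
import copy

best_metric, best_state = float("-inf"), None
for epoch in range(num_epochs):        # num_epochs, model, helpers assumed
    train_one_epoch(model)
    metric = evaluate_dev_metric(model)  # e.g. dev accuracy or BLEU, not loss
    if metric > best_metric:
        best_metric = metric
        best_state = copy.deepcopy(model.state_dict())
model.load_state_dict(best_state)      # keep the best-metric checkpoint
```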
+00:24:49,440 --> 00:24:52,130 +there's also another thing that I I + +00:24:50,840 --> 00:24:55,189 +forgot to put on + +00:24:52,130 --> 00:24:55,189 +[Music] + +00:24:57,320 --> 00:25:00,320 +um + +00:25:02,919 --> 00:25:08,120 +that I forgot to put on the slides but + +00:25:04,520 --> 00:25:10,720 +it's a um an interesting phenomenon that + +00:25:08,120 --> 00:25:12,720 +actually um kind of a lot of people in + +00:25:10,720 --> 00:25:16,360 +interpretability are interested in it's + +00:25:12,720 --> 00:25:18,120 +this uh grokking + +00:25:16,360 --> 00:25:19,640 +generalization beyond overfitting on + +00:25:18,120 --> 00:25:21,120 +small algorithmic data sets and + +00:25:19,640 --> 00:25:27,360 +basically what they + +00:25:21,120 --> 00:25:29,720 +show is um you can be training for a + +00:25:27,360 --> 00:25:31,320 +very very long time + +00:25:29,720 --> 00:25:34,279 +um + +00:25:31,320 --> 00:25:35,919 +and uh like reducing the loss reducing + +00:25:34,279 --> 00:25:40,399 +the loss reducing the loss and reducing + +00:25:35,919 --> 00:25:42,480 +the loss and it's only after a very long + +00:25:40,399 --> 00:25:43,840 +time does your model start generalizing + +00:25:42,480 --> 00:25:48,240 +well and getting good + +00:25:43,840 --> 00:25:49,799 +accuracy um the this paper the types of + +00:25:48,240 --> 00:25:52,120 +data sets it's talking about are data + +00:25:49,799 --> 00:25:55,520 +sets where you need to get many things + +00:25:52,120 --> 00:25:58,640 +in a row correct before you get the + +00:25:55,520 --> 00:26:00,880 +final answer correct so basically you + +00:25:58,640 --> 00:26:02,320 +need to get like 20 steps in a row or 50 + +00:26:00,880 --> 00:26:06,200 +steps in a row correct before you get + +00:26:02,320 --> 00:26:10,679 +the final answer correct and um + +00:26:06,200 --> 00:26:13,000 +basically the reason why this happens is + +00:26:10,679 --> 00:26:15,720 +because this accuracy will keep going up + +00:26:13,000 --> 00:26:17,760 +but you only get the accuracy of each + +00:26:15,720 --> 00:26:20,520 +individual decision will keep going up + +00:26:17,760 --> 00:26:22,880 +but you only get marked like + +00:26:20,520 --> 00:26:25,440 +correct uh + +00:26:22,880 --> 00:26:29,799 +after you get like all 50 in a row + +00:26:25,440 --> 00:26:31,200 +correct so um it this difference can be + +00:26:29,799 --> 00:26:33,039 +even more stark when you're talking + +00:26:31,200 --> 00:26:35,399 +about things that require like 50 steps + +00:26:33,039 --> 00:26:37,399 +of reasoning or like multiple steps of + +00:26:35,399 --> 00:26:39,559 +reasoning but like 50 token generations + +00:26:37,399 --> 00:26:42,679 +correct before you get them right so um + +00:26:39,559 --> 00:26:42,679 +that's another thing to be aware + +00:26:43,000 --> 00:26:49,240 +of cool um so now I want to switch gears + +00:26:46,960 --> 00:26:51,919 +a little bit to actionable evaluation + +00:26:49,240 --> 00:26:54,240 +and how you can um evaluate your models + +00:26:51,919 --> 00:26:56,640 +in a way that makes it easy to find uh + +00:26:54,240 --> 00:26:58,600 +next steps to be + +00:26:56,640 --> 00:27:00,159 +improving uh are
there any questions + +649 +00:26:58,600 --> 00:27:02,600 +about the debugging part before we get + +650 +00:27:00,159 --> 00:27:02,600 +into this + +651 +00:27:03,360 --> 00:27:10,120 +part okay I'll + +652 +00:27:05,880 --> 00:27:12,840 +go so um my first suggestion with + +653 +00:27:10,120 --> 00:27:15,559 +respect to how you can actually you know + +654 +00:27:12,840 --> 00:27:17,440 +improve systems is make sure that you're + +655 +00:27:15,559 --> 00:27:21,039 +looking at the data that you're + +656 +00:27:17,440 --> 00:27:22,679 +using and um both bugs and new research + +657 +00:27:21,039 --> 00:27:24,080 +directions can be found by looking at + +658 +00:27:22,679 --> 00:27:27,159 +your model + +659 +00:27:24,080 --> 00:27:31,640 +outputs um + +660 +00:27:27,159 --> 00:27:33,279 +so to give one example um of a very + +661 +00:27:31,640 --> 00:27:36,200 +common mistake that you can make when + +662 +00:27:33,279 --> 00:27:40,159 +you're creating a a generation algorithm + +663 +00:27:36,200 --> 00:27:41,600 +it's these sort of off by one erors um + +664 +00:27:40,159 --> 00:27:43,919 +so like let's say you implemented a + +665 +00:27:41,600 --> 00:27:46,039 +translation system and it's generating + +666 +00:27:43,919 --> 00:27:49,440 +outputs like went to the store yesterday + +667 +00:27:46,039 --> 00:27:51,080 +bought a dog um you can immediately look + +668 +00:27:49,440 --> 00:27:53,440 +at this and say hey this doesn't look + +669 +00:27:51,080 --> 00:27:58,360 +like natural English what's going uh + +670 +00:27:53,440 --> 00:28:00,000 +what's going on and the the problem here + +671 +00:27:58,360 --> 00:28:04,600 +is + +672 +00:28:00,000 --> 00:28:04,600 +you're um you're doing something + +673 +00:28:05,159 --> 00:28:12,720 +like output uh + +674 +00:28:09,240 --> 00:28:14,600 +one uh and you have a slice of like one + +675 +00:28:12,720 --> 00:28:17,399 +instead of zero here or something like + +676 +00:28:14,600 --> 00:28:18,640 +this and so this is a really silly error + +677 +00:28:17,399 --> 00:28:21,000 +that you might just make a mistake on + +678 +00:28:18,640 --> 00:28:23,679 +python on your you know pre-processing + +679 +00:28:21,000 --> 00:28:26,200 +or postprocessing or something like this + +680 +00:28:23,679 --> 00:28:28,399 +um but the problem is like if you look + +681 +00:28:26,200 --> 00:28:30,600 +at your blue score based evaluation or + +682 +00:28:28,399 --> 00:28:32,840 +something like that you'll have like + +683 +00:28:30,600 --> 00:28:34,760 +you'll be one point worse or two points + +684 +00:28:32,840 --> 00:28:36,720 +worse or something like that and you'll + +685 +00:28:34,760 --> 00:28:38,600 +be like Oh I'm I'm two points worse why + +686 +00:28:36,720 --> 00:28:40,600 +am I two point wor two points worse in + +687 +00:28:38,600 --> 00:28:43,760 +the state of the art and it turns out it + +688 +00:28:40,600 --> 00:28:45,279 +was a really like silly thing like this + +689 +00:28:43,760 --> 00:28:46,519 +and immediately you'll see this if you + +690 +00:28:45,279 --> 00:28:47,960 +look at your data but if you're doing + +691 +00:28:46,519 --> 00:28:49,600 +all your experiments and just looking at + +692 +00:28:47,960 --> 00:28:51,519 +the numbers it's really hard to tell you + +693 +00:28:49,600 --> 00:28:53,720 +know why this is + +694 +00:28:51,519 --> 00:28:58,720 +happening + +695 +00:28:53,720 --> 00:29:02,360 +um another thing is uh if you + +696 +00:28:58,720 --> 00:29:04,799 +have a good eye and can like just look + +697 +00:29:02,360 --> 
00:29:07,799 +see that uh compared to some other model + +00:29:04,799 --> 00:29:09,640 +your model is really bad at answering + +00:29:07,799 --> 00:29:14,200 +questions about people or something like + +00:29:09,640 --> 00:29:16,360 +that and then you figure out you'll need + +00:29:14,200 --> 00:29:18,519 +a better model of uh people or your rag + +00:29:16,360 --> 00:29:20,600 +systems uh that you're building for + +00:29:18,519 --> 00:29:22,880 +assignment two is maybe failing on all + +00:29:20,600 --> 00:29:24,880 +the research related questions so you + +00:29:22,880 --> 00:29:27,720 +need to come up with the research uh + +00:29:24,880 --> 00:29:29,919 +like scrape more research data or + +00:29:27,720 --> 00:29:31,679 +something like + +00:29:29,919 --> 00:29:33,679 +that + +00:29:31,679 --> 00:29:36,480 +um so there are methods to do this more + +00:29:33,679 --> 00:29:38,320 +systematically and this is something I + +00:29:36,480 --> 00:29:40,519 +picked up when I was doing an internship + +00:29:38,320 --> 00:29:42,880 +at Google and it really stuck with me + +00:29:40,519 --> 00:29:45,559 +for you know 14 uh 14 years now I guess + +00:29:42,880 --> 00:29:47,080 +13 years um so uh a very simple way to + +00:29:45,559 --> 00:29:48,320 +do this more systematically than just + +00:29:47,080 --> 00:29:50,080 +browsing through things is to randomly + +00:29:48,320 --> 00:29:53,840 +sample a 100 outputs and look at a 100 + +00:29:50,080 --> 00:29:55,760 +errors and try to group them into some + +00:29:53,840 --> 00:29:58,039 +sort of typology and say oh uh this kind + +00:29:55,760 --> 00:29:59,720 +of error is particularly + +00:29:58,039 --> 00:30:04,080 +frequent and this is just one example of + +00:29:59,720 --> 00:30:09,080 +a typology that was defined by Vilar et al. + +00:30:04,080 --> 00:30:10,960 +um where they tried to take machine + +00:30:09,080 --> 00:30:12,600 +translation errors and group them into + +00:30:10,960 --> 00:30:16,200 +uh various varieties like correct words + +00:30:12,600 --> 00:30:19,000 +filler words local uh local range long + +00:30:16,200 --> 00:30:21,840 +range um uh sorry word word level word + +00:30:19,000 --> 00:30:23,799 +ordering errors local range long range + +00:30:21,840 --> 00:30:27,799 +phrase level local range long range and + +00:30:23,799 --> 00:30:31,279 +stuff like this um you can definitely + +00:30:27,799 --> 00:30:33,120 +look at previous work and see the + +00:30:31,279 --> 00:30:37,320 +typologies of errors that they used but + +00:30:33,120 --> 00:30:39,480 +the problem is like systems get better + +00:30:37,320 --> 00:30:41,920 +and actually I don't think this is a
+745
+00:31:04,240 --> 00:31:10,120
+translation anymore uh because machine
+
+746
+00:31:06,760 --> 00:31:12,159
+translation systems like they don't make
+
+747
+00:31:10,120 --> 00:31:14,639
+a whole lot of local-range word-level
+
+748
+00:31:12,159 --> 00:31:16,159
+errors anymore and rather we might want
+
+749
+00:31:14,639 --> 00:31:18,279
+to know more fine-grained like are they
+
+750
+00:31:16,159 --> 00:31:21,720
+making mistakes on named entities or
+
+751
+00:31:18,279 --> 00:31:24,720
+other things like that so actually
+
+752
+00:31:21,720 --> 00:31:24,720
+we
+
+753
+00:31:26,919 --> 00:31:29,919
+um
+
+754
+00:31:30,519 --> 00:31:36,279
+did a more recent thing it's I
+
+755
+00:31:34,279 --> 00:31:39,159
+guess four years ago now um but it was
+
+756
+00:31:36,279 --> 00:31:42,720
+when uh people first started saying that
+
+757
+00:31:39,159 --> 00:31:46,200
+machine translation systems are about as
+
+758
+00:31:42,720 --> 00:31:50,720
+good as humans at doing a
+
+759
+00:31:46,200 --> 00:31:50,720
+translation and when we did this we
+
+760
+00:31:52,480 --> 00:31:58,440
+compared machine translation
+
+761
+00:31:55,200 --> 00:31:59,960
+systems to humans and we tried to find
+
+762
+00:31:58,440 --> 00:32:02,240
+you know different types of things and
+
+763
+00:31:59,960 --> 00:32:03,919
+we were inspired by Vilar but we recreated
+
+764
+00:32:02,240 --> 00:32:06,159
+our typology based on the things that we
+
+765
+00:32:03,919 --> 00:32:10,279
+thought were you know the most important
+
+766
+00:32:06,159 --> 00:32:13,399
+types of errors in like 2020 instead of
+
+767
+00:32:10,279 --> 00:32:16,799
+2006 so this is really helpful the
+
+768
+00:32:13,399 --> 00:32:19,039
+reason why it's really helpful is if you
+
+769
+00:32:16,799 --> 00:32:20,440
+can do this even for a small sample of
+
+770
+00:32:19,039 --> 00:32:23,440
+the outputs that you're looking at and
+
+771
+00:32:20,440 --> 00:32:25,279
+identify the most like prominent types
+
+772
+00:32:23,440 --> 00:32:27,440
+of errors that you're facing it often
+
+773
+00:32:25,279 --> 00:32:29,360
+leads you to the most successful ways of
+
+774
+00:32:27,440 --> 00:32:31,519
+improving the accuracy of your systems
+
+775
+00:32:29,360 --> 00:32:33,120
+because if you don't do this
+
+776
+00:32:31,519 --> 00:32:35,000
+you might be focusing on an error type
+
+777
+00:32:33,120 --> 00:32:38,000
+that's not actually an error it's kind
+
+778
+00:32:35,000 --> 00:32:39,200
+of like if you learned in uh programming
+
+779
+00:32:38,000 --> 00:32:40,799
+you know software engineering or
+
+780
+00:32:39,200 --> 00:32:42,639
+something like that you should never
+
+781
+00:32:40,799 --> 00:32:46,360
+optimize your code until you run a
+
+782
+00:32:42,639 --> 00:32:47,799
+profiler um because actually your code
+
+783
+00:32:46,360 --> 00:32:50,320
+might be slow in a place that you never
+
+784
+00:32:47,799 --> 00:32:52,720
+expected and so it's kind of the same
+
+785
+00:32:50,320 --> 00:32:56,600
+principle here right so don't optimize
+
+786
+00:32:52,720 --> 00:32:58,720
+your system's errors in a place uh where
+
+787
+00:32:56,600 --> 00:33:03,240
+like actually it's not having an effect
+
+788
+00:32:58,720 --> 00:33:06,440
+so um that's a general principle
+
+789
+00:33:03,240 --> 00:33:09,440
+here
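+A minimal sketch of that sampling-and-tallying workflow (the helper and the
+error categories below are placeholders for whatever typology you define):
+
+```python
+import random
+from collections import Counter
+
+def sample_outputs(outputs, n=100, seed=0):
+    """Randomly sample n outputs to read and annotate by hand."""
+    rng = random.Random(seed)
+    return rng.sample(outputs, min(n, len(outputs)))
+
+# After reading each sampled output, record a label from your own typology,
+# e.g. "word-order/local", "word-order/long-range", "named-entity", "ok".
+annotations = ["named-entity", "ok", "word-order/local", "named-entity"]
+print(Counter(annotations).most_common())  # most frequent error type first
+```
+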
+790
+00:33:06,440 --> 00:33:11,760
+uh cool another thing you can do is
+quantitative analysis so um if you can
+
+791
+00:33:09,440 --> 00:33:13,880
+think of the phenomenon that you chose
+
+792
+00:33:11,760 --> 00:33:17,480
+to focus on um is that phenomenon
+
+793
+00:33:13,880 --> 00:33:19,159
+getting better so if you focused on uh
+
+794
+00:33:17,480 --> 00:33:22,240
+something that should improve the
+
+795
+00:33:19,159 --> 00:33:23,760
+quality of low-frequency words uh you
+
+796
+00:33:22,240 --> 00:33:26,200
+can check if the accuracy on low-
+
+797
+00:33:23,760 --> 00:33:27,399
+frequency words is increasing if you
+
+798
+00:33:26,200 --> 00:33:29,600
+focused on something that should be
+
+799
+00:33:27,399 --> 00:33:32,120
+improving the syntax in a low-resource
+
+800
+00:33:29,600 --> 00:33:36,080
+language you can measure um whether it's
+
+801
+00:33:32,120 --> 00:33:37,360
+doing better on word ordering or uh long-
+
+802
+00:33:36,080 --> 00:33:41,840
+distance
+
+803
+00:33:37,360 --> 00:33:44,360
+dependencies um if you focused on
+
+804
+00:33:41,840 --> 00:33:46,039
+improving a search algorithm for you
+
+805
+00:33:44,360 --> 00:33:47,519
+know generation or something like that
+
+806
+00:33:46,039 --> 00:33:49,880
+is the number of search errors that
+
+807
+00:33:47,519 --> 00:33:53,120
+you're encountering being reduced so
+
+808
+00:33:49,880 --> 00:33:56,320
+depending on what you planned on uh you
+
+809
+00:33:53,120 --> 00:33:57,919
+know improving it's often a good idea to
+
+810
+00:33:56,320 --> 00:33:59,480
+measure more directly whether it's
+
+811
+00:33:57,919 --> 00:34:00,559
+improving the thing that you think
+
+812
+00:33:59,480 --> 00:34:04,880
+it should
+
+813
+00:34:00,559 --> 00:34:06,000
+improve
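+One hedged sketch of such a targeted measurement: a hypothetical helper that
+buckets word-level recall by training-set frequency (assumes tokenized
+references, hypotheses, and training data):
+
+```python
+from collections import Counter
+
+def recall_by_frequency(refs, hyps, train_tokens, cutoff=5):
+    """Word-level recall, split into low- vs high-frequency words."""
+    freq = Counter(train_tokens)
+    stats = {"low": [0, 0], "high": [0, 0]}  # bucket -> [matched, total]
+    for ref, hyp in zip(refs, hyps):
+        available = Counter(hyp)
+        for word in ref:
+            bucket = "low" if freq[word] < cutoff else "high"
+            stats[bucket][1] += 1
+            if available[word] > 0:          # clipped matching, BLEU-style
+                stats[bucket][0] += 1
+                available[word] -= 1
+    return {b: m / t if t else 0.0 for b, (m, t) in stats.items()}
+```
+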
+814
+00:34:04,880 --> 00:34:09,240
+um one example so I basically
+
+815
+00:34:06,000 --> 00:34:11,240
+created since my experience doing this
+
+816
+00:34:09,240 --> 00:34:15,159
+manually uh when I was on an
+
+817
+00:34:11,240 --> 00:34:18,280
+internship at Google um I've
+
+818
+00:34:15,159 --> 00:34:20,639
+gradually improved my methodology for
+
+819
+00:34:18,280 --> 00:34:20,639
+doing
+
+820
+00:34:21,679 --> 00:34:26,320
+this um and worked on automating
+
+821
+00:34:24,879 --> 00:34:30,599
+things and
+
+822
+00:34:26,320 --> 00:34:33,839
+so the first thing I had was a super
+
+823
+00:34:30,599 --> 00:34:35,560
+hacky script that basically
+
+824
+00:34:33,839 --> 00:34:37,720
+writes out HTML
+
+825
+00:34:35,560 --> 00:34:39,320
+files um and then I had something
+
+826
+00:34:37,720 --> 00:34:42,320
+called ExplainaBoard where we had a
+
+827
+00:34:39,320 --> 00:34:44,879
+leaderboard and uh recently one of the
+
+828
+00:34:42,320 --> 00:34:47,800
+things I've worked on is uh this uh
+
+829
+00:34:44,879 --> 00:34:53,200
+together with um Alex Cabrera who's
+
+830
+00:34:47,800 --> 00:34:56,760
+a student here um is this toolkit called
+
+831
+00:34:53,200 --> 00:34:59,640
+Zeno and um this is just an example from
+
+832
+00:34:56,760 --> 00:34:59,640
+machine translation
+
+833
+00:35:03,440 --> 00:35:09,200
+it's being a little bit
+
+834
+00:35:06,599 --> 00:35:11,079
+slow um but basically what it does is it
+
+835
+00:35:09,200 --> 00:35:14,920
+allows you to look at the data on the
+
+836
+00:35:11,079 --> 00:35:18,000
+right side um and so these are just
+
+837
+00:35:14,920 --> 00:35:19,680
+examples um but you can go in and do
+
+838
+00:35:18,000 --> 00:35:22,760
+things like say okay I want to look at
+
+839
+00:35:19,680 --> 00:35:24,640
+all machine translation examples
+
+840
+00:35:22,760 --> 00:35:28,040
+from
+
+841
+00:35:24,640 --> 00:35:30,920
+uh Hausa and so it shows you the ones
+
+842
+00:35:28,040 --> 00:35:32,960
+from Hausa I want to look
+
+843
+00:35:30,920 --> 00:35:36,240
+at all
+
+844
+00:35:32,960 --> 00:35:38,880
+examples let me clear that off I want to
+
+845
+00:35:36,240 --> 00:35:40,800
+look at all examples where the accuracy
+
+846
+00:35:38,880 --> 00:35:43,440
+is
+
+847
+00:35:40,800 --> 00:35:45,280
+low um and so now I can look at all the
+
+848
+00:35:43,440 --> 00:35:49,640
+examples where the accuracy is low and I
+
+849
+00:35:45,280 --> 00:35:52,640
+can go in and uh examine them so uh
+
+850
+00:35:49,640 --> 00:35:54,880
+you can also go in and build charts like
+
+851
+00:35:52,640 --> 00:35:58,280
+this so like what is the overall
+
+852
+00:35:54,880 --> 00:36:02,200
+performance um what is the
+
+853
+00:35:58,280 --> 00:36:05,960
+performance
+
+854
+00:36:02,200 --> 00:36:07,520
+um on different scripts so you can see
+
+855
+00:36:05,960 --> 00:36:10,880
+which model is doing better
+
+856
+00:36:07,520 --> 00:36:13,960
+at which scripts and stuff like that um or
+
+857
+00:36:10,880 --> 00:36:16,000
+you can put things side by side and say
+
+858
+00:36:13,960 --> 00:36:20,720
+okay I want to find all the examples
+
+859
+00:36:16,000 --> 00:36:21,800
+where uh ChatGPT is doing much worse
+
+860
+00:36:20,720 --> 00:36:25,280
+than GPT-
+
+861
+00:36:21,800 --> 00:36:28,240
+4 uh or like GPT-3.5 is doing much worse
+
+862
+00:36:25,280 --> 00:36:29,680
+than GPT-4 and here we can see that oh in
+
+863
+00:36:28,240 --> 00:36:31,520
+this case it's generating something in
+
+864
+00:36:29,680 --> 00:36:34,079
+the wrong script or something like that
+
+865
+00:36:31,520 --> 00:36:37,839
+so um there's also tooling that you can
+
+866
+00:36:34,079 --> 00:36:40,480
+use to make this easier as
+
+867
+00:36:37,839 --> 00:36:43,520
+well and the way you use this
+
+868
+00:36:40,480 --> 00:36:46,079
+is you basically
+
+869
+00:36:43,520 --> 00:36:48,000
+um create a pandas DataFrame with
+
+870
+00:36:46,079 --> 00:36:49,680
+all of your data in it and you upload
+
+871
+00:36:48,000 --> 00:36:52,400
+the pandas DataFrame with any metadata
+
+872
+00:36:49,680 --> 00:36:54,280
+you want to use and I
+
+873
+00:36:52,400 --> 00:36:56,520
+think VJ will be having a recitation on
+
+874
+00:36:54,280 --> 00:37:02,560
+this if you're interested in taking a
+
+875
+00:36:56,520 --> 00:37:04,680
+look
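+A rough sketch of that upload flow with the zeno_client package; the exact
+client and argument names here are from memory and may differ across versions,
+so treat them as assumptions and check the Zeno documentation or the
+recitation:
+
+```python
+import pandas as pd
+from zeno_client import ZenoClient  # assumed client name; verify against docs
+
+# One row per example, plus any metadata columns you want to slice on.
+df = pd.DataFrame({
+    "id": [0, 1],
+    "source": ["example input 1", "example input 2"],
+    "language": ["hausa", "swahili"],
+})
+
+client = ZenoClient("YOUR_API_KEY")  # placeholder key
+project = client.create_project(name="mt-analysis", view="text-classification")
+project.upload_dataset(df, id_column="id", data_column="source")
+# System outputs are uploaded similarly, e.g. with project.upload_system(...).
+```
+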
+876
+00:37:02,560 --> 00:37:07,760
+cool um so that is my part and then
+we'll be doing Nishant next while
+
+877
+00:37:04,680 --> 00:37:09,480
+Nishant comes up to set up are there any
+
+878
+00:37:07,760 --> 00:37:10,520
+questions about the thing that I talked
+
+879
+00:37:09,480 --> 00:37:14,079
+about
+
+880
+00:37:10,520 --> 00:37:14,079
+here yeah
+
+881
+00:37:14,359 --> 00:37:18,200
+so that when I
+
+882
+00:37:26,200 --> 00:37:30,079
+regularize um
+
+883
+00:37:28,160 --> 00:37:32,560
+does that make a difference in
+
+884
+00:37:30,079 --> 00:37:35,400
+terms of like what we're expecting when
+
+885
+00:37:32,560 --> 00:37:38,800
+we're evaluating the model
+
+886
+00:37:35,400 --> 00:37:41,720
+yeah so just to repeat the
+
+887
+00:37:38,800 --> 00:37:43,680
+question it's a great question so if
+
+888
+00:37:41,720 --> 00:37:49,440
+you apply
+
+889
+00:37:43,680 --> 00:37:49,440
+regularization um will that change the
+
+890
+00:37:49,640 --> 00:37:54,079
+overall expectation for the model loss
+
+891
+00:37:52,040 --> 00:37:55,680
+so I was saying loss should converge to
+
+892
+00:37:54,079 --> 00:37:57,200
+zero once you start applying
+
+893
+00:37:55,680 --> 00:37:59,079
+regularization or weight decay or
+
+894
+00:37:57,200 --> 00:38:02,640
+something like that it definitely might
+
+895
+00:37:59,079 --> 00:38:04,520
+not converge to zero um and the reason why
+
+896
+00:38:02,640 --> 00:38:06,520
+is because once you start applying
+
+897
+00:38:04,520 --> 00:38:09,319
+regularization there is no zero-loss
+
+898
+00:38:06,520 --> 00:38:11,480
+solution um because in order to reduce the
+
+899
+00:38:09,319 --> 00:38:14,960
+loss you need to
+
+900
+00:38:11,480 --> 00:38:16,359
+move weights away from zero um but when
+
+901
+00:38:14,960 --> 00:38:19,560
+you move weights away from zero the
+
+902
+00:38:16,359 --> 00:38:22,200
+regularization loss becomes non-zero so
+
+903
+00:38:19,560 --> 00:38:24,599
+one thing you can do however is measure
+
+904
+00:38:22,200 --> 00:38:26,880
+the losses separately so measure the
+
+905
+00:38:24,599 --> 00:38:27,960
+regularization component of the loss and
+
+906
+00:38:26,880 --> 00:38:29,760
+the um
+
+907
+00:38:27,960 --> 00:38:31,920
+the log-likelihood component of the
+
+908
+00:38:29,760 --> 00:38:33,560
+loss and with any reasonable
+
+909
+00:38:31,920 --> 00:38:35,280
+regularization and a reasonably
+
+910
+00:38:33,560 --> 00:38:38,000
+parameterized model I do think the loss
+
+911
+00:38:35,280 --> 00:38:39,760
+should be getting closer to zero like the
+
+912
+00:38:38,000 --> 00:38:41,920
+actual likelihood should be getting closer
+
+913
+00:38:39,760 --> 00:38:41,920
+to
+
+914
+00:38:42,200 --> 00:38:46,520
+zero uh you were using an extremely
+
+915
+00:38:44,480 --> 00:38:49,240
+small model in the mini-LM assignment though
+
+916
+00:38:46,520 --> 00:38:53,680
+so that might make it more
+
+917
+00:38:49,240 --> 00:38:56,440
+difficult
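+A minimal sketch of logging those two components separately, assuming a
+PyTorch model (the function name and weight_decay coefficient are
+illustrative):
+
+```python
+import torch
+import torch.nn.functional as F
+
+def split_losses(model, logits, targets, weight_decay=0.01):
+    """Return the data (negative log-likelihood) loss and the L2 penalty
+    separately, so you can watch the likelihood term head toward zero even
+    when the total regularized loss cannot."""
+    data_loss = F.cross_entropy(logits, targets)
+    reg_loss = weight_decay * sum(p.pow(2).sum() for p in model.parameters())
+    return data_loss, reg_loss, data_loss + reg_loss
+```
+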
+918
+00:38:53,680 --> 00:38:59,440
+yeah and any other things okay if not
+
+919
+00:38:56,440 --> 00:38:59,440
+I'll
+
+920
+00:39:13,720 --> 00:39:19,160
+all right can everyone hear
+
+921
+00:39:15,319 --> 00:39:21,440
+me sweet okay move this it looks like
+
+922
+00:39:19,160 --> 00:39:24,200
+I'm talking to someone instead of
+
+923
+00:39:21,440 --> 00:39:24,200
+between both of
+
+924
+00:39:26,359 --> 00:39:29,359
+you
+
+925
+00:39:33,319 --> 00:39:37,680
+all right so hi everyone um I'm going to
+
+926
+00:39:35,720 --> 00:39:39,400
+talk about model interpretability
+
+927
+00:39:37,680 --> 00:39:41,680
+for those who don't know me I'm one of
+
+928
+00:39:39,400 --> 00:39:44,359
+your TAs I'm a first-year PhD student
+
+929
+00:39:41,680 --> 00:39:47,359
+working with Mona Diab on model
+
+930
+00:39:44,359 --> 00:39:47,359
+interpretability
+
+931
+00:39:48,800 --> 00:39:55,400
+um where what do I
+
+932
+00:39:51,839 --> 00:39:59,119
+click your mouse should be there
+
+933
+00:39:55,400 --> 00:40:01,599
+yeah just
+
+934
+00:39:59,119 --> 00:40:04,160
+cool okay um
+
+935
+00:40:01,599 --> 00:40:06,079
+so what I want you to take away if you
+
+936
+00:40:04,160 --> 00:40:08,359
+fall asleep because this is too boring
+
+937
+00:40:06,079 --> 00:40:09,839
+here are sort of the two main takeaways
+
+938
+00:40:08,359 --> 00:40:12,040
+one I want to convince you that model
+
+939
+00:40:09,839 --> 00:40:14,720
+interpretability is important to study
+
+940
+00:40:12,040 --> 00:40:16,720
+and two I want you to find this
+
+941
+00:40:14,720 --> 00:40:18,880
+interesting um and something you want to
+
+942
+00:40:16,720 --> 00:40:20,079
+explore more there's a bunch of details
+
+943
+00:40:18,880 --> 00:40:21,800
+here this is going to be kind of a
+
+944
+00:40:20,079 --> 00:40:24,599
+whirlwind tour you're not going to get
+
+945
+00:40:21,800 --> 00:40:27,440
+super deep into anything um so hopefully
+
+946
+00:40:24,599 --> 00:40:28,839
+this acts as a starting point more
+
+947
+00:40:27,440 --> 00:40:33,800
+than anything
+
+948
+00:40:28,839 --> 00:40:37,040
+else so interpretability in AI um the
+
+949
+00:40:33,800 --> 00:40:38,480
+definition is it's the study of
+
+950
+00:40:37,040 --> 00:40:40,440
+understanding the decisions that AI
+
+951
+00:40:38,480 --> 00:40:42,640
+systems make and putting them into
+
+952
+00:40:40,440 --> 00:40:44,280
+easily human-understandable terms this
+
+953
+00:40:42,640 --> 00:40:47,640
+can mean a lot of different things and
+
+954
+00:40:44,280 --> 00:40:49,280
+this is often really hard um and the why
+
+955
+00:40:47,640 --> 00:40:51,319
+is to use that understanding to
+
+956
+00:40:49,280 --> 00:40:54,040
+iteratively design systems that
+
+957
+00:40:51,319 --> 00:40:56,240
+are better they're more performant
+
+958
+00:40:54,040 --> 00:40:59,240
+but also those that are more human-
+
+959
+00:40:56,240 --> 00:40:59,240
+understandable
+
+960
+00:41:00,119 --> 00:41:06,599
+um so interpretability is this big blob
+
+961
+00:41:03,720 --> 00:41:08,440
+but there's a bunch of other uh spheres
+
+962
+00:41:06,599 --> 00:41:11,920
+that intersect with it this is a super
+
+963
+00:41:08,440 --> 00:41:14,920
+incomplete list uh so bear with me
+
+964
+00:41:11,920 --> 00:41:16,560
+causality and data intersect with this
+
+965
+00:41:14,920 --> 00:41:19,000
+there's aspects that are interpretable
+
+966
+00:41:16,560 --> 00:41:20,480
+there's aspects that matter here um
+
+967
+00:41:19,000 --> 00:41:22,400
+explainable AI is another thing that
+
+968
+00:41:20,480 --> 00:41:24,440
+you've probably heard this sits firmly
+
+969
+00:41:22,400 --> 00:41:27,800
+in the interpretability blob and
+
+970
+00:41:24,440 --> 00:41:30,520
+connects with ideas in causality and uh
+
+971
+00:41:27,800 --> 00:41:32,680
+in data too um model interpretability
+
+972
+00:41:30,520 --> 00:41:34,200
+sits on this kind of other side of
+
+973
+00:41:32,680 --> 00:41:37,680
+things it intersects a little bit with
+
+974
+00:41:34,200 --> 00:41:40,000
+causality and explainable AI but is a
+
+975
+00:41:37,680 --> 00:41:42,280
+little bit separate
+
+976
+00:41:40,000 --> 00:41:43,880
+from it and mechanistic interpretability
+
+977
+00:41:42,280 --> 00:41:45,400
+which you've probably heard of
+
+978
+00:41:43,880 --> 00:41:47,680
+it's gotten a lot of buzz recently kind
+
+979
+00:41:45,400 --> 00:41:48,880
+of sits inside of model interpretability
+
+980
+00:41:47,680 --> 00:41:51,680
+it's a special case of model
+
+981
+00:41:48,880 --> 00:41:53,160
+interpretability I hope the mech interp people
+
+982
+00:41:51,680 --> 00:41:56,640
+agree with me
+
+983
+00:41:53,160 --> 00:41:58,040
+but um so yeah so historically we've
+
+984
+00:41:56,640 --> 00:42:00,880
+been dealing with really really really
+
+985
+00:41:58,040 --> 00:42:03,680
+small models you had Bayes nets this is a
+
+986
+00:42:00,880 --> 00:42:07,560
+very small model um if all these
+
+987
+00:42:03,680 --> 00:42:10,000
+are binary variables this is uh eight
+988
+00:42:07,560 --> 00:42:12,680
+total parameters and only four of which
+
+989
+00:42:10,000 --> 00:42:14,880
+are independent uh we also used to work
+
+990
+00:42:12,680 --> 00:42:18,160
+with linear regression a lot and in the
+
+991
+00:42:14,880 --> 00:42:20,680
+first case that's a nice line can be two
+
+992
+00:42:18,160 --> 00:42:23,240
+parameters the multivariate case again
+
+993
+00:42:20,680 --> 00:42:25,880
+that's a small number of parameters
+
+994
+00:42:23,240 --> 00:42:27,880
+we've moved on to more things we've
+
+995
+00:42:25,880 --> 00:42:30,400
+moved to
+
+996
+00:42:27,880 --> 00:42:32,160
+MLPs that have larger weight matrices
+
+997
+00:42:30,400 --> 00:42:33,920
+but all these are kind of digestible and
+
+998
+00:42:32,160 --> 00:42:37,200
+interpretable so the interpretability
+
+999
+00:42:33,920 --> 00:42:40,160
+world was sort of uh not super concerned
+
+1000
+00:42:37,200 --> 00:42:41,280
+with large ginormous things but we're
+
+1001
+00:42:40,160 --> 00:42:44,800
+not there
+
+1002
+00:42:41,280 --> 00:42:47,000
+anymore uh this is a language model this
+
+1003
+00:42:44,800 --> 00:42:50,839
+is still part of a language
+
+1004
+00:42:47,000 --> 00:42:51,960
+model now it's getting more and more and
+
+1005
+00:42:50,839 --> 00:42:55,119
+more
+
+1006
+00:42:51,960 --> 00:42:57,920
+hairy and this is just not
+
+1007
+00:42:55,119 --> 00:43:00,520
+interpretable um I mentioned
+
+1008
+00:42:57,920 --> 00:43:03,280
+on the first day of class that I hate
+
+1009
+00:43:00,520 --> 00:43:05,240
+when we update parameters of models I also
+
+1010
+00:43:03,280 --> 00:43:07,720
+hate when models are this big and this
+
+1011
+00:43:05,240 --> 00:43:10,000
+is a six-layer Transformer this is way
+
+1012
+00:43:07,720 --> 00:43:15,920
+smaller than basically anything that we
+
+1013
+00:43:10,000 --> 00:43:18,040
+have um and this makes things very very
+
+1014
+00:43:15,920 --> 00:43:20,920
+uninterpretable um so we'll talk about
+
+1015
+00:43:18,040 --> 00:43:22,880
+one way that people sort of uh five
+
+1016
+00:43:20,920 --> 00:43:24,599
+years ago started addressing this
+
+1017
+00:43:22,880 --> 00:43:25,680
+problem and this is the idea
+
+1018
+00:43:24,599 --> 00:43:28,000
+of
+
+1019
+00:43:25,680 --> 00:43:30,880
+probing so how do we make sense of a
+
+1020
+00:43:28,000 --> 00:43:35,160
+giant model this is one way so we take
+
+1021
+00:43:30,880 --> 00:43:38,200
+our giant model we cut the top off
+
+1022
+00:43:35,160 --> 00:43:40,520
+basically um and now we have this thing
+
+1023
+00:43:38,200 --> 00:43:42,119
+we stick a probe which actually in a lot
+
+1024
+00:43:40,520 --> 00:43:44,559
+of cases looks very similar to a
+
+1025
+00:43:42,119 --> 00:43:47,280
+language modeling head uh usually it's a
+
+1026
+00:43:44,559 --> 00:43:51,640
+small two-layer or one-layer
+
+1027
+00:43:47,280 --> 00:43:54,319
+MLP um and we basically treat the model
+
+1028
+00:43:51,640 --> 00:43:56,760
+as something that uh that exists and we
+
+1029
+00:43:54,319 --> 00:44:00,240
+only really care about the output of
+
+1030
+00:43:56,760 --> 00:44:03,240
+the model so more specifically what is a
+
+1031
+00:44:00,240 --> 00:44:05,720
+probe it's a classifier this green
+
+1032
+00:44:03,240 --> 00:44:07,680
+thing here uh that is specifically
+
+1033
+00:44:05,720 --> 00:44:09,200
+trained to predict some specific
+
+1034
+00:44:07,680 --> 00:44:11,480
+property from the pre-trained model's
+1035
+00:44:09,200 --> 00:44:16,440
+representations
+
+1036
+00:44:11,480 --> 00:44:18,480
+alone so um in 2019 Ian Tenney and folks
+
+1037
+00:44:16,440 --> 00:44:21,319
+introduced edge probing so this is a
+
+1038
+00:44:18,480 --> 00:44:23,240
+general method um it works to probe
+
+1039
+00:44:21,319 --> 00:44:27,559
+different types of information out of a
+
+1040
+00:44:23,240 --> 00:44:29,960
+model so this bottom part here uh yeah
+
+1041
+00:44:27,559 --> 00:44:33,160
+this bottom part here you pass in
+
+1042
+00:44:29,960 --> 00:44:36,520
+a sequence you pass it into a model this
+
+1043
+00:44:33,160 --> 00:44:38,839
+is BERT in their experiments often uh
+
+1044
+00:44:36,520 --> 00:44:40,960
+and that outputs a set of contextual
+
+1045
+00:44:38,839 --> 00:44:44,359
+vectors these contextual vectors can be
+
+1046
+00:44:40,960 --> 00:44:45,920
+at any layer um often it's near the
+
+1047
+00:44:44,359 --> 00:44:49,280
+top but we'll talk
+
+1048
+00:44:45,920 --> 00:44:51,079
+about uh the fact that this can work
+
+1049
+00:44:49,280 --> 00:44:53,359
+kind of across layers and that different
+
+1050
+00:44:51,079 --> 00:44:55,599
+layers encode different information
+
+1051
+00:44:53,359 --> 00:44:58,960
+and on top of this you have this MLP
+
+1052
+00:44:55,599 --> 00:45:02,480
+that you train to output a prediction
+
+1053
+00:44:58,960 --> 00:45:05,599
+your model is always fixed um in
+
+1054
+00:45:02,480 --> 00:45:08,079
+these cases so you can do things like
+
+1055
+00:45:05,599 --> 00:45:09,880
+part-of-speech tagging where for each
+
+1056
+00:45:08,079 --> 00:45:12,400
+specific word you try to determine what
+
+1057
+00:45:09,880 --> 00:45:16,640
+its part of speech is and in that case
+
+1058
+00:45:12,400 --> 00:45:18,000
+these S1 and S2 spans here uh only
+
+1059
+00:45:16,640 --> 00:45:19,440
+one of them is active because you're
+
+1060
+00:45:18,000 --> 00:45:21,440
+predicting for every single
+
+1061
+00:45:19,440 --> 00:45:23,240
+contextualized vector you're predicting
+
+1062
+00:45:21,440 --> 00:45:25,359
+whether that thing is a noun or a verb
+
+1063
+00:45:23,240 --> 00:45:27,440
+or something like this you can have
+
+1064
+00:45:25,359 --> 00:45:29,599
+other sorts of tasks too like entailment
+
+1065
+00:45:27,440 --> 00:45:32,520
+where you have two sequences and two
+
+1066
+00:45:29,599 --> 00:45:35,079
+spans um and you use the embeddings for
+
+1067
+00:45:32,520 --> 00:45:37,359
+those spans um for like sentence one and
+
+1068
+00:45:35,079 --> 00:45:39,319
+sentence two you pool them together in
+
+1069
+00:45:37,359 --> 00:45:43,359
+some way and then you pass them to this
+
+1070
+00:45:39,319 --> 00:45:47,480
+MLP and you see whether the MLP can uh
+
+1071
+00:45:43,359 --> 00:45:49,680
+solve that task
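+In code, a probe in this style is just a small classifier trained on frozen
+representations; a minimal PyTorch sketch (frozen_model, the hidden size, and
+the tag count are placeholder assumptions):
+
+```python
+import torch
+import torch.nn as nn
+
+class Probe(nn.Module):
+    """A small MLP trained to predict a property (e.g. POS tags) from frozen
+    hidden states; the underlying model gets no gradient updates."""
+    def __init__(self, hidden_size=768, n_classes=17):
+        super().__init__()
+        self.mlp = nn.Sequential(
+            nn.Linear(hidden_size, 256), nn.ReLU(), nn.Linear(256, n_classes)
+        )
+
+    def forward(self, reps):
+        return self.mlp(reps)
+
+# Sketch of one training step (hypothetical encoder interface):
+# with torch.no_grad():                                  # model stays fixed
+#     reps = frozen_model(input_ids).last_hidden_state   # [batch, seq, hidden]
+# logits = probe(reps)                                   # [batch, seq, classes]
+# loss = nn.functional.cross_entropy(logits.flatten(0, 1), tags.flatten())
+```
+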
+1072
+00:45:47,480 --> 00:45:52,559
+so they did this uh in another paper
+
+1073
+00:45:49,680 --> 00:45:54,280
+"BERT Rediscovers the Classical NLP Pipeline" and there's a lot
+
+1074
+00:45:52,559 --> 00:45:57,079
+going on in this figure the only
+
+1075
+00:45:54,280 --> 00:45:59,599
+major thing here to take away um is
+
+1076
+00:45:57,079 --> 00:46:02,720
+these numbers that are in this like pink
+
+1077
+00:45:59,599 --> 00:46:05,359
+purple color um so these are a bunch of
+
+1078
+00:46:02,720 --> 00:46:07,960
+different uh properties such as part of
+
+1079
+00:46:05,359 --> 00:46:11,319
+speech uh and a bunch of other
+
+1080
+00:46:07,960 --> 00:46:13,520
+things um and what they basically find
+
+1081
+00:46:11,319 --> 00:46:15,640
+is that at earlier layers in the model
+
+1082
+00:46:13,520 --> 00:46:18,760
+the things that are closer to the token-
+
+1083
+00:46:15,640 --> 00:46:21,480
+level representation are more um
+
+1084
+00:46:18,760 --> 00:46:23,400
+extractable using a probe and the things
+
+1085
+00:46:21,480 --> 00:46:26,440
+that require more contextualized
+
+1086
+00:46:23,400 --> 00:46:29,440
+information are extractable from
+
+1087
+00:46:26,440 --> 00:46:32,359
+later layers in the model and so here's
+
+1088
+00:46:29,440 --> 00:46:34,599
+sort of a brief uh description of what
+
+1089
+00:46:32,359 --> 00:46:37,599
+these tasks are so the ones on the
+
+1090
+00:46:34,599 --> 00:46:40,040
+bottom are more semantic more
+
+1091
+00:46:37,599 --> 00:46:42,040
+contextualized like uh semantic proto-
+
+1092
+00:46:40,040 --> 00:46:43,880
+roles and relation
+
+1093
+00:46:42,040 --> 00:46:45,839
+classification and then the first few
+
+1094
+00:46:43,880 --> 00:46:48,200
+are more you know chunking and part-of-
+
+1095
+00:46:45,839 --> 00:46:51,880
+speech tagging and um dependency
+
+1096
+00:46:48,200 --> 00:46:51,880
+labeling in these sorts of
+
+1097
+00:46:52,040 --> 00:46:57,200
+tasks um so there's a bunch of issues
+
+1098
+00:46:54,480 --> 00:46:59,520
+with probing um and there aren't as many
+
+1099
+00:46:57,200 --> 00:47:03,559
+probing papers now as there were a few
+
+1100
+00:46:59,520 --> 00:47:05,960
+years ago um and so if your probe let's
+
+1101
+00:47:03,559 --> 00:47:07,960
+say your probe
+
+1102
+00:47:05,960 --> 00:47:09,920
+works it's possible that the
+
+1103
+00:47:07,960 --> 00:47:12,200
+representation actually encodes that
+
+1104
+00:47:09,920 --> 00:47:14,520
+information it's also possible that it
+
+1105
+00:47:12,200 --> 00:47:16,359
+doesn't and the probe solved the task by
+
+1106
+00:47:14,520 --> 00:47:18,119
+itself uh keep in mind that you're
+
+1107
+00:47:16,359 --> 00:47:20,640
+learning this probe you're training this
+
+1108
+00:47:18,119 --> 00:47:22,720
+probe on labeled data uh let's say your
+
+1109
+00:47:20,640 --> 00:47:24,599
+probe doesn't work does that tell you
+
+1110
+00:47:22,720 --> 00:47:27,119
+anything maybe not maybe the
+
+1111
+00:47:24,599 --> 00:47:30,280
+representation lacks the information or
+
+1112
+00:47:27,119 --> 00:47:31,800
+maybe your probe
+
+1113
+00:47:30,280 --> 00:47:33,800
+isn't actually able to
+
+1114
+00:47:31,800 --> 00:47:35,240
+disentangle that information from your
+
+1115
+00:47:33,800 --> 00:47:36,720
+representation maybe the probe is not
+
+1116
+00:47:35,240 --> 00:47:38,359
+the right function class maybe you
+
+1117
+00:47:36,720 --> 00:47:40,839
+poorly trained your probe there's
+
+1118
+00:47:38,359 --> 00:47:42,280
+hyperparameters for your probe so often-
+
+1119
+00:47:40,839 --> 00:47:43,000
+times your probe doesn't give you that
+
+1120
+00:47:42,280 --> 00:47:46,119
+much
+
+1121
+00:47:43,000 --> 00:47:49,040
+information there's more problems too so
+
+1122
+00:47:46,119 --> 00:47:50,800
+often we want to probe tasks themselves
+
+1123
+00:47:49,040 --> 00:47:53,240
+and that requires a lot of supervised
+
+1124
+00:47:50,800 --> 00:47:55,880
+data um but we can't collect a lot of
+
+1125
+00:47:53,240 --> 00:47:58,440
+supervised data so we collect some of it
+
+1126
+00:47:55,880 --> 00:48:00,040
+and then that instead produces this
+1127
+00:47:58,440 --> 00:48:02,480
+convenient sample that we have that's a
+
+1128
+00:48:00,040 --> 00:48:04,119
+data set that is a convenient sample of
+
+1129
+00:48:02,480 --> 00:48:07,000
+your task so really what you're probing
+
+1130
+00:48:04,119 --> 00:48:10,040
+is the data set and so with all these
+
+1131
+00:48:07,000 --> 00:48:11,800
+limitations it's fallen out of
+
+1132
+00:48:10,040 --> 00:48:13,599
+favor a little bit it's still very very
+
+1133
+00:48:11,800 --> 00:48:16,400
+useful but it's fallen out of favor as
+
+1134
+00:48:13,599 --> 00:48:20,000
+like a core model interpretability
+
+1135
+00:48:16,400 --> 00:48:22,160
+idea um also probes designed in this way
+
+1136
+00:48:20,000 --> 00:48:26,079
+are correlative not
+
+1137
+00:48:22,160 --> 00:48:27,880
+really causative so your underlying
+
+1138
+00:48:26,079 --> 00:48:29,640
+model is trained in a specific way all
+
+1139
+00:48:27,880 --> 00:48:31,359
+of that information is disentangled and
+
+1140
+00:48:29,640 --> 00:48:32,920
+kind of thrown away and you're only
+
+1141
+00:48:31,359 --> 00:48:34,599
+looking at the output representation and
+
+1142
+00:48:32,920 --> 00:48:36,559
+you're saying is my output
+
+1143
+00:48:34,599 --> 00:48:39,200
+representation correlated with the thing
+
+1144
+00:48:36,559 --> 00:48:42,400
+that I'm training this probe for there's
+
+1145
+00:48:39,200 --> 00:48:44,960
+no notion of intervening on this latent
+
+1146
+00:48:42,400 --> 00:48:46,559
+space there's no notion of causation
+
+1147
+00:48:44,960 --> 00:48:49,119
+really so you're just seeing whether
+
+1148
+00:48:46,559 --> 00:48:52,559
+your representation is correlated with
+
+1149
+00:48:49,119 --> 00:48:54,480
+the property that you're probing for um
+
+1150
+00:48:52,559 --> 00:48:56,200
+and with these limitations the
+
+1151
+00:48:54,480 --> 00:48:58,720
+community's moved a little bit away from
+
+1152
+00:48:56,200 --> 00:48:58,720
+this area
+
+1153
+00:48:59,040 --> 00:49:02,200
+uh there's a bunch of other probing
+
+1154
+00:49:00,240 --> 00:49:04,920
+works so a bunch of people aim to solve
+
+1155
+00:49:02,200 --> 00:49:06,000
+a bunch of these problems um and uh for
+
+1156
+00:49:04,920 --> 00:49:09,200
+the sake of time I'm not going to go
+
+1157
+00:49:06,000 --> 00:49:12,599
+into all of these but uh I'd encourage
+
+1158
+00:49:09,200 --> 00:49:14,000
+you to look into these for some
+
+1159
+00:49:12,599 --> 00:49:17,319
+of these problems they're able to
+
+1160
+00:49:14,000 --> 00:49:19,520
+control for um like the
+
+1161
+00:49:17,319 --> 00:49:22,200
+complexity of the probe and
+
+1162
+00:49:19,520 --> 00:49:24,359
+things like this um but even despite
+
+1163
+00:49:22,200 --> 00:49:25,720
+that probing is sort of slowly kind of
+
+1164
+00:49:24,359 --> 00:49:28,160
+falling out of
+
+1165
+00:49:25,720 --> 00:49:29,640
+favor
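+One such control can be sketched as a control task in the spirit of Hewitt
+and Liang (2019): retrain the same probe on random but word-consistent
+labels, and if accuracy stays high, the probe itself, not the representation,
+is doing the work. A minimal sketch (hypothetical helper):
+
+```python
+import random
+
+def control_task_labels(tagged_words, n_classes, seed=0):
+    """Map each word type to a random but consistent fake label.
+    `tagged_words` is a list of (word, real_tag) pairs."""
+    rng = random.Random(seed)
+    mapping = {}
+    control = []
+    for word, _ in tagged_words:
+        if word not in mapping:
+            mapping[word] = rng.randrange(n_classes)
+        control.append((word, mapping[word]))
+    return control
+```
+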
+1166
+00:49:28,160 --> 00:49:31,920
+uh so before I move into model
+interpretability are there any questions
+
+1167
+00:49:29,640 --> 00:49:31,920
+on
+
+1168
+00:49:35,520 --> 00:49:40,599
+probing all right so what is model
+
+1169
+00:49:38,680 --> 00:49:44,000
+interpretability so this is my
+
+1170
+00:49:40,599 --> 00:49:45,400
+definition here uh this is the study of
+
+1171
+00:49:44,000 --> 00:49:46,599
+understanding the internals of models
+
+1172
+00:49:45,400 --> 00:49:49,079
+for example their weights and
+
+1173
+00:49:46,599 --> 00:49:51,160
+activations putting those insights in
+
+1174
+00:49:49,079 --> 00:49:53,319
+human-intelligible terms and using that
+
+1175
+00:49:51,160 --> 00:49:55,920
+insight to both patch current models and
+
+1176
+00:49:53,319 --> 00:49:57,359
+develop better ones if we're not able
+
+1177
+00:49:55,920 --> 00:49:58,760
+to do both of these things patching
+
+1178
+00:49:57,359 --> 00:50:00,160
+current models and developing better ones
+
+1179
+00:49:58,760 --> 00:50:02,440
+we're kind of doing interpretability for
+
+1180
+00:50:00,160 --> 00:50:04,960
+interpretability's sake that's nice and
+
+1181
+00:50:02,440 --> 00:50:08,079
+fun but it's not as applicable for
+
+1182
+00:50:04,960 --> 00:50:09,720
+the community so you've probably
+
+1183
+00:50:08,079 --> 00:50:12,240
+heard of the term mechanistic
+
+1184
+00:50:09,720 --> 00:50:14,480
+interpretability it's in my opinion a
+
+1185
+00:50:12,240 --> 00:50:16,559
+subfield of model interpretability and
+
+1186
+00:50:14,480 --> 00:50:19,319
+this is sort of my definition it
+
+1187
+00:50:16,559 --> 00:50:21,440
+aligns reasonably well with the core
+
+1188
+00:50:19,319 --> 00:50:22,720
+mechanistic interpretability people um
+
+1189
+00:50:21,440 --> 00:50:24,880
+but it's the study of reverse-
+
+1190
+00:50:22,720 --> 00:50:26,280
+engineering parametric models often
+
+1191
+00:50:24,880 --> 00:50:28,839
+neural networks because that's what we
+
+1192
+00:50:26,280 --> 00:50:31,400
+use from their learned weights into more
+
+1193
+00:50:28,839 --> 00:50:32,839
+human-interpretable algorithmic units uh
+
+1194
+00:50:31,400 --> 00:50:36,839
+and often they call these things
+
+1195
+00:50:32,839 --> 00:50:39,440
+circuits um and these are basically
+
+1196
+00:50:36,839 --> 00:50:42,880
+functions that uh you can describe in a
+
+1197
+00:50:39,440 --> 00:50:45,000
+human-interpretable way that sit inside
+
+1198
+00:50:42,880 --> 00:50:46,760
+models um there's a bunch of notable
+
+1199
+00:50:45,000 --> 00:50:50,720
+work again for the sake of time I'm
+
+1200
+00:50:46,760 --> 00:50:54,319
+going to just briefly talk about it
+
+1201
+00:50:50,720 --> 00:50:56,839
+um so the first one is they look
+
+1202
+00:50:54,319 --> 00:50:58,440
+into analyzing small MLPs and
+
+1203
+00:50:56,839 --> 00:51:01,400
+Transformers to build out the intuition
+
+1204
+00:50:58,440 --> 00:51:04,119
+of what circuits exist um and a lot
+
+1205
+00:51:01,400 --> 00:51:06,559
+of this work came out of earlier work
+
+1206
+00:51:04,119 --> 00:51:08,480
+on LSTMs and doing similar sorts of
+
+1207
+00:51:06,559 --> 00:51:11,880
+things with
+
+1208
+00:51:08,480 --> 00:51:14,319
+LSTMs um and they find a bunch of things
+
+1209
+00:51:11,880 --> 00:51:15,839
+one thing that they find is this idea of
+
+1210
+00:51:14,319 --> 00:51:19,599
+induction heads and these induction
+
+1211
+00:51:15,839 --> 00:51:21,760
+heads they say sort of help
+
+1212
+00:51:19,599 --> 00:51:24,680
+explain why models can do in-context
+
+1213
+00:51:21,760 --> 00:51:26,599
+learning so an induction head is
+
+1214
+00:51:24,680 --> 00:51:28,839
+something that it's a specific
+
+1215
+00:51:26,599 --> 00:51:32,440
+attention head that kind of allows you
+
+1216
+00:51:28,839 --> 00:51:35,599
+to um when given a prefix allows you to
+
+1217
+00:51:32,440 --> 00:51:37,559
+kind of copy the necessary resulting
+
+1218
+00:51:35,599 --> 00:51:39,640
+token from the underlying training data
+
+1219
+00:51:37,559 --> 00:51:41,720
+that the model has seen before so in-context
+
+1220
+00:51:39,640 --> 00:51:44,599
+learning what you generally provide is
+
+1221
+00:51:41,720 --> 00:51:46,440
+some sort of prefix and then you uh
+
+1222
+00:51:44,599 --> 00:51:48,480
+provide some example and hopefully you
+
+1223
+00:51:46,440 --> 00:51:51,040
+know you can classify the thing or
+
+1224
+00:51:48,480 --> 00:51:53,280
+something like this um it's saying
+
+1225
+00:51:51,040 --> 00:51:56,200
+that there are these attention heads
+
+1226
+00:51:53,280 --> 00:51:59,400
+loosely uh that exist that are able to
+
+1227
+00:51:56,200 --> 00:52:00,680
+copy and unearth that information um for
+
+1228
+00:51:59,400 --> 00:52:03,319
+a specific
+
+1229
+00:52:00,680 --> 00:52:07,200
+context
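+A rough way to see this empirically (a sketch, not the exact metric from the
+original induction-heads work): feed the model a sequence whose second half
+repeats its first half, and measure how much one head's attention at each
+position lands on the token right after the earlier copy:
+
+```python
+import torch
+
+def induction_score(attn: torch.Tensor, seq_len: int) -> float:
+    """attn: one head's attention matrix [seq_len, seq_len] for a sequence
+    whose second half exactly repeats its first half. An induction head at
+    position t should attend to the position just after the earlier copy."""
+    half = seq_len // 2
+    scores = []
+    for t in range(half, seq_len):
+        target = t - half + 1      # token following the earlier occurrence
+        scores.append(attn[t, target].item())
+    return sum(scores) / len(scores)
+```
+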
+1230
+00:52:03,319 --> 00:52:09,880
+um other things that they've done is
+um on neurons so uh this
+
+1231
+00:52:07,200 --> 00:52:13,160
+polysemanticity so what this kind
+
+1232
+00:52:09,880 --> 00:52:15,240
+of means is that
+
+1233
+00:52:13,160 --> 00:52:18,000
+uh you have a set of neurons in your
+
+1234
+00:52:15,240 --> 00:52:20,880
+activation space so let's say at layer
+
+1235
+00:52:18,000 --> 00:52:23,200
+10 in your model you have an output um
+
+1236
+00:52:20,880 --> 00:52:26,280
+and so your activations are let's say a
+
+1237
+00:52:23,200 --> 00:52:28,400
+thousand dimensional here each of
+
+1238
+00:52:26,280 --> 00:52:31,319
+those thousand individual neurons may
+
+1239
+00:52:28,400 --> 00:52:35,839
+represent more than one specific
+
+1240
+00:52:31,319 --> 00:52:37,839
+feature um and so they talk about
+
+1241
+00:52:35,839 --> 00:52:41,280
+this in that context and this is kind of
+
+1242
+00:52:37,839 --> 00:52:43,240
+a theory but you can think about um
+
+1243
+00:52:41,280 --> 00:52:46,359
+trying to process
+
+1244
+00:52:43,240 --> 00:52:49,400
+input and when you're processing a
+
+1245
+00:52:46,359 --> 00:52:50,960
+vocab of size 50,000 or 250,000 at some
+
+1246
+00:52:49,400 --> 00:52:52,359
+point in the model we're actually
+
+1247
+00:52:50,960 --> 00:52:55,400
+compressing it down to the hidden
+
+1248
+00:52:52,359 --> 00:52:58,119
+dimension and so in some cases that
+
+1249
+00:52:55,400 --> 00:53:00,319
+looks like you're going to compress a
+
+1250
+00:52:58,119 --> 00:53:03,440
+much richer feature representation down
+
+1251
+00:53:00,319 --> 00:53:06,359
+into a smaller set of neurons so it is
+
+1252
+00:53:03,440 --> 00:53:08,319
+reasonable to believe that um a specific
+
+1253
+00:53:06,359 --> 00:53:10,799
+neuron will represent multiple of those
+
+1254
+00:53:08,319 --> 00:53:15,480
+features and given the structure of our
+
+1255
+00:53:10,799 --> 00:53:18,720
+weight matrices um it is the case
+
+1256
+00:53:15,480 --> 00:53:21,839
+that if they are representing more
+
+1257
+00:53:18,720 --> 00:53:23,960
+features than uh the number of elements
+
+1258
+00:53:21,839 --> 00:53:26,000
+or number of neurons in the
+
+1259
+00:53:23,960 --> 00:53:28,680
+activation space then many of these
+
+1260
+00:53:26,000 --> 00:53:30,880
+features are linearly dependent and so we're
+
+1261
+00:53:28,680 --> 00:53:35,400
+not really able to utilize them that
+
+1262
+00:53:30,880 --> 00:53:37,960
+well um they talk about this
+
+1263
+00:53:35,400 --> 00:53:42,200
+they don't talk about this in the
+
+1264
+00:53:37,960 --> 00:53:44,799
+best way but uh it seems
+
+1265
+00:53:42,200 --> 00:53:48,040
+kind of clear to me that um since you
+
+1266
+00:53:44,799 --> 00:53:50,880
+have embedding matrices that are um not
+
+1267
+00:53:48,040 --> 00:53:53,599
+square that these neurons
+
+1268
+00:53:50,880 --> 00:53:56,400
+have to exist um and they have to
+
+1269
+00:53:53,599 --> 00:53:59,200
+incorporate multiple features at once
+
+1270
+00:53:56,400 --> 00:54:02,559
+multiple redundant features at
+
+1271
+00:53:59,200 --> 00:54:04,680
+once
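+A toy illustration of that compression argument (purely illustrative sizes):
+project more one-hot "features" than you have neurons and the feature
+directions cannot all be linearly independent:
+
+```python
+import torch
+
+n_features, n_neurons = 10, 4            # more features than neurons
+W = torch.randn(n_neurons, n_features)   # e.g. a non-square embedding map
+
+# Each column of W is the direction a feature gets in activation space.
+# With 10 directions in a 4-dimensional space, at most 4 can be linearly
+# independent, so features must share (superpose on) the same neurons.
+print(torch.linalg.matrix_rank(W))       # prints 4 (almost surely)
+```
+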
+1272
+00:54:02,559 --> 00:54:07,839
+um so before I move on to the rest
+of model interpretability any questions
+
+1273
+00:54:04,680 --> 00:54:07,839
+about mechanistic
+
+1274
+00:54:09,880 --> 00:54:12,880
+interpretability
+
+1275
+00:54:21,480 --> 00:54:28,040
+yeah so most of their studies are for uh
+
+1276
+00:54:24,920 --> 00:54:29,720
+a very small set of models and
+
+1277
+00:54:28,040 --> 00:54:32,040
+most of these are old GPT models there
+
+1278
+00:54:29,720 --> 00:54:34,160
+have been a few works like in the last
+
+1279
+00:54:32,040 --> 00:54:36,760
+couple of months on doing this for the
+
+1280
+00:54:34,160 --> 00:54:39,720
+Llama-based models um it seems like this
+
+1281
+00:54:36,760 --> 00:54:42,040
+is a more general phenomenon
+
+1282
+00:54:39,720 --> 00:54:43,760
+for language models it also is the case
+
+1283
+00:54:42,040 --> 00:54:46,839
+that certain attention heads specialize
+
+1284
+00:54:43,760 --> 00:54:49,480
+and we'll talk about them a little bit um
+
+1285
+00:54:46,839 --> 00:54:51,599
+in the activations part um but yeah
+
+1286
+00:54:49,480 --> 00:54:53,799
+not all attention
+
+1287
+00:54:51,599 --> 00:54:56,400
+heads are created equal they start
+
+1288
+00:54:53,799 --> 00:55:00,280
+this way and it seems to be a general
+
+1289
+00:54:56,400 --> 00:55:01,799
+principle um and one other thing you
+
+1290
+00:55:00,280 --> 00:55:04,520
+might know about this better than I do
+
+1291
+00:55:01,799 --> 00:55:06,520
+but I think there are some preliminary
+
+1292
+00:55:04,520 --> 00:55:09,160
+works that say that Transformers seem to
+
+1293
+00:55:06,520 --> 00:55:11,720
+be particularly good at doing things
+
+1294
+00:55:09,160 --> 00:55:15,160
+like induction heads compared
+
+1295
+00:55:11,720 --> 00:55:17,200
+to uh recurrent models and there was
+
+1296
+00:55:15,160 --> 00:55:20,720
+a paper really recently about comparing
+
+1297
+00:55:17,200 --> 00:55:23,599
+like Mamba and um Transformer-based
+
+1298
+00:55:20,720 --> 00:55:26,400
+models Mamba being a uh kind of more
+
+1299
+00:55:23,599 --> 00:55:30,280
+like closer to a recurrent network which we're
+
+1300
+00:55:26,400 --> 00:55:33,119
+also going to talk about later but um so I
+
+1301
+00:55:30,280 --> 00:55:37,319
+think there's some indication that
+
+1302
+00:55:33,119 --> 00:55:39,920
+Transformers actually kind of are
+
+1303
+00:55:37,319 --> 00:55:43,680
+at least like better at kind of in-
+
+1304
+00:55:39,920 --> 00:55:46,760
+context learning than others are so
+
+1305
+00:55:43,680 --> 00:55:48,920
+there is some
+
+1306
+00:55:46,760 --> 00:55:50,839
+interesting implications of that which
+
+1307
+00:55:48,920 --> 00:55:53,240
+is like well if Transformers are good
+
+1308
+00:55:50,839 --> 00:55:57,359
+what's better than a Transformer yeah like
+
+1309
+00:55:53,240 --> 00:55:58,799
+naturally learning this sort of thing so um
+
+1310
+00:55:57,359 --> 00:56:00,720
+they're good at yeah they're like really
+
+1311
+00:55:58,799 --> 00:56:04,039
+good at copying and like maintaining
+
+1312
+00:56:00,720 --> 00:56:06,799
+information like more so um and yeah I
+
+1313
+00:56:04,039 --> 00:56:08,200
+think it'd be cool to like be able to I
+
+1314
+00:56:06,799 --> 00:56:09,839
+don't know how to do this but be able to
+
+1315
+00:56:08,200 --> 00:56:11,440
+extract that kind of information like
+
+1316
+00:56:09,839 --> 00:56:13,359
+what part of the Transformer is actually
+
+1317
+00:56:11,440 --> 00:56:15,119
+helping it do this copying mechanism or
+
+1318
+00:56:13,359 --> 00:56:17,799
+like being a better in-context learner
+
+1319
+00:56:15,119 --> 00:56:20,039
+then we can develop a better structure a
+
+1320
+00:56:17,799 --> 00:56:23,119
+slightly better structure than a
+
+1321
+00:56:20,039 --> 00:56:26,000
+Transformer um hopefully someone comes
+
+1322
+00:56:23,119 --> 00:56:28,240
+up with that soon but cool any other
+
+1323
+00:56:26,000 --> 00:56:28,240
+question
+1324
+00:56:29,799 --> 00:56:34,359
+questions all right so let's move into
+
+1325
+00:56:32,240 --> 00:56:35,880
+model interpretability so there are
+
+1326
+00:56:34,359 --> 00:56:37,480
+weights and there are activations I
+
+1327
+00:56:35,880 --> 00:56:39,160
+mentioned these are the two
+
+1328
+00:56:37,480 --> 00:56:41,119
+things that
+
+1329
+00:56:39,160 --> 00:56:43,440
+we're going to look at so what can you
+
+1330
+00:56:41,119 --> 00:56:45,480
+do with the weights of an already-trained model
+
+1331
+00:56:43,440 --> 00:56:47,799
+really you can just edit them and then
+
+1332
+00:56:45,480 --> 00:56:49,200
+kind of see what happens activations
+
+1333
+00:56:47,799 --> 00:56:51,240
+similarly you can look at the
+
+1334
+00:56:49,200 --> 00:56:52,720
+activations for different inputs you can
+
+1335
+00:56:51,240 --> 00:56:54,520
+poke them with a stick and see what
+
+1336
+00:56:52,720 --> 00:56:56,359
+happens a lot of my research is poking
+
+1337
+00:56:54,520 --> 00:56:58,559
+models with a stick and looking at the
+
+1338
+00:56:56,359 --> 00:57:00,920
+activations it's like predominantly what
+
+1339
+00:56:58,559 --> 00:57:02,240
+I've done so we'll talk about that um
+
+1340
+00:57:00,920 --> 00:57:04,359
+and the technical term for this is
+
+1341
+00:57:02,240 --> 00:57:06,599
+intervening on them by adding some
+
+1342
+00:57:04,359 --> 00:57:07,839
+vector or other sort of manipulation to
+
+1343
+00:57:06,599 --> 00:57:09,440
+the latent space but really what you're
+
+1344
+00:57:07,839 --> 00:57:13,960
+doing is
+
+1345
+00:57:09,440 --> 00:57:17,599
+poking
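+That kind of poking can be sketched with a forward hook, assuming a PyTorch
+model (the layer path in the usage comment is a placeholder):
+
+```python
+import torch
+
+def add_vector_hook(layer, vec):
+    """Register a hook that adds a fixed vector to a layer's hidden states
+    on every forward pass -- a simple intervention on the latent space."""
+    def hook(module, inputs, output):
+        if isinstance(output, tuple):   # many transformer blocks return tuples
+            return (output[0] + vec,) + output[1:]
+        return output + vec
+    return layer.register_forward_hook(hook)
+
+# handle = add_vector_hook(model.transformer.h[6], v)  # hypothetical layer path
+# ... generate and inspect what changes ...
+# handle.remove()                                      # undo the intervention
+```
+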
+1346
+00:57:13,960 --> 00:57:19,920
+um so when you look at weights uh one
+class of methods or area is
+
+1347
+00:57:17,599 --> 00:57:21,920
+on model editing fine-tuning is like the
+
+1348
+00:57:19,920 --> 00:57:23,480
+most extreme version of model editing
+
+1349
+00:57:21,920 --> 00:57:26,599
+usually these things are much more
+
+1350
+00:57:23,480 --> 00:57:29,640
+targeted um so in the model editing sort
+
+1351
+00:57:26,599 --> 00:57:32,160
+of landscape your goal or your target is
+
+1352
+00:57:29,640 --> 00:57:35,119
+you have a concept or a specific fact
+
+1353
+00:57:32,160 --> 00:57:37,440
+that needs to be changed in the model um
+
+1354
+00:57:35,119 --> 00:57:39,640
+and your approach here is you update or
+
+1355
+00:57:37,440 --> 00:57:41,359
+edit the weights of the model to edit
+
+1356
+00:57:39,640 --> 00:57:43,640
+the model's belief of that fact or
+
+1357
+00:57:41,359 --> 00:57:45,599
+concept and ideally you do this without
+
+1358
+00:57:43,640 --> 00:57:47,319
+changing any of the other behavior of
+
+1359
+00:57:45,599 --> 00:57:49,760
+the model so for example let's say
+
+1360
+00:57:47,319 --> 00:57:51,920
+you're trying to say that Graham is no
+
+1361
+00:57:49,760 --> 00:57:54,559
+longer a professor at CMU but is a
+
+1362
+00:57:51,920 --> 00:57:57,319
+professor at Stanford you don't want
+
+1363
+00:57:54,559 --> 00:57:59,960
+every single person at CMU to now be a
+
+1364
+00:57:57,319 --> 00:58:02,920
+professor at or uh now be affiliated with
+
+1365
+00:57:59,960 --> 00:58:07,839
+Stanford right um Graham please don't go to
+
+1366
+00:58:02,920 --> 00:58:09,039
+Stanford um so here's one approach a
+
+1367
+00:58:07,839 --> 00:58:11,720
+paper that came out a couple of years ago there's
+
+1368
+00:58:09,039 --> 00:58:13,559
+a lot of work done here uh in the
+
+1369
+00:58:11,720 --> 00:58:15,799
+model editing world I'll give you sort
+
+1370
+00:58:13,559 --> 00:58:17,440
+of a really brief overview of this but
+
+1371
+00:58:15,799 --> 00:58:20,520
+basically they have facts that they want
+
+1372
+00:58:17,440 --> 00:58:22,400
+to manipulate um so for
+
+1373
+00:58:20,520 --> 00:58:24,680
+example the example that they give
+
+1374
+00:58:22,400 --> 00:58:26,640
+in the figure is they want to associate
+
+1375
+00:58:24,680 --> 00:58:30,960
+the Space Needle with Paris the Space
+
+1376
+00:58:26,640 --> 00:58:32,520
+Needle is a cool needle in Seattle
+
+1377
+00:58:30,960 --> 00:58:36,000
+it has nothing to do with Paris but Paris
+
+1378
+00:58:32,520 --> 00:58:38,400
+also has a tower so it's close um so
+
+1379
+00:58:36,000 --> 00:58:40,920
+they use causal tracing to isolate the
+
+1380
+00:58:38,400 --> 00:58:43,839
+causal effect uh of the individual
+
+1381
+00:58:40,920 --> 00:58:45,799
+hidden states for this fact so they
+
+1382
+00:58:43,839 --> 00:58:47,839
+basically continuously perturb the input
+
+1383
+00:58:45,799 --> 00:58:49,760
+do a bunch of forward passes and
+
+1384
+00:58:47,839 --> 00:58:51,720
+sequentially find the specific hidden
+
+1385
+00:58:49,760 --> 00:58:55,280
+states that are associated kind of with
+
+1386
+00:58:51,720 --> 00:58:56,839
+this fact um then they make an edit and
+
+1387
+00:58:55,280 --> 00:58:59,119
+their edit
+
+1388
+00:58:56,839 --> 00:59:02,039
+looks like this thing on the right um
+
+1389
+00:58:59,119 --> 00:59:05,280
+so they treat this pair Space Needle and
+
+1390
+00:59:02,039 --> 00:59:07,240
+Paris as this uh key-value pair where
+
+1391
+00:59:05,280 --> 00:59:10,359
+Space Needle is the key you pass this
+
+1392
+00:59:07,240 --> 00:59:12,480
+into um into this weight matrix this
+
+1393
+00:59:10,359 --> 00:59:14,640
+original part of the model and you want it
+
+1394
+00:59:12,480 --> 00:59:16,599
+now instead of outputting Seattle to
+
+1395
+00:59:14,640 --> 00:59:19,119
+output Paris and they have some nice
+
+1396
+00:59:16,599 --> 00:59:21,599
+math and a closed-form solution to
+
+1397
+00:59:19,119 --> 00:59:23,880
+identify this this is super expensive
+
+1398
+00:59:21,599 --> 00:59:25,359
+because for the causal tracing
+
+1399
+00:59:23,880 --> 00:59:27,680
+part they have to do a bunch of forward
+
+1400
+00:59:25,359 --> 00:59:30,680
+passes um and they make this a little
+
+1401
+00:59:27,680 --> 00:59:33,480
+bit better in future work they
+
+1402
+00:59:30,680 --> 00:59:37,920
+also do sort of a more
+
+1403
+00:59:33,480 --> 00:59:40,160
+comprehensive um edit um so these are
+
+1404
+00:59:37,920 --> 00:59:44,599
+kind of like some of the things you can do
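+The flavor of that key-value edit can be sketched as a rank-one update; this
+is a heavily simplified illustration, since the actual method also constrains
+the update with covariance statistics of the keys, which is omitted here:
+
+```python
+import torch
+
+def rank_one_edit(W, k, v_new):
+    """Nudge weight matrix W so that key vector k now maps to v_new,
+    i.e. (W + delta) @ k == v_new, via a rank-one update."""
+    v_old = W @ k
+    delta = torch.outer(v_new - v_old, k) / (k @ k)
+    return W + delta
+    # Directions orthogonal to k are untouched, which is the intuition for
+    # why such edits can be targeted at one fact.
+```
+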
+1405
+00:59:40,160 --> 00:59:46,799
+um I'm less excited about model editing
+
+1406
+00:59:44,599 --> 00:59:49,039
+um there's some work on model
+
+1407
+00:59:46,799 --> 00:59:51,319
+editing sort of it's hard to
+
+1408
+00:59:49,039 --> 00:59:53,160
+control what other things break
+
+1409
+00:59:51,319 --> 00:59:56,240
+and there's some work showing that when you
+
+1410
+00:59:53,160 --> 01:00:00,000
+edit a specific fact things start being
+
+1411
+00:59:56,240 --> 01:00:02,680
+weird and being biased in other ways um
+
+1412
+01:00:00,000 --> 01:00:05,760
+and so
+
+1413
+01:00:02,680 --> 01:00:09,119
+yeah do all kinds of factual information
+
+1414
+01:00:05,760 --> 01:00:11,880
+like X is Y would they all localize to the
+
+1415
+01:00:09,119 --> 01:00:14,319
+same layer or is it just the
+
+1416
+01:00:11,880 --> 01:00:16,920
+specific fact for this specific example it
+
+1417
+01:00:14,319 --> 01:00:19,039
+looks at this specific point uh for
+
+1418
+01:00:16,920 --> 01:00:21,039
+every example they'll probably find
+
+1419
+01:00:19,039 --> 01:00:22,119
+different regions and a different degree
+
+1420
+01:00:21,039 --> 01:00:25,680
+of
+
+1421
+01:00:22,119 --> 01:00:27,960
+manipulation um and yeah it gets a
+
+1422
+01:00:25,680 --> 01:00:30,920
+little unprincipled kind of quickly it's
+
+1423
+01:00:27,960 --> 01:00:33,000
+not like they're able to find you know a
+
+1424
+01:00:30,920 --> 01:00:35,680
+specific attention head or a
+
+1425
+01:00:33,000 --> 01:00:38,240
+specific layer or specific weight matrix
+
+1426
+01:00:35,680 --> 01:00:42,400
+that corresponds to like
+
+1427
+01:00:38,240 --> 01:00:46,720
+all relations of a specific
+
+1428
+01:00:42,400 --> 01:00:49,160
+type any other questions yeah this is
+
+1429
+01:00:46,720 --> 01:00:51,119
+actually just a question if you know um
+
+1430
+01:00:49,160 --> 01:00:53,200
+it seems like more frequent facts might
+
+1431
+01:00:51,119 --> 01:00:55,240
+appear in more places in the model do
+
+1432
+01:00:53,200 --> 01:00:59,280
+you know if that's actually the case I have
+
+1433
+01:00:55,240 --> 01:01:02,440
+no idea but uh I would imagine that um
+
+1434
+01:00:59,280 --> 01:01:06,240
+it probably could occur in more places
+
+1435
+01:01:02,440 --> 01:01:08,160
+but also um a lot of the information is
+
+1436
+01:01:06,240 --> 01:01:10,119
+redundant anyway in the model especially
+
+1437
+01:01:08,160 --> 01:01:11,720
+for larger models so you might have to
+
+1438
+01:01:10,119 --> 01:01:13,599
+make targeted interventions in multiple
+
+1439
+01:01:11,720 --> 01:01:15,480
+places but it's possible that one
+
+1440
+01:01:13,599 --> 01:01:17,680
+intervention in one place sufficiently
+
+1441
+01:01:15,480 --> 01:01:21,039
+destroys like contextualized information
+
+1442
+01:01:17,680 --> 01:01:22,680
+in other places if it's close um it
+
+1443
+01:01:21,039 --> 01:01:24,839
+depends on how big this intervention is
+
+1444
+01:01:22,680 --> 01:01:28,200
+if it's like hitting it with a hammer
+
+1445
+01:01:24,839 --> 01:01:30,520
+rather than some like nice fine-grained
+
+1446
+01:01:28,200 --> 01:01:33,359
+thing but that'd be a good
+
+1447
+01:01:30,520 --> 01:01:36,839
+experiment to see
+
+1448
+01:01:33,359 --> 01:01:36,839
+um any other
+
+1449
+01:01:37,240 --> 01:01:41,559
+questions all right so we'll move into
+
+1450
+01:01:39,760 --> 01:01:43,680
+the stuff that I'm most familiar with
+1451
+01:01:41,559 --> 01:01:46,319
+and some of my work so looking at
+
+1452
+01:01:43,680 --> 01:01:48,319
+activations um so this is work
+
+1453
+01:01:46,319 --> 01:01:50,480
+I've been doing for a while uh this idea
+
+1454
+01:01:48,319 --> 01:01:52,799
+of steering vectors so I mentioned I
+
+1455
+01:01:50,480 --> 01:01:54,480
+poke models with a stick the steering
+
+1456
+01:01:52,799 --> 01:01:57,000
+vector is that stick so it's basically a
+
+1457
+01:01:54,480 --> 01:01:59,000
+fixed-length vector that steers a language
+
+1458
+01:01:57,000 --> 01:02:00,920
+model to generate a specific sequence
+
+1459
+01:01:59,000 --> 01:02:02,720
+exactly when added to the hidden states
+
+1460
+01:02:00,920 --> 01:02:06,319
+of a model at a specific
+
+1461
+01:02:02,720 --> 01:02:09,000
+location um and I'll read this
+
+1462
+01:02:06,319 --> 01:02:11,400
+again there's a very like specific
+
+1463
+01:02:09,000 --> 01:02:13,319
+form that I wrote this in so uh it's
+
+1464
+01:02:11,400 --> 01:02:15,359
+a fixed-length vector that steers a
+
+1465
+01:02:13,319 --> 01:02:17,640
+language model to generate a specific
+
+1466
+01:02:15,359 --> 01:02:19,359
+sequence exactly when added to the
+
+1467
+01:02:17,640 --> 01:02:22,559
+hidden states of a model at a specific
+
+1468
+01:02:19,359 --> 01:02:24,480
+point so this is different from um a
+
+1469
+01:02:22,559 --> 01:02:26,839
+soft prompt or different from a model-
+
+1470
+01:02:24,480 --> 01:02:29,520
+editing sort of approach
+
+1471
+01:02:26,839 --> 01:02:31,400
+um in this case there is a vector that
+
+1472
+01:02:29,520 --> 01:02:32,960
+corresponds to a sequence and that
+
+1473
+01:02:31,400 --> 01:02:35,359
+vector doesn't correspond to any other
+
+1474
+01:02:32,960 --> 01:02:36,640
+sequence there could be multiple vectors
+
+1475
+01:02:35,359 --> 01:02:39,079
+and it turns out there are multiple
+
+1476
+01:02:36,640 --> 01:02:41,799
+vectors that correspond to that sequence
+
+1477
+01:02:39,079 --> 01:02:44,160
+it'll be a little bit clearer um based
+
+1478
+01:02:41,799 --> 01:02:46,279
+on how we extract these
+
+1479
+01:02:44,160 --> 01:02:48,839
+things um so this is the stick that
+
+1480
+01:02:46,279 --> 01:02:52,000
+we're poking the language
+
+1481
+01:02:48,839 --> 01:02:53,599
+model with um so how do we extract them so
+
+1482
+01:02:52,000 --> 01:02:57,400
+this is
+
+1483
+01:02:53,599 --> 01:03:00,200
+GPT-2 um basically this z-steer thing on
+
+1484
+01:02:57,400 --> 01:03:03,240
+the left this is the steering vector
+
+1485
+01:03:00,200 --> 01:03:05,799
+this gets initialized randomly um
+
+1486
+01:03:03,240 --> 01:03:09,520
+like in a reasonable way
+
+1487
+01:03:05,799 --> 01:03:11,440
+uniformly and small um and for any
+
+1488
+01:03:09,520 --> 01:03:14,000
+sequence a specific sequence that we
+
+1489
+01:03:11,440 --> 01:03:17,680
+want the model to generate we
+
+1490
+01:03:14,000 --> 01:03:19,400
+optimize this steering vector z-steer uh
+
+1491
+01:03:17,680 --> 01:03:21,559
+to generate that sequence keeping the
+
+1492
+01:03:19,400 --> 01:03:23,960
+rest of the model entirely fixed so
+
+1493
+01:03:21,559 --> 01:03:26,200
+think about it as we're nudging a
+
+1494
+01:03:23,960 --> 01:03:29,880
+frozen model to be able to generate a
+
+1495
+01:03:26,200 --> 01:03:31,680
+specific sequence at a specific time um
+
+1496
+01:03:29,880 --> 01:03:33,880
+and we have a lot of different options
+1497
+01:03:31,680 --> 01:03:35,559
+on where to inject the steering vector we
+
+1498
+01:03:33,880 --> 01:03:37,520
+can put it basically anywhere in the
+
+1499
+01:03:35,559 --> 01:03:41,799
+model we can put it at any time step any
+
+1500
+01:03:37,520 --> 01:03:43,839
+number of these things in practice um
+
+1501
+01:03:41,799 --> 01:03:45,839
+providing it just at the first time step
+
+1502
+01:03:43,839 --> 01:03:48,039
+and somewhere in the middle of the model
+
+1503
+01:03:45,839 --> 01:03:52,480
+basically not the first layer and not
+
+1504
+01:03:48,039 --> 01:03:56,240
+the last layer works pretty well um and
+
+1505
+01:03:52,480 --> 01:04:00,279
+so more formally um forget the kind of
+
+1506
+01:03:56,240 --> 01:04:03,640
+notation um but right here we initialize
+
+1507
+01:04:00,279 --> 01:04:06,559
+um this z-steer and for a few iterations
+
+1508
+01:04:03,640 --> 01:04:08,039
+um we do forward passes first this
+
+1509
+01:04:06,559 --> 01:04:09,599
+starts as random and then this gets
+
+1510
+01:04:08,039 --> 01:04:11,960
+closer and closer and closer to being
+
+1511
+01:04:09,599 --> 01:04:14,279
+able to generate this sequence and
+
+1512
+01:04:11,960 --> 01:04:16,599
+eventually we get to a point uh and this
+
+1513
+01:04:14,279 --> 01:04:18,400
+n is pretty small it's eight or ten or
+
+1514
+01:04:16,599 --> 01:04:20,160
+something like that um for most
+
+1515
+01:04:18,400 --> 01:04:22,200
+sequences we get to a point where we
+
+1516
+01:04:20,160 --> 01:04:23,920
+have found this stick that is able to
+
+1517
+01:04:22,200 --> 01:04:26,079
+poke this model to generate that
+
+1518
+01:04:23,920 --> 01:04:29,319
+sequence exactly now when we greedily
+
+1519
+01:04:26,079 --> 01:04:32,480
+decode from the model we pass in just a
+
+1520
+01:04:29,319 --> 01:04:34,920
+beginning-of-sequence token and this z-
+
+1521
+01:04:32,480 --> 01:04:37,119
+steer the steering vector and it's able
+
+1522
+01:04:34,920 --> 01:04:39,720
+to uncover the whole sequence that
+
+1523
+01:04:37,119 --> 01:04:41,319
+sequence that we had at the beginning
+
+1524
+01:04:39,720 --> 01:04:44,240
+entirely
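+A minimal sketch of that optimization loop, assuming a Hugging Face-style
+causal language model and injection at one layer's first time step (all
+names are illustrative; only the steering vector receives updates, so the
+model stays frozen):
+
+```python
+import torch
+
+def fit_steering_vector(model, layer, target_ids, hidden_size, steps=10, lr=0.1):
+    """Optimize one width-one vector z_steer so that, added to `layer`'s
+    hidden states at the first position, the frozen model generates
+    `target_ids` under greedy decoding."""
+    z_steer = torch.zeros(hidden_size, requires_grad=True)
+    opt = torch.optim.Adam([z_steer], lr=lr)
+
+    def hook(module, inputs, output):
+        hidden = output[0] if isinstance(output, tuple) else output
+        hidden = hidden.clone()
+        hidden[:, 0, :] = hidden[:, 0, :] + z_steer   # inject at position 0
+        return ((hidden,) + output[1:]) if isinstance(output, tuple) else hidden
+
+    handle = layer.register_forward_hook(hook)
+    for _ in range(steps):                            # n is small, ~8-10
+        opt.zero_grad()
+        loss = model(target_ids, labels=target_ids).loss  # teacher-forced NLL
+        loss.backward()
+        opt.step()                                    # updates z_steer only
+    handle.remove()
+    return z_steer.detach()
+```
+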
+um actually before I go to results any + +1544 +01:05:29,559 --> 01:05:34,720 +questions this is a this is a weird + +1545 +01:05:31,520 --> 01:05:38,160 +setup and weird relative to what other + +1546 +01:05:34,720 --> 01:05:39,310 +people do so happy to take any + +1547 +01:05:38,160 --> 01:05:42,480 +questions + +1548 +01:05:39,310 --> 01:05:42,480 +[Music] + +1549 +01:05:42,880 --> 01:05:50,640 +yeah similarly if your prompt was um of + +1550 +01:05:47,440 --> 01:05:53,440 +a specific type so the prompt here is a + +1551 +01:05:50,640 --> 01:05:55,720 +continuous Vector passed in it's a + +1552 +01:05:53,440 --> 01:05:59,760 +single length width hidden size + +1553 +01:05:55,720 --> 01:06:02,799 +continuous Vector so um it's kind of + +1554 +01:05:59,760 --> 01:06:05,559 +like maybe collapsing your prompt into + +1555 +01:06:02,799 --> 01:06:08,480 +this compressing it into this tiny + +1556 +01:06:05,559 --> 01:06:12,119 +VOR you can think of that way + +1557 +01:06:08,480 --> 01:06:16,920 +yeah any other questions + +1558 +01:06:12,119 --> 01:06:16,920 +yeah this would be like + +1559 +01:06:18,160 --> 01:06:23,359 +I'm + +1560 +01:06:20,880 --> 01:06:28,279 +things potentially um this is something + +1561 +01:06:23,359 --> 01:06:30,640 +that I want to work on it uh like a year + +1562 +01:06:28,279 --> 01:06:32,119 +ago and didn't get didn't get this + +1563 +01:06:30,640 --> 01:06:34,559 +sufficient Buy in and then had to apply + +1564 +01:06:32,119 --> 01:06:36,880 +to grad school and all these things so + +1565 +01:06:34,559 --> 01:06:40,160 +it went by the wayside but but + +1566 +01:06:36,880 --> 01:06:43,440 +definitely something to something to + +1567 +01:06:40,160 --> 01:06:45,920 +pursue um there's a lot of scope there + +1568 +01:06:43,440 --> 01:06:45,920 +any other + +1569 +01:06:47,640 --> 01:06:54,480 +questions all right so move over to + +1570 +01:06:51,319 --> 01:06:56,119 +results so we can find steering vectors + +1571 +01:06:54,480 --> 01:06:58,520 +and that's and that's interesting thing + +1572 +01:06:56,119 --> 01:07:00,559 +um and we can find them pretty easily + +1573 +01:06:58,520 --> 01:07:02,559 +and for most sequences even sequences + +1574 +01:07:00,559 --> 01:07:04,559 +that the model hasn't seen before the + +1575 +01:07:02,559 --> 01:07:06,400 +underlying language model hasn't seen + +1576 +01:07:04,559 --> 01:07:09,640 +before + +1577 +01:07:06,400 --> 01:07:13,160 +um it also works for and this is kind of + +1578 +01:07:09,640 --> 01:07:16,799 +a negative but it also works for random + +1579 +01:07:13,160 --> 01:07:20,039 +sequences of very small length but it's + +1580 +01:07:16,799 --> 01:07:22,359 +harder to find so you can imagine if + +1581 +01:07:20,039 --> 01:07:24,760 +your uh steering Vector is basically a + +1582 +01:07:22,359 --> 01:07:26,279 +giant bulldozer it doesn't matter what + +1583 +01:07:24,760 --> 01:07:28,640 +your model is learning learned similar + +1584 +01:07:26,279 --> 01:07:30,160 +to the probe situation if you can + +1585 +01:07:28,640 --> 01:07:32,559 +compress all that information of that + +1586 +01:07:30,160 --> 01:07:35,400 +sequence into the vector you don't + +1587 +01:07:32,559 --> 01:07:37,400 +really need the language model um so + +1588 +01:07:35,400 --> 01:07:39,559 +there are cases when you're looking at + +1589 +01:07:37,400 --> 01:07:40,760 +sequences of length like five seven + +1590 +01:07:39,559 --> 01:07:43,079 +eight something like this you can + +1591 +01:07:40,760 --> 01:07:45,520 +uniformly 
sample from the vocabulary at + +1592 +01:07:43,079 --> 01:07:47,359 +random with replacement generate utter + +1593 +01:07:45,520 --> 01:07:49,799 +garbage and find steering vectors for + +1594 +01:07:47,359 --> 01:07:53,200 +them takes a little while but your model + +1595 +01:07:49,799 --> 01:07:55,520 +is complex enough that you can basically + +1596 +01:07:53,200 --> 01:07:57,960 +bulldo your model to be able to do this + +1597 +01:07:55,520 --> 01:08:00,200 +even if that sequence is incredibly low + +1598 +01:07:57,960 --> 01:08:01,480 +likelihood under the model but it works + +1599 +01:08:00,200 --> 01:08:05,319 +better for things that are higher + +1600 +01:08:01,480 --> 01:08:07,760 +likelihood under the model um + +1601 +01:08:05,319 --> 01:08:09,920 +predictably the I think the thing that + +1602 +01:08:07,760 --> 01:08:12,760 +surprised me the most was these steering + +1603 +01:08:09,920 --> 01:08:15,319 +vectors themselves have interpretable + +1604 +01:08:12,760 --> 01:08:17,960 +properties U so distances in steering + +1605 +01:08:15,319 --> 01:08:20,759 +Vector space reflect semantic similarity + +1606 +01:08:17,960 --> 01:08:23,640 +so if you have two sentences that are + +1607 +01:08:20,759 --> 01:08:26,719 +close um they're also close in steering + +1608 +01:08:23,640 --> 01:08:29,759 +Vector space that's kind of nice + +1609 +01:08:26,719 --> 01:08:32,359 +um it does better than for example the + +1610 +01:08:29,759 --> 01:08:34,520 +representations one would use for for + +1611 +01:08:32,359 --> 01:08:37,159 +probing so mean pooling Bert hidden + +1612 +01:08:34,520 --> 01:08:39,600 +States like we looked at before those do + +1613 +01:08:37,159 --> 01:08:42,080 +actually worse than steering vectors um + +1614 +01:08:39,600 --> 01:08:45,799 +just a bit + +1615 +01:08:42,080 --> 01:08:47,880 +surprising um style transfer is possible + +1616 +01:08:45,799 --> 01:08:49,719 +with simple Vector arithmetic so it' be + +1617 +01:08:47,880 --> 01:08:52,799 +nice to say that I have a sequence I + +1618 +01:08:49,719 --> 01:08:56,000 +want to subtract you know negativity and + +1619 +01:08:52,799 --> 01:08:58,799 +add positivity for for sentiment or + +1620 +01:08:56,000 --> 01:09:00,520 +other sorts of Styles um we can do this + +1621 +01:08:58,799 --> 01:09:02,159 +and we can do this reasonably well in + +1622 +01:09:00,520 --> 01:09:05,319 +steering VOR + +1623 +01:09:02,159 --> 01:09:07,920 +space um we can also decode from + +1624 +01:09:05,319 --> 01:09:10,600 +interpolations in the Laten space so you + +1625 +01:09:07,920 --> 01:09:12,759 +take two steering vectors for two + +1626 +01:09:10,600 --> 01:09:14,759 +sequences you look in the middle of them + +1627 +01:09:12,759 --> 01:09:17,400 +you linearly interpolate between them + +1628 +01:09:14,759 --> 01:09:20,600 +and you decode um if the space is kind + +1629 +01:09:17,400 --> 01:09:22,080 +of weirdly peaky then you would have + +1630 +01:09:20,600 --> 01:09:23,839 +issues and what you would generate is + +1631 +01:09:22,080 --> 01:09:25,080 +garbage and there's no guarantee that + +1632 +01:09:23,839 --> 01:09:27,199 +the space should be reasonable in + +1633 +01:09:25,080 --> 01:09:30,480 +between but it turns out it + +1634 +01:09:27,199 --> 01:09:33,719 +is um here's an example of one of these + +1635 +01:09:30,480 --> 01:09:36,359 +style transfer cases so very very simple + +1636 +01:09:33,719 --> 01:09:39,239 +easy easy sentence we found steering + +1637 +01:09:36,359 --> 01:09:41,679 +vectors for The Taste 
is excellent and + +1638 +01:09:39,239 --> 01:09:43,640 +and we took a sample of 100 positive + +1639 +01:09:41,679 --> 01:09:45,359 +sentences and 100 negative sentences + +1640 +01:09:43,640 --> 01:09:47,159 +found their steering vectors took the + +1641 +01:09:45,359 --> 01:09:48,960 +mean and thought that you know that + +1642 +01:09:47,159 --> 01:09:51,400 +looks like the positive concept steering + +1643 +01:09:48,960 --> 01:09:54,040 +Vector negative concept steering Vector + +1644 +01:09:51,400 --> 01:09:56,600 +we just did Vector arithmetic just did + +1645 +01:09:54,040 --> 01:09:59,880 +uh current steering + +1646 +01:09:56,600 --> 01:10:02,440 +Vector uh plus negative minus positive + +1647 +01:09:59,880 --> 01:10:03,520 +and we got the taste is unpleasant um + +1648 +01:10:02,440 --> 01:10:06,960 +and + +1649 +01:10:03,520 --> 01:10:08,880 +similarly um in the reverse + +1650 +01:10:06,960 --> 01:10:12,520 +directions it turns out that the + +1651 +01:10:08,880 --> 01:10:15,199 +magnitude matters because um for every + +1652 +01:10:12,520 --> 01:10:17,800 +single sequence there's kind of an end + +1653 +01:10:15,199 --> 01:10:20,640 +dimensional ball around that steering + +1654 +01:10:17,800 --> 01:10:23,640 +Vector that we found that also decodes + +1655 +01:10:20,640 --> 01:10:25,920 +that specific sequence and so that shows + +1656 +01:10:23,640 --> 01:10:28,880 +that the space is kind of reasonably + +1657 +01:10:25,920 --> 01:10:32,320 +well formed there's there's of course uh + +1658 +01:10:28,880 --> 01:10:34,280 +a lot of weird sort of areas um and so + +1659 +01:10:32,320 --> 01:10:37,120 +if you go poke around in steering Vector + +1660 +01:10:34,280 --> 01:10:38,760 +space and sort of try to sample from it + +1661 +01:10:37,120 --> 01:10:41,280 +eventually you'll find some weird edge + +1662 +01:10:38,760 --> 01:10:43,320 +cases and some garbage and repeated text + +1663 +01:10:41,280 --> 01:10:46,159 +and little things like + +1664 +01:10:43,320 --> 01:10:50,520 +this any questions here before I kind of + +1665 +01:10:46,159 --> 01:10:50,520 +Rapid Fire through the the last few + +1666 +01:10:50,920 --> 01:10:57,239 +things yeah like here + +1667 +01:10:57,400 --> 01:11:01,400 +yeah so we went uh Beyond this um there + +1668 +01:11:00,199 --> 01:11:04,280 +was + +1669 +01:11:01,400 --> 01:11:07,440 +so in in these specific experiments we + +1670 +01:11:04,280 --> 01:11:09,600 +looked at the middle of gpt2 um so this + +1671 +01:11:07,440 --> 01:11:12,679 +was like layer six layer seven and at + +1672 +01:11:09,600 --> 01:11:15,280 +the first time step we didn't do any um + +1673 +01:11:12,679 --> 01:11:17,239 +like magnitude scaling and so you can + +1674 +01:11:15,280 --> 01:11:19,480 +imagine if you put a giant Vector in + +1675 +01:11:17,239 --> 01:11:21,040 +there the models never the rest of the + +1676 +01:11:19,480 --> 01:11:24,679 +model has never seen something of that + +1677 +01:11:21,040 --> 01:11:26,159 +magnitude so it's now in a weird State + +1678 +01:11:24,679 --> 01:11:28,280 +and it's just going to break so if you + +1679 +01:11:26,159 --> 01:11:30,560 +put this to like I don't know 500 or + +1680 +01:11:28,280 --> 01:11:32,960 +something like this it break it just has + +1681 +01:11:30,560 --> 01:11:35,239 +no idea it's like basically like telling + +1682 +01:11:32,960 --> 01:11:37,199 +the rest your model hey it's like a + +1683 +01:11:35,239 --> 01:11:38,760 +completely untrained model be it looks + +1684 +01:11:37,199 --> 01:11:42,000 
+similar to like random performance you
+
+1685
+01:11:38,760 --> 01:11:43,840
+get repeats and things like this smaller
+
+1686
+01:11:42,000 --> 01:11:45,800
+you end up staying in this ball for the
+
+1687
+01:11:43,840 --> 01:11:47,920
+sequence two seemed pretty
+
+1688
+01:11:45,800 --> 01:11:50,199
+reasonable but we didn't spend a lot of
+
+1689
+01:11:47,920 --> 01:11:53,560
+time just like the day before the paper
+
+1690
+01:11:50,199 --> 01:11:56,600
+was due we said two seems reasonable we
+
+1691
+01:11:53,560 --> 01:11:59,159
+went to three we went to five 10 broke
+
+1692
+01:11:56,600 --> 01:12:01,199
+five somewhat broke two seems
+
+1693
+01:11:59,159 --> 01:12:03,440
+reasonable
+
+1694
+01:12:01,199 --> 01:12:06,400
+um decent findings
+
+1695
+01:12:03,440 --> 01:12:08,639
+hopefully um cool so I'll talk about uh
+
+1696
+01:12:06,400 --> 01:12:10,920
+a similar type of work uh that came out
+
+1697
+01:12:08,639 --> 01:12:13,000
+more recently on inference time
+
+1698
+01:12:10,920 --> 01:12:14,159
+intervention so basically they use some
+
+1699
+01:12:13,000 --> 01:12:16,719
+of the ideas that we talked about
+
+1700
+01:12:14,159 --> 01:12:18,840
+earlier they use linear probes um to
+
+1701
+01:12:16,719 --> 01:12:20,560
+find attention heads that correspond to a
+
+1702
+01:12:18,840 --> 01:12:23,600
+desired attribute they did this for
+
+1703
+01:12:20,560 --> 01:12:26,440
+TruthfulQA so uh their hope was to find
+
+1704
+01:12:23,600 --> 01:12:28,639
+truthful directions in latent space
+
+1705
+01:12:26,440 --> 01:12:31,639
+um and then they shifted the attention
+
+1706
+01:12:28,639 --> 01:12:33,199
+head activations um during inference
+
+1707
+01:12:31,639 --> 01:12:35,280
+along the directions determined by the
+
+1708
+01:12:33,199 --> 01:12:38,280
+probes um so what this kind of looks
+
+1709
+01:12:35,280 --> 01:12:40,280
+like is you take your attention heads
+
+1710
+01:12:38,280 --> 01:12:42,440
+you probe them so you stick a classifier on
+
+1711
+01:12:40,280 --> 01:12:44,360
+top um this classifier learns to
+
+1712
+01:12:42,440 --> 01:12:47,679
+disentangle sort of truthful and
+
+1713
+01:12:44,360 --> 01:12:50,239
+untruthful and now you
+
+1714
+01:12:47,679 --> 01:12:52,080
+have a hyperplane and then you can move
+
+1715
+01:12:50,239 --> 01:12:54,320
+orthogonally to this hyperplane in the
+
+1716
+01:12:52,080 --> 01:12:55,920
+direction depending on which way you
+
+1717
+01:12:54,320 --> 01:12:58,080
+want to shift so if you want to move
+
+1718
+01:12:55,920 --> 01:13:02,040
+towards truthful you can move in that
+
+1719
+01:12:58,080 --> 01:13:04,400
+direction or away um and they do this
+
+1720
+01:13:02,040 --> 01:13:07,560
+it works pretty well um I think they do
+
+1721
+01:13:04,400 --> 01:13:09,679
+this for a GPT model and maybe a LLaMA
+
+1722
+01:13:07,560 --> 01:13:12,960
+model um but I can't remember the
+
+1723
+01:13:09,679 --> 01:13:15,960
+exact details um and it's a similar
+
+1724
+01:13:12,960 --> 01:13:21,040
+intervention um they basically add this
+
+1725
+01:13:15,960 --> 01:13:23,400
+vector um that they found and they
+
+1726
+01:13:21,040 --> 01:13:25,679
+have a little note on scaling where
+
+1727
+01:13:23,400 --> 01:13:27,719
+they say if the magnitude of
+
+1728
+01:13:25,679 --> 01:13:30,000
+the thing is too much things break so
+
+1729
+01:13:27,719 --> 01:13:33,880
+they do a hyperparameter
+
+1730
+01:13:30,000 --> 01:13:36,800
+search for the
sort of magnitude of + +1731 +01:13:33,880 --> 01:13:38,840 +activation um but it's sort of a very + +1732 +01:13:36,800 --> 01:13:41,520 +similar approach to what we did but this + +1733 +01:13:38,840 --> 01:13:43,040 +focuses on specific attention heads and + +1734 +01:13:41,520 --> 01:13:44,440 +they don't do this for all the attention + +1735 +01:13:43,040 --> 01:13:46,600 +heads so back to like your question + +1736 +01:13:44,440 --> 01:13:49,080 +earlier do attention heads specialize it + +1737 +01:13:46,600 --> 01:13:52,360 +seems like they do and so there are many + +1738 +01:13:49,080 --> 01:13:54,320 +of them that uh have like no probing + +1739 +01:13:52,360 --> 01:13:57,719 +accuracy or limited probing accuracy and + +1740 +01:13:54,320 --> 01:13:59,400 +actually um are like distractors for the + +1741 +01:13:57,719 --> 01:14:03,400 +CH FL + +1742 +01:13:59,400 --> 01:14:03,400 +Direction any questions + +1743 +01:14:06,040 --> 01:14:11,760 +here cool so more activation + +1744 +01:14:09,120 --> 01:14:14,760 +manipulation so there's uh some work + +1745 +01:14:11,760 --> 01:14:17,600 +recently on contrastive steering vectors + +1746 +01:14:14,760 --> 01:14:19,480 +so the way we did this like sentiment + +1747 +01:14:17,600 --> 01:14:21,080 +steering was we had some positive + +1748 +01:14:19,480 --> 01:14:23,040 +sentences some negative sentences they + +1749 +01:14:21,080 --> 01:14:24,520 +weren't tied together in any reasonable + +1750 +01:14:23,040 --> 01:14:26,360 +way we found their steering vectors + +1751 +01:14:24,520 --> 01:14:30,040 +separately you could imagine the case + +1752 +01:14:26,360 --> 01:14:33,159 +and maybe a more useful case um with two + +1753 +01:14:30,040 --> 01:14:36,280 +prompts that um you can design that go + +1754 +01:14:33,159 --> 01:14:38,000 +two different ways you can sort of find + +1755 +01:14:36,280 --> 01:14:42,280 +their representations and do the + +1756 +01:14:38,000 --> 01:14:45,679 +manipulation the differences here um + +1757 +01:14:42,280 --> 01:14:48,800 +like individually rather than um for a + +1758 +01:14:45,679 --> 01:14:52,400 +whole concept or a whole attribute and + +1759 +01:14:48,800 --> 01:14:54,400 +the value here is your context is um + +1760 +01:14:52,400 --> 01:14:56,600 +preserved so if you're doing something + +1761 +01:14:54,400 --> 01:14:58,239 +like you know you're doing retrieval + +1762 +01:14:56,600 --> 01:15:00,440 +based things now you have some sort of + +1763 +01:14:58,239 --> 01:15:03,360 +document and then you have a question if + +1764 +01:15:00,440 --> 01:15:05,040 +your question sort of uh if you want to + +1765 +01:15:03,360 --> 01:15:07,560 +ask it in two different ways for two + +1766 +01:15:05,040 --> 01:15:08,880 +different things this would be a much + +1767 +01:15:07,560 --> 01:15:11,239 +better approach if you want to use + +1768 +01:15:08,880 --> 01:15:14,600 +steering vectors than the stuff I was + +1769 +01:15:11,239 --> 01:15:16,159 +doing um and it seems to work a little + +1770 +01:15:14,600 --> 01:15:17,880 +bit better they didn't compare against + +1771 +01:15:16,159 --> 01:15:19,400 +our our things because it's not like an + +1772 +01:15:17,880 --> 01:15:21,880 +Apples to Apples comparison but it seems + +1773 +01:15:19,400 --> 01:15:23,960 +to work better and be more General um + +1774 +01:15:21,880 --> 01:15:25,560 +and be more + +1775 +01:15:23,960 --> 01:15:27,840 +useful + +1776 +01:15:25,560 --> 01:15:27,840 +any + +1777 +01:15:31,400 --> 01:15:37,679 +questions cool so what 
can model
+
+1778
+01:15:35,080 --> 01:15:40,080
+interpretability give us these
+
+1779
+01:15:37,679 --> 01:15:41,960
+are my concluding remarks so hopefully
+
+1780
+01:15:40,080 --> 01:15:43,920
+we get a better understanding of how
+
+1781
+01:15:41,960 --> 01:15:46,840
+language models work their
+
+1782
+01:15:43,920 --> 01:15:49,520
+internals their structure um we get to
+
+1783
+01:15:46,840 --> 01:15:52,800
+understand uh kind of why they do really
+
+1784
+01:15:49,520 --> 01:15:55,239
+well this is still very
+
+1785
+01:15:52,800 --> 01:15:57,320
+unclear um and hopefully we find
+
+1786
+01:15:55,239 --> 01:15:59,400
+lightweight methods to control and steer
+
+1787
+01:15:57,320 --> 01:16:03,360
+models as models become more and more
+
+1788
+01:15:59,400 --> 01:16:05,280
+useful um and impact more users
+
+1789
+01:16:03,360 --> 01:16:09,360
+we need better ways to control and steer
+
+1790
+01:16:05,280 --> 01:16:13,120
+them um and it's unclear how much
+
+1791
+01:16:09,360 --> 01:16:15,360
+industry will devote to these things um
+
+1792
+01:16:13,120 --> 01:16:18,080
+so it might be the role of academia to
+
+1793
+01:16:15,360 --> 01:16:21,239
+do more science in order to figure
+
+1794
+01:16:18,080 --> 01:16:23,920
+out how to control and steer these
+
+1795
+01:16:21,239 --> 01:16:25,520
+better um and hopefully we can also find
+
+1796
+01:16:23,920 --> 01:16:29,199
+potential alternatives or
+
+1797
+01:16:25,520 --> 01:16:34,840
+complementary methods to do alignment
+
+1798
+01:16:29,199 --> 01:16:37,480
+um RLHF is kind of expensive um and if
+
+1799
+01:16:34,840 --> 01:16:40,080
+we could do this with limited data and
+
+1800
+01:16:37,480 --> 01:16:42,760
+um exploit structure um and information
+
+1801
+01:16:40,080 --> 01:16:46,400
+that's already in the model more so than
+
+1802
+01:16:42,760 --> 01:16:48,600
+these methods um maybe we can
+
+1803
+01:16:46,400 --> 01:16:50,920
+align them better and these things don't
+
+1804
+01:16:48,600 --> 01:16:52,480
+have to be uh alternatives they can be
+
+1805
+01:16:50,920 --> 01:16:53,840
+complementary to
+
+1806
+01:16:52,480 --> 01:16:57,159
+[Music]
+
+1807
+01:16:53,840 --> 01:17:00,040
+RLHF um here's some resources this is an
+
+1808
+01:16:57,159 --> 01:17:01,280
+extremely incomplete group but here are
+
+1809
+01:17:00,040 --> 01:17:04,080
+some folks that work on model
+
+1810
+01:17:01,280 --> 01:17:07,040
+interpretability there's many of these
+
+1811
+01:17:04,080 --> 01:17:09,120
+um I cited some work from some of
+
+1812
+01:17:07,040 --> 01:17:11,280
+these teams but um there's a lot of
+
+1813
+01:17:09,120 --> 01:17:13,280
+people working on it and in the last
+
+1814
+01:17:11,280 --> 01:17:15,040
+like year there's been kind of an
+
+1815
+01:17:13,280 --> 01:17:17,480
+explosion especially in the mechanistic
+
+1816
+01:17:15,040 --> 01:17:21,639
+interpretability kind of world um Sasha
+
+1817
+01:17:17,480 --> 01:17:23,800
+Rush had a recent tweet that uh asked
+
+1818
+01:17:21,639 --> 01:17:25,320
+like prospective grad students what is
+
+1819
+01:17:23,800 --> 01:17:27,239
+the topic that they're most excited
+
+1820
+01:17:25,320 --> 01:17:29,880
+about and mechanistic interpretability
+
+1821
+01:17:27,239 --> 01:17:33,960
+was a thing that seemed to have won out
+
+1822
+01:17:29,880 --> 01:17:37,040
+um so I encourage you to kind of dive
+
+1823
+01:17:33,960 --> 01:17:38,719
+into this literature and read
some of + +1824 +01:17:37,040 --> 01:17:41,679 +the papers if you're if you're excited + +1825 +01:17:38,719 --> 01:17:45,199 +about it and yeah thanks for your + +1826 +01:17:41,679 --> 01:17:45,199 +attention and that's all I + +1827 +01:17:45,400 --> 01:17:48,400 +have \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (13) Debugging and Interpretation/transcript.vtt b/CMU Advanced NLP 2024 (13) Debugging and Interpretation/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..983ef7e0c0886d236749a97aebb9bf60c59a771a --- /dev/null +++ b/CMU Advanced NLP 2024 (13) Debugging and Interpretation/transcript.vtt @@ -0,0 +1,5482 @@ +WEBVTT + +00:00:00.919 --> 00:00:05.879 +so in my slides here I'm going to talk + +00:00:03.760 --> 00:00:10.040 +about debugging and understanding NLP + +00:00:05.879 --> 00:00:12.400 +models and this is how to tell uh when + +00:00:10.040 --> 00:00:14.759 +for example both your implementations + +00:00:12.400 --> 00:00:17.320 +are wrong and uh for example your + +00:00:14.759 --> 00:00:19.000 +underlying assumptions are wrong or your + +00:00:17.320 --> 00:00:21.240 +model is failing on particular segments + +00:00:19.000 --> 00:00:23.439 +of data or stuff like that so going to + +00:00:21.240 --> 00:00:26.160 +go through uh a variety of things that + +00:00:23.439 --> 00:00:29.000 +can go wrong with your experiments + +00:00:26.160 --> 00:00:31.679 +basically so a typical situation is + +00:00:29.000 --> 00:00:33.399 +you've implemented some NLP system you + +00:00:31.679 --> 00:00:35.840 +know based on neural networks of course + +00:00:33.399 --> 00:00:36.920 +because that's what we use nowadays um + +00:00:35.840 --> 00:00:40.000 +and you've looked at the code it + +00:00:36.920 --> 00:00:42.000 +basically looks okay um but it has low + +00:00:40.000 --> 00:00:44.559 +accuracy or it makes incomprehensible + +00:00:42.000 --> 00:00:45.680 +errors and you would like to uh fix + +00:00:44.559 --> 00:00:47.440 +these or you'd like to improve the + +00:00:45.680 --> 00:00:49.120 +accuracy or something like that and so + +00:00:47.440 --> 00:00:52.000 +what do I + +00:00:49.120 --> 00:00:53.680 +do and I think there's three dimensions + +00:00:52.000 --> 00:00:56.239 +of how you can understand your model and + +00:00:53.680 --> 00:00:57.960 +your Model Behavior um the first one is + +00:00:56.239 --> 00:01:00.199 +debugging the implementation so it's + +00:00:57.960 --> 00:01:03.760 +identifying problems that you have when + +00:01:00.199 --> 00:01:05.880 +you uh implemented something uh second + +00:01:03.760 --> 00:01:07.759 +thing is actionable evaluation so + +00:01:05.880 --> 00:01:09.799 +identifying typical error cases and how + +00:01:07.759 --> 00:01:11.840 +you what you can do to fix them and + +00:01:09.799 --> 00:01:13.720 +finally uh interpreting predictions or + +00:01:11.840 --> 00:01:18.080 +interpreting what's happening inside the + +00:01:13.720 --> 00:01:19.920 +model and uh this can maybe give you a + +00:01:18.080 --> 00:01:21.520 +deeper idea about what's happening in + +00:01:19.920 --> 00:01:22.720 +happening in individual cases and + +00:01:21.520 --> 00:01:25.240 +there's a lot of reasons why you might + +00:01:22.720 --> 00:01:27.920 +want to do that uh both like to make + +00:01:25.240 --> 00:01:30.280 +your models better and also for example + +00:01:27.920 --> 00:01:31.840 +if you want to be sure that your ition + +00:01:30.280 --> 00:01:34.840 +isn't doing something illegal like + 
+00:01:31.840 --> 00:01:36.439 +discriminating against people uh due to + +00:01:34.840 --> 00:01:38.680 +protected attributes or other things + +00:01:36.439 --> 00:01:41.399 +like that so um there's a number of + +00:01:38.680 --> 00:01:42.920 +reasons why you'd want to do that so I'm + +00:01:41.399 --> 00:01:44.399 +going to talk about the first two and + +00:01:42.920 --> 00:01:48.840 +Nishant is mainly going to talk about + +00:01:44.399 --> 00:01:52.000 +the second one so uh going right into + +00:01:48.840 --> 00:01:55.159 +it so in neural network models uh + +00:01:52.000 --> 00:01:58.880 +debugging is really important because + +00:01:55.159 --> 00:02:00.920 +they're opaque they're unpredictable and + +00:01:58.880 --> 00:02:03.119 +uh if you make little mistakes they can + +00:02:00.920 --> 00:02:05.439 +cause big problems with your + +00:02:03.119 --> 00:02:07.399 +output and another thing is that + +00:02:05.439 --> 00:02:09.640 +everything is a hyperparameter including + +00:02:07.399 --> 00:02:11.239 +your network size your model variations + +00:02:09.640 --> 00:02:14.440 +your batch size your strategy your + +00:02:11.239 --> 00:02:18.120 +Optimizer and your learning rate + +00:02:14.440 --> 00:02:19.560 +and finally unlike kind of more + +00:02:18.120 --> 00:02:21.200 +traditional machine learning methods + +00:02:19.560 --> 00:02:23.000 +like logistic progression or support + +00:02:21.200 --> 00:02:25.160 +Vector machines or something like that + +00:02:23.000 --> 00:02:27.879 +you might that you might have studied in + +00:02:25.160 --> 00:02:30.160 +your machine learning class um + +00:02:27.879 --> 00:02:32.599 +stochastic optimization has no guarantee + +00:02:30.160 --> 00:02:34.239 +about convergence um your loss might go + +00:02:32.599 --> 00:02:35.720 +down then it might go up and there might + +00:02:34.239 --> 00:02:38.120 +be absolutely nothing wrong with your + +00:02:35.720 --> 00:02:40.200 +training or it might be you know a + +00:02:38.120 --> 00:02:42.319 +serious problem so that's another issue + +00:02:40.200 --> 00:02:45.440 +you need to deal + +00:02:42.319 --> 00:02:48.800 +with so first I'd like to go into + +00:02:45.440 --> 00:02:51.400 +possible causes of problems with your + +00:02:48.800 --> 00:02:53.440 +implementation and I'm going to break + +00:02:51.400 --> 00:02:55.040 +them down into a typology and based on + +00:02:53.440 --> 00:02:57.040 +what part of the typology you're running + +00:02:55.040 --> 00:02:59.200 +into problems with you will need to fix + +00:02:57.040 --> 00:03:00.800 +them in different ways so your first + +00:02:59.200 --> 00:03:02.599 +goal when you're experiencing the + +00:03:00.800 --> 00:03:04.720 +problem is identifying why you're + +00:03:02.599 --> 00:03:06.400 +experiencing the problem uh because that + +00:03:04.720 --> 00:03:08.760 +will lead you to a + +00:03:06.400 --> 00:03:10.440 +solution so for training time problems + +00:03:08.760 --> 00:03:12.560 +there's a bunch of uh things that could + +00:03:10.440 --> 00:03:14.360 +be wrong uh the first is a lack of model + +00:03:12.560 --> 00:03:16.280 +capacity so your model is not able to + +00:03:14.360 --> 00:03:18.599 +model the phenomena that you want to + +00:03:16.280 --> 00:03:20.000 +model in the first place um you could + +00:03:18.599 --> 00:03:22.080 +have a poor training + +00:03:20.000 --> 00:03:24.920 +algorithm uh you could just have a bug + +00:03:22.080 --> 00:03:27.080 +in your code at training time another + +00:03:24.920 --> 
00:03:29.319 +thing is uh test time problems and these + +00:03:27.080 --> 00:03:30.599 +can include a disconnect between what + +00:03:29.319 --> 00:03:33.040 +you're doing at training time and what + +00:03:30.599 --> 00:03:35.640 +you're testing at testing time uh + +00:03:33.040 --> 00:03:37.959 +failure of search + +00:03:35.640 --> 00:03:39.920 +algorithms and another thing you want to + +00:03:37.959 --> 00:03:41.360 +deal with is overfitting so you're + +00:03:39.920 --> 00:03:44.319 +actually doing well on the training set + +00:03:41.360 --> 00:03:48.360 +but you're doing poorly on the test + +00:03:44.319 --> 00:03:50.400 +Set uh finally you could have um optimiz + +00:03:48.360 --> 00:03:52.640 +a mismatch between the function you're + +00:03:50.400 --> 00:03:54.920 +optimizing at evaluation time and uh + +00:03:52.640 --> 00:03:56.519 +what you're actually evaluating sorry + +00:03:54.920 --> 00:03:58.079 +the fun the function that you're + +00:03:56.519 --> 00:04:01.079 +optimizing at training time and what + +00:03:58.079 --> 00:04:03.720 +you're actually evaluating at test time + +00:04:01.079 --> 00:04:05.280 +and my my best piece of advice for + +00:04:03.720 --> 00:04:07.959 +figuring out why things are going wrong + +00:04:05.280 --> 00:04:11.040 +is don't uh try to do all of them at + +00:04:07.959 --> 00:04:12.560 +once and rather uh start from the top + +00:04:11.040 --> 00:04:15.239 +and work it down because the ones at the + +00:04:12.560 --> 00:04:17.600 +top are often easier to uh diagnose and + +00:04:15.239 --> 00:04:20.680 +the ones at the + +00:04:17.600 --> 00:04:23.000 +bottom so looking at how you can debug + +00:04:20.680 --> 00:04:25.919 +systems at training time uh there's a + +00:04:23.000 --> 00:04:27.360 +number of ways you can do this uh but + +00:04:25.919 --> 00:04:30.039 +the most important thing for training + +00:04:27.360 --> 00:04:33.479 +time uh issues is looking at the loss + +00:04:30.039 --> 00:04:36.759 +function calculated on the training set + +00:04:33.479 --> 00:04:38.960 +and what I mean by this is don't look uh + +00:04:36.759 --> 00:04:41.240 +we talked about how we can't optimize + +00:04:38.960 --> 00:04:45.039 +error or accuracy easily so instead we + +00:04:41.240 --> 00:04:47.120 +optimize likelihood um and so you might + +00:04:45.039 --> 00:04:49.080 +want to look at accuracy to see whether + +00:04:47.120 --> 00:04:50.759 +your model is working well but I would + +00:04:49.080 --> 00:04:53.039 +urge you first to look at your + +00:04:50.759 --> 00:04:55.080 +likelihood or your loss function on the + +00:04:53.039 --> 00:04:57.000 +training set instead of your accuracy on + +00:04:55.080 --> 00:04:58.479 +the test set for example to diagnose + +00:04:57.000 --> 00:05:00.600 +these variety of + +00:04:58.479 --> 00:05:02.919 +problems and the sorts of things you + +00:05:00.600 --> 00:05:05.840 +want to look at are um is the loss + +00:05:02.919 --> 00:05:10.639 +function going down so is it you know + +00:05:05.840 --> 00:05:14.199 +converging into a good place + +00:05:10.639 --> 00:05:16.280 +um in general if this is your your + +00:05:14.199 --> 00:05:18.600 +loss um the first thing you should know + +00:05:16.280 --> 00:05:20.440 +is like what is a good loss uh in most + +00:05:18.600 --> 00:05:22.280 +cases a good loss is zero like log + +00:05:20.440 --> 00:05:26.280 +likelihood the best loss you can achieve + +00:05:22.280 --> 00:05:28.639 +is zero so you have zero down here um + +00:05:26.280 --> 00:05:31.639 
+something + +00:05:28.639 --> 00:05:31.639 +like + +00:05:31.919 --> 00:05:36.680 +this is uh essentially a good loss + +00:05:38.080 --> 00:05:43.120 +function something like that uh + +00:05:41.360 --> 00:05:45.120 +especially if this is a relatively High + +00:05:43.120 --> 00:05:47.759 +number is usually a bad loss + +00:05:45.120 --> 00:05:50.319 +function + +00:05:47.759 --> 00:05:52.680 +um something like that on your training + +00:05:50.319 --> 00:05:54.240 +set is a very bad loss function uh + +00:05:52.680 --> 00:05:55.840 +something something is going seriously + +00:05:54.240 --> 00:05:57.960 +wrong if you see this on your Dev set + +00:05:55.840 --> 00:05:59.800 +that could be or your test set that + +00:05:57.960 --> 00:06:01.199 +could be uh overfitting but but if + +00:05:59.800 --> 00:06:03.440 +you're seeing that on your training set + +00:06:01.199 --> 00:06:05.759 +that's usually symptomatic of a problem + +00:06:03.440 --> 00:06:09.160 +so uh these are uh things that you + +00:06:05.759 --> 00:06:10.960 +should be uh knowing um is it going down + +00:06:09.160 --> 00:06:13.520 +basically to zero if you run training + +00:06:10.960 --> 00:06:16.000 +long enough um for many epochs over your + +00:06:13.520 --> 00:06:17.479 +training data so if it's not going down + +00:06:16.000 --> 00:06:20.599 +to zero and it's sticking up here then + +00:06:17.479 --> 00:06:20.599 +that's also an + +00:06:21.120 --> 00:06:25.759 +issue and um if it's not going down to + +00:06:23.840 --> 00:06:27.919 +close to zero on whatever training set + +00:06:25.759 --> 00:06:30.199 +you're training on um let's say you make + +00:06:27.919 --> 00:06:31.840 +your training set extremely small + +00:06:30.199 --> 00:06:33.319 +uh at least in that case it should go + +00:06:31.840 --> 00:06:34.960 +down to zero otherwise you might have a + +00:06:33.319 --> 00:06:37.199 +serious problem in your + +00:06:34.960 --> 00:06:39.240 +implementation so these are good things + +00:06:37.199 --> 00:06:41.960 +to check first when you're training a + +00:06:39.240 --> 00:06:45.199 +model um and there's a number of reasons + +00:06:41.960 --> 00:06:47.759 +why this might not be helping or why + +00:06:45.199 --> 00:06:50.880 +this might not be happening so um your + +00:06:47.759 --> 00:06:53.120 +Mo model might be too weak and so in + +00:06:50.880 --> 00:06:55.440 +general larger models tend to perform + +00:06:53.120 --> 00:06:58.000 +better uh especially if you're using a + +00:06:55.440 --> 00:06:59.800 +pre-trained model and um this is just an + +00:06:58.000 --> 00:07:03.800 +example from the T5 paper where they + +00:06:59.800 --> 00:07:06.680 +scale up the T5 model um from a + +00:07:03.800 --> 00:07:09.319 +relatively small model to what at the + +00:07:06.680 --> 00:07:12.199 +time was a very large model of 11 + +00:07:09.319 --> 00:07:14.360 +billion parameters now this is you know + +00:07:12.199 --> 00:07:17.479 +a moderately sized model or maybe even + +00:07:14.360 --> 00:07:20.879 +small model by some standards but anyway + +00:07:17.479 --> 00:07:23.800 +you can see that it uh in continues to + +00:07:20.879 --> 00:07:26.479 +increase one really interesting + +00:07:23.800 --> 00:07:30.080 +phenomenon is uh that actually larger + +00:07:26.479 --> 00:07:33.879 +models can learn faster or at least with + +00:07:30.080 --> 00:07:36.680 +fewer steps than uh smaller + +00:07:33.879 --> 00:07:40.199 +models and so this + +00:07:36.680 --> 00:07:42.240 +is an interesting example this paper uh + 
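(Editor's aside: the advice above, that the training loss should go essentially to zero on a very small training set or there is likely a bug, is easy to turn into a sanity check. Here is a minimal sketch assuming a generic PyTorch classifier trained with cross-entropy; the toy model, data shapes, step count, and loss threshold are placeholders, not values from the lecture.)

```python
# Minimal "overfit a tiny training set" sanity check. If the loss on a
# handful of examples does not approach zero, suspect an implementation
# bug (masking, padding, shifted targets) or insufficient model capacity.
import torch
import torch.nn as nn

def overfit_tiny_batch(model, x, y, steps=1000, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    loss = None
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)   # negative log likelihood; best is ~0
        loss.backward()
        opt.step()
    return loss.item()

# toy classifier and a tiny random batch, standing in for a real model/data
toy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
final_loss = overfit_tiny_batch(toy, x, y)
assert final_loss < 0.05, f"loss stuck at {final_loss:.3f}: likely a bug"
```

(The point of the check is to separate implementation bugs from data and capacity issues: any reasonable model should be able to memorize eight examples, so a loss that plateaus high here is almost always a code problem.)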
+00:07:40.199 --> 00:07:43.919 +on neural scaling was it's a very + +00:07:42.240 --> 00:07:48.000 +influential paper but basically what + +00:07:43.919 --> 00:07:51.000 +they show is the darker purple ones are + +00:07:48.000 --> 00:07:54.599 +smaller models the yellow ones are + +00:07:51.000 --> 00:07:57.159 +bigger models and what you can see here + +00:07:54.599 --> 00:07:59.639 +is the purple model and on the left side + +00:07:57.159 --> 00:08:02.120 +they have the number of tokens processed + +00:07:59.639 --> 00:08:05.759 +the right side they have the number of + +00:08:02.120 --> 00:08:08.159 +uh compute or the amount of compute um + +00:08:05.759 --> 00:08:10.080 +and so what you can see is if you just + +00:08:08.159 --> 00:08:12.240 +look at the number of tokens processed + +00:08:10.080 --> 00:08:14.280 +the larger the model the faster it + +00:08:12.240 --> 00:08:17.720 +converges which + +00:08:14.280 --> 00:08:21.400 +is maybe a little bit surprising maybe a + +00:08:17.720 --> 00:08:22.680 +little bit you or maybe uh like some + +00:08:21.400 --> 00:08:24.879 +people have the intuition that this + +00:08:22.680 --> 00:08:26.440 +should be the case but when I first saw + +00:08:24.879 --> 00:08:27.759 +this I found it a little bit surprising + +00:08:26.440 --> 00:08:29.000 +because I thought it would be so large + +00:08:27.759 --> 00:08:29.960 +and noisy that the model would have + +00:08:29.000 --> 00:08:32.320 +trouble fit + +00:08:29.960 --> 00:08:34.200 +you know fitting the data as quickly but + +00:08:32.320 --> 00:08:36.200 +there's actually a good reason for this + +00:08:34.200 --> 00:08:37.240 +does anyone have a guess about why this + +00:08:36.200 --> 00:08:39.719 +is + +00:08:37.240 --> 00:08:41.240 +thee we've talked a little bit about the + +00:08:39.719 --> 00:08:44.120 +underlying phenomena for this in + +00:08:41.240 --> 00:08:48.360 +previous classes so you might be able to + +00:08:44.120 --> 00:08:48.360 +think back to some of the things you + +00:08:50.480 --> 00:08:56.040 +yeah yeah so um just to repeat there's a + +00:08:54.160 --> 00:08:57.720 +lot of different parameters so it can + +00:08:56.040 --> 00:08:59.880 +try to converge along a lot of different + +00:08:57.720 --> 00:09:01.920 +dimensions so if we think back to the + +00:08:59.880 --> 00:09:04.079 +like model pruning class and other stuff + +00:09:01.920 --> 00:09:06.640 +like that um part of the reason why we + +00:09:04.079 --> 00:09:08.000 +can prune large models so efficiently is + +00:09:06.640 --> 00:09:10.200 +because only like a small number of the + +00:09:08.000 --> 00:09:12.440 +parameters are actually useful and so if + +00:09:10.200 --> 00:09:15.120 +you start out with a much larger model + +00:09:12.440 --> 00:09:17.720 +it's more likely to have useful subsets + +00:09:15.120 --> 00:09:20.320 +of the parameters basically um which is + +00:09:17.720 --> 00:09:21.560 +called the lottery ticket hypothesis uh + +00:09:20.320 --> 00:09:23.839 +there there's a famous paper called the + +00:09:21.560 --> 00:09:27.560 +lottery ticket hypothesis examines this + +00:09:23.839 --> 00:09:29.680 +phenomenon so um one one interesting + +00:09:27.560 --> 00:09:32.160 +thing is you can see that even if you + +00:09:29.680 --> 00:09:35.640 +scale up the compute even if you measure + +00:09:32.160 --> 00:09:37.640 +based on compute the uh larger models + +00:09:35.640 --> 00:09:38.959 +eventually surpass the smaller models in + +00:09:37.640 --> 00:09:41.920 +terms of how efficient they 
are at + +00:09:38.959 --> 00:09:44.680 +modeling the data and that's just + +00:09:41.920 --> 00:09:46.760 +because models tend to learn well for a + +00:09:44.680 --> 00:09:49.560 +while and then they basically reach + +00:09:46.760 --> 00:09:51.760 +their capacity and stop learning well or + +00:09:49.560 --> 00:09:53.680 +they start learning very slowly and once + +00:09:51.760 --> 00:09:57.120 +you get to that point the larger models + +00:09:53.680 --> 00:09:58.800 +work better so there's a kind of + +00:09:57.120 --> 00:10:00.640 +counterintuitive thing that if you want + +00:09:58.800 --> 00:10:04.160 +to train faster you actually can train a + +00:10:00.640 --> 00:10:06.839 +larger model and uh that will that will + +00:10:04.160 --> 00:10:08.000 +uh get you to a good solution at some + +00:10:06.839 --> 00:10:09.640 +point that will get you to a good + +00:10:08.000 --> 00:10:11.120 +solution faster than a smaller model + +00:10:09.640 --> 00:10:15.200 +would you know of course you need memory + +00:10:11.120 --> 00:10:15.200 +and stuff but why are looking + +00:10:20.040 --> 00:10:26.920 +at so this is test loss training loss + +00:10:22.760 --> 00:10:30.680 +also looks like this um I think on + +00:10:26.920 --> 00:10:34.360 +this particular + +00:10:30.680 --> 00:10:37.519 +on this particular paper they never + +00:10:34.360 --> 00:10:39.399 +repeated data and if you never repeat + +00:10:37.519 --> 00:10:42.560 +data actually your training loss looks + +00:10:39.399 --> 00:10:44.680 +very similar to your test loss because + +00:10:42.560 --> 00:10:46.079 +it if you like actually if you can + +00:10:44.680 --> 00:10:48.760 +assume your training data set and your + +00:10:46.079 --> 00:10:50.880 +test data set are um uh identically + +00:10:48.760 --> 00:10:52.279 +distributed your training loss of new + +00:10:50.880 --> 00:10:54.600 +training data should be exactly the same + +00:10:52.279 --> 00:10:55.959 +as your test loss so I think that's + +00:10:54.600 --> 00:10:57.760 +basically why they were justified in + +00:10:55.959 --> 00:11:01.000 +doing that good but they probably did + +00:10:57.760 --> 00:11:03.639 +test loss to like I swashed the concern + +00:11:01.000 --> 00:11:05.839 +that this was overfitting comp or + +00:11:03.639 --> 00:11:09.200 +something but good + +00:11:05.839 --> 00:11:11.279 +question um cool so the these are are + +00:11:09.200 --> 00:11:13.000 +good things to know um so basically if + +00:11:11.279 --> 00:11:14.839 +you see your model doing something like + +00:11:13.000 --> 00:11:16.279 +this um plateauing out maybe your + +00:11:14.839 --> 00:11:18.680 +model's too small and you need to tr a + +00:11:16.279 --> 00:11:20.920 +big + +00:11:18.680 --> 00:11:22.200 +basically another uh piece of trouble + +00:11:20.920 --> 00:11:26.800 +that you can have is trouble with + +00:11:22.200 --> 00:11:29.519 +optimization and basically um you should + +00:11:26.800 --> 00:11:31.600 +check your Optimizer um usually people + +00:11:29.519 --> 00:11:35.639 +are using atom variants nowadays like + +00:11:31.600 --> 00:11:37.839 +atom or atom W so just use that um + +00:11:35.639 --> 00:11:39.639 +learning rate uh so make sure that the + +00:11:37.839 --> 00:11:41.160 +learning rate you're using is standard + +00:11:39.639 --> 00:11:43.399 +for kind of the model size that you're + +00:11:41.160 --> 00:11:44.920 +using and the best way to do this is uh + +00:11:43.399 --> 00:11:46.000 +look at previous papers and see what + +00:11:44.920 --> 00:11:50.160 
+they're + +00:11:46.000 --> 00:11:51.680 +using um initialization most people + +00:11:50.160 --> 00:11:53.440 +nowadays will not be training from + +00:11:51.680 --> 00:11:55.440 +scratch but if you are training from + +00:11:53.440 --> 00:11:58.040 +scratch how you initialize your model is + +00:11:55.440 --> 00:11:59.399 +really important and normally the way + +00:11:58.040 --> 00:12:03.320 +you do this is you do this with some + +00:11:59.399 --> 00:12:05.079 +sort of uniform random noise and uh + +00:12:03.320 --> 00:12:06.959 +specifically you can pick the uniform + +00:12:05.079 --> 00:12:08.800 +random noise in intelligent ways based + +00:12:06.959 --> 00:12:12.240 +on the the data size which I'll talk + +00:12:08.800 --> 00:12:13.920 +about in a second um also mini batching + +00:12:12.240 --> 00:12:15.639 +um are you using sufficiently large + +00:12:13.920 --> 00:12:17.480 +batches of data if you're using small + +00:12:15.639 --> 00:12:18.720 +batches of data you might have too much + +00:12:17.480 --> 00:12:21.279 +noise in your training and it might + +00:12:18.720 --> 00:12:23.839 +diverge so uh these are things you need + +00:12:21.279 --> 00:12:23.839 +think about as + +00:12:25.279 --> 00:12:30.560 +well + +00:12:27.519 --> 00:12:35.000 +cool um so these are training time + +00:12:30.560 --> 00:12:37.320 +things um the next thing is debugging at + +00:12:35.000 --> 00:12:37.320 +test + +00:12:38.160 --> 00:12:43.839 +time and this is particularly important + +00:12:41.240 --> 00:12:47.320 +if you're doing any sort + +00:12:43.839 --> 00:12:48.880 +of like I guess a lot of this has kind + +00:12:47.320 --> 00:12:51.360 +of been commoditized and it's + +00:12:48.880 --> 00:12:52.560 +implemented in hugging face and stuff + +00:12:51.360 --> 00:12:55.120 +like that and as long as you're using + +00:12:52.560 --> 00:12:57.279 +the standard implementations you're less + +00:12:55.120 --> 00:12:59.000 +likely to run into these bugs but if you + +00:12:57.279 --> 00:13:00.519 +are implementing anything on your own + +00:12:59.000 --> 00:13:03.040 +this is actually really tricky and you + +00:13:00.519 --> 00:13:07.880 +can easily make mistakes so uh it's + +00:13:03.040 --> 00:13:08.959 +important to to know about it so um what + +00:13:07.880 --> 00:13:10.680 +one of the reasons why you can have + +00:13:08.959 --> 00:13:12.240 +training and test disconnects especially + +00:13:10.680 --> 00:13:14.399 +if you're doing something like text + +00:13:12.240 --> 00:13:15.959 +generation is that usually your loss + +00:13:14.399 --> 00:13:17.720 +calculation and prodiction functions + +00:13:15.959 --> 00:13:20.480 +will be implemented in different + +00:13:17.720 --> 00:13:23.360 +functions and like anything in software + +00:13:20.480 --> 00:13:25.440 +engineering Um this can be a source of + +00:13:23.360 --> 00:13:26.760 +bugs duplicated sour code can be a + +00:13:25.440 --> 00:13:28.440 +source of bugs because you might + +00:13:26.760 --> 00:13:30.199 +Implement one thing in one place in one + +00:13:28.440 --> 00:13:33.000 +way another thing in another place in + +00:13:30.199 --> 00:13:35.560 +another way so this is no exception to + +00:13:33.000 --> 00:13:37.399 +that um it's especially true for + +00:13:35.560 --> 00:13:39.000 +structured prediction models so anything + +00:13:37.399 --> 00:13:40.399 +where you're not just making a single + +00:13:39.000 --> 00:13:42.079 +prediction but you're making multiple + +00:13:40.399 --> 00:13:43.839 +predictions in a row so 
you need to be a + +00:13:42.079 --> 00:13:46.959 +little bit careful about + +00:13:43.839 --> 00:13:49.880 +that um another thing that you need to + +00:13:46.959 --> 00:13:51.079 +be pay attention about is often uh + +00:13:49.880 --> 00:13:52.680 +especially if you're doing your own + +00:13:51.079 --> 00:13:55.880 +implementation loss calculation it's + +00:13:52.680 --> 00:13:59.800 +mini batched and generation is not or in + +00:13:55.880 --> 00:14:02.199 +highly optimized versions of um of + +00:13:59.800 --> 00:14:03.880 +inference you might be doing inference + +00:14:02.199 --> 00:14:05.360 +with Dynamic batching and stuff like + +00:14:03.880 --> 00:14:06.720 +that and it might become complicated you + +00:14:05.360 --> 00:14:09.800 +might make + +00:14:06.720 --> 00:14:12.160 +mistakes um so how do + +00:14:09.800 --> 00:14:15.839 +we make sure that we're not making any + +00:14:12.160 --> 00:14:18.560 +mistakes here um there's a really simple + +00:14:15.839 --> 00:14:21.199 +way to debug any sort of mini batched + +00:14:18.560 --> 00:14:24.199 +loss calculation because normally when + +00:14:21.199 --> 00:14:27.000 +we mini batch loss calculations we're + +00:14:24.199 --> 00:14:31.079 +simultaneously calculating uh the loss + +00:14:27.000 --> 00:14:35.600 +for like uh four four or eight or + +00:14:31.079 --> 00:14:37.560 +whatever sequences at a time and so you + +00:14:35.600 --> 00:14:40.279 +can calculate the loss with a large + +00:14:37.560 --> 00:14:42.000 +batch size like 32 and then calculate + +00:14:40.279 --> 00:14:44.920 +the loss for each uh sentence + +00:14:42.000 --> 00:14:47.720 +individually and sum them together and + +00:14:44.920 --> 00:14:49.480 +these uh value should be the same and + +00:14:47.720 --> 00:14:52.160 +this can help make sure that you don't + +00:14:49.480 --> 00:14:55.120 +have any you know issues with your + +00:14:52.160 --> 00:14:57.959 +padding or your masking or other things + +00:14:55.120 --> 00:14:59.800 +like this um so this is particularly + +00:14:57.959 --> 00:15:01.959 +important if you're not just using out + +00:14:59.800 --> 00:15:04.240 +of the box things so you have a slightly + +00:15:01.959 --> 00:15:06.240 +unusually structured model with like + +00:15:04.240 --> 00:15:08.880 +hierarchical encoding or anything like + +00:15:06.240 --> 00:15:11.680 +that you need to be really careful about + +00:15:08.880 --> 00:15:15.440 +that um you can even create unit tests + +00:15:11.680 --> 00:15:17.399 +that test this so like um in machine + +00:15:15.440 --> 00:15:18.959 +learning code we don't write unit test + +00:15:17.399 --> 00:15:20.160 +or especially neural network based + +00:15:18.959 --> 00:15:22.440 +machine learning code we don't write + +00:15:20.160 --> 00:15:24.160 +unit tests that often because it's kind + +00:15:22.440 --> 00:15:26.279 +of hard to do there's lots of Randomness + +00:15:24.160 --> 00:15:27.959 +and other stuff like that um but this is + +00:15:26.279 --> 00:15:30.959 +one thing that you can easily test and + +00:15:27.959 --> 00:15:30.959 +and make sure that you don't hear the + +00:15:32.440 --> 00:15:39.319 +mistakes um any sort of uh generation + +00:15:36.480 --> 00:15:43.199 +algorithm uh so when you're generating + +00:15:39.319 --> 00:15:44.639 +or decoding um you can make sure that + +00:15:43.199 --> 00:15:47.639 +your decoding code is getting the same + +00:15:44.639 --> 00:15:50.040 +score is when you calculate the loss and + +00:15:47.639 --> 00:15:52.959 +an easy way to do 
this is you call the + +00:15:50.040 --> 00:15:54.759 +decoding function to generate an output + +00:15:52.959 --> 00:15:57.399 +and normally when you're doing any sort + +00:15:54.759 --> 00:15:59.480 +of search or sampling or something like + +00:15:57.399 --> 00:16:02.120 +that during the search or sampling + +00:15:59.480 --> 00:16:05.000 +you're calculating the logits or the log + +00:16:02.120 --> 00:16:07.399 +probabilities of each step that you + +00:16:05.000 --> 00:16:09.120 +sample so you keep track of that during + +00:16:07.399 --> 00:16:12.279 +your sampling + +00:16:09.120 --> 00:16:14.319 +algorithm and then after that you call + +00:16:12.279 --> 00:16:16.800 +the loss function on the generated + +00:16:14.319 --> 00:16:18.639 +output and you calculate the loss + +00:16:16.800 --> 00:16:20.360 +according to the loss function and the + +00:16:18.639 --> 00:16:22.240 +score of these two things should be the + +00:16:20.360 --> 00:16:26.440 +same uh + +00:16:22.240 --> 00:16:26.440 +so um you know you do your + +00:16:27.920 --> 00:16:35.279 +generate and that gives you an + +00:16:32.000 --> 00:16:35.279 +output in + +00:16:35.600 --> 00:16:42.360 +score and then you do um + +00:16:39.319 --> 00:16:45.839 +loss on the + +00:16:42.360 --> 00:16:49.040 +output and that gives you the score + +00:16:45.839 --> 00:16:53.079 +two and then you just compare these two + +00:16:49.040 --> 00:16:56.360 +things together and this can uh in in my + +00:16:53.079 --> 00:17:01.120 +experience this has allowed me to find + +00:16:56.360 --> 00:17:03.240 +the majority of the bugs in um these two + +00:17:01.120 --> 00:17:04.679 +things um have allowed me to find the + +00:17:03.240 --> 00:17:06.600 +majority of the bugs whenever I was + +00:17:04.679 --> 00:17:09.199 +doing any sort of like complex thing + +00:17:06.600 --> 00:17:11.880 +with respect to generation or models and + +00:17:09.199 --> 00:17:13.360 +stuff like that so um it's a very common + +00:17:11.880 --> 00:17:15.439 +place for bugs even if you're pretty + +00:17:13.360 --> 00:17:17.280 +familiar with models so I I would highly + +00:17:15.439 --> 00:17:19.760 +recommend + +00:17:17.280 --> 00:17:21.319 +that um this is particularly bad when + +00:17:19.760 --> 00:17:25.559 +you're doing something like a search + +00:17:21.319 --> 00:17:28.400 +algorithm like beam search um and + +00:17:25.559 --> 00:17:30.400 +so beam search uh as you know from the + +00:17:28.400 --> 00:17:34.200 +generation class instead of picking one + +00:17:30.400 --> 00:17:37.080 +high probability uh you know word in + +00:17:34.200 --> 00:17:40.160 +your next step you maintain several + +00:17:37.080 --> 00:17:41.960 +paths and one way that you can fix this + +00:17:40.160 --> 00:17:44.320 +is as you make search better the model + +00:17:41.960 --> 00:17:45.760 +score should get better so the log + +00:17:44.320 --> 00:17:48.240 +likelihood of the output should get + +00:17:45.760 --> 00:17:50.280 +better almost all of the time so you can + +00:17:48.240 --> 00:17:51.840 +search with varying beam sizes and make + +00:17:50.280 --> 00:17:55.280 +sure that you get a better overall model + +00:17:51.840 --> 00:17:57.559 +score at the end so um and you can even + +00:17:55.280 --> 00:17:59.320 +create a unit test testing this as well + +00:17:57.559 --> 00:18:01.000 +I don't think that that many people will + +00:17:59.320 --> 00:18:02.480 +be reimplementing beam search so you + +00:18:01.000 --> 00:18:04.120 +might not need to worry about that too + 
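(Editor's aside: here is one way the generate-then-rescore consistency check described above might look with a Hugging Face causal LM; it is a sketch, not the lecturer's code. The model name, prompt, and tolerance are placeholders, and for simplicity it assumes greedy decoding with no early end-of-sequence token.)

```python
# Check that the score tracked during decoding equals the score the loss
# function assigns to the generated output; a gap points to a
# training/inference disconnect (padding, masking, off-by-one shifts).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = tok("The quick brown", return_tensors="pt").input_ids

with torch.no_grad():
    # 1) generate greedily, keeping the per-step scores
    out = model.generate(prompt, max_new_tokens=10, do_sample=False,
                         return_dict_in_generate=True, output_scores=True)
    seq = out.sequences
    gen_logprob = 0.0
    for step, scores in enumerate(out.scores):
        tok_id = seq[0, prompt.size(1) + step]
        gen_logprob += torch.log_softmax(scores, dim=-1)[0, tok_id]

    # 2) re-score the same sequence with a plain forward pass
    logits = model(seq).logits[:, :-1]
    logp = torch.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, seq[:, 1:].unsqueeze(-1)).squeeze(-1)
    rescore = token_logp[0, prompt.size(1) - 1:].sum()  # generated part only

# the two scores should match up to floating-point noise
assert torch.allclose(gen_logprob, rescore, atol=1e-3), (gen_logprob, rescore)
```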
+00:18:02.480 --> 00:18:05.679 +much but in case you are doing anything + +00:18:04.120 --> 00:18:08.159 +with respect to search algorithms it's a + +00:18:05.679 --> 00:18:08.159 +good thing to + +00:18:08.880 --> 00:18:15.159 +know + +00:18:10.480 --> 00:18:15.159 +cool um any questions about these two so + +00:18:16.919 --> 00:18:24.159 +far no okay um so the second the next + +00:18:22.600 --> 00:18:25.400 +thing I want to talk about this is + +00:18:24.159 --> 00:18:27.840 +something that people think about a + +00:18:25.400 --> 00:18:29.400 +little bit less uh but it's actually + +00:18:27.840 --> 00:18:31.280 +something really important to know + +00:18:29.400 --> 00:18:34.280 +because it will affect you it will + +00:18:31.280 --> 00:18:35.799 +affect everybody uh to some extent it + +00:18:34.280 --> 00:18:40.760 +will affect you to a greater or lesser + +00:18:35.799 --> 00:18:41.520 +extent depending on um what uh type of + +00:18:40.760 --> 00:18:44.480 +you + +00:18:41.520 --> 00:18:46.799 +know system you're building but it will + +00:18:44.480 --> 00:18:48.760 +definitely affect everybody and that's + +00:18:46.799 --> 00:18:50.960 +the mismatch between the the function + +00:18:48.760 --> 00:18:53.440 +that you're optimizing at training time + +00:18:50.960 --> 00:18:55.240 +and the evaluation metric that you're + +00:18:53.440 --> 00:18:58.000 +evaluating and + +00:18:55.240 --> 00:18:59.679 +so uh like as I said in the + +00:18:58.000 --> 00:19:01.679 +reinforcement learning class it's very + +00:18:59.679 --> 00:19:03.640 +common to optimize for maximum + +00:19:01.679 --> 00:19:06.039 +likelihood for training uh but there's + +00:19:03.640 --> 00:19:07.840 +all kinds of problems with this you know + +00:19:06.039 --> 00:19:09.640 +um with respect to the mistake it not + +00:19:07.840 --> 00:19:11.640 +being sensitive to mistakes it not being + +00:19:09.640 --> 00:19:14.799 +sensitive to your generation + +00:19:11.640 --> 00:19:16.520 +algorithm um but even though your + +00:19:14.799 --> 00:19:19.880 +likelihood is getting better accuracy + +00:19:16.520 --> 00:19:22.799 +can get worse and this is a super simple + +00:19:19.880 --> 00:19:25.080 +example with uh image classification on + +00:19:22.799 --> 00:19:27.919 +mest and I I ran this experiment with + +00:19:25.080 --> 00:19:30.880 +like 10 lines of pytorch code or + +00:19:27.919 --> 00:19:36.840 +something like this uh maybe more like + +00:19:30.880 --> 00:19:40.080 +40 lines of P um and so here um on the + +00:19:36.840 --> 00:19:43.120 +left side we have the loss on the + +00:19:40.080 --> 00:19:46.600 +training set and the test set or the dev + +00:19:43.120 --> 00:19:48.559 +set and here we have accuracy on the + +00:19:46.600 --> 00:19:50.799 +training set in the test + +00:19:48.559 --> 00:19:55.000 +set + +00:19:50.799 --> 00:19:56.159 +and so oops I showed you the answer so I + +00:19:55.000 --> 00:19:58.799 +was going to do a quiz but I + +00:19:56.159 --> 00:20:00.559 +accidentally showed you the answer um + +00:19:58.799 --> 00:20:04.440 +but the problem here is basically + +00:20:00.559 --> 00:20:06.320 +because um the the loss you're + +00:20:04.440 --> 00:20:09.400 +calculating the likelihood of the + +00:20:06.320 --> 00:20:11.120 +correct answer and the likelihood of the + +00:20:09.400 --> 00:20:12.440 +correct answer is the probability of + +00:20:11.120 --> 00:20:15.000 +getting the correct + +00:20:12.440 --> 00:20:17.240 +answer the accuracy is the number of + +00:20:15.000 --> 
00:20:20.280 +times you're getting the correct answer + +00:20:17.240 --> 00:20:23.799 +so as you train a model to get more and + +00:20:20.280 --> 00:20:25.440 +more confident it gets better it gets + +00:20:23.799 --> 00:20:27.840 +better and better at getting more + +00:20:25.440 --> 00:20:30.039 +answers correct but it also gets more + +00:20:27.840 --> 00:20:33.360 +and more confident in its answers and so + +00:20:30.039 --> 00:20:36.200 +if the you know there's any example that + +00:20:33.360 --> 00:20:37.840 +it's really bad at um it might get very + +00:20:36.200 --> 00:20:42.320 +confident in + +00:20:37.840 --> 00:20:44.760 +that answer that bad answer and the log + +00:20:42.320 --> 00:20:47.320 +likelihood of that answer will go up or + +00:20:44.760 --> 00:20:49.679 +sorry the log likelihood will go down so + +00:20:47.320 --> 00:20:54.360 +the negative log likelihood will go up + +00:20:49.679 --> 00:20:56.720 +is the loss so basically + +00:20:54.360 --> 00:20:59.559 +um the + +00:20:56.720 --> 00:21:01.039 +uh the loss that you're calculating and + +00:20:59.559 --> 00:21:03.840 +the thing that you care about in the end + +00:21:01.039 --> 00:21:07.120 +accuracy can be decorrelated + +00:21:03.840 --> 00:21:09.520 +um so there's also an interesting + +00:21:07.120 --> 00:21:12.080 +example um in text generation and this + +00:21:09.520 --> 00:21:14.000 +is part of the reason why uh we have all + +00:21:12.080 --> 00:21:15.880 +these other text generation algorithms + +00:21:14.000 --> 00:21:20.080 +like nucleus samp playing or topk samp + +00:21:15.880 --> 00:21:23.039 +playing or other things like this is um + +00:21:20.080 --> 00:21:25.080 +actually in a maximum likelihood trained + +00:21:23.039 --> 00:21:27.799 +model better + +00:21:25.080 --> 00:21:29.559 +search uh in in other words finding a + +00:21:27.799 --> 00:21:32.159 +better model scope + +00:21:29.559 --> 00:21:36.120 +doesn't necessarily give you a better + +00:21:32.159 --> 00:21:37.840 +generation result and this is an example + +00:21:36.120 --> 00:21:39.080 +uh from machine translation from a + +00:21:37.840 --> 00:21:41.880 +really long time + +00:21:39.080 --> 00:21:44.000 +ago uh but you know it still persists + +00:21:41.880 --> 00:21:47.520 +today which is they did beam search with + +00:21:44.000 --> 00:21:53.600 +a larger and larger beam + +00:21:47.520 --> 00:21:56.640 +and the be the best Beam for finding um + +00:21:53.600 --> 00:21:59.640 +the best scoring output basically was + +00:21:56.640 --> 00:22:01.600 +four and then the accuracy goes down and + +00:21:59.640 --> 00:22:05.559 +down and down as they find a better + +00:22:01.600 --> 00:22:07.200 +output and does anyone remember when we + +00:22:05.559 --> 00:22:09.679 +talked about the generation class where + +00:22:07.200 --> 00:22:09.679 +this comes + +00:22:10.120 --> 00:22:15.000 +from I don't know how explicitly we said + +00:22:12.960 --> 00:22:18.600 +we mentioned it in the generation class + +00:22:15.000 --> 00:22:20.360 +but basically the problem is um maximum + +00:22:18.600 --> 00:22:22.559 +likelihood train models like shorter + +00:22:20.360 --> 00:22:25.240 +outputs generally because if as we make + +00:22:22.559 --> 00:22:27.760 +the output longer uh the probability of + +00:22:25.240 --> 00:22:29.679 +the longer outputs goes down so as you + +00:22:27.760 --> 00:22:32.039 +improve the beam it will start + +00:22:29.679 --> 00:22:34.799 +generating shorter and shorter outputs + +00:22:32.039 --> 00:22:36.480 +and 
+
+00:22:39.039 --> 00:22:44.039
+so there are hacks around this for
+
+00:22:41.520 --> 00:22:46.200
+beam search where essentially what you
+
+00:22:44.039 --> 00:22:48.559
+do is you uh take the average log
+
+00:22:46.200 --> 00:22:51.159
+likelihood of each token instead of the
+
+00:22:48.559 --> 00:22:52.760
+overall log likelihood of the sequence um
+
+00:22:51.159 --> 00:22:54.679
+and that improves a little bit but still
+
+00:22:52.760 --> 00:22:59.720
+you can see as you search more the
+
+00:22:54.679 --> 00:23:01.440
+accuracy goes down so um that's the
+
+00:22:59.720 --> 00:23:04.039
+general idea
+
+00:23:01.440 --> 00:23:08.760
+here there's a bunch of ways you can fix
+
+00:23:04.039 --> 00:23:10.600
+this um the most principled way is to
+
+00:23:08.760 --> 00:23:12.760
+use a method like reinforcement learning
+
+00:23:10.600 --> 00:23:14.120
+or something uh some sort of you know
+
+00:23:12.760 --> 00:23:15.520
+structured training algorithm that
+
+00:23:14.120 --> 00:23:17.159
+allows you to train your models so that
+
+00:23:15.520 --> 00:23:20.159
+you don't get these bad
+
+00:23:17.159 --> 00:23:22.159
+outputs um another way that's much
+
+00:23:20.159 --> 00:23:25.640
+easier is to do early stopping with the
+
+00:23:22.159 --> 00:23:30.480
+evaluation metric as opposed to um early
+
+00:23:25.640 --> 00:23:32.840
+stopping with the loss and by doing this
+
+00:23:30.480 --> 00:23:34.520
+you would stop here so you would stop
+
+00:23:32.840 --> 00:23:37.159
+where you get the highest evaluation
+
+00:23:34.520 --> 00:23:42.600
+metric uh that you care about instead of
+
+00:23:37.159 --> 00:23:44.400
+stopping here uh so that's one
+
+00:23:42.600 --> 00:23:46.600
+way you can fix this
+
+00:23:44.400 --> 00:23:49.760
+problem does anyone have an idea about
+
+00:23:46.600 --> 00:23:49.760
+why this might be a bad
+
+00:23:49.840 --> 00:23:57.159
+idea why might it be a bad idea to stop
+
+00:23:52.480 --> 00:23:57.159
+here instead of stopping here for
+
+00:23:57.440 --> 00:24:00.440
+example
+
+00:24:05.320 --> 00:24:10.200
+yeah it's kind of overfitting it's
+
+00:24:07.760 --> 00:24:13.640
+overfitting in a particular way um but
+
+00:24:10.200 --> 00:24:16.000
+remember here this is still the accuracy
+
+00:24:13.640 --> 00:24:18.400
+on the dev set so we're not overfitting
+
+00:24:16.000 --> 00:24:20.080
+so much that the dev accuracy is going
+
+00:24:18.400 --> 00:24:24.279
+down that would be a different variety
+
+00:24:20.080 --> 00:24:27.360
+of overfitting but any
+
+00:24:24.279 --> 00:24:29.799
+ideas go for it we don't want to be too
+
+00:24:27.360 --> 00:24:31.600
+confident yeah exactly we don't want it
+
+00:24:29.799 --> 00:24:32.880
+to be too confident in its wrong answers
+
+00:24:31.600 --> 00:24:35.279
+and we talked about
+
+00:24:32.880 --> 00:24:38.000
+calibration um where calibration is
+
+00:24:35.279 --> 00:24:40.039
+basically like how accurate are the
+
+00:24:38.000 --> 00:24:41.480
+probability estimates so this model over
+
+00:24:40.039 --> 00:24:43.600
+here is going to be really poorly
+
+00:24:41.480 --> 00:24:45.159
+calibrated it's going to be very
+
+00:24:43.600 --> 00:24:46.240
+confident regardless of whether it's
+
+00:24:45.159 --> 00:24:49.440
+correct or not and that could be a
+
+00:24:46.240 --> 00:24:50.840
+problem in downstream tasks
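+
+[Editor's note: a small self-contained sketch of the earlier suggestion to early-stop on the evaluation metric you actually care about rather than on the dev loss; the training and metric functions here are toy stand-ins for your own code.]
+
+import random
+
+random.seed(0)
+
+def train_one_epoch(state):          # stand-in: pretend training nudges the model
+    state["step"] += 1
+
+def evaluate_metric(state):          # stand-in: a noisy metric that peaks then decays
+    return -abs(state["step"] - 10) + random.random()
+
+state = {"step": 0}
+best_metric, best_step, patience, bad_epochs = float("-inf"), 0, 3, 0
+for epoch in range(100):
+    train_one_epoch(state)
+    metric = evaluate_metric(state)
+    if metric > best_metric:                 # new best dev metric: checkpoint it
+        best_metric, best_step, bad_epochs = metric, state["step"], 0
+    else:
+        bad_epochs += 1
+        if bad_epochs >= patience:           # stop once the metric stops improving
+            break
+print(f"stopped at step {state['step']}, best checkpoint was step {best_step}")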
+
+00:24:49.440 --> 00:24:52.130
+there's also another thing that I
+
+00:24:50.840 --> 00:24:55.189
+forgot to put on
+
+00:24:57.320 --> 00:25:00.320
+um
+
+00:25:02.919 --> 00:25:08.120
+the slides but
+
+00:25:04.520 --> 00:25:10.720
+it's um an interesting phenomenon that
+
+00:25:08.120 --> 00:25:12.720
+actually um kind of a lot of people in
+
+00:25:10.720 --> 00:25:16.360
+interpretability are interested in it's
+
+00:25:12.720 --> 00:25:18.120
+this uh grokking paper
+
+00:25:16.360 --> 00:25:19.640
+generalization beyond overfitting on
+
+00:25:18.120 --> 00:25:21.120
+small algorithmic data sets and
+
+00:25:19.640 --> 00:25:27.360
+basically what they
+
+00:25:21.120 --> 00:25:29.720
+show is um you can be training for a
+
+00:25:27.360 --> 00:25:31.320
+very very long time
+
+00:25:29.720 --> 00:25:34.279
+um
+
+00:25:31.320 --> 00:25:35.919
+and uh like reducing the loss reducing
+
+00:25:34.279 --> 00:25:40.399
+the loss and reducing
+
+00:25:35.919 --> 00:25:42.480
+the loss and it's only after a very long
+
+00:25:40.399 --> 00:25:43.840
+time does your model start generalizing
+
+00:25:42.480 --> 00:25:48.240
+well and getting good
+
+00:25:43.840 --> 00:25:49.799
+accuracy um this paper the types of
+
+00:25:48.240 --> 00:25:52.120
+data sets it's talking about are data
+
+00:25:49.799 --> 00:25:55.520
+sets where you need to get many things
+
+00:25:52.120 --> 00:25:58.640
+in a row correct before you get the
+
+00:25:55.520 --> 00:26:00.880
+final answer correct so basically you
+
+00:25:58.640 --> 00:26:02.320
+need to get like 20 steps in a row or 50
+
+00:26:00.880 --> 00:26:06.200
+steps in a row correct before you get
+
+00:26:02.320 --> 00:26:10.679
+the final answer correct and um
+
+00:26:06.200 --> 00:26:13.000
+basically the reason why this happens is
+
+00:26:10.679 --> 00:26:15.720
+because the accuracy of each
+
+00:26:13.000 --> 00:26:17.760
+individual decision will keep going up
+
+00:26:15.720 --> 00:26:20.520
+but you only get marked
+
+00:26:17.760 --> 00:26:22.880
+correct uh
+
+00:26:20.520 --> 00:26:25.440
+after you get
+
+00:26:22.880 --> 00:26:29.799
+all 50 in a row
+
+00:26:25.440 --> 00:26:31.200
+correct so um this difference can be
+
+00:26:29.799 --> 00:26:33.039
+even more stark when you're talking
+
+00:26:31.200 --> 00:26:35.399
+about things that require like 50 steps
+
+00:26:33.039 --> 00:26:37.399
+of reasoning or like multiple steps of
+
+00:26:35.399 --> 00:26:39.559
+reasoning or like 50 token generations
+
+00:26:37.399 --> 00:26:42.679
+correct before you get them right so um
+
+00:26:39.559 --> 00:26:42.679
+that's another thing to be aware
+
+00:26:43.000 --> 00:26:49.240
+of cool um so now I want to switch gears
+
+00:26:46.960 --> 00:26:51.919
+a little bit to actionable evaluation
+
+00:26:49.240 --> 00:26:54.240
+and how you can um evaluate your models
+
+00:26:51.919 --> 00:26:56.640
+in a way that makes it easy to find uh
+
+00:26:54.240 --> 00:26:58.600
+next steps for
+
+00:26:56.640 --> 00:27:00.159
+improving uh are there any questions
+
+00:26:58.600 --> 00:27:02.600
+about the debugging part before we get
+
+00:27:00.159 --> 00:27:02.600
+into this
+
+00:27:03.360 --> 00:27:10.120
+part okay I'll
+
+00:27:05.880 --> 00:27:12.840
+go so um my first suggestion with
+
+00:27:10.120 --> 00:27:15.559
+respect to how you can actually you know
+improve systems is make sure that you're
+
+00:27:15.559 --> 00:27:21.039
+looking at the data that you're
+
+00:27:17.440 --> 00:27:22.679
+using and um both bugs and new research
+
+00:27:21.039 --> 00:27:24.080
+directions can be found by looking at
+
+00:27:22.679 --> 00:27:27.159
+your model
+
+00:27:24.080 --> 00:27:31.640
+outputs um
+
+00:27:27.159 --> 00:27:33.279
+so to give one example um of a very
+
+00:27:31.640 --> 00:27:36.200
+common mistake that you can make when
+
+00:27:33.279 --> 00:27:40.159
+you're creating a generation algorithm
+
+00:27:36.200 --> 00:27:41.600
+it's these sort of off by one errors um
+
+00:27:40.159 --> 00:27:43.919
+so like let's say you implemented a
+
+00:27:41.600 --> 00:27:46.039
+translation system and it's generating
+
+00:27:43.919 --> 00:27:49.440
+outputs like went to the store yesterday
+
+00:27:46.039 --> 00:27:51.080
+bought a dog um you can immediately look
+
+00:27:49.440 --> 00:27:53.440
+at this and say hey this doesn't look
+
+00:27:51.080 --> 00:27:58.360
+like natural English what's going uh
+
+00:27:53.440 --> 00:28:00.000
+what's going on and the problem here
+
+00:27:58.360 --> 00:28:04.600
+is
+
+00:28:00.000 --> 00:28:04.600
+you're um you're doing something
+
+00:28:05.159 --> 00:28:12.720
+like output uh
+
+00:28:09.240 --> 00:28:14.600
+one uh and you have a slice of like one
+
+00:28:12.720 --> 00:28:17.399
+instead of zero here or something like
+
+00:28:14.600 --> 00:28:18.640
+this and so this is a really silly error
+
+00:28:17.399 --> 00:28:21.000
+that you might just make as a mistake in
+
+00:28:18.640 --> 00:28:23.679
+Python in your you know pre-processing
+
+00:28:21.000 --> 00:28:26.200
+or postprocessing or something like this
+
+00:28:23.679 --> 00:28:28.399
+um but the problem is like if you look
+
+00:28:26.200 --> 00:28:30.600
+at your BLEU score based evaluation or
+
+00:28:28.399 --> 00:28:32.840
+something like that you'll have like
+
+00:28:30.600 --> 00:28:34.760
+you'll be one point worse or two points
+
+00:28:32.840 --> 00:28:36.720
+worse or something like that and you'll
+
+00:28:34.760 --> 00:28:38.600
+be like oh I'm two points worse why
+
+00:28:36.720 --> 00:28:40.600
+am I two points worse than
+
+00:28:38.600 --> 00:28:43.760
+the state of the art and it turns out it
+
+00:28:40.600 --> 00:28:45.279
+was a really like silly thing like this
+
+00:28:43.760 --> 00:28:46.519
+and immediately you'll see this if you
+
+00:28:45.279 --> 00:28:47.960
+look at your data but if you're doing
+
+00:28:46.519 --> 00:28:49.600
+all your experiments and just looking at
+
+00:28:47.960 --> 00:28:51.519
+the numbers it's really hard to tell you
+
+00:28:49.600 --> 00:28:53.720
+know why this is
+
+00:28:51.519 --> 00:28:58.720
+happening
+
+00:28:53.720 --> 00:29:02.360
+um another thing is uh if you
+
+00:28:58.720 --> 00:29:04.799
+have a good eye and can like just look
+
+00:29:02.360 --> 00:29:07.799
+through the data points
+
+00:29:04.799 --> 00:29:09.640
+um we as humans are pretty good uh
+
+00:29:07.799 --> 00:29:14.200
+pattern recognizers and especially you
+
+00:29:09.640 --> 00:29:16.360
+know CMU students uh you're uh very good
+
+00:29:14.200 --> 00:29:18.519
+and quick at picking up on things so if
+
+00:29:16.360 --> 00:29:20.600
+you look at the data and pore through
+
+00:29:18.519 --> 00:29:22.880
+things you can uh probably pick up
+
+00:29:20.600 --> 00:29:24.880
+patterns about why things are failing
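+
+[Editor's note: a hypothetical reconstruction of the off-by-one slicing bug described above; the token list and variable names are invented for illustration.]
+
+output = ["<s>", "I", "went", "to", "the", "store", "yesterday", "and",
+          "bought", "a", "dog", "</s>"]
+
+# Buggy postprocessing: an accidental second slice starting at index 1
+# silently drops the first real word of the sentence.
+tokens = output[1:-1]      # intended: strip <s> and </s>
+buggy = tokens[1:]         # oops -> "went to the store yesterday ..."
+correct = tokens           # "I went to the store yesterday and bought a dog"
+
+print(" ".join(buggy))
+print(" ".join(correct))
+# A one-line bug like this might only cost a point or two of BLEU, so you may
+# never notice it from the metric alone -- but it's obvious in the outputs.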
+and so um you know you might look and
+
+00:29:24.880 --> 00:29:29.919
+see that uh compared to some other model
+
+00:29:27.720 --> 00:29:31.679
+your model is really bad at answering
+
+00:29:29.919 --> 00:29:33.679
+questions about people or something like
+
+00:29:31.679 --> 00:29:36.480
+that and then you figure out you'll need
+
+00:29:33.679 --> 00:29:38.320
+a better model of uh people or your RAG
+
+00:29:36.480 --> 00:29:40.519
+system uh that you're building for
+
+00:29:38.320 --> 00:29:42.880
+assignment two is maybe failing on all
+
+00:29:40.519 --> 00:29:45.559
+the research related questions so you
+
+00:29:42.880 --> 00:29:47.080
+need to uh
+
+00:29:45.559 --> 00:29:48.320
+like scrape more research data or
+
+00:29:47.080 --> 00:29:50.080
+something like
+
+00:29:48.320 --> 00:29:53.840
+that
+
+00:29:50.080 --> 00:29:55.760
+um so there are methods to do this more
+
+00:29:53.840 --> 00:29:58.039
+systematically and this is something I
+
+00:29:55.760 --> 00:29:59.720
+picked up when I was doing an internship
+
+00:29:58.039 --> 00:30:04.080
+at Google and it really stuck with me
+
+00:29:59.720 --> 00:30:09.080
+for you know 14 years now I guess
+
+00:30:04.080 --> 00:30:10.960
+13 years um so uh a very simple way to
+
+00:30:09.080 --> 00:30:12.600
+do this more systematically than just
+
+00:30:10.960 --> 00:30:16.200
+browsing through things is to randomly
+
+00:30:12.600 --> 00:30:19.000
+sample 100 outputs and look at 100
+
+00:30:16.200 --> 00:30:21.840
+errors and try to group them into some
+
+00:30:19.000 --> 00:30:23.799
+sort of typology and say oh uh this kind
+
+00:30:21.840 --> 00:30:27.799
+of error is particularly
+
+00:30:23.799 --> 00:30:31.279
+frequent and this is just one example of
+
+00:30:27.799 --> 00:30:33.120
+a typology that was defined by Vilar et al.
+
+00:30:31.279 --> 00:30:37.320
+um where they tried to take machine
+
+00:30:33.120 --> 00:30:39.480
+translation errors and group them into
+
+00:30:37.320 --> 00:30:43.440
+uh various varieties like correct words
+
+00:30:39.480 --> 00:30:46.640
+filler words local range long
+
+00:30:43.440 --> 00:30:48.440
+range um uh sorry word level word
+
+00:30:46.640 --> 00:30:50.440
+ordering errors local range long range
+
+00:30:48.440 --> 00:30:54.279
+phrase level local range long range and
+
+00:30:50.440 --> 00:30:55.679
+stuff like this um you can definitely
+
+00:30:54.279 --> 00:30:58.399
+look at previous work and see the
+
+00:30:55.679 --> 00:31:00.559
+typologies of errors that they used but
+
+00:30:58.399 --> 00:31:02.440
+the problem is like systems get better
+
+00:31:00.559 --> 00:31:04.240
+and actually I don't think this is a
+
+00:31:02.440 --> 00:31:06.760
+super relevant typology for machine
+
+00:31:04.240 --> 00:31:10.120
+translation anymore uh because machine
+
+00:31:06.760 --> 00:31:12.159
+translation systems like they don't make
+
+00:31:10.120 --> 00:31:14.639
+a whole lot of local range word level
+
+00:31:12.159 --> 00:31:16.159
+errors anymore and rather we might want
+
+00:31:14.639 --> 00:31:18.279
+to know more fine-grained things like are they
+
+00:31:16.159 --> 00:31:21.720
+making mistakes on named entities or
+
+00:31:18.279 --> 00:31:24.720
+other things like that
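+
+[Editor's note: a minimal sketch of the "sample 100 errors and group them into a typology" workflow just described; the error categories and data are hypothetical placeholders.]
+
+import random
+from collections import Counter
+
+random.seed(0)
+# Pretend each output was already judged against its reference upstream.
+errors = [{"id": i, "category": random.choice(
+    ["named entity", "word order", "dropped content", "terminology"])}
+    for i in range(1000)]
+
+sample = random.sample(errors, 100)          # look at a manageable subset
+counts = Counter(e["category"] for e in sample)
+for category, n in counts.most_common():     # most frequent error types first
+    print(f"{category}: {n}")
+# Whatever category dominates is usually the highest-leverage thing to fix.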
+00:31:21.720 --> 00:31:24.720
+so actually we
+
+00:31:26.919 --> 00:31:29.919
+um
+
+00:31:30.519 --> 00:31:36.279
+did a more recent thing it's I
+
+00:31:34.279 --> 00:31:39.159
+guess four years ago now um but it was
+
+00:31:36.279 --> 00:31:42.720
+when uh people first started saying that
+
+00:31:39.159 --> 00:31:46.200
+machine translation systems are about as
+
+00:31:42.720 --> 00:31:50.720
+good as humans at doing
+
+00:31:46.200 --> 00:31:50.720
+translation and when we did this we
+
+00:31:52.480 --> 00:31:58.440
+compared machine translation
+
+00:31:55.200 --> 00:31:59.960
+systems to humans and we tried to find
+
+00:31:58.440 --> 00:32:02.240
+you know different types of things and
+
+00:31:59.960 --> 00:32:03.919
+we were inspired by Vilar et al. but we recreated
+
+00:32:02.240 --> 00:32:06.159
+our typology based on the things that we
+
+00:32:03.919 --> 00:32:10.279
+thought were you know the most important
+
+00:32:06.159 --> 00:32:13.399
+types of errors in like 2020 instead of
+
+00:32:10.279 --> 00:32:16.799
+2006 so this is really helpful the
+
+00:32:13.399 --> 00:32:19.039
+reason why it's really helpful is if you
+
+00:32:16.799 --> 00:32:20.440
+can do this even for a small sample of
+
+00:32:19.039 --> 00:32:23.440
+the outputs that you're looking at and
+
+00:32:20.440 --> 00:32:25.279
+identify the most like prominent types
+
+00:32:23.440 --> 00:32:27.440
+of errors that you're facing it often
+
+00:32:25.279 --> 00:32:29.360
+leads you to the most successful ways of
+
+00:32:27.440 --> 00:32:31.519
+improving the accuracy of your systems
+
+00:32:29.360 --> 00:32:33.120
+because if you don't do this
+
+00:32:31.519 --> 00:32:35.000
+you might be focusing on an error type
+
+00:32:33.120 --> 00:32:38.000
+that's not actually that frequent it's kind
+
+00:32:35.000 --> 00:32:39.200
+of like if you learned in uh programming
+
+00:32:38.000 --> 00:32:40.799
+you know software engineering or
+
+00:32:39.200 --> 00:32:42.639
+something like that you should never
+
+00:32:40.799 --> 00:32:46.360
+optimize your code until you run a
+
+00:32:42.639 --> 00:32:47.799
+profiler um because actually your code
+
+00:32:46.360 --> 00:32:50.320
+might be slow in a place that you never
+
+00:32:47.799 --> 00:32:52.720
+expected and so it's kind of the same
+
+00:32:50.320 --> 00:32:56.600
+principle here right so don't optimize
+
+00:32:52.720 --> 00:32:58.720
+your system's errors in a place uh where
+
+00:32:56.600 --> 00:33:03.240
+like actually it's not having errors
+
+00:32:58.720 --> 00:33:06.440
+so um that's a general principle
+
+00:33:03.240 --> 00:33:09.440
+here uh cool another thing you can do is
+
+00:33:06.440 --> 00:33:11.760
+quantitative analysis so um if you can
+
+00:33:09.440 --> 00:33:13.880
+think of the phenomenon that you choose
+
+00:33:11.760 --> 00:33:17.480
+to focus on um is that phenomenon
+
+00:33:13.880 --> 00:33:19.159
+getting better so if you focused on uh
+
+00:33:17.480 --> 00:33:22.240
+something that should improve the
+
+00:33:19.159 --> 00:33:23.760
+quality of low frequency words uh you
+
+00:33:22.240 --> 00:33:26.200
+can check if the accuracy on low
+
+00:33:23.760 --> 00:33:27.399
+frequency words is increasing if you
+
+00:33:26.200 --> 00:33:29.600
+focused on something that should be
+
+00:33:27.399 --> 00:33:32.120
+improving the syntax in a low resource
+
+00:33:29.600 --> 00:33:36.080
+language you can measure um whether it's
+
+00:33:32.120 --> 00:33:37.360
+doing better on word ordering or uh long
+
+00:33:36.080 --> 00:33:41.840
+distance
+
+00:33:37.360 --> 00:33:44.360
+dependencies um if you focused on
+
+00:33:41.840 --> 00:33:46.039
+improving a search algorithm for you
+
+00:33:44.360 --> 00:33:47.519
+know generation or something like that
+
+00:33:46.039 --> 00:33:49.880
+are the number of search errors that
+
+00:33:47.519 --> 00:33:53.120
+you're encountering being reduced so
+
+00:33:49.880 --> 00:33:56.320
+depending on what you planned on uh you
+
+00:33:53.120 --> 00:33:57.919
+know improving it's often a good idea to
+
+00:33:56.320 --> 00:33:59.480
+measure more directly whether it's
+
+00:33:57.919 --> 00:34:00.559
+improving the thing that you think
+
+00:33:59.480 --> 00:34:04.880
+it should
+
+00:34:00.559 --> 00:34:06.000
+improve
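+
+[Editor's note: a small sketch of this kind of quantitative check — accuracy bucketed by training-set word frequency; the data here is a toy assumption, not a real corpus.]
+
+from collections import Counter
+
+train_tokens = "the cat sat on the mat the cat ran".split()
+freq = Counter(train_tokens)
+
+# (reference word, predicted word) pairs from a hypothetical dev set
+pairs = [("the", "the"), ("cat", "cat"), ("mat", "hat"), ("ran", "ran"),
+         ("sat", "sat"), ("on", "on"), ("mat", "mat"), ("ran", "walked")]
+
+def bucket_accuracy(pairs, lo, hi):
+    hits = [ref == hyp for ref, hyp in pairs if lo <= freq[ref] < hi]
+    return sum(hits) / len(hits) if hits else float("nan")
+
+print("rare words (freq < 2):   ", bucket_accuracy(pairs, 0, 2))
+print("common words (freq >= 2):", bucket_accuracy(pairs, 2, 10**9))
+# If your change was meant to help rare words, the first number should move.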
+um one example um so I
+
+00:34:04.880 --> 00:34:09.240
+basically
+
+00:34:06.000 --> 00:34:11.240
+created since my experience doing this
+
+00:34:09.240 --> 00:34:15.159
+manually uh when I was on an
+
+00:34:11.240 --> 00:34:18.280
+internship at Google um I've
+
+00:34:15.159 --> 00:34:20.639
+gradually improved my methodology for
+
+00:34:18.280 --> 00:34:20.639
+doing
+
+00:34:21.679 --> 00:34:26.320
+this and um and worked on automating
+
+00:34:24.879 --> 00:34:30.599
+things and
+
+00:34:26.320 --> 00:34:33.839
+so the first thing I had was a super
+
+00:34:30.599 --> 00:34:35.560
+hacky uh hacky script that basically
+
+00:34:33.839 --> 00:34:37.720
+writes out HTML
+
+00:34:35.560 --> 00:34:39.320
+files um and then I had something
+
+00:34:37.720 --> 00:34:42.320
+called ExplainaBoard where we had a
+
+00:34:39.320 --> 00:34:44.879
+leaderboard and uh recently one of the
+
+00:34:42.320 --> 00:34:47.800
+things I've worked on is uh this uh
+
+00:34:44.879 --> 00:34:53.200
+together with um Alex Cabrera who's
+
+00:34:47.800 --> 00:34:56.760
+a student here um is this toolkit called
+
+00:34:53.200 --> 00:34:59.640
+Zeno and um this is just an example from
+
+00:34:56.760 --> 00:34:59.640
+machine translation
+
+00:35:03.440 --> 00:35:09.200
+it's being a little bit
+
+00:35:06.599 --> 00:35:11.079
+slow um but basically what it does is it
+
+00:35:09.200 --> 00:35:14.920
+allows you to look at the data on the
+
+00:35:11.079 --> 00:35:18.000
+right side um and so these are just
+
+00:35:14.920 --> 00:35:19.680
+examples um but you can go in and do
+
+00:35:18.000 --> 00:35:22.760
+things like say okay I want to look at
+
+00:35:19.680 --> 00:35:24.640
+all machine translation examples
+
+00:35:22.760 --> 00:35:28.040
+from
+
+00:35:24.640 --> 00:35:30.920
+uh Hausa and so it shows you the ones
+
+00:35:28.040 --> 00:35:32.960
+from Hausa I want to look
+
+00:35:30.920 --> 00:35:36.240
+at all
+
+00:35:32.960 --> 00:35:38.880
+examples let me clear that off I want to
+
+00:35:36.240 --> 00:35:40.800
+look at all examples where the accuracy
+
+00:35:38.880 --> 00:35:43.440
+is
+
+00:35:40.800 --> 00:35:45.280
+low um and so now I can look at all the
+
+00:35:43.440 --> 00:35:49.640
+examples where the accuracy is low and I
+
+00:35:45.280 --> 00:35:52.640
+can go in and uh examine them so uh
+
+00:35:49.640 --> 00:35:54.880
+you can also go in and build charts like
+
+00:35:52.640 --> 00:35:58.280
+this so like what is the overall
+
+00:35:54.880 --> 00:36:02.200
+performance um what is the
+
+00:35:58.280 --> 00:36:05.960
+performance
+
+00:36:02.200 --> 00:36:07.520
+um on different scripts so you can see
+
+00:36:05.960 --> 00:36:10.880
+which model is doing better
+
+00:36:07.520 --> 00:36:13.960
+at which scripts and stuff like that so um or
+
+00:36:10.880 --> 00:36:16.000
+you can put things side by side and say
+
+00:36:13.960 --> 00:36:20.720
+okay I want to find all the examples
+
+00:36:16.000 --> 00:36:21.800
+where ChatGPT is doing much worse
+
+00:36:20.720 --> 00:36:25.280
+than GPT-
+
+00:36:21.800 --> 00:36:28.240
+4 uh or like GPT-3.5 is doing much worse
+
+00:36:25.280 --> 00:36:29.680
+than GPT-4 and here we can see that oh in
+
+00:36:28.240 --> 00:36:31.520
+this case it's generating something in
+
+00:36:29.680 --> 00:36:34.079
+the wrong script or something like that
+
+00:36:31.520 --> 00:36:37.839
+so um there's also tooling that you can
+
+00:36:34.079 --> 00:36:40.480
+use to make this easier as
+
+00:36:37.839 --> 00:36:43.520
+well and the way you use this
+
+00:36:40.480 --> 00:36:46.079
+is you basically
+
+00:36:43.520 --> 00:36:48.000
+um uh create a pandas data frame with
+
+00:36:46.079 --> 00:36:49.680
+all of your data in it and you upload
+
+00:36:48.000 --> 00:36:52.400
+the pandas data frame with any metadata
+
+00:36:49.680 --> 00:36:54.280
+you want to use and uh I
+
+00:36:52.400 --> 00:36:56.520
+think VJ will be having a recitation on
+
+00:36:54.280 --> 00:37:02.560
+this if you're interested in taking a look
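+
+[Editor's note: a rough sketch of the upload workflow described above, using plain pandas; the column names and example rows are made up, and the actual Zeno upload call is omitted — see the Zeno documentation for that.]
+
+import pandas as pd
+
+df = pd.DataFrame({
+    "input":  ["Ina son shinkafa", "Yaya kake?"],   # source sentences
+    "output": ["I like rice", "How is you?"],       # system outputs
+    "reference": ["I like rice", "How are you?"],   # gold references
+    "language": ["hausa", "hausa"],                 # metadata columns
+    "score":  [1.0, 0.45],                          # per-example metric
+})
+
+# With metadata columns like "language" or "script" attached, a viewer such
+# as Zeno can slice the data (e.g. all Hausa examples with low scores) and
+# chart per-slice performance across systems.
+print(df[df["score"] < 0.9])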
+00:36:56.520 --> 00:37:04.680
+cool um so that is my part and
+
+00:37:02.560 --> 00:37:07.760
+then we'll be doing Nishant next while
+
+00:37:04.680 --> 00:37:09.480
+Nishant comes up to set up are there any
+
+00:37:07.760 --> 00:37:10.520
+questions about the thing that I talked
+
+00:37:09.480 --> 00:37:14.079
+about
+
+00:37:10.520 --> 00:37:14.079
+here yeah
+
+00:37:14.359 --> 00:37:18.200
+so that when I
+
+00:37:26.200 --> 00:37:30.079
+regularization um
+
+00:37:28.160 --> 00:37:32.560
+does that make a difference in
+
+00:37:30.079 --> 00:37:35.400
+terms of like what we're expecting when
+
+00:37:32.560 --> 00:37:38.800
+we're evaluating the
+
+00:37:35.400 --> 00:37:41.720
+model yeah so just to repeat the
+
+00:37:38.800 --> 00:37:43.680
+question it's a great question so if
+
+00:37:41.720 --> 00:37:49.440
+you apply
+
+00:37:43.680 --> 00:37:49.440
+regularization um will that change the
+
+00:37:49.640 --> 00:37:54.079
+overall expectation for the model loss
+
+00:37:52.040 --> 00:37:55.680
+so I was saying loss should converge to
+
+00:37:54.079 --> 00:37:57.200
+zero once you start applying
+
+00:37:55.680 --> 00:37:59.079
+regularization or weight decay or
+
+00:37:57.200 --> 00:38:02.640
+something like that it definitely might
+
+00:37:59.079 --> 00:38:04.520
+not converge to zero um and the reason why
+
+00:38:02.640 --> 00:38:06.520
+is because once you start applying
+
+00:38:04.520 --> 00:38:09.319
+regularization there is no zero loss
+
+00:38:06.520 --> 00:38:11.480
+solution um because in order to reduce the
+
+00:38:09.319 --> 00:38:14.960
+loss you need to
+
+00:38:11.480 --> 00:38:16.359
+move weights away from zero um but when
+
+00:38:14.960 --> 00:38:19.560
+you move weights away from zero the
+
+00:38:16.359 --> 00:38:22.200
+regularization loss becomes non-zero so
+
+00:38:19.560 --> 00:38:24.599
+one thing you can do however is measure
+
+00:38:22.200 --> 00:38:26.880
+the losses separately so measure the
+
+00:38:24.599 --> 00:38:27.960
+regularization component of the loss and
+
+00:38:26.880 --> 00:38:29.760
+the um
+
+00:38:27.960 --> 00:38:31.920
+the log likelihood component of the
+
+00:38:29.760 --> 00:38:33.560
+loss and with any reasonable
+
+00:38:31.920 --> 00:38:35.280
+regularization and a reasonably
+
+00:38:33.560 --> 00:38:38.000
+parameterized model I do think the loss
+
+00:38:35.280 --> 00:38:39.760
+should be getting closer to zero like the
+
+00:38:38.000 --> 00:38:41.920
+actual likelihood should be getting closer
+
+00:38:39.760 --> 00:38:41.920
+to
+
+00:38:42.200 --> 00:38:46.520
+zero uh you were using an extremely
+
+00:38:44.480 --> 00:38:49.240
+small model in the assignment though
+
+00:38:46.520 --> 00:38:53.680
+so that might make it more
+
+00:38:49.240 --> 00:38:56.440
+difficult yeah and any other
+
+00:38:53.680 --> 00:38:59.440
+things okay if not
+
+00:38:56.440 --> 00:38:59.440
+I'll
+
+00:39:13.720 --> 00:39:19.160
+all right can everyone hear
+
+00:39:15.319 --> 00:39:21.440
+me sweet okay let me move this it looks like
+
+00:39:19.160 --> 00:39:24.200
+I'm talking to someone instead of
+
+00:39:21.440 --> 00:39:24.200
+between both of
+
+00:39:26.359 --> 00:39:29.359
+you
+
+00:39:33.319 --> 00:39:37.680
+all right so hi everyone um I'm going to
+
+00:39:35.720 --> 00:39:39.400
+talk about model interpretability for
+
+00:39:37.680 --> 00:39:41.680
+those who don't know me I'm one of
+
+00:39:39.400 --> 00:39:44.359
+your TAs I'm a first year PhD student
+
+00:39:41.680 --> 00:39:47.359
+working with Mona Diab on model
+
+00:39:44.359 --> 00:39:47.359
+interpretability
+
+00:39:48.800 --> 00:39:55.400
+um where what do I
+
+00:39:51.839 --> 00:39:59.119
+click your mouse should be there
+
+00:39:55.400 --> 00:40:01.599
+yeah just
+
+00:39:59.119 --> 00:40:04.160
+cool okay um
+
+00:40:01.599 --> 00:40:06.079
+so what I want you to take away if you
+
+00:40:04.160 --> 00:40:08.359
+fall asleep because this is too boring
+
+00:40:06.079 --> 00:40:09.839
+here are sort of the two main takeaways
+
+00:40:08.359 --> 00:40:12.040
+one I want to convince you that model
+
+00:40:09.839 --> 00:40:14.720
+interpretability is important to study
+
+00:40:12.040 --> 00:40:16.720
+and two I want you to find this
+
+00:40:14.720 --> 00:40:18.880
+interesting um and something you want to
+
+00:40:16.720 --> 00:40:20.079
+explore more there's a bunch of details
+
+00:40:18.880 --> 00:40:21.800
+here this is going to be kind of a
+
+00:40:20.079 --> 00:40:24.599
+whirlwind tour you're not going to get
+
+00:40:21.800 --> 00:40:27.440
+super deep into anything um so hopefully
+
+00:40:24.599 --> 00:40:28.839
+this acts as a starting point more
+
+00:40:27.440 --> 00:40:33.800
+than anything
+
+00:40:28.839 --> 00:40:37.040
+else so interpretability in AI um
+
+00:40:33.800 --> 00:40:38.480
+the definition is it's the study of
+
+00:40:37.040 --> 00:40:40.440
+understanding the decisions that AI
+
+00:40:38.480 --> 00:40:42.640
+systems make and putting them into
+
+00:40:40.440 --> 00:40:44.280
+easily human understandable terms this
+
+00:40:42.640 --> 00:40:47.640
+can mean a lot of different things and
+
+00:40:44.280 --> 00:40:49.280
+this is often really hard um and the why
+
+00:40:47.640 --> 00:40:51.319
+is to use that understanding to
+
+00:40:49.280 --> 00:40:54.040
+iteratively design systems that
+
+00:40:51.319 --> 00:40:56.240
+are better they're more performant
+
+00:40:54.040 --> 00:40:59.240
+but also those that are more human
+
+00:40:56.240 --> 00:40:59.240
+understandable
+
+00:41:00.119 --> 00:41:06.599
+um so interpretability is this big blob
+
+00:41:03.720 --> 00:41:08.440
+but there's a bunch of other uh spheres
+
+00:41:06.599 --> 00:41:11.920
+that intersect with it this is a super
+
+00:41:08.440 --> 00:41:14.920
+incomplete list uh so bear with me
+
+00:41:11.920 --> 00:41:16.560
+causality and data intersect
+with this
+
+00:41:14.920 --> 00:41:19.000
+there's aspects that are interpretable
+
+00:41:16.560 --> 00:41:20.480
+there's aspects that matter here um
+
+00:41:19.000 --> 00:41:22.400
+explainable AI is another thing that
+
+00:41:20.480 --> 00:41:24.440
+you've probably heard this sits firmly
+
+00:41:22.400 --> 00:41:27.800
+in the interpretability blob and
+
+00:41:24.440 --> 00:41:30.520
+connects with ideas in causality and uh
+
+00:41:27.800 --> 00:41:32.680
+in data too um model interpretability
+
+00:41:30.520 --> 00:41:34.200
+sits on this kind of other side of
+
+00:41:32.680 --> 00:41:37.680
+things it intersects a little bit with
+
+00:41:34.200 --> 00:41:40.000
+causality and explainable AI but uh is a
+
+00:41:37.680 --> 00:41:42.280
+little bit separate from
+
+00:41:40.000 --> 00:41:43.880
+it and mechanistic interpretability
+
+00:41:42.280 --> 00:41:45.400
+which you've probably heard of
+
+00:41:43.880 --> 00:41:47.680
+it's gotten a lot of buzz recently kind
+
+00:41:45.400 --> 00:41:48.880
+of sits inside of model interpretability
+
+00:41:47.680 --> 00:41:51.680
+it's a special case of model
+
+00:41:48.880 --> 00:41:53.160
+interpretability I hope the mech interp people
+
+00:41:51.680 --> 00:41:56.640
+agree with me
+
+00:41:53.160 --> 00:41:58.040
+but um so yeah so historically we've
+
+00:41:56.640 --> 00:42:00.880
+been dealing with really really really
+
+00:41:58.040 --> 00:42:03.680
+small models you had Bayes nets this is
+
+00:42:00.880 --> 00:42:07.560
+a very small model um if all these
+
+00:42:03.680 --> 00:42:10.000
+are binary variables this is uh eight
+
+00:42:07.560 --> 00:42:12.680
+total parameters and only four of which
+
+00:42:10.000 --> 00:42:14.880
+are independent uh we also used to work
+
+00:42:12.680 --> 00:42:18.160
+with linear regression a lot and in the
+
+00:42:14.880 --> 00:42:20.680
+first case that's a nice line it can be two
+
+00:42:18.160 --> 00:42:23.240
+parameters the multivariate case again
+
+00:42:20.680 --> 00:42:25.880
+that's a small number of parameters
+
+00:42:23.240 --> 00:42:27.880
+we've moved to more things we've
+
+00:42:25.880 --> 00:42:30.400
+moved to
+
+00:42:27.880 --> 00:42:32.160
+MLPs that have larger weight matrices
+
+00:42:30.400 --> 00:42:33.920
+but all these are kind of digestible and
+
+00:42:32.160 --> 00:42:37.200
+interpretable so the interpretability
+
+00:42:33.920 --> 00:42:40.160
+world was sort of uh not super concerned
+
+00:42:37.200 --> 00:42:41.280
+with large ginormous things but we're
+
+00:42:40.160 --> 00:42:44.800
+not there
+
+00:42:41.280 --> 00:42:47.000
+anymore uh this is a language model this
+
+00:42:44.800 --> 00:42:50.839
+is still part of a language
+
+00:42:47.000 --> 00:42:51.960
+model now it's getting more and more and
+
+00:42:50.839 --> 00:42:55.119
+more
+
+00:42:51.960 --> 00:42:57.920
+hairy and this is just not
+
+00:42:55.119 --> 00:43:00.520
+interpretable um I mentioned
+
+00:42:57.920 --> 00:43:03.280
+on the first day of class that I hate
+
+00:43:00.520 --> 00:43:05.240
+when we update parameters of models I also
+
+00:43:03.280 --> 00:43:07.720
+hate when models are this big and this
+
+00:43:05.240 --> 00:43:10.000
+is a six layer Transformer this is way
+
+00:43:07.720 --> 00:43:15.920
+smaller than basically anything that we
+
+00:43:10.000 --> 00:43:18.040
+have um and this makes things very very
+
+00:43:15.920 --> 00:43:20.920
+uninterpretable um so we'll talk about
+
+00:43:18.040 --> 00:43:22.880
+one way that people sort of uh five
+
+00:43:20.920 --> 00:43:24.599
+years ago started addressing this
+
+00:43:22.880 --> 00:43:25.680
+problem and this is the idea
+
+00:43:24.599 --> 00:43:28.000
+of
+
+00:43:25.680 --> 00:43:30.880
+probing so how do we make sense of a
+
+00:43:28.000 --> 00:43:35.160
+giant model this is one way so we take
+
+00:43:30.880 --> 00:43:38.200
+our giant model we cut the top off
+
+00:43:35.160 --> 00:43:40.520
+basically um and now we have this thing
+
+00:43:38.200 --> 00:43:42.119
+we stick a probe on which actually in a lot
+
+00:43:40.520 --> 00:43:44.559
+of cases looks very similar to a
+
+00:43:42.119 --> 00:43:47.280
+language modeling head uh usually it's a
+
+00:43:44.559 --> 00:43:51.640
+small two layer or one layer
+
+00:43:47.280 --> 00:43:54.319
+MLP um and we basically treat the model
+
+00:43:51.640 --> 00:43:56.760
+as something that uh just exists and we
+
+00:43:54.319 --> 00:44:00.240
+only really care about the output of
+
+00:43:56.760 --> 00:44:03.240
+the model so more specifically what is a
+
+00:44:00.240 --> 00:44:05.720
+probe it's a classifier this green
+
+00:44:03.240 --> 00:44:07.680
+thing here uh that is specifically
+
+00:44:05.720 --> 00:44:09.200
+trained to predict some specific
+
+00:44:07.680 --> 00:44:11.480
+property from the pre-trained model's
+
+00:44:09.200 --> 00:44:16.440
+representations
+
+00:44:11.480 --> 00:44:18.480
+alone so um in 2019 Ian Tenney and folks
+
+00:44:16.440 --> 00:44:21.319
+introduced edge probing so this is a
+
+00:44:18.480 --> 00:44:23.240
+general method um it works to probe
+
+00:44:21.319 --> 00:44:27.559
+different types of information out of a
+
+00:44:23.240 --> 00:44:29.960
+model so this bottom part here uh yeah
+
+00:44:27.559 --> 00:44:33.160
+this bottom part here you pass in
+
+00:44:29.960 --> 00:44:36.520
+a sequence you pass it into a model this
+
+00:44:33.160 --> 00:44:38.839
+is BERT in their experiments often uh
+
+00:44:36.520 --> 00:44:40.960
+and that outputs a set of contextual
+
+00:44:38.839 --> 00:44:44.359
+vectors these contextual vectors can be
+
+00:44:40.960 --> 00:44:45.920
+at any layer um often it's near the
+
+00:44:44.359 --> 00:44:49.280
+top but we'll talk
+
+00:44:45.920 --> 00:44:51.079
+about uh the fact that this can work
+
+00:44:49.280 --> 00:44:53.359
+kind of across layers and
+
+00:44:51.079 --> 00:44:55.599
+different layers encode different information
+
+00:44:53.359 --> 00:44:58.960
+and on top of this you have this MLP
+
+00:44:55.599 --> 00:45:02.480
+that you train to output a prediction
+
+00:44:58.960 --> 00:45:05.599
+your model is always fixed um in
+
+00:45:02.480 --> 00:45:08.079
+these cases so you can do things like
+
+00:45:05.599 --> 00:45:09.880
+part of speech tagging where for each
+
+00:45:08.079 --> 00:45:12.400
+specific word you try to determine what
+
+00:45:09.880 --> 00:45:16.640
+its part of speech is and in that case
+
+00:45:12.400 --> 00:45:18.000
+of these S1 and S2 spans here uh only
+
+00:45:16.640 --> 00:45:19.440
+one of them is active because you're
+
+00:45:18.000 --> 00:45:21.440
+predicting for every single
+
+00:45:19.440 --> 00:45:23.240
+contextualized vector you're predicting
+
+00:45:21.440 --> 00:45:25.359
+whether that thing is a noun or a verb
+
+00:45:23.240 --> 00:45:27.440
+or something like this you can have
+
+00:45:25.359 --> 00:45:29.599
+other sorts of tasks too like entailment
+
+00:45:27.440 --> 00:45:32.520
+where you have two sequences and two
+
+00:45:29.599 --> 00:45:35.079
+spans um and you use the embeddings for
+
+00:45:32.520 --> 00:45:37.359
+those spans um for like sentence one and
+
+00:45:35.079 --> 00:45:39.319
+sentence two you pool them together in
+
+00:45:37.359 --> 00:45:43.359
+some way and then you pass them to this
+
+00:45:39.319 --> 00:45:47.480
+MLP and you see whether the MLP can uh
+
+00:45:43.359 --> 00:45:49.680
+solve that task so they did this uh in
+
+00:45:47.480 --> 00:45:52.559
+another paper uh BERT Rediscovers the
+
+00:45:49.680 --> 00:45:54.280
+Classical NLP Pipeline and there's a lot
+
+00:45:52.559 --> 00:45:57.079
+going on in this figure the only
+
+00:45:54.280 --> 00:45:59.599
+major thing here to take away um is
+
+00:45:57.079 --> 00:46:02.720
+these numbers that are in this like pink
+
+00:45:59.599 --> 00:46:05.359
+purple color um so these are a bunch of
+
+00:46:02.720 --> 00:46:07.960
+different uh properties like part of
+
+00:46:05.359 --> 00:46:11.319
+speech uh and a bunch of other
+
+00:46:07.960 --> 00:46:13.520
+things um and what they basically find
+
+00:46:11.319 --> 00:46:15.640
+is that at earlier layers in the model
+
+00:46:13.520 --> 00:46:18.760
+the things that are closer to the token
+
+00:46:15.640 --> 00:46:21.480
+level representation are more um
+
+00:46:18.760 --> 00:46:23.400
+extractable using a probe and the things
+
+00:46:21.480 --> 00:46:26.440
+that require more contextualized
+
+00:46:23.400 --> 00:46:29.440
+information are extractable from
+
+00:46:26.440 --> 00:46:32.359
+later layers in the model and so here's
+
+00:46:29.440 --> 00:46:34.599
+sort of a brief uh description of what
+
+00:46:32.359 --> 00:46:37.599
+these tasks are so the ones on the
+
+00:46:34.599 --> 00:46:40.040
+bottom are more semantic more
+
+00:46:37.599 --> 00:46:42.040
+contextualized like uh semantic proto-
+
+00:46:40.040 --> 00:46:43.880
+roles and relation
+
+00:46:42.040 --> 00:46:45.839
+classification and then the first few
+
+00:46:43.880 --> 00:46:48.200
+are more you know chunking and part of
+
+00:46:45.839 --> 00:46:51.880
+speech tagging and um dependency
+
+00:46:48.200 --> 00:46:51.880
+labeling and these sorts of
+
+00:46:52.040 --> 00:46:57.200
+tasks um so there's a bunch of issues
+
+00:46:54.480 --> 00:46:59.520
+with probing um and there aren't as many
+
+00:46:57.200 --> 00:47:03.559
+probing papers now as there were many
+
+00:46:59.520 --> 00:47:05.960
+years ago um and so if your probe let's
+
+00:47:03.559 --> 00:47:07.960
+say your probe
+
+00:47:05.960 --> 00:47:09.920
+works it's possible that the
+
+00:47:07.960 --> 00:47:12.200
+representation actually encodes that
+
+00:47:09.920 --> 00:47:14.520
+information it's also possible that it
+
+00:47:12.200 --> 00:47:16.359
+doesn't and the probe solved the task by
+
+00:47:14.520 --> 00:47:18.119
+itself uh keep in mind that you're
+
+00:47:16.359 --> 00:47:20.640
+learning this probe you're training this
+
+00:47:18.119 --> 00:47:22.720
+probe on labeled data uh let's say your
+
+00:47:20.640 --> 00:47:24.599
+probe doesn't work does that tell you
+
+00:47:22.720 --> 00:47:27.119
+anything maybe not maybe the
+
+00:47:24.599 --> 00:47:30.280
+representation lacks the information or
+
+00:47:27.119 --> 00:47:31.800
+maybe your probe
+
+00:47:30.280 --> 00:47:33.800
+isn't actually able to
+
+00:47:31.800 --> 00:47:35.240
+disentangle that information from your
+
+00:47:33.800 --> 00:47:36.720
+representation maybe the probe is not
+
+00:47:35.240 --> 00:47:38.359
+the right function class maybe you
+
+00:47:36.720 --> 00:47:40.839
+poorly trained your probe there's
+
+00:47:38.359 --> 00:47:42.280
+hyperparameters for your probe so often
+
+00:47:40.839 --> 00:47:43.000
+times your probe doesn't give you that
+
+00:47:42.280 --> 00:47:46.119
+much
+
+00:47:43.000 --> 00:47:49.040
+information
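+
+[Editor's note: a minimal probing sketch assuming the Hugging Face transformers library: the encoder is frozen and only the small MLP probe is trained on its representations; the two-example noun/verb toy task is invented for illustration.]
+
+import torch
+import torch.nn as nn
+from transformers import AutoModel, AutoTokenizer
+
+tok = AutoTokenizer.from_pretrained("bert-base-uncased")
+enc = AutoModel.from_pretrained("bert-base-uncased")
+for p in enc.parameters():
+    p.requires_grad = False          # the underlying model is always fixed
+
+probe = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 2))
+opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
+
+# toy task: is the word at position 2 a noun (1) or a verb (0)?
+examples = [("the dog runs", 1), ("they run fast", 0)]
+for _ in range(10):
+    for text, label in examples:
+        ids = tok(text, return_tensors="pt")
+        with torch.no_grad():
+            hidden = enc(**ids).last_hidden_state   # frozen representations
+        logits = probe(hidden[0, 2])                # probe one token position
+        loss = nn.functional.cross_entropy(logits.unsqueeze(0),
+                                           torch.tensor([label]))
+        opt.zero_grad(); loss.backward(); opt.step()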
+there's more problems too so
+
+00:47:46.119 --> 00:47:50.800
+often we want to probe tasks themselves
+
+00:47:49.040 --> 00:47:53.240
+and that requires a lot of supervised
+
+00:47:50.800 --> 00:47:55.880
+data um but we can't collect a lot of
+
+00:47:53.240 --> 00:47:58.440
+supervised data so we collect some of it
+
+00:47:55.880 --> 00:48:00.040
+and that instead produces this
+
+00:47:58.440 --> 00:48:02.480
+convenience sample that we have that's a
+
+00:48:00.040 --> 00:48:04.119
+data set that is a convenience sample of
+
+00:48:02.480 --> 00:48:07.000
+your task so really what you're probing
+
+00:48:04.119 --> 00:48:10.040
+is the data set and so with all these
+
+00:48:07.000 --> 00:48:11.800
+limitations it's fallen out of
+
+00:48:10.040 --> 00:48:13.599
+favor a little bit it's still very
+
+00:48:11.800 --> 00:48:16.400
+useful but it's fallen out of favor as
+
+00:48:13.599 --> 00:48:20.000
+like a core model interpretability
+
+00:48:16.400 --> 00:48:22.160
+idea um also probes designed in this way
+
+00:48:20.000 --> 00:48:26.079
+are correlative not
+
+00:48:22.160 --> 00:48:27.880
+really causative so your underlying
+
+00:48:26.079 --> 00:48:29.640
+model is trained in a specific way all
+
+00:48:27.880 --> 00:48:31.359
+of that information is disentangled and
+
+00:48:29.640 --> 00:48:32.920
+kind of thrown away and you're only
+
+00:48:31.359 --> 00:48:34.599
+looking at the output representation and
+
+00:48:32.920 --> 00:48:36.559
+you're saying is my output
+
+00:48:34.599 --> 00:48:39.200
+representation correlated with the thing
+
+00:48:36.559 --> 00:48:42.400
+that I'm training this probe for there's
+
+00:48:39.200 --> 00:48:44.960
+no notion of intervening on this latent
+
+00:48:42.400 --> 00:48:46.559
+space there's no notion of causation
+
+00:48:44.960 --> 00:48:49.119
+really so you're just seeing whether
+
+00:48:46.559 --> 00:48:52.559
+your representation is correlated with
+
+00:48:49.119 --> 00:48:54.480
+the property that you're probing for um
+
+00:48:52.559 --> 00:48:56.200
+and with these limitations the
+
+00:48:54.480 --> 00:48:58.720
+community's moved a little bit away from
+
+00:48:56.200 --> 00:48:58.720
+this area
+
+00:48:59.040 --> 00:49:02.200
+uh there's a bunch of other probing
+
+00:49:00.240 --> 00:49:04.920
+works so a bunch of people aim to solve
+
+00:49:02.200 --> 00:49:06.000
+a bunch of these problems um and uh for
+
+00:49:04.920 --> 00:49:09.200
+the sake of time I'm not going to go
+
+00:49:06.000 --> 00:49:12.599
+into all of these but uh I'd encourage
+
+00:49:09.200 --> 00:49:14.000
+you to look into them for some
+
+00:49:12.599 --> 00:49:17.319
+of these problems they're able to
+
+00:49:14.000 --> 00:49:19.520
+control for um like the
+
+00:49:17.319 --> 00:49:22.200
+complexity of the probe and
+
+00:49:19.520 --> 00:49:24.359
+things like this um but even despite
+
+00:49:22.200 --> 00:49:25.720
+that probing is sort of slowly kind of
+
+00:49:24.359 --> 00:49:28.160
+falling out of
+
+00:49:25.720 --> 00:49:29.640
+favor uh so before I move into model
+
+00:49:28.160 --> 00:49:31.920
+interpretability are there any questions
+
+00:49:29.640 --> 00:49:31.920
+on
+
+00:49:35.520 --> 00:49:40.599
+probing all right so what is model
+
+00:49:38.680 --> 00:49:44.000
+interpretability so this is my
+
+00:49:40.599 --> 00:49:45.400
+definition here uh this is the study of
+
+00:49:44.000 --> 00:49:46.599
+understanding the internals of models
+
+00:49:45.400 --> 00:49:49.079
+for example their weights and
+
+00:49:46.599 --> 00:49:51.160
+activations putting those insights in
+
+00:49:49.079 --> 00:49:53.319
+human intelligible terms and using that
+
+00:49:51.160 --> 00:49:55.920
+insight to both patch current models and
+
+00:49:53.319 --> 00:49:57.359
+develop better ones if we're not sort of able
+
+00:49:55.920 --> 00:49:58.760
+to do both of these things patching
+
+00:49:57.359 --> 00:50:00.160
+current models and developing better ones
+
+00:49:58.760 --> 00:50:02.440
+we're kind of doing interpretability for
+
+00:50:00.160 --> 00:50:04.960
+interpretability's sake that's nice and
+
+00:50:02.440 --> 00:50:08.079
+fun but it's not as applicable
+
+00:50:04.960 --> 00:50:09.720
+for the community so you've probably
+
+00:50:08.079 --> 00:50:12.240
+heard of the term mechanistic
+
+00:50:09.720 --> 00:50:14.480
+interpretability it's in my opinion a
+
+00:50:12.240 --> 00:50:16.559
+subfield of model interpretability and
+
+00:50:14.480 --> 00:50:19.319
+this is sort of my definition it
+
+00:50:16.559 --> 00:50:21.440
+aligns reasonably well with the core
+
+00:50:19.319 --> 00:50:22.720
+mechanistic interpretability people um
+
+00:50:21.440 --> 00:50:24.880
+but it's the study of reverse
+
+00:50:22.720 --> 00:50:26.280
+engineering parametric models often
+
+00:50:24.880 --> 00:50:28.839
+neural networks because that's what we
+
+00:50:26.280 --> 00:50:31.400
+use from their learned weights into more
+
+00:50:28.839 --> 00:50:32.839
+human interpretable algorithmic units uh
+
+00:50:31.400 --> 00:50:36.839
+and often they call these things
+
+00:50:32.839 --> 00:50:39.440
+circuits um and these are basically
+
+00:50:36.839 --> 00:50:42.880
+functions that uh you can describe in a
+
+00:50:39.440 --> 00:50:45.000
+human interpretable way that sit inside
+
+00:50:42.880 --> 00:50:46.760
+models um there's a bunch of notable
+
+00:50:45.000 --> 00:50:50.720
+work again for the sake of time I'm
+
+00:50:46.760 --> 00:50:54.319
+going to just briefly talk about them
+
+00:50:50.720 --> 00:50:56.839
+um so the first one is they look
+
+00:50:54.319 --> 00:50:58.440
+into analyzing small MLPs and
+
+00:50:56.839 --> 00:51:01.400
+Transformers to build out the intuition
+
+00:50:58.440 --> 00:51:04.119
+of what circuits exist um and a lot
+
+00:51:01.400 --> 00:51:06.559
+of this work came out of earlier work
+
+00:51:04.119 --> 00:51:08.480
+on LSTMs and doing similar sorts of
+
+00:51:06.559 --> 00:51:11.880
+things with
+
+00:51:08.480 --> 00:51:14.319
+LSTMs um and they find a bunch of things
+
+00:51:11.880 --> 00:51:15.839
+one thing that they find is this idea of
+
+00:51:14.319 --> 00:51:19.599
+induction heads and these induction
+
+00:51:15.839 --> 00:51:21.760
+heads they say sort of help
+
+00:51:19.599 --> 00:51:24.680
+explain why models can do in context
+
+00:51:21.760 --> 00:51:26.599
+learning so an induction head is
+
+00:51:24.680 --> 00:51:28.839
+something that it's a specific
+
+00:51:26.599 --> 00:51:32.440
+attention head that kind of allows you
+
+00:51:28.839 --> 00:51:35.599
+to um when given a prefix
+
+00:51:32.440 --> 00:51:37.559
+kind of copy the necessary resulting
+
+00:51:35.599 --> 00:51:39.640
+token from the underlying training data
+
+00:51:37.559 --> 00:51:41.720
+that the model has seen before so in context
+
+00:51:39.640 --> 00:51:44.599
+learning what you generally provide is
+
+00:51:41.720 --> 00:51:46.440
+some sort of prefix and then you uh
+
+00:51:44.599 --> 00:51:48.480
+provide some example and hopefully you
+
+00:51:46.440 --> 00:51:51.040
+know you can classify the thing or
+
+00:51:48.480 --> 00:51:53.280
+something like this um it's saying
+
+00:51:51.040 --> 00:51:56.200
+that there's these attention heads
+
+00:51:53.280 --> 00:51:59.400
+loosely uh that exist that are able to
+
+00:51:56.200 --> 00:52:00.680
+copy and unearth that information for
+
+00:51:59.400 --> 00:52:03.319
+a specific
+
+00:52:00.680 --> 00:52:07.200
+context um other things that
+
+00:52:03.319 --> 00:52:09.880
+they've done is um on neurons so uh this
+
+00:52:07.200 --> 00:52:13.160
+polysemanticity so what this kind
+
+00:52:09.880 --> 00:52:15.240
+of means is that your neuron is a
+
+00:52:13.160 --> 00:52:18.000
+uh you have a set of neurons in your
+
+00:52:15.240 --> 00:52:20.880
+activation space so let's say at layer
+
+00:52:18.000 --> 00:52:23.200
+10 in your model you have an output um
+
+00:52:20.880 --> 00:52:26.280
+and so your activations are let's say a
+
+00:52:23.200 --> 00:52:28.400
+thousand dimensional here each of
+
+00:52:26.280 --> 00:52:31.319
+those thousand individual neurons may
+
+00:52:28.400 --> 00:52:35.839
+represent more than one specific
+
+00:52:31.319 --> 00:52:37.839
+feature um and so they talk about
+
+00:52:35.839 --> 00:52:41.280
+this in that context and this is kind of
+
+00:52:37.839 --> 00:52:43.240
+a theory but you can think about um
+
+00:52:41.280 --> 00:52:46.359
+trying to process
+
+00:52:43.240 --> 00:52:49.400
+input and when you're processing a
+
+00:52:46.359 --> 00:52:50.960
+vocab of size 50,000 or 250,000 at some
+
+00:52:49.400 --> 00:52:52.359
+point in the model we're actually
+
+00:52:50.960 --> 00:52:55.400
+compressing it down to the hidden
+
+00:52:52.359 --> 00:52:58.119
+dimension and so in some cases that
+
+00:52:55.400 --> 00:53:00.319
+looks like you're going to compress a
+
+00:52:58.119 --> 00:53:03.440
+much richer feature representation down
+
+00:53:00.319 --> 00:53:06.359
+into a smaller set of neurons so it is
+
+00:53:03.440 --> 00:53:08.319
+reasonable to believe that um a specific
+
+00:53:06.359 --> 00:53:10.799
+neuron will represent multiple of those
+
+00:53:08.319 --> 00:53:15.480
+features and given the structure of our
+
+00:53:10.799 --> 00:53:18.720
+weight matrices um it is the case
+
+00:53:15.480 --> 00:53:21.839
+that if they are representing more
+
+00:53:18.720 --> 00:53:23.960
+features than the number of elements in
+
+00:53:21.839 --> 00:53:26.000
+the actual or number of neurons in the
+
+00:53:23.960 --> 00:53:28.680
+activation space then many of these
+
+00:53:26.000 --> 00:53:30.880
+features are linearly dependent and so we're
+
+00:53:28.680 --> 00:53:35.400
+not really able to utilize them that
+
+00:53:30.880 --> 00:53:37.960
+well um they talk about this
+
+00:53:35.400 --> 00:53:42.200
+they don't talk about this in the
+
+00:53:37.960 --> 00:53:44.799
+most uh the best way but uh it seems
+
+00:53:42.200 --> 00:53:48.040
+kind of clear to me that um since you
+
+00:53:44.799 --> 00:53:50.880
+have embedding matrices that are um not
+
+00:53:48.040 --> 00:53:53.599
+square that these neurons
+
+00:53:50.880 --> 00:53:56.400
+have to exist um and they have to
+
+00:53:53.599 --> 00:53:59.200
+incorporate multiple features at once
+
+00:53:56.400 --> 00:54:02.559
+multiple redundant features at
+
+00:53:59.200 --> 00:54:04.680
+once um so before I move on to the rest
+
+00:54:02.559 --> 00:54:07.839
+of model interpretability any questions
+
+00:54:04.680 --> 00:54:07.839
+about mechanistic
+
+00:54:09.880 --> 00:54:12.880
+interpretability
+
+00:54:21.480 --> 00:54:28.040
+yeah so most of their studies are for uh
+
+00:54:24.920 --> 00:54:29.720
+a very small set of models and
+
+00:54:28.040 --> 00:54:32.040
+most of these are old GPT models there
+
+00:54:29.720 --> 00:54:34.160
+have been a few works like in the last
+
+00:54:32.040 --> 00:54:36.760
+couple of months on doing this for the
+
+00:54:34.160 --> 00:54:39.720
+Llama based models um it seems like this
+
+00:54:36.760 --> 00:54:42.040
+is a more general phenomenon for
+
+00:54:39.720 --> 00:54:43.760
+language models it also is the case
+
+00:54:42.040 --> 00:54:46.839
+that certain attention heads specialize
+
+00:54:43.760 --> 00:54:49.480
+and I talk about them a little bit um
+
+00:54:46.839 --> 00:54:51.599
+in the activations part um but yeah
+
+00:54:49.480 --> 00:54:53.799
+all attention
+
+00:54:51.599 --> 00:54:56.400
+heads aren't created equal they start
+
+00:54:53.799 --> 00:55:00.280
+out this way and it seems to be a general
+
+00:54:56.400 --> 00:55:01.799
+principle and one other thing you
+
+00:55:00.280 --> 00:55:04.520
+might know about this better than I do
+
+00:55:01.799 --> 00:55:06.520
+but I think there are some preliminary
+
+00:55:04.520 --> 00:55:09.160
+works that say that Transformers seem to
+
+00:55:06.520 --> 00:55:11.720
+be particularly good at doing things
+
+00:55:09.160 --> 00:55:15.160
+like induction heads compared
+
+00:55:11.720 --> 00:55:17.200
+to uh recurrent models and there was
+
+00:55:15.160 --> 00:55:20.720
+a paper really recently about comparing
+
+00:55:17.200 --> 00:55:23.599
+like Mamba and um Transformer based
+
+00:55:20.720 --> 00:55:26.400
+models Mamba being a uh kind of model
+
+00:55:23.599 --> 00:55:30.280
+closer to a recurrent network which we're
+
+00:55:26.400 --> 00:55:33.119
+also going to talk about but um so I
+
+00:55:30.280 --> 00:55:37.319
+think there's some indication that
+
+00:55:33.119 --> 00:55:39.920
+Transformers actually kind of are unique or
+
+00:55:37.319 --> 00:55:43.680
+are at least like better at kind of in
+
+00:55:39.920 --> 00:55:46.760
+context learning than others are so
+
+00:55:43.680 --> 00:55:48.920
+there are some
+
+00:55:46.760 --> 00:55:50.839
+interesting implications of that which
+
+00:55:48.920 --> 00:55:53.240
+is like well if Transformers are good
+
+00:55:50.839 --> 00:55:57.359
+what's better than a Transformer yeah like
+
+00:55:53.240 --> 00:55:58.799
+naturally learning this sort of thing so um
+
+00:55:57.359 --> 00:56:00.720
+they're good at yeah they're like really
+
+00:55:58.799 --> 00:56:04.039
+good at copying and like maintaining
+
+00:56:00.720 --> 00:56:06.799
+information more so um and yeah I
+
+00:56:04.039 --> 00:56:08.200
+think it'd be cool to like be able to I
+
+00:56:06.799 --> 00:56:09.839
+don't know how to do this but be able to
+
+00:56:08.200 --> 00:56:11.440
+extract that kind of information like
+
+00:56:09.839 --> 00:56:13.359
+what part of the Transformer is actually
+
+00:56:11.440 --> 00:56:15.119
+helping it do this copying mechanism or
+
+00:56:13.359 --> 00:56:17.799
+like being a better in context learner
+
+00:56:15.119 --> 00:56:20.039
+then we can develop a better structure a
+
+00:56:17.799 --> 00:56:23.119
+slightly better structure than a
+
+00:56:20.039 --> 00:56:26.000
+Transformer um hopefully someone comes
+
+00:56:23.119 --> 00:56:28.240
+up with that soon but cool any other
+
+00:56:26.000 --> 00:56:28.240
+questions
+
+00:56:29.799 --> 00:56:34.359
+all right so let's move into
+
+00:56:32.240 --> 00:56:35.880
+model interpretability so there are
+
+00:56:34.359 --> 00:56:37.480
+weights and there are activations I
+
+00:56:35.880 --> 00:56:39.160
+mentioned these are the two
+
+00:56:37.480 --> 00:56:41.119
+things that
+
+00:56:39.160 --> 00:56:43.440
+we're going to look at so what can you
+
+00:56:41.119 --> 00:56:45.480
+do with the weights of an already trained model
+
+00:56:43.440 --> 00:56:47.799
+really you can just edit them and then
+
+00:56:45.480 --> 00:56:49.200
+kind of see what happens activations
+
+00:56:47.799 --> 00:56:51.240
+similarly you can look at the
+
+00:56:49.200 --> 00:56:52.720
+activations for different inputs you can
+
+00:56:51.240 --> 00:56:54.520
+poke them with a stick and see what
+
+00:56:52.720 --> 00:56:56.359
+happens a lot of my research is poking
+
+00:56:54.520 --> 00:56:58.559
+models with a stick and looking at the
+
+00:56:56.359 --> 00:57:00.920
+activations it's like predominantly what
+
+00:56:58.559 --> 00:57:02.240
+I've done so we'll talk about that um
+
+00:57:00.920 --> 00:57:04.359
+and the technical term for this is
+
+00:57:02.240 --> 00:57:06.599
+intervening on them by adding some
+
+00:57:04.359 --> 00:57:07.839
+vector or other sort of manipulation to
+
+00:57:06.599 --> 00:57:09.440
+the latent space but really what you're
+
+00:57:07.839 --> 00:57:13.960
+doing is like
+
+00:57:09.440 --> 00:57:17.599
+poking um so when you look at weights uh
+
+00:57:13.960 --> 00:57:19.920
+one class of methods or area is
+
+00:57:17.599 --> 00:57:21.920
+on model editing fine-tuning is like the
+
+00:57:19.920 --> 00:57:23.480
+most extreme version of model editing
+
+00:57:21.920 --> 00:57:26.599
+usually these things are much more
+
+00:57:23.480 --> 00:57:29.640
+targeted um so in the model editing sort
+
+00:57:26.599 --> 00:57:32.160
+of landscape your goal or your target is
+
+00:57:29.640 --> 00:57:35.119
+you have a concept or a specific fact
+
+00:57:32.160 --> 00:57:37.440
+that needs to be changed in the model um
+
+00:57:35.119 --> 00:57:39.640
+and your approach here is you update or
+
+00:57:37.440 --> 00:57:41.359
+edit the weights of the model to edit
+
+00:57:39.640 --> 00:57:43.640
+the model's belief of that fact or
+
+00:57:41.359 --> 00:57:45.599
+concept and ideally you do this without
+
+00:57:43.640 --> 00:57:47.319
+changing any of the other behavior of
+
+00:57:45.599 --> 00:57:49.760
+the model so for example let's say
+
+00:57:47.319 --> 00:57:51.920
+you're trying to say that Graham is no
+
+00:57:49.760 --> 00:57:54.559
+longer a professor at CMU but is a
+
+00:57:51.920 --> 00:57:57.319
+professor at Stanford you don't want
+
+00:57:54.559 --> 00:57:59.960
+every single person at CMU to now be a
+
+00:57:57.319 --> 00:58:02.920
+professor at or uh now be affiliated with
+
+00:57:59.960 --> 00:58:07.839
+Stanford right um Graham please do not
+
+00:58:02.920 --> 00:58:09.039
+go to Stanford um so here's one approach
+
+00:58:07.839 --> 00:58:11.720
+paper that came out a couple years ago there's
+
+00:58:09.039 --> 00:58:13.559
+a lot of work done here uh in the
+
+00:58:11.720 --> 00:58:15.799
+model editing world I'll give you sort
+
+00:58:13.559 --> 00:58:17.440
+of a really brief overview of this but
+
+00:58:15.799 --> 00:58:20.520
+basically they have facts that they want
+
+00:58:17.440 --> 00:58:22.400
+to manipulate um so for
+
+00:58:20.520 --> 00:58:24.680
+example the example that they give
+
+00:58:22.400 --> 00:58:26.640
+in the figure is they want to associate
+
+00:58:24.680 --> 00:58:30.960
+the Space Needle with Paris the Space
+
+00:58:26.640 --> 00:58:32.520
+Needle is a cool needle in Seattle it
+
+00:58:30.960 --> 00:58:36.000
+has nothing to do with Paris but Paris
+
+00:58:32.520 --> 00:58:38.400
+also has a tower so it's close um so
+
+00:58:36.000 --> 00:58:40.920
+they use causal tracing to isolate the
+
+00:58:38.400 --> 00:58:43.839
+causal effect uh of the individual
+
+00:58:40.920 --> 00:58:45.799
+hidden states for this fact so they
+
+00:58:43.839 --> 00:58:47.839
+basically continuously perturb the input
+
+00:58:45.799 --> 00:58:49.760
+do a bunch of forward passes and
+
+00:58:47.839 --> 00:58:51.720
+sequentially find the specific hidden
+
+00:58:49.760 --> 00:58:55.280
+states that are associated kind of with
+
+00:58:51.720 --> 00:58:56.839
+this fact um then they make an edit and
+
+00:58:55.280 --> 00:58:59.119
+their edit
+
+00:58:56.839 --> 00:59:02.039
+uh looks like this thing on the right um
+
+00:58:59.119 --> 00:59:05.280
+so they treat this pair Space Needle and
+
+00:59:02.039 --> 00:59:07.240
+Paris as this uh key value pair where
+
+00:59:05.280 --> 00:59:10.359
+Space Needle is the key you pass this
+
+00:59:07.240 --> 00:59:12.480
+into um into this weight matrix this
+
+00:59:10.359 --> 00:59:14.640
+original part of the model you want this
+
+00:59:12.480 --> 00:59:16.599
+now instead of outputting Seattle to
+
+00:59:14.640 --> 00:59:19.119
+output Paris and they have some nice
+
+00:59:16.599 --> 00:59:21.599
+math and a closed form solution to
+
+00:59:19.119 --> 00:59:23.880
+identify this this is super expensive
+
+00:59:21.599 --> 00:59:25.359
+because the causal tracing
+
+00:59:23.880 --> 00:59:27.680
+part has to do a bunch of forward
+
+00:59:25.359 --> 00:59:30.680
+passes um and they make this a little
+
+00:59:27.680 --> 00:59:33.480
+bit better in future work they
+
+00:59:30.680 --> 00:59:37.920
+also do sort of a more
+
+00:59:33.480 --> 00:59:40.160
+comprehensive um edit um so these are
+
+00:59:37.920 --> 00:59:44.599
+kind of like some of the things you can do
+
+00:59:40.160 --> 00:59:46.799
+um I'm less excited about model editing
+
+00:59:44.599 --> 00:59:49.039
+um there's some work on model
+
+00:59:46.799 --> 00:59:51.319
+editing sort of it's hard to
+
+00:59:49.039 --> 00:59:53.160
+control what other things break
+
+00:59:51.319 --> 00:59:56.240
+and there's some work showing that when you
+
+00:59:53.160 --> 01:00:00.000
+edit a specific fact things start being
+
+00:59:56.240 --> 01:00:02.680
+weird and being biased in other ways um
+
+01:00:00.000 --> 01:00:05.760
+and so
+
+01:00:02.680 --> 01:00:09.119
+yeah
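+
+[Editor's note: a schematic toy version of the key-value editing idea, not the paper's actual method: a rank-one update that makes one weight matrix map a chosen key vector to a new value while leaving orthogonal directions untouched.]
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+W = rng.normal(size=(4, 4))          # stand-in for one MLP weight matrix
+k = rng.normal(size=4)               # "Space Needle" key direction
+v_new = rng.normal(size=4)           # desired output ("Paris") value
+
+# Rank-one update: W_edit @ k = v_new, and W_edit @ x = W @ x for x orthogonal to k.
+W_edit = W + np.outer(v_new - W @ k, k) / (k @ k)
+
+print(np.allclose(W_edit @ k, v_new))             # True: the fact is "edited"
+x = rng.normal(size=4); x -= (x @ k) / (k @ k) * k  # a direction orthogonal to k
+print(np.allclose(W_edit @ x, W @ x))             # True: unrelated keys unchanged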
+00:59:40.160 --> 00:59:46.799
+um I'm less excited about model editing
+
+00:59:44.599 --> 00:59:49.039
+um there's some work on model
+
+00:59:46.799 --> 00:59:51.319
+editing sort of it's hard to
+
+00:59:49.039 --> 00:59:53.160
+control what other things break
+
+00:59:51.319 --> 00:59:56.240
+and there's some work showing when you
+
+00:59:53.160 --> 01:00:00.000
+edit a specific fact things start being
+
+00:59:56.240 --> 01:00:02.680
+weird and being biased in other ways um
+
+01:00:00.000 --> 01:00:05.760
+and so
+
+01:00:02.680 --> 01:00:09.119
+yeah for all kinds of relational
+
+01:00:05.760 --> 01:00:11.880
+information like X is in Y would they
+
+01:00:09.119 --> 01:00:14.319
+all localize to the same layer or is it
+
+01:00:11.880 --> 01:00:16.920
+specific for this specific example it
+
+01:00:14.319 --> 01:00:19.039
+looks at this specific point uh for
+
+01:00:16.920 --> 01:00:21.039
+every example they'll probably find
+
+01:00:19.039 --> 01:00:22.119
+different regions and a different degree
+
+01:00:21.039 --> 01:00:25.680
+of
+
+01:00:22.119 --> 01:00:27.960
+manipulation um and yeah it gets a
+
+01:00:25.680 --> 01:00:30.920
+little unprincipled kind of quickly it's
+
+01:00:27.960 --> 01:00:33.000
+not like they're able to find you know a
+
+01:00:30.920 --> 01:00:35.680
+specific attention head or a
+
+01:00:33.000 --> 01:00:38.240
+specific layer or specific weight matrix
+
+01:00:35.680 --> 01:00:42.400
+that corresponds to like
+
+01:00:38.240 --> 01:00:46.720
+all relations of a specific
+
+01:00:42.400 --> 01:00:49.160
+type any other questions yeah this is
+
+01:00:46.720 --> 01:00:51.119
+actually just a question if you know um
+
+01:00:49.160 --> 01:00:53.200
+it seems like more frequent facts might
+
+01:00:51.119 --> 01:00:55.240
+appear in both places in the model do
+
+01:00:53.200 --> 01:00:59.280
+you know if that's actually the case I
+
+01:00:55.240 --> 01:01:02.440
+have no idea but uh I would imagine that
+
+01:00:59.280 --> 01:01:06.240
+um it probably could occur in more
+
+01:01:02.440 --> 01:01:08.160
+places but also um a lot of the
+
+01:01:06.240 --> 01:01:10.119
+information is redundant anyway in the
+
+01:01:08.160 --> 01:01:11.720
+model especially for larger models so
+
+01:01:10.119 --> 01:01:13.599
+you might have to make targeted
+
+01:01:11.720 --> 01:01:15.480
+interventions in multiple places but
+
+01:01:13.599 --> 01:01:17.680
+it's possible that one intervention in
+
+01:01:15.480 --> 01:01:21.039
+one place sufficiently destroys like
+
+01:01:17.680 --> 01:01:22.680
+contextualized information in other
+
+01:01:21.039 --> 01:01:24.839
+places if it's close um it
+
+01:01:22.680 --> 01:01:28.200
+depends on how big this intervention is
+
+01:01:24.839 --> 01:01:30.520
+if it's like hitting it with a hammer
+
+01:01:28.200 --> 01:01:33.359
+rather than some nice fine-grained
+
+01:01:30.520 --> 01:01:36.839
+thing but that'd be a good
+
+01:01:33.359 --> 01:01:36.839
+experiment to see um any other
+
+01:01:37.240 --> 01:01:41.559
+questions all right so we'll move into
+
+01:01:39.760 --> 01:01:43.680
+the stuff that I'm most familiar with
+
+01:01:41.559 --> 01:01:46.319
+and some of my work so looking at
+
+01:01:43.680 --> 01:01:48.319
+activations um so this is work
+
+01:01:46.319 --> 01:01:50.480
+I've been doing for a while uh this idea
+
+01:01:48.319 --> 01:01:52.799
+of steering vectors so I mentioned I
+
+01:01:50.480 --> 01:01:54.480
+poke models with a stick the steering
+
+01:01:52.799 --> 01:01:57.000
+vector is that stick so it's basically a
+
+01:01:54.480 --> 01:01:59.000
+fixed-length vector that steers a language
+
+01:01:57.000 --> 01:02:00.920
+model to generate a specific sequence
+
+01:01:59.000 --> 01:02:02.720
+exactly when added to the hidden states
+
+01:02:00.920 --> 01:02:06.319
+of a model at a specific
+
+01:02:02.720 --> 01:02:09.000
+location um and I'll read this
+
+01:02:06.319 --> 01:02:11.400
+again there's a very specific
+
+01:02:09.000 --> 01:02:13.319
+form that I wrote this in so uh it's
+
+01:02:11.400 --> 01:02:15.359
+a fixed-length vector that steers a
+
+01:02:13.319 --> 01:02:17.640
+language model to generate a specific
+
+01:02:15.359 --> 01:02:19.359
+sequence exactly when added to the
+
+01:02:17.640 --> 01:02:22.559
+hidden states of a model at a specific
+
+01:02:19.359 --> 01:02:24.480
+point so this is different than um a
+
+01:02:22.559 --> 01:02:26.839
+soft prompt or different than a model
+
+01:02:24.480 --> 01:02:29.520
+editing sort of approach
+
+01:02:26.839 --> 01:02:31.400
+um in this case there is a vector that
+
+01:02:29.520 --> 01:02:32.960
+corresponds to a sequence and that
+
+01:02:31.400 --> 01:02:35.359
+vector doesn't correspond to any other
+
+01:02:32.960 --> 01:02:36.640
+sequence there could be multiple vectors
+
+01:02:35.359 --> 01:02:39.079
+and it turns out there are multiple
+
+01:02:36.640 --> 01:02:41.799
+vectors that correspond to that sequence
+
+01:02:39.079 --> 01:02:44.160
+it'll be a little bit clearer um based
+
+01:02:41.799 --> 01:02:46.279
+on how we extract these
+
+01:02:44.160 --> 01:02:48.839
+things um so this is the stick that
+
+01:02:46.279 --> 01:02:52.000
+we're poking the language
+
+01:02:48.839 --> 01:02:53.599
+model with um so how do we extract them
+
+01:02:52.000 --> 01:02:57.400
+so this is
+
+01:02:53.599 --> 01:03:00.200
+GPT-2 um basically this z steer thing on
+
+01:02:57.400 --> 01:03:03.240
+the left this is the steering vector
+
+01:03:00.200 --> 01:03:05.799
+this gets initialized randomly um
+
+01:03:03.240 --> 01:03:09.520
+in a reasonable way
+
+01:03:05.799 --> 01:03:11.440
+uniformly and small um and for any
+
+01:03:09.520 --> 01:03:14.000
+specific sequence that we
+
+01:03:11.440 --> 01:03:17.680
+want the model to generate we
+
+01:03:14.000 --> 01:03:19.400
+optimize this steering vector z steer uh
+
+01:03:17.680 --> 01:03:21.559
+to generate that sequence keeping the
+
+01:03:19.400 --> 01:03:23.960
+rest of the model entirely fixed so
+
+01:03:21.559 --> 01:03:26.200
+think about it as we're nudging a
+
+01:03:23.960 --> 01:03:29.880
+frozen model to be able to generate a
+
+01:03:26.200 --> 01:03:31.680
+specific sequence at a specific time um
+
+01:03:29.880 --> 01:03:33.880
+and we have a lot of different options
+
+01:03:31.680 --> 01:03:35.559
+on where to inject the steering vector
+
+01:03:33.880 --> 01:03:37.520
+we can put it basically anywhere in the
+
+01:03:35.559 --> 01:03:41.799
+model we can put it at any time step any
+
+01:03:37.520 --> 01:03:43.839
+number of these things in practice um
+
+01:03:41.799 --> 01:03:45.839
+providing it just at the first time step
+
+01:03:43.839 --> 01:03:48.039
+and somewhere in the middle of the model
+
+01:03:45.839 --> 01:03:52.480
+basically not the first layer and not
+
+01:03:48.039 --> 01:03:56.240
+the last layer works pretty well um and
+
+01:03:52.480 --> 01:04:00.279
+so more formally um don't worry about
+
+01:03:56.240 --> 01:04:03.640
+the notation um but right here we
+
+01:04:00.279 --> 01:04:06.559
+initialize um this z steer and for a few
+
+01:04:03.640 --> 01:04:08.039
+iterations um we do forward passes first
+
+01:04:06.559 --> 01:04:09.599
+this starts as random and then this gets
+
+01:04:08.039 --> 01:04:11.960
+closer and closer and closer to being
+
+01:04:09.599 --> 01:04:14.279
+able to generate this sequence and
+
+01:04:11.960 --> 01:04:16.599
+eventually we get to a point uh and this
+
+01:04:14.279 --> 01:04:18.400
+n is pretty small it's eight or ten or
+
+01:04:16.599 --> 01:04:20.160
+something like that um for most
+
+01:04:18.400 --> 01:04:22.200
+sequences we get to a point where we
+
+01:04:20.160 --> 01:04:23.920
+have found this stick that is able to
+
+01:04:22.200 --> 01:04:26.079
+poke this model to generate that
+
+01:04:23.920 --> 01:04:29.319
+sequence exactly
+
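+A minimal sketch of that extraction loop, assuming Hugging Face GPT-2;
+the injection site (first time step of a middle layer), the learning
+rate, and the step count are all illustrative assumptions:
+
+```python
+import torch
+from transformers import GPT2LMHeadModel, GPT2Tokenizer
+
+tok = GPT2Tokenizer.from_pretrained("gpt2")
+model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
+for p in model.parameters():
+    p.requires_grad_(False)                  # the model stays entirely frozen
+
+target = tok("The taste is excellent.", return_tensors="pt").input_ids
+z_steer = (0.01 * torch.rand(model.config.n_embd)).requires_grad_()  # small, uniform
+
+def inject(module, args):
+    # add z_steer to the hidden state at the first time step only
+    hidden = args[0]
+    first = hidden[:, :1, :] + z_steer
+    return (torch.cat([first, hidden[:, 1:, :]], dim=1),) + args[1:]
+
+layer = 6                                    # middle of the model (assumption)
+handle = model.transformer.h[layer].register_forward_pre_hook(inject)
+
+opt = torch.optim.Adam([z_steer], lr=0.1)
+for _ in range(200):                         # a few iterations usually suffice
+    loss = model(target, labels=target).loss # teacher-forced NLL of the target
+    opt.zero_grad(); loss.backward(); opt.step()
+handle.remove()
+```
+
+Decoding greedily from just a beginning-of-sequence token with the same
+hook attached should then reproduce the target sequence.
+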
+01:04:26.079 --> 01:04:32.480
+now when we greedy decode from the model
+
+01:04:29.319 --> 01:04:34.920
+we pass in just a beginning-of-sequence
+
+01:04:32.480 --> 01:04:37.119
+token and this z steer the steering
+
+01:04:34.920 --> 01:04:39.720
+vector and it's able to uncover that
+
+01:04:37.119 --> 01:04:41.319
+whole sequence that we had at the
+
+01:04:39.720 --> 01:04:44.240
+beginning entirely
+
+01:04:41.319 --> 01:04:46.640
+um this is weird and interesting because
+
+01:04:44.240 --> 01:04:48.880
+in a lot of cases um in like the
+
+01:04:46.640 --> 01:04:52.039
+prompting world in the soft prompt world
+
+01:04:48.880 --> 01:04:54.640
+usually you need a pretty large uh width
+
+01:04:52.039 --> 01:04:57.880
+of a prompt to be able to do things
+
+01:04:54.640 --> 01:05:00.400
+um and generally in that
+
+01:04:57.880 --> 01:05:02.000
+structure you're doing a specific task
+
+01:05:00.400 --> 01:05:04.200
+and you're providing kind of a
+
+01:05:02.000 --> 01:05:06.720
+large soft
+
+01:05:04.200 --> 01:05:10.520
+prompt to do this with this often
+
+01:05:06.720 --> 01:05:13.200
+has a width of 50 and a
+
+01:05:10.520 --> 01:05:15.520
+length of the hidden size or the
+
+01:05:13.200 --> 01:05:17.160
+embedding size of the model in our case
+
+01:05:15.520 --> 01:05:20.079
+all of our steering vectors are width
+
+01:05:17.160 --> 01:05:21.440
+one and they're of just the hidden size
+
+01:05:20.079 --> 01:05:24.039
+of the
+
+01:05:21.440 --> 01:05:26.520
+model um so what ends
+
+01:05:24.039 --> 01:05:29.559
+up happening
+
+01:05:26.520 --> 01:05:31.520
+um actually before I go to results any
+
+01:05:29.559 --> 01:05:34.720
+questions this is a weird
+
+01:05:31.520 --> 01:05:38.160
+setup and weird relative to what other
+
+01:05:34.720 --> 01:05:39.310
+people do so happy to take any
+
+01:05:38.160 --> 01:05:42.480
+questions
+
+01:05:42.880 --> 01:05:50.640
+yeah similarly if your prompt was um of
+
+01:05:47.440 --> 01:05:53.440
+a specific type so the prompt here is a
+
+01:05:50.640 --> 01:05:55.720
+continuous vector passed in it's a
+
+01:05:53.440 --> 01:05:59.760
+single width-one hidden-size
+
+01:05:55.720 --> 01:06:02.799
+continuous vector so um it's kind of
+
+01:05:59.760 --> 01:06:05.559
+like maybe collapsing your prompt into
+
+01:06:02.799 --> 01:06:08.480
+this compressing it into this tiny
+
+01:06:05.559 --> 01:06:12.119
+vector you can think of it that way
+
+01:06:08.480 --> 01:06:16.920
+yeah any other questions
+
+01:06:12.119 --> 01:06:16.920
+yeah this would be like
+
+01:06:18.160 --> 01:06:23.359
+[student question, largely inaudible]
+
+01:06:20.880 --> 01:06:28.279
+potentially um this is something
+
+01:06:23.359 --> 01:06:30.640
+that I wanted to work on uh like a year
+
+01:06:28.279 --> 01:06:32.119
+ago and didn't get
+
+01:06:30.640 --> 01:06:34.559
+sufficient buy-in and then had to apply
+
+01:06:32.119 --> 01:06:36.880
+to grad school and all these things so
+
+01:06:34.559 --> 01:06:40.160
+it went by the wayside but
+
+01:06:36.880 --> 01:06:43.440
+definitely something to
+
+01:06:40.160 --> 01:06:45.920
+pursue um there's a lot of scope there
+
+01:06:43.440 --> 01:06:45.920
+any other
+
+01:06:47.640 --> 01:06:54.480
+questions all right so let's move over 
to + +01:06:51.319 --> 01:06:56.119 +results so we can find steering vectors + +01:06:54.480 --> 01:06:58.520 +and that's and that's interesting thing + +01:06:56.119 --> 01:07:00.559 +um and we can find them pretty easily + +01:06:58.520 --> 01:07:02.559 +and for most sequences even sequences + +01:07:00.559 --> 01:07:04.559 +that the model hasn't seen before the + +01:07:02.559 --> 01:07:06.400 +underlying language model hasn't seen + +01:07:04.559 --> 01:07:09.640 +before + +01:07:06.400 --> 01:07:13.160 +um it also works for and this is kind of + +01:07:09.640 --> 01:07:16.799 +a negative but it also works for random + +01:07:13.160 --> 01:07:20.039 +sequences of very small length but it's + +01:07:16.799 --> 01:07:22.359 +harder to find so you can imagine if + +01:07:20.039 --> 01:07:24.760 +your uh steering Vector is basically a + +01:07:22.359 --> 01:07:26.279 +giant bulldozer it doesn't matter what + +01:07:24.760 --> 01:07:28.640 +your model is learning learned similar + +01:07:26.279 --> 01:07:30.160 +to the probe situation if you can + +01:07:28.640 --> 01:07:32.559 +compress all that information of that + +01:07:30.160 --> 01:07:35.400 +sequence into the vector you don't + +01:07:32.559 --> 01:07:37.400 +really need the language model um so + +01:07:35.400 --> 01:07:39.559 +there are cases when you're looking at + +01:07:37.400 --> 01:07:40.760 +sequences of length like five seven + +01:07:39.559 --> 01:07:43.079 +eight something like this you can + +01:07:40.760 --> 01:07:45.520 +uniformly sample from the vocabulary at + +01:07:43.079 --> 01:07:47.359 +random with replacement generate utter + +01:07:45.520 --> 01:07:49.799 +garbage and find steering vectors for + +01:07:47.359 --> 01:07:53.200 +them takes a little while but your model + +01:07:49.799 --> 01:07:55.520 +is complex enough that you can basically + +01:07:53.200 --> 01:07:57.960 +bulldo your model to be able to do this + +01:07:55.520 --> 01:08:00.200 +even if that sequence is incredibly low + +01:07:57.960 --> 01:08:01.480 +likelihood under the model but it works + +01:08:00.200 --> 01:08:05.319 +better for things that are higher + +01:08:01.480 --> 01:08:07.760 +likelihood under the model um + +01:08:05.319 --> 01:08:09.920 +predictably the I think the thing that + +01:08:07.760 --> 01:08:12.760 +surprised me the most was these steering + +01:08:09.920 --> 01:08:15.319 +vectors themselves have interpretable + +01:08:12.760 --> 01:08:17.960 +properties U so distances in steering + +01:08:15.319 --> 01:08:20.759 +Vector space reflect semantic similarity + +01:08:17.960 --> 01:08:23.640 +so if you have two sentences that are + +01:08:20.759 --> 01:08:26.719 +close um they're also close in steering + +01:08:23.640 --> 01:08:29.759 +Vector space that's kind of nice + +01:08:26.719 --> 01:08:32.359 +um it does better than for example the + +01:08:29.759 --> 01:08:34.520 +representations one would use for for + +01:08:32.359 --> 01:08:37.159 +probing so mean pooling Bert hidden + +01:08:34.520 --> 01:08:39.600 +States like we looked at before those do + +01:08:37.159 --> 01:08:42.080 +actually worse than steering vectors um + +01:08:39.600 --> 01:08:45.799 +just a bit + +01:08:42.080 --> 01:08:47.880 +surprising um style transfer is possible + +01:08:45.799 --> 01:08:49.719 +with simple Vector arithmetic so it' be + +01:08:47.880 --> 01:08:52.799 +nice to say that I have a sequence I + +01:08:49.719 --> 01:08:56.000 +want to subtract you know negativity and + +01:08:52.799 --> 01:08:58.799 +add positivity for for 
sentiment or + +01:08:56.000 --> 01:09:00.520 +other sorts of Styles um we can do this + +01:08:58.799 --> 01:09:02.159 +and we can do this reasonably well in + +01:09:00.520 --> 01:09:05.319 +steering VOR + +01:09:02.159 --> 01:09:07.920 +space um we can also decode from + +01:09:05.319 --> 01:09:10.600 +interpolations in the Laten space so you + +01:09:07.920 --> 01:09:12.759 +take two steering vectors for two + +01:09:10.600 --> 01:09:14.759 +sequences you look in the middle of them + +01:09:12.759 --> 01:09:17.400 +you linearly interpolate between them + +01:09:14.759 --> 01:09:20.600 +and you decode um if the space is kind + +01:09:17.400 --> 01:09:22.080 +of weirdly peaky then you would have + +01:09:20.600 --> 01:09:23.839 +issues and what you would generate is + +01:09:22.080 --> 01:09:25.080 +garbage and there's no guarantee that + +01:09:23.839 --> 01:09:27.199 +the space should be reasonable in + +01:09:25.080 --> 01:09:30.480 +between but it turns out it + +01:09:27.199 --> 01:09:33.719 +is um here's an example of one of these + +01:09:30.480 --> 01:09:36.359 +style transfer cases so very very simple + +01:09:33.719 --> 01:09:39.239 +easy easy sentence we found steering + +01:09:36.359 --> 01:09:41.679 +vectors for The Taste is excellent and + +01:09:39.239 --> 01:09:43.640 +and we took a sample of 100 positive + +01:09:41.679 --> 01:09:45.359 +sentences and 100 negative sentences + +01:09:43.640 --> 01:09:47.159 +found their steering vectors took the + +01:09:45.359 --> 01:09:48.960 +mean and thought that you know that + +01:09:47.159 --> 01:09:51.400 +looks like the positive concept steering + +01:09:48.960 --> 01:09:54.040 +Vector negative concept steering Vector + +01:09:51.400 --> 01:09:56.600 +we just did Vector arithmetic just did + +01:09:54.040 --> 01:09:59.880 +uh current steering + +01:09:56.600 --> 01:10:02.440 +Vector uh plus negative minus positive + +01:09:59.880 --> 01:10:03.520 +and we got the taste is unpleasant um + +01:10:02.440 --> 01:10:06.960 +and + +01:10:03.520 --> 01:10:08.880 +similarly um in the reverse + +01:10:06.960 --> 01:10:12.520 +directions it turns out that the + +01:10:08.880 --> 01:10:15.199 +magnitude matters because um for every + +01:10:12.520 --> 01:10:17.800 +single sequence there's kind of an end + +01:10:15.199 --> 01:10:20.640 +dimensional ball around that steering + +01:10:17.800 --> 01:10:23.640 +Vector that we found that also decodes + +01:10:20.640 --> 01:10:25.920 +that specific sequence and so that shows + +01:10:23.640 --> 01:10:28.880 +that the space is kind of reasonably + +01:10:25.920 --> 01:10:32.320 +well formed there's there's of course uh + +01:10:28.880 --> 01:10:34.280 +a lot of weird sort of areas um and so + +01:10:32.320 --> 01:10:37.120 +if you go poke around in steering Vector + +01:10:34.280 --> 01:10:38.760 +space and sort of try to sample from it + +01:10:37.120 --> 01:10:41.280 +eventually you'll find some weird edge + +01:10:38.760 --> 01:10:43.320 +cases and some garbage and repeated text + +01:10:41.280 --> 01:10:46.159 +and little things like + +01:10:43.320 --> 01:10:50.520 +this any questions here before I kind of + +01:10:46.159 --> 01:10:50.520 +Rapid Fire through the the last few + +01:10:50.920 --> 01:10:57.239 +things yeah like here + +01:10:57.400 --> 01:11:01.400 +yeah so we went uh Beyond this um there + +01:11:00.199 --> 01:11:04.280 +was + +01:11:01.400 --> 01:11:07.440 +so in in these specific experiments we + +01:11:04.280 --> 01:11:09.600 +looked at the middle of gpt2 um so this + 
+01:11:07.440 --> 01:11:12.679 +was like layer six layer seven and at + +01:11:09.600 --> 01:11:15.280 +the first time step we didn't do any um + +01:11:12.679 --> 01:11:17.239 +like magnitude scaling and so you can + +01:11:15.280 --> 01:11:19.480 +imagine if you put a giant Vector in + +01:11:17.239 --> 01:11:21.040 +there the models never the rest of the + +01:11:19.480 --> 01:11:24.679 +model has never seen something of that + +01:11:21.040 --> 01:11:26.159 +magnitude so it's now in a weird State + +01:11:24.679 --> 01:11:28.280 +and it's just going to break so if you + +01:11:26.159 --> 01:11:30.560 +put this to like I don't know 500 or + +01:11:28.280 --> 01:11:32.960 +something like this it break it just has + +01:11:30.560 --> 01:11:35.239 +no idea it's like basically like telling + +01:11:32.960 --> 01:11:37.199 +the rest your model hey it's like a + +01:11:35.239 --> 01:11:38.760 +completely untrained model be it looks + +01:11:37.199 --> 01:11:42.000 +similar to like random performance you + +01:11:38.760 --> 01:11:43.840 +get repeats and things like this smaller + +01:11:42.000 --> 01:11:45.800 +you end up staying in this ball for the + +01:11:43.840 --> 01:11:47.920 +sequence two two seemed pretty + +01:11:45.800 --> 01:11:50.199 +reasonable but we didn't spend a lot of + +01:11:47.920 --> 01:11:53.560 +time just like the day before the paper + +01:11:50.199 --> 01:11:56.600 +was do we were two seems reasonable we + +01:11:53.560 --> 01:11:59.159 +went to three we went to five 10 broke + +01:11:56.600 --> 01:12:01.199 +five somewhat broke two seems + +01:11:59.159 --> 01:12:03.440 +reasonable + +01:12:01.199 --> 01:12:06.400 +um decent signings + +01:12:03.440 --> 01:12:08.639 +hopefully um cool so I'll talk about uh + +01:12:06.400 --> 01:12:10.920 +a similar type of work uh that came out + +01:12:08.639 --> 01:12:13.000 +more recently on inference time + +01:12:10.920 --> 01:12:14.159 +intervention so basically they use some + +01:12:13.000 --> 01:12:16.719 +of the ideas that we talked about + +01:12:14.159 --> 01:12:18.840 +earlier they use linear probes um to + +01:12:16.719 --> 01:12:20.560 +find a tension head that correspond to a + +01:12:18.840 --> 01:12:23.600 +desired attribute they did this for + +01:12:20.560 --> 01:12:26.440 +truthful QA so uh their Hope was to find + +01:12:23.600 --> 01:12:28.639 +truthful directions in Len space + +01:12:26.440 --> 01:12:31.639 +um and then they shifted the attention + +01:12:28.639 --> 01:12:33.199 +head activations um during inference + +01:12:31.639 --> 01:12:35.280 +along the directions determined by the + +01:12:33.199 --> 01:12:38.280 +probes um so what this kind of looks + +01:12:35.280 --> 01:12:40.280 +like is you take your attention heads + +01:12:38.280 --> 01:12:42.440 +you probe them so you stick classify on + +01:12:40.280 --> 01:12:44.360 +top um this classifier learns to + +01:12:42.440 --> 01:12:47.679 +disentangle sort of truthful and + +01:12:44.360 --> 01:12:50.239 +untruthful and now you have um now you + +01:12:47.679 --> 01:12:52.080 +have a hyperplane and then you can move + +01:12:50.239 --> 01:12:54.320 +orthogonally to this hyper plane in the + +01:12:52.080 --> 01:12:55.920 +direction depending on which way you + +01:12:54.320 --> 01:12:58.080 +want to shift so if you want to move + +01:12:55.920 --> 01:13:02.040 +towards truthful you can move in that + +01:12:58.080 --> 01:13:04.400 +direction or or away um and they do this + +01:13:02.040 --> 01:13:07.560 +it works pretty well um I think they do + 
+01:13:04.400 --> 01:13:09.679 +this for GPT model and maybe a llama + +01:13:07.560 --> 01:13:12.960 +model um but can't can't remember the + +01:13:09.679 --> 01:13:15.960 +exact details um and it's a similar + +01:13:12.960 --> 01:13:21.040 +intervention um they basically add this + +01:13:15.960 --> 01:13:23.400 +Vector um that they found and they they + +01:13:21.040 --> 01:13:25.679 +have a little note on scaling they if + +01:13:23.400 --> 01:13:27.719 +they scale if that if the magnitude of + +01:13:25.679 --> 01:13:30.000 +the thing is too much things break so + +01:13:27.719 --> 01:13:33.880 +they have a they like hyper parameter + +01:13:30.000 --> 01:13:36.800 +search for the sort of magnitude of + +01:13:33.880 --> 01:13:38.840 +activation um but it's sort of a very + +01:13:36.800 --> 01:13:41.520 +similar approach to what we did but this + +01:13:38.840 --> 01:13:43.040 +focuses on specific attention heads and + +01:13:41.520 --> 01:13:44.440 +they don't do this for all the attention + +01:13:43.040 --> 01:13:46.600 +heads so back to like your question + +01:13:44.440 --> 01:13:49.080 +earlier do attention heads specialize it + +01:13:46.600 --> 01:13:52.360 +seems like they do and so there are many + +01:13:49.080 --> 01:13:54.320 +of them that uh have like no probing + +01:13:52.360 --> 01:13:57.719 +accuracy or limited probing accuracy and + +01:13:54.320 --> 01:13:59.400 +actually um are like distractors for the + +01:13:57.719 --> 01:14:03.400 +CH FL + +01:13:59.400 --> 01:14:03.400 +Direction any questions + +01:14:06.040 --> 01:14:11.760 +here cool so more activation + +01:14:09.120 --> 01:14:14.760 +manipulation so there's uh some work + +01:14:11.760 --> 01:14:17.600 +recently on contrastive steering vectors + +01:14:14.760 --> 01:14:19.480 +so the way we did this like sentiment + +01:14:17.600 --> 01:14:21.080 +steering was we had some positive + +01:14:19.480 --> 01:14:23.040 +sentences some negative sentences they + +01:14:21.080 --> 01:14:24.520 +weren't tied together in any reasonable + +01:14:23.040 --> 01:14:26.360 +way we found their steering vectors + +01:14:24.520 --> 01:14:30.040 +separately you could imagine the case + +01:14:26.360 --> 01:14:33.159 +and maybe a more useful case um with two + +01:14:30.040 --> 01:14:36.280 +prompts that um you can design that go + +01:14:33.159 --> 01:14:38.000 +two different ways you can sort of find + +01:14:36.280 --> 01:14:42.280 +their representations and do the + +01:14:38.000 --> 01:14:45.679 +manipulation the differences here um + +01:14:42.280 --> 01:14:48.800 +like individually rather than um for a + +01:14:45.679 --> 01:14:52.400 +whole concept or a whole attribute and + +01:14:48.800 --> 01:14:54.400 +the value here is your context is um + +01:14:52.400 --> 01:14:56.600 +preserved so if you're doing something + +01:14:54.400 --> 01:14:58.239 +like you know you're doing retrieval + +01:14:56.600 --> 01:15:00.440 +based things now you have some sort of + +01:14:58.239 --> 01:15:03.360 +document and then you have a question if + +01:15:00.440 --> 01:15:05.040 +your question sort of uh if you want to + +01:15:03.360 --> 01:15:07.560 +ask it in two different ways for two + +01:15:05.040 --> 01:15:08.880 +different things this would be a much + +01:15:07.560 --> 01:15:11.239 +better approach if you want to use + +01:15:08.880 --> 01:15:14.600 +steering vectors than the stuff I was + +01:15:11.239 --> 01:15:16.159 +doing um and it seems to work a little + +01:15:14.600 --> 01:15:17.880 +bit better they didn't 
compare against + +01:15:16.159 --> 01:15:19.400 +our our things because it's not like an + +01:15:17.880 --> 01:15:21.880 +Apples to Apples comparison but it seems + +01:15:19.400 --> 01:15:23.960 +to work better and be more General um + +01:15:21.880 --> 01:15:25.560 +and be more + +01:15:23.960 --> 01:15:27.840 +useful + +01:15:25.560 --> 01:15:27.840 +any + +01:15:31.400 --> 01:15:37.679 +questions cool so what can model + +01:15:35.080 --> 01:15:40.080 +interpretability give us these are these + +01:15:37.679 --> 01:15:41.960 +are my concluding remarks so hopefully + +01:15:40.080 --> 01:15:43.920 +we get a better understanding of how + +01:15:41.960 --> 01:15:46.840 +language models work their their + +01:15:43.920 --> 01:15:49.520 +internals their structure um we get to + +01:15:46.840 --> 01:15:52.800 +understand uh kind of why they do really + +01:15:49.520 --> 01:15:55.239 +well this is still like very very + +01:15:52.800 --> 01:15:57.320 +unclear um and hopefully we find + +01:15:55.239 --> 01:15:59.400 +lightweight methods to control and steer + +01:15:57.320 --> 01:16:03.360 +models as models become more and more + +01:15:59.400 --> 01:16:05.280 +useful um and and impact more more users + +01:16:03.360 --> 01:16:09.360 +we need better ways to control and steer + +01:16:05.280 --> 01:16:13.120 +them um and it's unclear how much + +01:16:09.360 --> 01:16:15.360 +industry will devote to these things um + +01:16:13.120 --> 01:16:18.080 +so it might be the role of Academia to + +01:16:15.360 --> 01:16:21.239 +do more science in in order to figure + +01:16:18.080 --> 01:16:23.920 +out how to control and steer these + +01:16:21.239 --> 01:16:25.520 +better um and hopefully we can also find + +01:16:23.920 --> 01:16:29.199 +potential Al Alternatives or + +01:16:25.520 --> 01:16:34.840 +complimentary methods to to do alignment + +01:16:29.199 --> 01:16:37.480 +um rhf is kind of expensive um and if if + +01:16:34.840 --> 01:16:40.080 +we could do this with limited data and + +01:16:37.480 --> 01:16:42.760 +um exploit structure um and information + +01:16:40.080 --> 01:16:46.400 +that's already in the model more so than + +01:16:42.760 --> 01:16:48.600 +than these methods um maybe maybe we can + +01:16:46.400 --> 01:16:50.920 +align them better and these things don't + +01:16:48.600 --> 01:16:52.480 +have to be uh Alternatives they can be + +01:16:50.920 --> 01:16:53.840 +complimentary to to + +01:16:52.480 --> 01:16:57.159 +[Music] + +01:16:53.840 --> 01:17:00.040 +rhm um here's some resources this is an + +01:16:57.159 --> 01:17:01.280 +extremely incomplete group but here are + +01:17:00.040 --> 01:17:04.080 +some folks that work on model + +01:17:01.280 --> 01:17:07.040 +interoperability there's many of these + +01:17:04.080 --> 01:17:09.120 +um I cited some some work from some of + +01:17:07.040 --> 01:17:11.280 +these teams but um there's a lot of + +01:17:09.120 --> 01:17:13.280 +people working on it and in the last + +01:17:11.280 --> 01:17:15.040 +like year there's been kind of an + +01:17:13.280 --> 01:17:17.480 +explosion especially in the mechanistic + +01:17:15.040 --> 01:17:21.639 +interpretability kind of World um Sasha + +01:17:17.480 --> 01:17:23.800 +Rush had a recent tweet that uh asked + +01:17:21.639 --> 01:17:25.320 +like prospective grad students what is + +01:17:23.800 --> 01:17:27.239 +the topic that they're most excited + +01:17:25.320 --> 01:17:29.880 +about and mechanistic interpretability + +01:17:27.239 --> 01:17:33.960 +was a thing that seemed to have won 
out + +01:17:29.880 --> 01:17:37.040 +um so I encourage you to to kind of dive + +01:17:33.960 --> 01:17:38.719 +into this literature and read some of + +01:17:37.040 --> 01:17:41.679 +the papers if you're if you're excited + +01:17:38.719 --> 01:17:45.199 +about it and yeah thanks for your + +01:17:41.679 --> 01:17:45.199 +attention and that's all I + +01:17:45.400 --> 01:17:48.400 +have diff --git a/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts.mp4 b/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..599df3d8c4b25c5654e5f3efdae38f333e0a503a --- /dev/null +++ b/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22c0ccf96269123bc7e4f6390f0c0220a4e05848e941a58ed5e57085ae2d8432 +size 80561402 diff --git a/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/metadata.json b/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..ee6b482a634ecb6046b44c5a06719d13af8bc592 --- /dev/null +++ b/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/metadata.json @@ -0,0 +1,4 @@ +{ + "url": "https://www.youtube.com/watch?v=MueCRSZ3RQ0", + "title": "CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts" +} \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/transcript.srt b/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/transcript.srt new file mode 100644 index 0000000000000000000000000000000000000000..0f0da9f6c85bd340bbc514975de4da4d0b62c21a --- /dev/null +++ b/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/transcript.srt @@ -0,0 +1,6599 @@ +1 +00:00:00,760 --> 00:00:07,240 +he everyone so I'd like to get + +2 +00:00:03,279 --> 00:00:09,320 +started the first thing is that um I + +3 +00:00:07,240 --> 00:00:11,160 +heard from the adws people that they + +4 +00:00:09,320 --> 00:00:14,440 +started the + +5 +00:00:11,160 --> 00:00:17,840 +process of + +6 +00:00:14,440 --> 00:00:19,400 +getting things issued on the 26th which + +7 +00:00:17,840 --> 00:00:21,480 +is three days ago so you should be + +8 +00:00:19,400 --> 00:00:23,560 +getting it soon uh for reference I + +9 +00:00:21,480 --> 00:00:25,599 +submitted the form about seven days + +10 +00:00:23,560 --> 00:00:28,359 +before that so they're moving very + +11 +00:00:25,599 --> 00:00:29,599 +slowly but I think you should have AWS + +12 +00:00:28,359 --> 00:00:31,920 +credits by the end of the week if you + +13 +00:00:29,599 --> 00:00:35,120 +need them to run uh GPU machines or + +14 +00:00:31,920 --> 00:00:37,960 +stuff like that the moment you get AWS + +15 +00:00:35,120 --> 00:00:39,960 +credits or maybe even before you get AWS + +16 +00:00:37,960 --> 00:00:43,320 +credits I might suggest that you try to + +17 +00:00:39,960 --> 00:00:46,760 +start uh a GPU machine like a P2 machine + +18 +00:00:43,320 --> 00:00:49,160 +or something like that because um + +19 +00:00:46,760 --> 00:00:51,760 +sometimes you need to file for a limit + +20 +00:00:49,160 --> 00:00:53,640 +increase uh to get a P2 machine and that + +21 +00:00:51,760 --> 00:00:55,879 +also takes a little bit of time so I I + +22 +00:00:53,640 
--> 00:00:59,160 +would suggest that you uh you take a + +23 +00:00:55,879 --> 00:01:01,160 +look at doing that um so you go to like + +24 +00:00:59,160 --> 00:01:02,800 +if you're using AWS if you're not using + +25 +00:01:01,160 --> 00:01:05,119 +AWS it doesn't matter but if you're + +26 +00:01:02,800 --> 00:01:08,119 +using AWS you can go to launch instance + +27 +00:01:05,119 --> 00:01:11,520 +and try to launch a p2x large machine um + +28 +00:01:08,119 --> 00:01:13,159 +or something like that so uh but yeah + +29 +00:01:11,520 --> 00:01:14,920 +anyway hopefully that will be done soon + +30 +00:01:13,159 --> 00:01:16,600 +I'm sorry about the delay on this they + +31 +00:01:14,920 --> 00:01:21,400 +said it would take seven days and it's + +32 +00:01:16,600 --> 00:01:24,280 +taken almost twice at now so um my + +33 +00:01:21,400 --> 00:01:26,439 +apologies any other uh things before we + +34 +00:01:24,280 --> 00:01:26,439 +get + +35 +00:01:28,759 --> 00:01:34,520 +started um okay I I don't see any so + +36 +00:01:31,920 --> 00:01:37,280 +I'll go ahead with this um I have + +37 +00:01:34,520 --> 00:01:39,240 +slightly fewer slides today so I might + +38 +00:01:37,280 --> 00:01:40,960 +go a little bit off the slides and talk + +39 +00:01:39,240 --> 00:01:44,759 +about papers and stuff or we might + +40 +00:01:40,960 --> 00:01:46,920 +finish early uh either way so um but + +41 +00:01:44,759 --> 00:01:48,439 +what I would like to talk about is um + +42 +00:01:46,920 --> 00:01:53,320 +combining multiple + +43 +00:01:48,439 --> 00:01:55,479 +models and this is uh really important + +44 +00:01:53,320 --> 00:01:57,520 +and useful if you want to get like an + +45 +00:01:55,479 --> 00:02:00,719 +extra few points of + +46 +00:01:57,520 --> 00:02:03,159 +accuracy uh for anything basically + +47 +00:02:00,719 --> 00:02:04,039 +because it's a pretty reliable way to + +48 +00:02:03,159 --> 00:02:06,960 +get + +49 +00:02:04,039 --> 00:02:08,879 +improvements um and there's a a bunch of + +50 +00:02:06,960 --> 00:02:11,239 +different kind of related but different + +51 +00:02:08,879 --> 00:02:13,680 +topics that I'm going to talk about + +52 +00:02:11,239 --> 00:02:15,519 +today but anyway the the basic + +53 +00:02:13,680 --> 00:02:19,239 +background is that we have many models + +54 +00:02:15,519 --> 00:02:22,920 +uh that exist and the reason why we have + +55 +00:02:19,239 --> 00:02:25,840 +many models that exist is multiple fold + +56 +00:02:22,920 --> 00:02:28,160 +number one we could have different model + +57 +00:02:25,840 --> 00:02:30,080 +architectures um and we could also have + +58 +00:02:28,160 --> 00:02:34,440 +different initializations of those model + +59 +00:02:30,080 --> 00:02:37,879 +architectures so um normally you know if + +60 +00:02:34,440 --> 00:02:40,319 +we do initialization we will initial + +61 +00:02:37,879 --> 00:02:42,360 +initialize our model architecture like + +62 +00:02:40,319 --> 00:02:44,680 +let's say we initialize a llama + +63 +00:02:42,360 --> 00:02:45,920 +architecture uh we start out with random + +64 +00:02:44,680 --> 00:02:49,319 +7B + +65 +00:02:45,920 --> 00:02:52,879 +parameters and then we train and we get + +66 +00:02:49,319 --> 00:02:53,840 +llama 7B for uh our pre-training or + +67 +00:02:52,879 --> 00:02:57,280 +llama + +68 +00:02:53,840 --> 00:02:58,599 +27b um we might initialize another model + +69 +00:02:57,280 --> 00:03:00,599 +this could be you know the same + +70 +00:02:58,599 --> 00:03:02,360 +architecture different architecture Ure + +71 
+00:03:00,599 --> 00:03:04,840 +train it on the same data or different + +72 +00:03:02,360 --> 00:03:07,000 +data and get something like mistol + +73 +00:03:04,840 --> 00:03:08,599 +mistol 7B in this case actually maybe + +74 +00:03:07,000 --> 00:03:10,080 +these are I should have indicated that + +75 +00:03:08,599 --> 00:03:11,680 +these are different architectures but + +76 +00:03:10,080 --> 00:03:13,879 +you know we get a different pre-rain + +77 +00:03:11,680 --> 00:03:15,599 +model and of course uh we could also + +78 +00:03:13,879 --> 00:03:18,640 +make it bigger or smaller or whatever + +79 +00:03:15,599 --> 00:03:21,720 +else and then we get llama 270b over + +80 +00:03:18,640 --> 00:03:23,519 +here and then after we do that there's a + +81 +00:03:21,720 --> 00:03:25,319 +lot of fine tuning that goes on + +82 +00:03:23,519 --> 00:03:29,360 +according to different strategies so we + +83 +00:03:25,319 --> 00:03:32,640 +have um you know llama 27b instruct uh + +84 +00:03:29,360 --> 00:03:37,760 +vun 7B uh version + +85 +00:03:32,640 --> 00:03:41,000 +1.5 um mistol 7B instruct uh news uh + +86 +00:03:37,760 --> 00:03:45,239 +Hermes 2 mistal 7B or llama 270b + +87 +00:03:41,000 --> 00:03:47,239 +instruct so we have um a variety of + +88 +00:03:45,239 --> 00:03:49,400 +architectures a variety of random + +89 +00:03:47,239 --> 00:03:51,480 +initializations of those architectures a + +90 +00:03:49,400 --> 00:03:54,799 +variety of pre-train models due to + +91 +00:03:51,480 --> 00:03:57,439 +pre-training data or base models and + +92 +00:03:54,799 --> 00:03:58,920 +then a variety of fine dun models um and + +93 +00:03:57,439 --> 00:04:01,120 +so we have this kind of like branching + +94 +00:03:58,920 --> 00:04:02,959 +tree basically + +95 +00:04:01,120 --> 00:04:04,319 +um the reason why this is important is + +96 +00:04:02,959 --> 00:04:06,680 +because when we're combining multiple + +97 +00:04:04,319 --> 00:04:08,400 +models together some of the methods are + +98 +00:04:06,680 --> 00:04:09,959 +applicable to completely different + +99 +00:04:08,400 --> 00:04:12,439 +models some of the methods are only + +100 +00:04:09,959 --> 00:04:15,000 +applicable to models that share the same + +101 +00:04:12,439 --> 00:04:16,720 +architecture and some of them are only + +102 +00:04:15,000 --> 00:04:19,199 +applicable to models that share the same + +103 +00:04:16,720 --> 00:04:20,959 +initialization and training trajectory + +104 +00:04:19,199 --> 00:04:23,680 +and so I'll try to distinguish between + +105 +00:04:20,959 --> 00:04:23,680 +those as we go + +106 +00:04:24,040 --> 00:04:27,919 +forward + +107 +00:04:25,560 --> 00:04:29,960 +cool so the first thing I I'll talk + +108 +00:04:27,919 --> 00:04:32,600 +about is model ensembling and and + +109 +00:04:29,960 --> 00:04:34,320 +ensembling is kind of the a very general + +110 +00:04:32,600 --> 00:04:37,600 +technique that you can use in a lot of + +111 +00:04:34,320 --> 00:04:39,360 +different uh ways but it has its + +112 +00:04:37,600 --> 00:04:43,039 +disadvantages as + +113 +00:04:39,360 --> 00:04:47,199 +well so basically embling is combining + +114 +00:04:43,039 --> 00:04:50,320 +the predictions from multiple models + +115 +00:04:47,199 --> 00:04:52,400 +and the easiest way to do this ignore + +116 +00:04:50,320 --> 00:04:53,800 +the lstm here this is just any sequence + +117 +00:04:52,400 --> 00:04:56,320 +modeling thing it's because the slides + +118 +00:04:53,800 --> 00:05:00,120 +are old but like let's say this is a a + +119 
+00:04:56,320 --> 00:05:03,360 +Transformer it is calculating the + +120 +00:05:00,120 --> 00:05:05,600 +current decoder State and you make a + +121 +00:05:03,360 --> 00:05:07,600 +prediction um this is calculating a + +122 +00:05:05,600 --> 00:05:09,199 +current decoder State and make uh + +123 +00:05:07,600 --> 00:05:11,560 +current decoders sayate in making a + +124 +00:05:09,199 --> 00:05:13,039 +prediction and based on some combination + +125 +00:05:11,560 --> 00:05:17,120 +of the two predictions you decide what + +126 +00:05:13,039 --> 00:05:17,120 +you actually want to Output at the next + +127 +00:05:17,680 --> 00:05:23,840 +step so why would we want to do this um + +128 +00:05:22,080 --> 00:05:25,880 +does anyone have any ideas why we want + +129 +00:05:23,840 --> 00:05:28,639 +to use two models instead of using one + +130 +00:05:25,880 --> 00:05:31,639 +model or just using the best + +131 +00:05:28,639 --> 00:05:31,639 +model + +132 +00:05:32,319 --> 00:05:36,440 +or maybe in what situations we would + +133 +00:05:34,520 --> 00:05:39,440 +want to do + +134 +00:05:36,440 --> 00:05:39,440 +this + +135 +00:05:45,400 --> 00:05:50,319 +yeah and what what's the advantage of + +136 +00:05:47,960 --> 00:05:50,319 +doing + +137 +00:05:51,600 --> 00:05:57,000 +that yeah it reduces a bias kind kind of + +138 +00:05:54,800 --> 00:05:57,000 +yeah + +139 +00:05:58,639 --> 00:06:01,639 +sure + +140 +00:06:28,560 --> 00:06:31,560 +m + +141 +00:06:35,400 --> 00:06:40,360 +yeah so um I I'll repeat all of these I + +142 +00:06:38,599 --> 00:06:43,960 +think all of these are correct so number + +143 +00:06:40,360 --> 00:06:47,479 +one um it reduces the bias uh caused by + +144 +00:06:43,960 --> 00:06:49,199 +a single model uh number two it was it's + +145 +00:06:47,479 --> 00:06:52,199 +kind of like a beian perspective which + +146 +00:06:49,199 --> 00:06:54,000 +I'll talk about in a second and then + +147 +00:06:52,199 --> 00:06:56,039 +number three we have different models + +148 +00:06:54,000 --> 00:06:58,520 +and models are better at some things and + +149 +00:06:56,039 --> 00:07:00,400 +worse at other things + +150 +00:06:58,520 --> 00:07:02,720 +um + +151 +00:07:00,400 --> 00:07:05,960 +so talking about the better at some + +152 +00:07:02,720 --> 00:07:08,319 +things and worse at other things um the + +153 +00:07:05,960 --> 00:07:10,960 +basic idea behind embling is that the + +154 +00:07:08,319 --> 00:07:14,240 +errors that model m models make tend to + +155 +00:07:10,960 --> 00:07:15,840 +not be consistent it not tend to not be + +156 +00:07:14,240 --> 00:07:21,520 +as consistent as when the model is + +157 +00:07:15,840 --> 00:07:24,800 +getting it correct so we might have um + +158 +00:07:21,520 --> 00:07:26,160 +we might have one model that says uh + +159 +00:07:24,800 --> 00:07:28,199 +like let's say we just have really + +160 +00:07:26,160 --> 00:07:30,680 +really bad models this is kind of a + +161 +00:07:28,199 --> 00:07:31,720 +really um + +162 +00:07:30,680 --> 00:07:35,960 +obvious + +163 +00:07:31,720 --> 00:07:38,440 +example but we have like the dog the dog + +164 +00:07:35,960 --> 00:07:42,639 +barks and then + +165 +00:07:38,440 --> 00:07:46,039 +runs and then uh Dives or something like + +166 +00:07:42,639 --> 00:07:49,000 +that and we have uh one one model that + +167 +00:07:46,039 --> 00:07:50,560 +just had tons of stuff about diving in + +168 +00:07:49,000 --> 00:07:52,120 +its training data another model that had + +169 +00:07:50,560 --> 00:07:54,240 +tons of stuff about 
running in its + +170 +00:07:52,120 --> 00:07:56,560 +training data or or marathons or + +171 +00:07:54,240 --> 00:08:00,039 +something staining data so we'll get + +172 +00:07:56,560 --> 00:08:01,800 +model one and model one we'll to give + +173 +00:08:00,039 --> 00:08:06,240 +like a probability of like + +174 +00:08:01,800 --> 00:08:08,280 +0.3 maybe 0.4 and + +175 +00:08:06,240 --> 00:08:10,360 +0.05 and then we'll have another one + +176 +00:08:08,280 --> 00:08:13,039 +over here that's like + +177 +00:08:10,360 --> 00:08:17,319 +0.32 + +178 +00:08:13,039 --> 00:08:19,759 +0.41 and 0 sorry + +179 +00:08:17,319 --> 00:08:23,039 +0.05 and + +180 +00:08:19,759 --> 00:08:25,759 +0.41 or something like this and so when + +181 +00:08:23,039 --> 00:08:27,639 +you average the two together you tend to + +182 +00:08:25,759 --> 00:08:29,240 +get the right answer more often because + +183 +00:08:27,639 --> 00:08:31,720 +kind of the mistakes that they make tend + +184 +00:08:29,240 --> 00:08:33,479 +to less correlated than the probability + +185 +00:08:31,720 --> 00:08:35,880 +of getting and of course it's not + +186 +00:08:33,479 --> 00:08:38,200 +perfect because unbled models are not + +187 +00:08:35,880 --> 00:08:39,880 +perfect but this is a a general tendency + +188 +00:08:38,200 --> 00:08:42,240 +that we see a lot in + +189 +00:08:39,880 --> 00:08:45,959 +models + +190 +00:08:42,240 --> 00:08:47,720 +um and um it's because of this it kind + +191 +00:08:45,959 --> 00:08:52,320 +of Smooths over the idiosyncrasies of + +192 +00:08:47,720 --> 00:08:54,800 +the models you can even um gist Ensemble + +193 +00:08:52,320 --> 00:08:57,519 +models from different checkpoints and + +194 +00:08:54,800 --> 00:08:58,959 +that still gives you improvements and so + +195 +00:08:57,519 --> 00:09:00,560 +when you Ensemble models from different + +196 +00:08:58,959 --> 00:09:02,600 +checkpoints it's basically just what + +197 +00:09:00,560 --> 00:09:05,920 +data did they see most recently and that + +198 +00:09:02,600 --> 00:09:07,839 +also Smooths over you know uh the fact + +199 +00:09:05,920 --> 00:09:10,600 +that like this model happened to see + +200 +00:09:07,839 --> 00:09:13,000 +some data more recently and so it's less + +201 +00:09:10,600 --> 00:09:16,120 +uh you know it's biased towards doing + +202 +00:09:13,000 --> 00:09:18,440 +that so uh this is a a pretty effective + +203 +00:09:16,120 --> 00:09:20,079 +method this is one of the few methods + +204 +00:09:18,440 --> 00:09:21,959 +that I know is going to improve my + +205 +00:09:20,079 --> 00:09:25,120 +accuracy almost every time like there's + +206 +00:09:21,959 --> 00:09:27,880 +a bunch of methods that you can apply um + +207 +00:09:25,120 --> 00:09:29,680 +and I ensembling it's very rare for me + +208 +00:09:27,880 --> 00:09:31,959 +to Ensemble two models together not get + +209 +00:09:29,680 --> 00:09:34,839 +a boost in accuracy in some way so it's + +210 +00:09:31,959 --> 00:09:34,839 +a good thing to + +211 +00:09:35,600 --> 00:09:41,040 +that there's two main ways to combine + +212 +00:09:38,680 --> 00:09:42,560 +models together and both of them are + +213 +00:09:41,040 --> 00:09:45,800 +useful in different + +214 +00:09:42,560 --> 00:09:48,079 +situations the first one is linear + +215 +00:09:45,800 --> 00:09:49,600 +interpolation and when you do linear + +216 +00:09:48,079 --> 00:09:51,240 +interpolation basically what you're + +217 +00:09:49,600 --> 00:09:53,720 +doing is you're taking the weighted + +218 +00:09:51,240 --> 00:09:56,839 
+average of model
+
+219
+00:09:53,720 --> 00:10:00,360
+probabilities and the way that looks
+
+220
+00:09:56,839 --> 00:10:04,040
+mathematically is like this um this is a
+
+221
+00:10:00,360 --> 00:10:05,680
+probability according to the model M so
+
+222
+00:10:04,040 --> 00:10:08,000
+this is just you know the probability of
+
+223
+00:10:05,680 --> 00:10:11,720
+the next token according to model M this
+
+224
+00:10:08,000 --> 00:10:13,200
+is the probability of selecting model M
+
+225
+00:10:11,720 --> 00:10:18,040
+so you talked a little bit about the
+
+226
+00:10:13,200 --> 00:10:19,920
+Bayesian approach uh to this and this is
+
+227
+00:10:18,040 --> 00:10:23,519
+basically saying what is the probability
+
+228
+00:10:19,920 --> 00:10:26,519
+that the parameters of model M
+
+229
+00:10:23,519 --> 00:10:30,320
+are the ones that we want to be choosing
+
+230
+00:10:26,519 --> 00:10:32,680
+at this particular time step and
+
+231
+00:10:30,320 --> 00:10:34,640
+then we will calculate this and
+
+232
+00:10:32,680 --> 00:10:38,120
+so then you take the sum over this and
+
+233
+00:10:34,640 --> 00:10:38,120
+this gives you the next-token
+
+234
+00:10:39,560 --> 00:10:44,800
+probability for the second term you can
+
+235
+00:10:42,639 --> 00:10:47,120
+do this in two ways the most common way
+
+236
+00:10:44,800 --> 00:10:51,800
+to do this is just to have this be a
+
+237
+00:10:47,120 --> 00:10:55,279
+constant so you basically
+
+238
+00:10:51,800 --> 00:10:55,279
+define mixture
+
+239
+00:10:55,920 --> 00:11:01,240
+weights uh which are like um
+
+240
+00:11:08,480 --> 00:11:13,480
+where the sum of the mixture weights is
+
+241
+00:11:10,760 --> 00:11:16,160
+equal to one and this is always between
+
+242
+00:11:13,480 --> 00:11:18,639
+zero and one and so if you do this then
+
+243
+00:11:16,160 --> 00:11:21,000
+this is just constant and you can uh
+
+244
+00:11:18,639 --> 00:11:23,519
+interpolate them together constantly but
+
+245
+00:11:21,000 --> 00:11:25,680
+you can also actually explicitly model
+
+246
+00:11:23,519 --> 00:11:27,240
+this probability and say oh I'm
+
+247
+00:11:25,680 --> 00:11:30,279
+currently in a situation where I really
+
+248
+00:11:27,240 --> 00:11:31,880
+think model M will do a good job of uh
+
+249
+00:11:30,279 --> 00:11:33,440
+you know predicting the probability so I
+
+250
+00:11:31,880 --> 00:11:36,160
+want to put most of my probability on
+
+251
+00:11:33,440 --> 00:11:39,000
+model M so you can actually learn this
+
+252
+00:11:36,160 --> 00:11:40,079
+dynamically as well um and so if you
+
+253
+00:11:39,000 --> 00:11:44,360
+have
+
+254
+00:11:40,079 --> 00:11:45,920
+uh this actually um is rather practical
+
+255
+00:11:44,360 --> 00:11:47,120
+and easy to do because what you can do
+
+256
+00:11:45,920 --> 00:11:48,920
+is you can just calculate the
+
+257
+00:11:47,120 --> 00:11:51,399
+probability according to each model at
+
+258
+00:11:48,920 --> 00:11:53,120
+each time step and train this model
+
+259
+00:11:51,399 --> 00:11:55,519
+separately without loading these models
+
+260
+00:11:53,120 --> 00:11:59,399
+into memory at the time of training
+
+261
+00:11:55,519 --> 00:12:00,959
+those models so uh yeah this is um
+
+262
+00:11:59,399 --> 00:12:04,800
+something you can do as
+
+263
+00:12:00,959 --> 00:12:04,800
+well any questions about
+
+264
+00:12:06,680 --> 00:12:11,920
+this
+
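+In symbols this is P(x_t | x_<t) = sum_m P(m) * P_m(x_t | x_<t). Here is
+a minimal sketch with constant mixture weights, assuming two causal LMs
+that share a vocabulary; the model names are placeholders:
+
+```python
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+tok = AutoTokenizer.from_pretrained("gpt2")     # shared vocabulary assumed
+models = [AutoModelForCausalLM.from_pretrained(m).eval()
+          for m in ["gpt2", "distilgpt2"]]
+weights = [0.5, 0.5]                            # constant P(m), sums to one
+
+ids = tok("The dog barks and then", return_tensors="pt").input_ids
+with torch.no_grad():
+    # weighted average of per-model next-token distributions;
+    # the result is itself a valid probability distribution
+    probs = sum(w * torch.softmax(m(ids).logits[:, -1, :], dim=-1)
+                for m, w in zip(models, weights))
+print(tok.decode(probs.argmax(-1)))
+```
+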
+265
+00:12:08,519 --> 00:12:14,000
+Okay cool so the other option is
+
+266
+00:12:11,920 --> 00:12:15,800
+log-linear interpolation so in linear
+
+267
+00:12:14,000 --> 00:12:18,680
+interpolation you're taking a linear
+
+268
+00:12:15,800 --> 00:12:22,040
+combination of the probabilities of each
+
+269
+00:12:18,680 --> 00:12:24,959
+model in log-linear interpolation you're
+
+270
+00:12:22,040 --> 00:12:26,079
+combining together the log probabilities
+
+271
+00:12:24,959 --> 00:12:29,519
+of each
+
+272
+00:12:26,079 --> 00:12:32,639
+model and then renormalizing so that
+
+273
+00:12:29,519 --> 00:12:34,920
+you get an actual
+
+274
+00:12:32,639 --> 00:12:37,760
+probabilistic output so basically what
+
+275
+00:12:34,920 --> 00:12:40,720
+you do is you have this uh interpolation
+
+276
+00:12:37,760 --> 00:12:44,040
+coefficient like I had before but you're
+
+277
+00:12:40,720 --> 00:12:44,040
+combining together the log
+
+278
+00:12:44,639 --> 00:12:49,639
+probabilities and so here we need to
+
+279
+00:12:47,680 --> 00:12:51,320
+take the
+
+280
+00:12:49,639 --> 00:12:53,760
+softmax
+
+281
+00:12:51,320 --> 00:12:55,760
+um thinking back here I didn't take the
+
+282
+00:12:53,760 --> 00:12:58,120
+softmax does anyone have an idea why I
+
+283
+00:12:55,760 --> 00:13:02,000
+didn't take the
+
+284
+00:12:58,120 --> 00:13:02,000
+softmax or why I didn't need
+
+285
+00:13:08,160 --> 00:13:12,199
+to and why I do need to
+
+286
+00:13:21,600 --> 00:13:27,680
+here yeah
+
+287
+00:13:23,639 --> 00:13:30,440
+so this probability is guaranteed to be
+
+288
+00:13:27,680 --> 00:13:32,240
+between zero and one and add up to one
+
+289
+00:13:30,440 --> 00:13:33,760
+this probability is also guaranteed to
+
+290
+00:13:32,240 --> 00:13:35,680
+be between zero and one and add up to
+
+291
+00:13:33,760 --> 00:13:37,120
+one and then when you multiply those
+
+292
+00:13:35,680 --> 00:13:39,160
+together uh you can do a little bit of
+
+293
+00:13:37,120 --> 00:13:41,440
+math and demonstrate that the resulting
+
+294
+00:13:39,160 --> 00:13:42,839
+thing will be between zero and one and
+
+295
+00:13:41,440 --> 00:13:44,399
+add up to one that's not the case
+
+296
+00:13:42,839 --> 00:13:47,639
+anymore when we start doing things in
+
+297
+00:13:44,399 --> 00:13:50,160
+log space because it's just not a linear
+
+298
+00:13:47,639 --> 00:13:51,959
+function anymore so um you need to
+
+299
+00:13:50,160 --> 00:13:54,920
+renormalize like this luckily this is
+
+300
+00:13:51,959 --> 00:13:56,959
+super easy like anything else you do in
+
+301
+00:13:54,920 --> 00:13:59,320
+PyTorch you just add things together
+
+302
+00:13:56,959 --> 00:14:02,519
+and take a softmax and you'll
+
+303
+00:13:59,320 --> 00:14:05,279
+get an output but you do need to do this
+
+304
+00:14:02,519 --> 00:14:07,279
+otherwise you're going to get something
+
+305
+00:14:05,279 --> 00:14:09,639
+weird um the interpolation coefficient
+
+306
+00:14:07,279 --> 00:14:12,759
+here also can be set to a constant so
+
+307
+00:14:09,639 --> 00:14:15,320
+you could learn it uh kind of
+
+308
+00:14:12,759 --> 00:14:17,720
+dynamically or it could be separate
+
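+And a matching sketch for log-linear interpolation, with the same
+placeholder models as above; the final softmax is the renormalization
+step just described:
+
+```python
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+tok = AutoTokenizer.from_pretrained("gpt2")
+models = [AutoModelForCausalLM.from_pretrained(m).eval()
+          for m in ["gpt2", "distilgpt2"]]
+coeffs = [0.5, 0.5]
+
+ids = tok("The dog barks and then", return_tensors="pt").input_ids
+with torch.no_grad():
+    # sum_m w_m * log P_m(x_t | x_<t) is no longer normalized,
+    # so renormalize with a softmax at the end
+    score = sum(w * torch.log_softmax(m(ids).logits[:, -1, :], dim=-1)
+                for m, w in zip(models, coeffs))
+    probs = torch.softmax(score, dim=-1)
+print(tok.decode(probs.argmax(-1)))
+```
+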
+309
+00:14:15,320 --> 00:14:19,639
+cool and these actually have different
+
+310
+00:14:17,720 --> 00:14:23,880
+meanings oh sorry go ahead
+
+311
+00:14:19,639 --> 00:14:26,759
+[student question, partially inaudible]
+
+312
+00:14:23,880 --> 00:14:29,880
+yeah yeah so basically the
+
+313
+00:14:26,759 --> 00:14:32,399
+way you would do this is you would
+
+314
+00:14:29,880 --> 00:14:33,920
+either use the same model you would
+
+315
+00:14:32,399 --> 00:14:35,279
+either take
+
+316
+00:14:33,920 --> 00:14:37,480
+representations from one of these
+
+317
+00:14:35,279 --> 00:14:38,440
+language models or you would take
+
+318
+00:14:37,480 --> 00:14:41,639
+representations from another model and
+
+319
+00:14:38,440 --> 00:14:43,959
+you would
+
+320
+00:14:41,639 --> 00:14:46,480
+just have a model that
+
+321
+00:14:43,959 --> 00:14:48,279
+predicts uh what this interpolation
+
+322
+00:14:46,480 --> 00:14:49,720
+coefficient would be and the
+
+323
+00:14:48,279 --> 00:14:52,759
+optimization objective for that
+
+324
+00:14:49,720 --> 00:14:56,120
+interpolation coefficient is just
+
+325
+00:14:52,759 --> 00:14:59,600
+maximizing the probability
+
+326
+00:14:56,120 --> 00:15:01,839
+whatever so this could also be good um
+
+327
+00:14:59,600 --> 00:15:07,160
+because this interpolation coefficient
+
+328
+00:15:01,839 --> 00:15:09,399
+only like let's say you're interpolating
+
+329
+00:15:07,160 --> 00:15:13,320
+two models together it has one degree of
+
+330
+00:15:09,399 --> 00:15:15,320
+freedom at each time step right because
+
+331
+00:15:13,320 --> 00:15:17,839
+you're only predicting a probability um
+
+332
+00:15:15,320 --> 00:15:20,240
+if you have five models
+
+333
+00:15:17,839 --> 00:15:24,199
+you basically would be doing
+
+334
+00:15:20,240 --> 00:15:25,519
+a softmax over
+
+335
+00:15:24,199 --> 00:15:27,600
+five outputs and that's a lot fewer
+
+336
+00:15:25,519 --> 00:15:29,880
+than the whole
+
+337
+00:15:27,600 --> 00:15:31,639
+vocabulary right and so
+
+338
+00:15:29,880 --> 00:15:34,160
+learning a good interpolation
+
+339
+00:15:31,639 --> 00:15:35,800
+coefficient is relatively easy compared
+
+340
+00:15:34,160 --> 00:15:36,880
+to learning what word to predict next
+
+341
+00:15:35,800 --> 00:15:39,759
+and because of this you could actually
+
+342
+00:15:36,880 --> 00:15:42,880
+tune
+
+343
+00:15:39,759 --> 00:15:44,600
+this probability on a very small data
+
+344
+00:15:42,880 --> 00:15:46,959
+set and you could even have it be
+
+345
+00:15:44,600 --> 00:15:48,480
+context independent so you could just be
+
+346
+00:15:46,959 --> 00:15:51,399
+you know
+
+347
+00:15:48,480 --> 00:15:55,880
+calculating literally five
+
+348
+00:15:51,399 --> 00:15:57,399
+parameters here um and so because of
+
+349
+00:15:55,880 --> 00:16:00,319
+that like let's say you have a special
+
+350
+00:15:57,399 --> 00:16:02,639
+domain or a special task where you have
+
+351
+00:16:00,319 --> 00:16:04,920
+like 50 training examples or something
+
+352
+00:16:02,639 --> 00:16:07,399
+like that or you know 100 training
+
+353
+00:16:04,920 --> 00:16:08,959
+examples you can learn this
+
+354
+00:16:07,399 --> 00:16:12,480
+interpolation coefficient very
+
+355
+00:16:08,959 --> 00:16:15,880
+effectively uh on just a very
+
+356
+00:16:12,480 --> 00:16:18,120
+small number of training examples
+
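+A sketch of what such a learned gate might look like; the context
+representation, sizes, and batch are assumptions, and the per-model
+gold-token probabilities are precomputed offline, matching the point
+above about training it without loading the big models into memory:
+
+```python
+import torch
+import torch.nn as nn
+
+class InterpolationGate(nn.Module):
+    """Predicts P(m | context) over M models from a context vector."""
+    def __init__(self, hidden_size: int, n_models: int):
+        super().__init__()
+        self.proj = nn.Linear(hidden_size, n_models)
+
+    def forward(self, h):                  # h: (batch, hidden_size)
+        return torch.softmax(self.proj(h), dim=-1)
+
+# Precomputed offline: each model's probability of the gold next token.
+gold_probs = torch.rand(32, 5)             # (batch, M=5), made-up numbers
+h = torch.randn(32, 768)                   # context representations (assumed)
+
+gate = InterpolationGate(768, 5)
+w = gate(h)                                # mixture weights, (batch, 5)
+# Objective: maximize the mixture likelihood of the gold token.
+loss = -torch.log((w * gold_probs).sum(-1)).mean()
+loss.backward()
+```
+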
+357
+00:16:15,880 --> 00:16:20,000
+um but like it could be very useful
+
+358
+00:16:18,120 --> 00:16:23,920
+because like let's say you have a
+
+359
+00:16:20,000 --> 00:16:25,639
+special-domain medical language model
+
+360
+00:16:23,920 --> 00:16:27,759
+that's 1.3 billion parameters that you
+
+361
+00:16:25,639 --> 00:16:29,639
+trained yourself and then you have a 70
+
+362
+00:16:27,759 --> 00:16:31,079
+billion parameter language model
+
+363
+00:16:29,639 --> 00:16:33,680
+that's like really good at modeling
+
+364
+00:16:31,079 --> 00:16:35,399
+general English um so then you could
+
+365
+00:16:33,680 --> 00:16:39,120
+learn the interpolation coefficient
+
+366
+00:16:35,399 --> 00:16:40,600
+between those two such that um the large
+
+367
+00:16:39,120 --> 00:16:41,800
+general-purpose language model will be
+
+368
+00:16:40,600 --> 00:16:43,959
+generating all of the kind of
+
+369
+00:16:41,800 --> 00:16:46,360
+grammatical stuff but whenever you
+
+370
+00:16:43,959 --> 00:16:48,480
+switch over to modeling technical terms
+
+371
+00:16:46,360 --> 00:16:50,040
+from the medical domain then it learns
+
+372
+00:16:48,480 --> 00:16:52,480
+to upweight the medical language model
+
+373
+00:16:50,040 --> 00:16:54,199
+or something so this can be quite
+
+374
+00:16:52,480 --> 00:16:57,000
+effective if you have
+
+375
+00:16:54,199 --> 00:17:00,839
+a limited amount of data that you want
+
+376
+00:16:57,000 --> 00:17:00,839
+to tune this on
+
+377
+00:17:01,240 --> 00:17:05,600
+um any other questions about that
+
+378
+00:17:09,079 --> 00:17:14,880
+yeah yeah I'm just going to talk about
+
+379
+00:17:11,760 --> 00:17:17,640
+that next so linear versus log-linear
+
+380
+00:17:14,880 --> 00:17:20,880
+you can actually think of this in terms
+
+381
+00:17:17,640 --> 00:17:23,640
+of logic um and what I mean by that is
+
+382
+00:17:20,880 --> 00:17:26,640
+um linear is kind of like a logical OR
+
+383
+00:17:23,640 --> 00:17:29,600
+it tries to come up with examples where
+
+384
+00:17:26,640 --> 00:17:31,679
+either one of the two assigns a high
+
+385
+00:17:29,600 --> 00:17:36,200
+probability so we have the example of
+
+386
+00:17:31,679 --> 00:17:36,200
+like bark run
+
+387
+00:17:55,640 --> 00:18:03,840
+dives so if we take the average of these
+
+388
+00:18:00,360 --> 00:18:03,840
+two in linear
+
+389
+00:18:04,120 --> 00:18:10,240
+space this would be
+
+390
+00:18:07,159 --> 00:18:13,679
+0.2 this would be
+
+391
+00:18:10,240 --> 00:18:17,240
+0.26 and this would
+
+392
+00:18:13,679 --> 00:18:17,240
+be um
+
+393
+00:18:17,400 --> 00:18:26,280
+0.21 and so a linear combination
+
+394
+00:18:21,480 --> 00:18:28,600
+between the two will find run to be the
+
+395
+00:18:26,280 --> 00:18:30,600
+highest scoring one because on the left
+
+396
+00:18:28,600 --> 00:18:32,280
+side we have one model that really likes
+
+397
+00:18:30,600 --> 00:18:33,159
+this output and we have another model
+
+398
+00:18:32,280 --> 00:18:35,159
+that
+
+399
+00:18:33,159 --> 00:18:39,280
+doesn't
+
+400
+00:18:35,159 --> 00:18:42,159
+um this can be good at using
+
+401
+00:18:39,280 --> 00:18:44,440
+models that capture uh different traits
+
+402
+00:18:42,159 --> 00:18:47,679
+or it can also be useful if like for
+
+403
+00:18:44,440 --> 00:18:49,840
+example you have a small
+
+404
+00:18:47,679 --> 00:18:52,320
+model that really
+
+405
+00:18:49,840 --> 00:18:53,840
+captures like very specific vocabulary
+
+406
+00:18:52,320 --> 00:18:55,520
+and you want to upweight that specific
+
+407
+00:18:53,840 --> 00:18:56,799
+vocabulary that gets a really low
+
+408
+00:18:55,520 --> 00:18:57,720
+probability according to a general
+
+409
+00:18:56,799 --> 00:19:01,360
+purpose
+
+410
+00:18:57,720 --> 00:19:03,200
+model um this is also necessary when any
+
+411
+00:19:01,360 --> 00:19:04,520
+model can assign zero probabilities so
+
+412
+00:19:03,200 --> 00:19:06,720
+if you have like an example of
+
+413
+00:19:04,520 --> 00:19:10,080
+vocabulary that 
isn't included in the + +414 +00:19:06,720 --> 00:19:11,159 +the like vocabulary of another model or + +415 +00:19:10,080 --> 00:19:14,280 +you have models with different + +416 +00:19:11,159 --> 00:19:17,200 +vocabularies it's necessary to do this + +417 +00:19:14,280 --> 00:19:19,200 +log linear is more like logical and um + +418 +00:19:17,200 --> 00:19:22,240 +so the interpolated model only likes + +419 +00:19:19,200 --> 00:19:23,799 +choices where all the models agree and + +420 +00:19:22,240 --> 00:19:25,640 +this is particularly good when you want + +421 +00:19:23,799 --> 00:19:27,440 +to restrict possible answers like you + +422 +00:19:25,640 --> 00:19:29,280 +want to have one model be able to say no + +423 +00:19:27,440 --> 00:19:32,080 +I really don't like this so never output + +424 +00:19:29,280 --> 00:19:34,200 +it so um for example if you wanted to + +425 +00:19:32,080 --> 00:19:37,360 +train a model that you knew was very + +426 +00:19:34,200 --> 00:19:38,919 +adverse to toxic language and prevent uh + +427 +00:19:37,360 --> 00:19:42,600 +the model from outputting toxic language + +428 +00:19:38,919 --> 00:19:45,200 +you could use log linear mod so I I + +429 +00:19:42,600 --> 00:19:47,559 +can't unfortunately uh calculate logs + +430 +00:19:45,200 --> 00:19:50,080 +and exponents in my head well enough to + +431 +00:19:47,559 --> 00:19:51,600 +uh to decide this but I'm sure that a + +432 +00:19:50,080 --> 00:19:53,840 +linear + +433 +00:19:51,600 --> 00:19:56,840 +model the linear model would pick the + +434 +00:19:53,840 --> 00:19:59,600 +first one here and the log linear + +435 +00:19:56,840 --> 00:20:01,679 +model would pick the second one because + +436 +00:19:59,600 --> 00:20:05,640 +the second one has a very low score here + +437 +00:20:01,679 --> 00:20:08,640 +so that would be downrated um + +438 +00:20:05,640 --> 00:20:08,640 +by + +439 +00:20:16,919 --> 00:20:20,640 +yeah yeah so + +440 +00:20:25,840 --> 00:20:31,000 +if yeah and if there's any chance of + +441 +00:20:28,760 --> 00:20:34,159 +assigning zero probability according to + +442 +00:20:31,000 --> 00:20:36,520 +a language model then really you can't + +443 +00:20:34,159 --> 00:20:38,200 +even test that language model on that on + +444 +00:20:36,520 --> 00:20:42,120 +that test set + +445 +00:20:38,200 --> 00:20:43,640 +um so the issue becomes like let's say + +446 +00:20:42,120 --> 00:20:45,559 +you have two models with different + +447 +00:20:43,640 --> 00:20:47,080 +vocabulary if you have two models with + +448 +00:20:45,559 --> 00:20:49,080 +different vocabulary it becomes very + +449 +00:20:47,080 --> 00:20:50,559 +tricky how to reconcile those two but + +450 +00:20:49,080 --> 00:20:53,440 +you could do linear interpolation + +451 +00:20:50,559 --> 00:20:55,200 +between them like match the vocab the + +452 +00:20:53,440 --> 00:20:57,559 +output vocabularies that they do have + +453 +00:20:55,200 --> 00:21:00,120 +and then just not worry about the fact + +454 +00:20:57,559 --> 00:21:02,760 +that the vocabularies are dis jointed + +455 +00:21:00,120 --> 00:21:05,039 +and because one will assign a zero + +456 +00:21:02,760 --> 00:21:07,280 +probability to those vocabulary items + +457 +00:21:05,039 --> 00:21:12,240 +but the other one is fine so you can + +458 +00:21:07,280 --> 00:21:14,919 +just do that but if you're in general it + +459 +00:21:12,240 --> 00:21:16,480 +will be very tricky to try to get two + +460 +00:21:14,919 --> 00:21:18,559 +models with different vocabularies to + +461 +00:21:16,480 --> 
00:21:21,480
+play together nicely so I I would
+
+462
+00:21:18,559 --> 00:21:22,919
+suggest um thinking about thinking
+
+463
+00:21:21,480 --> 00:21:25,600
+seriously about whether you need to do
+
+464
+00:21:22,919 --> 00:21:31,360
+that or not before you start out but
+
+465
+00:21:25,600 --> 00:21:31,360
+yeah um uh yes there any
+
+466
+00:21:35,559 --> 00:21:40,960
+other
+
+467
+00:21:38,039 --> 00:21:43,360
+um you could definitely so the question
+
+468
+00:21:40,960 --> 00:21:45,000
+is are there any other types of
+
+469
+00:21:43,360 --> 00:21:47,760
+interpolation that have other types of
+
+470
+00:21:45,000 --> 00:21:50,159
+logical components like XOR or NOR um
+
+471
+00:21:47,760 --> 00:21:52,840
+you could definitely come up with one uh
+
+472
+00:21:50,159 --> 00:21:55,440
+I I am struggling a little bit to think
+
+473
+00:21:52,840 --> 00:21:57,520
+about when you would want to do that but
+
+474
+00:21:55,440 --> 00:22:02,840
+I'm sure
+
+475
+00:21:57,520 --> 00:22:05,840
+you is is the inherent that the
+
+476
+00:22:02,840 --> 00:22:05,840
+err
+
+477
+00:22:09,120 --> 00:22:14,480
+not so what what if the errors are not
+
+478
+00:22:12,640 --> 00:22:15,919
+what if the errors are correlated so
+
+479
+00:22:14,480 --> 00:22:18,200
+think about what happens if the errors
+
+480
+00:22:15,919 --> 00:22:20,000
+are perfectly correlated um which is
+
+481
+00:22:18,200 --> 00:22:25,840
+when you're using the same model in two
+
+482
+00:22:20,000 --> 00:22:25,840
+parts of the uh like on top so you
+
+483
+00:22:27,000 --> 00:22:30,520
+literally uh these
+
+484
+00:22:29,159 --> 00:22:32,679
+model one and model two are the same
+
+485
+00:22:30,520 --> 00:22:36,720
+model if that's the case nothing happens
+
+486
+00:22:32,679 --> 00:22:39,200
+it doesn't get worse um and
+
+487
+00:22:36,720 --> 00:22:43,039
+so of course because this is machine
+
+488
+00:22:39,200 --> 00:22:45,080
+learning there's no guarantee like you
+
+489
+00:22:43,039 --> 00:22:47,559
+know unless we make some assumptions
+
+490
+00:22:45,080 --> 00:22:49,200
+about the relationship between like the
+
+491
+00:22:47,559 --> 00:22:52,279
+training set and the test set or the
+
+492
+00:22:49,200 --> 00:22:53,760
+models errors in the test set um you can
+
+493
+00:22:52,279 --> 00:22:57,039
+always do something that will make your
+
+494
+00:22:53,760 --> 00:22:59,240
+accuracy worse um like let's say we flip
+
+495
+00:22:57,039 --> 00:23:00,360
+the labels of a binary classifier
+
+496
+00:22:59,240 --> 00:23:03,120
+no matter what you do you're going to
+
+497
+00:23:00,360 --> 00:23:06,320
+make your accuracy worse but
+
+498
+00:23:03,120 --> 00:23:09,000
+um no matter what the normal thing you
+
+499
+00:23:06,320 --> 00:23:10,640
+would do is it would make your if it
+
+500
+00:23:09,000 --> 00:23:12,480
+would improve accuracy normally it would
+
+501
+00:23:10,640 --> 00:23:14,760
+decrease your accuracy but like under
+
+502
+00:23:12,480 --> 00:23:16,080
+pretty reasonable assumptions it's
+
+503
+00:23:14,760 --> 00:23:20,400
+mostly going to be the case that errors
+
+504
+00:23:16,080 --> 00:23:22,320
+are decorrelated to some extent um
+
+505
+00:23:20,400 --> 00:23:25,559
+so
+
+506
+00:23:22,320 --> 00:23:30,440
+yeah you and because of that ensembling
+
+507
+00:23:25,559 --> 00:23:30,440
+usually helps yeah
+
+508
+00:23:36,120 --> 00:23:42,019
+um about which one
+
+509
+00:23:38,760 --> 00:23:42,019
+[Music]
+
+510
+00:23:53,559 --> 00:24:01,240
+which let me make sure I 
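+To make the OR-versus-AND contrast above concrete, here is a minimal sketch of linear versus log-linear interpolation of two models' next-token distributions. The toy probabilities and the weight lam are invented for illustration, not taken from the slide:
+
+    import torch
+
+    # Toy next-token distributions from two hypothetical models over a
+    # three-word vocabulary (made-up numbers for illustration).
+    p1 = torch.tensor([0.50, 0.02, 0.30])  # model 1 strongly likes word 0
+    p2 = torch.tensor([0.02, 0.50, 0.12])  # model 2 strongly likes word 1
+    lam = 0.5                              # interpolation coefficient
+
+    # Linear interpolation acts like a logical OR: a word scores well
+    # if EITHER model assigns it high probability.
+    p_linear = lam * p1 + (1 - lam) * p2
+
+    # Log-linear interpolation acts like a logical AND: a weighted sum
+    # of log-probabilities, renormalized with a softmax. Any word that
+    # one model gives near-zero probability is suppressed (log -> -inf).
+    p_loglinear = torch.softmax(
+        lam * torch.log(p1) + (1 - lam) * torch.log(p2), dim=-1)
+
+Note that the linear mixture stays well defined even when one model assigns exact zeros (for example with partly disjoint vocabularies), while the log-linear mixture zeroes those words out, which is the behavior you want when one model should be able to veto an output.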
didn't mess it + +511 +00:23:55,640 --> 00:24:01,240 +up on sides okay so in my + +512 +00:24:06,960 --> 00:24:13,120 +example yeah yeah + +513 +00:24:09,640 --> 00:24:13,120 +yeah sorry about + +514 +00:24:14,360 --> 00:24:19,320 +that because this is this is where the + +515 +00:24:17,039 --> 00:24:21,840 +average is higher and then this is + +516 +00:24:19,320 --> 00:24:27,200 +one take + +517 +00:24:21,840 --> 00:24:29,039 +you uh cool any other any other + +518 +00:24:27,200 --> 00:24:31,840 +questions okay + +519 +00:24:29,039 --> 00:24:34,440 +okay so + +520 +00:24:31,840 --> 00:24:36,320 +um another thing I should point out is + +521 +00:24:34,440 --> 00:24:39,600 +that we don't + +522 +00:24:36,320 --> 00:24:41,840 +necessarily need to use models only as + +523 +00:24:39,600 --> 00:24:44,080 +positive evidence so if you're using log + +524 +00:24:41,840 --> 00:24:46,039 +linear interpolation actually your + +525 +00:24:44,080 --> 00:24:49,919 +interpolation coefficients do not need + +526 +00:24:46,039 --> 00:24:52,520 +to be positive they can also be negative + +527 +00:24:49,919 --> 00:24:55,360 +and you can have uh things where you + +528 +00:24:52,520 --> 00:24:57,840 +penalize the probabilities given by a + +529 +00:24:55,360 --> 00:24:59,679 +particular model and this has actually + +530 +00:24:57,840 --> 00:25:01,520 +been used for a long time it was + +531 +00:24:59,679 --> 00:25:04,440 +actually used in machine translation + +532 +00:25:01,520 --> 00:25:08,840 +since like uh 2005 or something like + +533 +00:25:04,440 --> 00:25:11,480 +this but the basic idea is um that you + +534 +00:25:08,840 --> 00:25:13,600 +have some models that serve as negative + +535 +00:25:11,480 --> 00:25:15,559 +evidence so you have kind of a core + +536 +00:25:13,600 --> 00:25:17,880 +model this might be your really strong + +537 +00:25:15,559 --> 00:25:21,520 +general purpose language model you have + +538 +00:25:17,880 --> 00:25:23,080 +a positive uh model which is the model + +539 +00:25:21,520 --> 00:25:25,240 +that you want to kind of boost up and + +540 +00:25:23,080 --> 00:25:27,320 +improve and a negative model which you + +541 +00:25:25,240 --> 00:25:31,159 +want to + +542 +00:25:27,320 --> 00:25:33,679 +decrease and um one example of this is + +543 +00:25:31,159 --> 00:25:36,760 +in uh a paper that we did in + +544 +00:25:33,679 --> 00:25:40,159 +2019 um the core was a machine + +545 +00:25:36,760 --> 00:25:42,960 +translation model and the negative model + +546 +00:25:40,159 --> 00:25:44,880 +is an outof domain language model and + +547 +00:25:42,960 --> 00:25:46,960 +the positive model is an in domain + +548 +00:25:44,880 --> 00:25:51,039 +language model and so the idea behind + +549 +00:25:46,960 --> 00:25:53,880 +this is a machine translation model um + +550 +00:25:51,039 --> 00:25:55,600 +you have to train it on machine + +551 +00:25:53,880 --> 00:25:58,320 +translation data and machine translation + +552 +00:25:55,600 --> 00:26:00,640 +data is not very easy to get for + +553 +00:25:58,320 --> 00:26:02,360 +particular domains for example um you + +554 +00:26:00,640 --> 00:26:03,880 +might only have machine translation data + +555 +00:26:02,360 --> 00:26:06,919 +in the news domain and you actually want + +556 +00:26:03,880 --> 00:26:09,240 +to be uh doing uh translation in the + +557 +00:26:06,919 --> 00:26:12,720 +medical domain or something so what you + +558 +00:26:09,240 --> 00:26:14,640 +do is you have your positive model here + +559 +00:26:12,720 --> 00:26:17,600 +this 
could be a new this is a machine
+
+560
+00:26:14,640 --> 00:26:19,919
+translation model this could be a news
+
+561
+00:26:17,600 --> 00:26:21,320
+domain or sorry this could be a medical
+
+562
+00:26:19,919 --> 00:26:22,919
+domain language model and this could be
+
+563
+00:26:21,320 --> 00:26:24,360
+a news domain language model so you're
+
+564
+00:26:22,919 --> 00:26:25,840
+subtracting out the news domain
+
+565
+00:26:24,360 --> 00:26:27,600
+probabilities and adding in medical
+
+566
+00:26:25,840 --> 00:26:30,240
+domain probabilities move it in that
+
+567
+00:26:27,600 --> 00:26:30,240
+direction
+
+568
+00:26:30,440 --> 00:26:36,799
+um another example of this is uh
+
+569
+00:26:32,919 --> 00:26:40,000
+something called uh DExperts um or
+
+570
+00:26:36,799 --> 00:26:43,440
+DExperts and the idea here is here you
+
+571
+00:26:40,000 --> 00:26:46,120
+have a strong language model as your
+
+572
+00:26:43,440 --> 00:26:48,320
+core and then as negative you have a
+
+573
+00:26:46,120 --> 00:26:50,240
+weak toxic language model so it was
+
+574
+00:26:48,320 --> 00:26:52,760
+trained on lot lots of like bad texts
+
+575
+00:26:50,240 --> 00:26:55,799
+that you don't want to be generating and
+
+576
+00:26:52,760 --> 00:26:57,159
+the positive is a weak non-toxic
+
+577
+00:26:55,799 --> 00:26:59,279
+language model that was trained on lots
+
+578
+00:26:57,159 --> 00:27:03,200
+of like innocuous
+
+579
+00:26:59,279 --> 00:27:04,399
+posts so that would help you detoxify
+
+580
+00:27:03,200 --> 00:27:06,679
+the outputs of the
+
+581
+00:27:04,399 --> 00:27:09,799
+language model so there's lots of examples of
+
+582
+00:27:06,679 --> 00:27:09,799
+things like this that you can do
+
+583
+00:27:10,720 --> 00:27:15,880
+through
+
+584
+00:27:12,880 --> 00:27:15,880
+yeah
+
+585
+00:27:19,320 --> 00:27:25,880
+yeah um so the positive in the machine
+
+586
+00:27:22,840 --> 00:27:27,679
+translation example this is a so this is
+
+587
+00:27:25,880 --> 00:27:31,760
+a machine translation model where the
+
+588
+00:27:27,679 --> 00:27:34,080
+input is is like in um English and out
+
+589
+00:27:31,760 --> 00:27:37,880
+is in Japanese something like
+
+590
+00:27:34,080 --> 00:27:39,679
+that this is only trained on Japanese
+
+591
+00:27:37,880 --> 00:27:42,919
+but it's trained on like medical
+
+592
+00:27:39,679 --> 00:27:44,440
+Japanese for example Med the domain one
+
+593
+00:27:42,919 --> 00:27:48,480
+this is a language model that was
+
+594
+00:27:44,440 --> 00:27:50,600
+trained on like news domain um Japanese
+
+595
+00:27:48,480 --> 00:27:54,039
+or it could even literally just be
+
+596
+00:27:50,600 --> 00:27:56,360
+trained on the target side of the machine
+
+597
+00:27:54,039 --> 00:28:00,120
+translation data um so it's trying to remove out
+
+598
+00:27:56,360 --> 00:28:00,120
+the language modeling component from the
+
+599
+00:28:03,720 --> 00:28:06,720
+cool
+
+600
+00:28:06,880 --> 00:28:11,480
+okay so another thing that I should
+
+601
+00:28:09,880 --> 00:28:14,720
+point out I didn't actually put it on
+
+602
+00:28:11,480 --> 00:28:18,399
+the slides is um there's a lot of other
+
+603
+00:28:14,720 --> 00:28:19,640
+ways to get multiple models and um I
+
+604
+00:28:18,399 --> 00:28:22,600
+think a lot of people are probably
+
+605
+00:28:19,640 --> 00:28:23,559
+familiar with Dropout um it's a method
+
+606
+00:28:22,600 --> 00:28:27,120
+for
+
+607
+00:28:23,559 --> 00:28:29,080
+regularizing um it's a method for
+
+608
+00:28:27,120 --> 00:28:31,120
+regularizing
+
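+Because log-linear interpolation weights may be negative, the core/positive/negative recipe just described fits in a few lines. This is a simplified sketch in the spirit of DExperts, with placeholder logits and a made-up strength hyperparameter alpha:
+
+    import torch
+
+    vocab_size = 32000
+    # Placeholder next-token logits from three models sharing a vocabulary.
+    core_logits = torch.randn(vocab_size)  # strong general-purpose model
+    pos_logits = torch.randn(vocab_size)   # small in-domain / non-toxic model
+    neg_logits = torch.randn(vocab_size)   # small out-of-domain / toxic model
+
+    alpha = 0.5  # how hard to push toward positive and away from negative
+
+    # Log-linear combination with a negative coefficient on the negative
+    # model: upweight what the positive expert likes and penalize what
+    # the negative expert likes.
+    p_next = torch.softmax(core_logits + alpha * (pos_logits - neg_logits),
+                           dim=-1)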
+609
+00:28:29,080 --> 00:28:33,760
+neural networks or deep learning models
+
+610
+00:28:31,120 --> 00:28:37,279
+in general and basically the idea is
+
+611
+00:28:33,760 --> 00:28:41,840
+every once in a while um during training
+
+612
+00:28:37,279 --> 00:28:45,720
+you drop out some portion of the uh like
+
+613
+00:28:41,840 --> 00:28:48,919
+nodes in the neural network model and
+
+614
+00:28:45,720 --> 00:28:51,320
+you can actually drop
+
+615
+00:28:48,919 --> 00:28:52,640
+out and normally what you do is at test
+
+616
+00:28:51,320 --> 00:28:53,919
+time then you just don't drop out
+
+617
+00:28:52,640 --> 00:28:56,039
+anything and you use the whole neural
+
+618
+00:28:53,919 --> 00:28:59,960
+network model but another thing you can
+
+619
+00:28:56,039 --> 00:29:02,559
+do is you can drop out at test time drop
+
+620
+00:28:59,960 --> 00:29:04,679
+out five times and combine those
+
+621
+00:29:02,559 --> 00:29:06,600
+different models together through ensembling
+
+622
+00:29:04,679 --> 00:29:10,600
+and that's actually something uh that
+
+623
+00:29:06,600 --> 00:29:14,480
+people tried in the uh in the Dropout
+
+624
+00:29:10,600 --> 00:29:17,600
+paper and this is one way to get
+
+625
+00:29:14,480 --> 00:29:19,640
+multiple models uh and actually you can
+
+626
+00:29:17,600 --> 00:29:21,919
+demonstrate that this helps the original
+
+627
+00:29:19,640 --> 00:29:24,519
+motivation behind Dropout was precisely
+
+628
+00:29:21,919 --> 00:29:26,279
+coming from this idea of
+
+629
+00:29:24,519 --> 00:29:29,080
+ensembling
+
+630
+00:29:26,279 --> 00:29:31,399
+another method
+
+631
+00:29:29,080 --> 00:29:34,799
+that has been around for a very long
+
+632
+00:29:31,399 --> 00:29:37,760
+time it's another ensembling method is
+
+633
+00:29:34,799 --> 00:29:41,919
+bagging and basically the way bagging
+
+634
+00:29:37,760 --> 00:29:41,919
+works is you have a data
+
+635
+00:29:44,000 --> 00:29:50,159
+set like this and you just resample the
+
+636
+00:29:47,519 --> 00:29:52,919
+data set so you sample all of the output
+
+637
+00:29:50,159 --> 00:29:55,200
+with uh replacement and you get another
+
+638
+00:29:52,919 --> 00:29:57,799
+data set of equal size and then you
+
+639
+00:29:55,200 --> 00:29:58,559
+train on this but you do that like 10
+
+640
+00:29:57,799 --> 00:30:00,120
+times
+
+641
+00:29:58,559 --> 00:30:02,679
+and you train 10 different models and
+
+642
+00:30:00,120 --> 00:30:04,360
+then you ensemble those models together and
+
+643
+00:30:02,679 --> 00:30:06,000
+so this is another way to get multiple
+
+644
+00:30:04,360 --> 00:30:07,519
+models and both of these still improve
+
+645
+00:30:06,000 --> 00:30:09,640
+your robustness because they basically
+
+646
+00:30:07,519 --> 00:30:11,440
+get a different view on the data so they
+
+647
+00:30:09,640 --> 00:30:13,440
+smooth over some of the
+
+648
+00:30:11,440 --> 00:30:15,360
+idiosyncrasies um and as I mentioned
+
+649
+00:30:13,440 --> 00:30:17,960
+before you can also get multiple models
+
+650
+00:30:15,360 --> 00:30:20,120
+from different checkpoints and then uh
+
+651
+00:30:17,960 --> 00:30:22,159
+put them together and all of these
+
+652
+00:30:20,120 --> 00:30:24,159
+methods are pretty related both of them
+
+653
+00:30:22,159 --> 00:30:25,960
+basically what they're doing is they're
+
+654
+00:30:24,159 --> 00:30:28,279
+taking advantage of the fact that you
+
+655
+00:30:25,960 --> 00:30:29,919
+have particular models that saw
+
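+Both tricks just mentioned are short in code. Below, a model with dropout layers is kept in train mode at test time so several stochastic forward passes can be averaged into an implicit ensemble, and a bagging-style bootstrap resample is drawn with replacement; the tiny classifier and the sizes are placeholders:
+
+    import torch
+    import torch.nn as nn
+
+    model = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
+                          nn.Dropout(p=0.5), nn.Linear(256, 10))
+
+    def test_time_dropout(model, x, k=5):
+        # Stay in train mode so dropout remains active, then average the
+        # predictive distributions of k stochastic forward passes.
+        model.train()
+        with torch.no_grad():
+            probs = torch.stack(
+                [torch.softmax(model(x), dim=-1) for _ in range(k)])
+        return probs.mean(dim=0)
+
+    def bootstrap_sample(n):
+        # Bagging: draw n training indices WITH replacement; train one
+        # model per resampled dataset and ensemble the results.
+        return torch.randint(0, n, (n,))
+
+    avg_probs = test_time_dropout(model, torch.randn(8, 100))
+    indices = bootstrap_sample(10_000)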
+656
+00:30:28,279 --> 00:30:32,760
+different data or saw data in a
+
+657
+00:30:29,919 --> 00:30:34,120
+different order or different nodes saw
+
+658
+00:30:32,760 --> 00:30:35,679
+different parts of the data because you
+
+659
+00:30:34,120 --> 00:30:37,799
+dropped out some of the nodes when they
+
+660
+00:30:35,679 --> 00:30:41,840
+were back propping on particular
+
+661
+00:30:37,799 --> 00:30:44,840
+varieties of the data so um even things
+
+662
+00:30:41,840 --> 00:30:46,720
+like this can give you models that are
+
+663
+00:30:44,840 --> 00:30:49,760
+different enough to help uh when
+
+664
+00:30:46,720 --> 00:30:49,760
+you're ensembling or
+
+665
+00:30:52,559 --> 00:30:59,360
+combining and then of course um you can
+
+666
+00:30:56,919 --> 00:31:00,799
+also
+
+667
+00:30:59,360 --> 00:31:02,480
+then of course you can also combine
+
+668
+00:31:00,799 --> 00:31:06,960
+together like very different models like
+
+669
+00:31:02,480 --> 00:31:06,960
+this and that also works in different
+
+670
+00:31:07,240 --> 00:31:11,159
+ways
+
+671
+00:31:09,000 --> 00:31:13,039
+cool part of the reason why I wanted to
+
+672
+00:31:11,159 --> 00:31:15,320
+mention that Dropout though in
+
+673
+00:31:13,039 --> 00:31:17,120
+particular is there's also other
+
+674
+00:31:15,320 --> 00:31:19,240
+efficient methods for using multiple
+
+675
+00:31:17,120 --> 00:31:22,000
+models so the big problem with
+
+676
+00:31:19,240 --> 00:31:25,399
+ensembling is the cost
+
+677
+00:31:22,000 --> 00:31:27,159
+and simple ensembling is very expensive
+
+678
+00:31:25,399 --> 00:31:29,240
+because it requires you to run multiple
+
+679
+00:31:27,159 --> 00:31:30,519
+models at test time at inference
+
+680
+00:31:29,240 --> 00:31:33,720
+time and this is something you don't
+
+681
+00:31:30,519 --> 00:31:35,279
+want to be doing if you're you know
+
+682
+00:31:33,720 --> 00:31:38,679
+deploying a service or something because
+
+683
+00:31:35,279 --> 00:31:41,080
+it like linearly increases your cost by
+
+684
+00:31:38,679 --> 00:31:45,200
+um the number of models that you're
+
+685
+00:31:41,080 --> 00:31:47,799
+running and it requires both n times
+
+686
+00:31:45,200 --> 00:31:50,120
+the computation and n times the memory
+
+687
+00:31:47,799 --> 00:31:51,720
+and memory is actually probably the
+
+688
+00:31:50,120 --> 00:31:54,279
+worst thing because you need to deploy
+
+689
+00:31:51,720 --> 00:31:58,159
+extra GPU machines and other stuff like
+
+690
+00:31:54,279 --> 00:31:59,880
+that so um the question is is there any
+
+691
+00:31:58,159 --> 00:32:03,279
+way we can get some of the benefits of
+
+692
+00:31:59,880 --> 00:32:06,519
+ensembling without having to create
+
+693
+00:32:03,279 --> 00:32:07,320
+multiple models and luckily the answer
+
+694
+00:32:06,519 --> 00:32:09,240
+is
+
+695
+00:32:07,320 --> 00:32:11,919
+yes
+
+696
+00:32:09,240 --> 00:32:13,960
+the method the easiest method for doing
+
+697
+00:32:11,919 --> 00:32:16,600
+so is something called parameter
+
+698
+00:32:13,960 --> 00:32:18,399
+averaging and basically what you do is
+
+699
+00:32:16,600 --> 00:32:21,960
+you just average the parameters of
+
+700
+00:32:18,399 --> 00:32:26,039
+multiple models together um this only
+
+701
+00:32:21,960 --> 00:32:29,200
+works under certain conditions so does
+
+702
+00:32:26,039 --> 00:32:31,120
+anyone um does anyone know what these
+
+703
+00:32:29,200 --> 00:32:33,320
+conditions might be there's a few
+
+704
+00:32:31,120 --> 00:32:35,919
+obvious ones and maybe a few slightly
+
+705
+00:32:33,320 --> 00:32:35,919
+less obvious
+
+706
+00:32:36,039 --> 00:32:40,799
+ones so like first question do you think
+
+707
+00:32:38,799 --> 00:32:41,919
+you could combine together do you think
+
+708
+00:32:40,799 --> 00:32:45,880
+you could average together the
+
+709
+00:32:41,919 --> 00:32:45,880
+parameters of Llama 7B and Llama
+
+710
+00:32:46,440 --> 00:32:52,639
+70B
+
+711
+00:32:48,480 --> 00:32:52,639
+no the answer is no but why
+
+712
+00:32:54,480 --> 00:32:58,440
+not I mean what does that even mean in
+
+713
+00:32:56,760 --> 00:33:00,480
+the first place right like they have
+
+714
+00:32:58,440 --> 00:33:02,799
+totally different numbers of parameters
+
+715
+00:33:00,480 --> 00:33:05,840
+uh you wouldn't be able to find a one
+
+716
+00:33:02,799 --> 00:33:07,840
+to-one association between 7B and like 7
+
+717
+00:33:05,840 --> 00:33:12,320
+billion parameters and 70 billion
+
+718
+00:33:07,840 --> 00:33:16,880
+parameters um what about averaging
+
+719
+00:33:12,320 --> 00:33:19,399
+together uh let's let's say Llama 7B and
+
+720
+00:33:16,880 --> 00:33:19,399
+Mistral
+
+721
+00:33:23,080 --> 00:33:29,760
+7Bs yes no y I'm guessing that like for
+
+722
+00:33:27,440 --> 00:33:29,760
+the
+
+723
+00:33:33,760 --> 00:33:38,120
+yeah for different architectures the um
+
+724
+00:33:36,760 --> 00:33:41,799
+the parameters could mean different
+
+725
+00:33:38,120 --> 00:33:44,159
+things and even if the architecture is
+
+726
+00:33:41,799 --> 00:33:45,880
+exactly the same even if your random
+
+727
+00:33:44,159 --> 00:33:49,880
+initialization is different then that
+
+728
+00:33:45,880 --> 00:33:52,360
+would be disastrous because basically
+
+729
+00:33:49,880 --> 00:33:54,760
+in neural networks there's no inherent
+
+730
+00:33:52,360 --> 00:33:58,559
+meaning to like parameter number one
+
+731
+00:33:54,760 --> 00:34:01,919
+right um and there's the idea of permutation
+
+732
+00:33:58,559 --> 00:34:06,679
+invariance which is
+
+733
+00:34:01,919 --> 00:34:07,639
+um you could like randomly swap all of
+
+734
+00:34:06,679 --> 00:34:10,280
+the
+
+735
+00:34:07,639 --> 00:34:12,079
+dimensions uh between within a neural
+
+736
+00:34:10,280 --> 00:34:14,760
+network and get exactly the same
+
+737
+00:34:12,079 --> 00:34:17,919
+function
+
+738
+00:34:14,760 --> 00:34:22,560
+uh as long as kind
+
+739
+00:34:17,919 --> 00:34:24,839
+of in layer number one you swap and then
+
+740
+00:34:22,560 --> 00:34:30,359
+also take the inputs in the next layer
+
+741
+00:34:24,839 --> 00:34:30,359
+also so um you know you know as long
+
+742
+00:34:30,960 --> 00:34:36,399
+as if you have a weight Matrix that
+
+743
+00:34:33,679 --> 00:34:40,800
+results in the um in the outputs being
+
+744
+00:34:36,399 --> 00:34:49,639
+ordered like 1 2 3 4
+
+745
+00:34:40,800 --> 00:34:54,159
+5 or 2 1 3 5 4 as long as
+
+746
+00:34:49,639 --> 00:34:55,720
+you also swap the input direct input
+
+747
+00:34:54,159 --> 00:34:58,400
+dimensions of this weight Matrix you get
+
+748
+00:34:55,720 --> 00:35:01,520
+exactly the same because they're
+
+749
+00:34:58,400 --> 00:35:04,200
+linear combinations of the parameters
+
+750
+00:35:01,520 --> 00:35:06,480
+together so neural networks have this
+
+751
+00:35:04,200 --> 00:35:08,599
+feature of permutation invariance so
+
+752
+00:35:06,480 --> 00:35:11,800
+models that were trained from like
+
+753
+00:35:08,599 --> 00:35:13,280
+different uh different initializations
+
+754
+00:35:11,800 --> 00:35:15,040
+won't be able to be combined together in
+
+755
+00:35:13,280 --> 00:35:18,320
+this
+
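+When the models do line up (same architecture, fine-tuned from one shared initialization, so that dimension i means the same thing in every model), the averaging itself is just an elementwise mean over state dicts. A minimal sketch with hypothetical checkpoint paths:
+
+    import torch
+
+    def average_state_dicts(state_dicts, weights=None):
+        # Elementwise weighted average of parameter tensors; only
+        # meaningful when the models share an architecture and a common
+        # ancestor initialization, otherwise permutation invariance
+        # makes coordinate-wise averaging nonsensical.
+        n = len(state_dicts)
+        weights = weights if weights is not None else [1.0 / n] * n
+        return {k: sum(w * sd[k].float() for w, sd in zip(weights, state_dicts))
+                for k in state_dicts[0]}
+
+    # e.g. average the last five checkpoints of a fine-tuning run
+    # (hypothetical file names):
+    # sds = [torch.load(f"checkpoint_{i}.pt") for i in range(5)]
+    # model.load_state_dict(average_state_dicts(sds))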
+756
+00:35:15,040 --> 00:35:20,079
+um but the good luck the good thing
+
+757
+00:35:18,320 --> 00:35:21,359
+is actually we have a whole bunch of
+
+758
+00:35:20,079 --> 00:35:25,320
+models that come from the same
+
+759
+00:35:21,359 --> 00:35:26,720
+pre-trained model right uh so we we have
+
+760
+00:35:25,320 --> 00:35:28,640
+this initialization here this
+
+761
+00:35:26,720 --> 00:35:31,280
+initialization was used to train Llama
+
+762
+00:35:28,640 --> 00:35:32,920
+2 7B but now we have like hundreds
+
+763
+00:35:31,280 --> 00:35:34,440
+hundreds of models that are derived from
+
+764
+00:35:32,920 --> 00:35:37,400
+Llama 2 we have hundreds of models that
+
+765
+00:35:34,440 --> 00:35:39,599
+are derived from Mistral and there all of the
+
+766
+00:35:37,400 --> 00:35:40,920
+dimensions actually mean the same thing
+
+767
+00:35:39,599 --> 00:35:43,280
+because they're derived from the same
+
+768
+00:35:40,920 --> 00:35:46,680
+parameters in the first place so those
+
+769
+00:35:43,280 --> 00:35:48,119
+ones we can average together and um
+
+770
+00:35:46,680 --> 00:35:50,359
+there's basically two ways that we can
+
+771
+00:35:48,119 --> 00:35:53,520
+do this uh one is by averaging together
+
+772
+00:35:50,359 --> 00:35:55,240
+multiple checkpoints during training so
+
+773
+00:35:53,520 --> 00:35:57,960
+originally this was the big thing that
+
+774
+00:35:55,240 --> 00:36:00,359
+people did uh like you would train model
+
+775
+00:35:57,960 --> 00:36:02,119
+from scratch for a really long time but
+
+776
+00:36:00,359 --> 00:36:03,920
+then you would take the final five
+
+777
+00:36:02,119 --> 00:36:07,520
+checkpoints and you would just average
+
+778
+00:36:03,920 --> 00:36:09,280
+them together and this helps reduce some
+
+779
+00:36:07,520 --> 00:36:11,040
+of the noise that you get from
+
+780
+00:36:09,280 --> 00:36:13,839
+stochastic gradient descent and can
+
+781
+00:36:11,040 --> 00:36:15,520
+improve your overall accuracy if you're
+
+782
+00:36:13,839 --> 00:36:17,280
+fine-tuning any models this is something
+
+783
+00:36:15,520 --> 00:36:18,680
+you can do also uh because you're
+
+784
+00:36:17,280 --> 00:36:19,800
+probably going to be saving checkpoints
+
+785
+00:36:18,680 --> 00:36:21,160
+you can just take the best five
+
+786
+00:36:19,800 --> 00:36:23,079
+checkpoints and average them together
+
+787
+00:36:21,160 --> 00:36:27,280
+and that actually can improve your
+
+788
+00:36:23,079 --> 00:36:28,160
+accuracy quite a bit um another thing is
+
+789
+00:36:27,280 --> 00:36:31,520
+fine-
+
+790
+00:36:28,160 --> 00:36:32,880
+tuned model merging so fine-tune um in
+
+791
+00:36:31,520 --> 00:36:35,000
+several ways and then merge them
+
+792
+00:36:32,880 --> 00:36:39,079
+together and so for example we might
+
+793
+00:36:35,000 --> 00:36:41,240
+take Llama 2 7B instruct and um Vicuna 7B
+
+794
+00:36:39,079 --> 00:36:44,760
+1.5 and merge them together with some
+
+795
+00:36:41,240 --> 00:36:47,599
+weights and uh we could you
+
+796
+00:36:44,760 --> 00:36:50,319
+know smooth over their idiosyncrasies
+
+797
+00:36:47,599 --> 00:36:52,520
+and get better results
+
+798
+00:36:50,319 --> 00:36:56,280
+too
+
+799
+00:36:52,520 --> 00:36:56,280
+cool uh any questions
+
+800
+00:36:56,520 --> 00:36:59,520
+here
+
+801
+00:37:00,920 --> 00:37:03,119
+oh
+
+802
+00:37:04,680 --> 00:37:11,920
+yeah want to so I just
+
+803
+00:37:09,680 --> 00:37:14,079
+came
+
+804
+00:37:11,920 --> 
00:37:19,040 +non I + +805 +00:37:14,079 --> 00:37:19,040 +use like those different chain and + +806 +00:37:19,640 --> 00:37:23,319 +just + +807 +00:37:21,160 --> 00:37:26,640 +I pretty + +808 +00:37:23,319 --> 00:37:29,520 +efficient because on the same model you + +809 +00:37:26,640 --> 00:37:29,520 +get + +810 +00:37:35,640 --> 00:37:40,839 +yeah so would this would this parameter + +811 +00:37:38,000 --> 00:37:46,119 +averaging be a good method for U making + +812 +00:37:40,839 --> 00:37:49,839 +a model less toxic for example the + +813 +00:37:46,119 --> 00:37:53,200 +answer is a little bit trickier there I + +814 +00:37:49,839 --> 00:37:56,119 +guess because um I I feel like this is + +815 +00:37:53,200 --> 00:37:58,160 +good for mixing two models together so + +816 +00:37:56,119 --> 00:38:01,400 +if you're mixing your + +817 +00:37:58,160 --> 00:38:03,359 +like non-toxicity tuned model or your + +818 +00:38:01,400 --> 00:38:06,079 +safety tuned model with the original + +819 +00:38:03,359 --> 00:38:07,520 +base model that was not uh safety tuned + +820 +00:38:06,079 --> 00:38:08,800 +or something like that then you might + +821 +00:38:07,520 --> 00:38:11,240 +get something in the middle so you might + +822 +00:38:08,800 --> 00:38:13,319 +get something that's less safe than the + +823 +00:38:11,240 --> 00:38:18,720 +uh like the model that was tuned to not + +824 +00:38:13,319 --> 00:38:21,400 +be toxic so it might be uh yeah I'm not + +825 +00:38:18,720 --> 00:38:23,920 +sure but like let's say you let's say + +826 +00:38:21,400 --> 00:38:26,240 +you have a model that somebody + +827 +00:38:23,920 --> 00:38:28,640 +else did like a really good job + +828 +00:38:26,240 --> 00:38:31,359 +instruction tuning for you + +829 +00:38:28,640 --> 00:38:33,640 +um and anytime you start using safety + +830 +00:38:31,359 --> 00:38:35,560 +tuning on it you like hurt the + +831 +00:38:33,640 --> 00:38:38,680 +instruction tuning like the model gets + +832 +00:38:35,560 --> 00:38:40,560 +worse I could see a world where you take + +833 +00:38:38,680 --> 00:38:43,000 +the base model the same base model you + +834 +00:38:40,560 --> 00:38:45,280 +take llama 27b you train like a less + +835 +00:38:43,000 --> 00:38:47,480 +toxic version of llama 27d and then do + +836 +00:38:45,280 --> 00:38:51,319 +parameter averaging with the like well + +837 +00:38:47,480 --> 00:38:53,160 +instruction tuned model um that might + +838 +00:38:51,319 --> 00:38:55,359 +work that might make something that's + +839 +00:38:53,160 --> 00:38:57,560 +more safe and like not much worse + +840 +00:38:55,359 --> 00:39:01,440 +instruction to so there's definitely I + +841 +00:38:57,560 --> 00:39:01,440 +think creative things that you can do + +842 +00:39:01,520 --> 00:39:08,400 +that um maybe I'll go directly into the + +843 +00:39:04,960 --> 00:39:11,480 +methods um + +844 +00:39:08,400 --> 00:39:13,240 +so uh there's a few uh recent papers on + +845 +00:39:11,480 --> 00:39:16,000 +this like this method has been around + +846 +00:39:13,240 --> 00:39:17,880 +for a long time since at least 1996 but + +847 +00:39:16,000 --> 00:39:20,880 +uh recently people have examined it a + +848 +00:39:17,880 --> 00:39:24,800 +lot in the context of uh kind of modern + +849 +00:39:20,880 --> 00:39:27,400 +networks and uh this paper model soup uh + +850 +00:39:24,800 --> 00:39:29,000 +examines two strategies the first one is + +851 +00:39:27,400 --> 00:39:31,400 +uniform averaging where you just average + +852 +00:39:29,000 --> 00:39:33,560 +all the 
parameters together uh like as + +853 +00:39:31,400 --> 00:39:35,480 +you would expect but they also have a + +854 +00:39:33,560 --> 00:39:38,319 +greedy averaging method and basically + +855 +00:39:35,480 --> 00:39:40,240 +what they do here is they add one model + +856 +00:39:38,319 --> 00:39:42,119 +and check if the whole like averaged + +857 +00:39:40,240 --> 00:39:43,680 +model improves and then only if the + +858 +00:39:42,119 --> 00:39:45,760 +whole averaged model improves do they + +859 +00:39:43,680 --> 00:39:49,040 +keep that model otherwise they throw it + +860 +00:39:45,760 --> 00:39:52,960 +out and then they um they don't uh use + +861 +00:39:49,040 --> 00:39:54,520 +it so what they demonstrate uh this is a + +862 +00:39:52,960 --> 00:39:57,560 +little bit small but basically the + +863 +00:39:54,520 --> 00:40:00,520 +purple star here is uh when the use + +864 +00:39:57,560 --> 00:40:02,480 +greedy averaging and then the blue + +865 +00:40:00,520 --> 00:40:05,119 +circle here is when they use the uniform + +866 +00:40:02,480 --> 00:40:08,280 +averaging and then green is all of the + +867 +00:40:05,119 --> 00:40:09,960 +models that they they put into this + +868 +00:40:08,280 --> 00:40:12,560 +average + +869 +00:40:09,960 --> 00:40:16,680 +and what they found + +870 +00:40:12,560 --> 00:40:18,480 +is this is average uh accuracy on image + +871 +00:40:16,680 --> 00:40:22,400 +net which is the thing that they they + +872 +00:40:18,480 --> 00:40:25,160 +used in deciding which models to merge + +873 +00:40:22,400 --> 00:40:26,920 +in greedily and then this is on + +874 +00:40:25,160 --> 00:40:28,640 +distribution shifts so this is on other + +875 +00:40:26,920 --> 00:40:31,119 +data sets other than the ones they use + +876 +00:40:28,640 --> 00:40:33,040 +specifically for training and what you + +877 +00:40:31,119 --> 00:40:34,720 +can see is the greedy averaging method + +878 +00:40:33,040 --> 00:40:38,720 +does + +879 +00:40:34,720 --> 00:40:40,839 +better um than the best single model on + +880 +00:40:38,720 --> 00:40:42,319 +the data set that they used to decide + +881 +00:40:40,839 --> 00:40:44,800 +that greedy + +882 +00:40:42,319 --> 00:40:46,560 +average the uniform average actually + +883 +00:40:44,800 --> 00:40:48,359 +does worse than the best model so you + +884 +00:40:46,560 --> 00:40:50,960 +would actually be better off for image + +885 +00:40:48,359 --> 00:40:52,960 +net accuracy to just use the best model + +886 +00:40:50,960 --> 00:40:56,000 +but it's more robust so on the + +887 +00:40:52,960 --> 00:40:57,319 +distribution shift like data set it + +888 +00:40:56,000 --> 00:41:00,000 +actually does better than any of them + +889 +00:40:57,319 --> 00:41:02,280 +models so um you can see that there's + +890 +00:41:00,000 --> 00:41:04,720 +kind of trade-offs between choosing + +891 +00:41:02,280 --> 00:41:06,480 +those + +892 +00:41:04,720 --> 00:41:09,319 +essentially + +893 +00:41:06,480 --> 00:41:12,040 +um whoops that's a that's a typo that + +894 +00:41:09,319 --> 00:41:15,760 +should be ensembling but um they also + +895 +00:41:12,040 --> 00:41:18,440 +demonstrate that um averaging is + +896 +00:41:15,760 --> 00:41:22,720 +correlated with ensembling so this is + +897 +00:41:18,440 --> 00:41:25,200 +the um image accuracy of the parameter + +898 +00:41:22,720 --> 00:41:27,000 +average model this is image not accuracy + +899 +00:41:25,200 --> 00:41:30,200 +of the Ensemble so this is actually I + +900 +00:41:27,000 --> 00:41:33,720 +think really interesting figure um 
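+In code, the greedy variant is a simple loop: rank candidate models by held-out accuracy, then keep each one in the soup only if the averaged model does not get worse. This sketch assumes an evaluate(state_dict) -> float helper and the average_state_dicts function from the earlier sketch; it conveys the idea of the recipe, not the paper's exact implementation:
+
+    def greedy_soup(candidate_state_dicts, evaluate):
+        # Try candidates best-first; keep one only if adding it to the
+        # averaged "soup" does not hurt held-out accuracy.
+        ranked = sorted(candidate_state_dicts, key=evaluate, reverse=True)
+        soup = [ranked[0]]
+        best = evaluate(ranked[0])
+        for sd in ranked[1:]:
+            trial = average_state_dicts(soup + [sd])
+            score = evaluate(trial)
+            if score >= best:
+                soup.append(sd)
+                best = score
+        return average_state_dicts(soup)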
what + +901 +00:41:30,200 --> 00:41:36,440 +it shows is that there's a pretty strong + +902 +00:41:33,720 --> 00:41:38,760 +correlation between the two averaging is + +903 +00:41:36,440 --> 00:41:41,400 +almost never better than ensembling the + +904 +00:41:38,760 --> 00:41:44,800 +two together but it's faster of course + +905 +00:41:41,400 --> 00:41:48,119 +so it's better because it's faster and + +906 +00:41:44,800 --> 00:41:50,000 +there are situations where the Ensemble + +907 +00:41:48,119 --> 00:41:51,680 +is much better than the average model so + +908 +00:41:50,000 --> 00:41:55,720 +like the average model hurts the + +909 +00:41:51,680 --> 00:41:58,560 +averaging hurts um onbling does not hurt + +910 +00:41:55,720 --> 00:42:01,319 +so what this shows you is parameter + +911 +00:41:58,560 --> 00:42:03,119 +averaging is is safe and it nearly + +912 +00:42:01,319 --> 00:42:04,359 +approximates model on samping most of + +913 +00:42:03,119 --> 00:42:06,720 +the time but there are cases where it + +914 +00:42:04,359 --> 00:42:08,119 +doesn't so you do need to be a little + +915 +00:42:06,720 --> 00:42:11,720 +bit careful and it might hurt your + +916 +00:42:08,119 --> 00:42:11,720 +accuracy in some cases + +917 +00:42:16,680 --> 00:42:21,520 +yeah oh yeah sorry very good point yes + +918 +00:42:19,280 --> 00:42:21,520 +it's + +919 +00:42:22,319 --> 00:42:29,119 +paralel yeah + +920 +00:42:26,119 --> 00:42:29,119 +this + +921 +00:42:36,480 --> 00:42:41,520 +um how do you know + +922 +00:42:39,400 --> 00:42:45,720 +it's + +923 +00:42:41,520 --> 00:42:48,280 +particular yeah so notably all of these + +924 +00:42:45,720 --> 00:42:48,280 +are + +925 +00:42:48,800 --> 00:42:52,240 +initialized it's been a little while + +926 +00:42:50,800 --> 00:42:54,079 +since I read this but I know all of + +927 +00:42:52,240 --> 00:42:56,520 +these were initialized from a model that + +928 +00:42:54,079 --> 00:42:58,160 +was already pretty good on image that + +929 +00:42:56,520 --> 00:43:01,760 +and then they were tuned in different + +930 +00:42:58,160 --> 00:43:03,800 +ways I guess and so this I think this + +931 +00:43:01,760 --> 00:43:05,319 +might be initialized with a model that + +932 +00:43:03,800 --> 00:43:09,160 +was trained on a different data set or + +933 +00:43:05,319 --> 00:43:10,160 +something like that um and so they are + +934 +00:43:09,160 --> 00:43:12,480 +all starting from the same + +935 +00:43:10,160 --> 00:43:14,480 +initialization so parameter U + +936 +00:43:12,480 --> 00:43:16,599 +permutation inv variance is not an issue + +937 +00:43:14,480 --> 00:43:19,200 +there because they're starting from the + +938 +00:43:16,599 --> 00:43:23,480 +pre um but despite the fact that it's + +939 +00:43:19,200 --> 00:43:26,520 +not a problem there are there are cases + +940 +00:43:23,480 --> 00:43:29,119 +where like averaging is detrimental + +941 +00:43:26,520 --> 00:43:29,119 +compared to + +942 +00:43:32,839 --> 00:43:37,559 +um okay so + +943 +00:43:42,800 --> 00:43:45,800 +yeah + +944 +00:43:51,720 --> 00:43:54,720 +yep + +945 +00:43:56,040 --> 00:43:59,040 +y + +946 +00:44:07,079 --> 00:44:10,079 +okay + +947 +00:44:26,040 --> 00:44:29,040 +y + +948 +00:44:46,319 --> 00:44:52,520 +yeah so that's a great question um I'll + +949 +00:44:48,240 --> 00:44:54,920 +just repeat it which is um the these + +950 +00:44:52,520 --> 00:44:57,520 +experiments were done on CNN's or image + +951 +00:44:54,920 --> 00:44:59,280 +net like uh CNN based image that + +952 +00:44:57,520 --> 00:45:01,119 
+classifiers is there something different + +953 +00:44:59,280 --> 00:45:04,040 +than Transformers particularly because + +954 +00:45:01,119 --> 00:45:06,240 +Transformer representations tend to be + +955 +00:45:04,040 --> 00:45:09,000 +uh like very concentrated in particular + +956 +00:45:06,240 --> 00:45:11,359 +parts of the space that's an excellent + +957 +00:45:09,000 --> 00:45:14,040 +question um what I do know is a lot of + +958 +00:45:11,359 --> 00:45:15,319 +people do merge together Transformer + +959 +00:45:14,040 --> 00:45:18,319 +models in fact if you look at the + +960 +00:45:15,319 --> 00:45:20,079 +hugging face leaderboard there's like + +961 +00:45:18,319 --> 00:45:22,240 +something and something merg together + +962 +00:45:20,079 --> 00:45:24,200 +like all over the leader board and it + +963 +00:45:22,240 --> 00:45:25,960 +does tend to improve accuracy so I I + +964 +00:45:24,200 --> 00:45:27,480 +know it is definitely effective for + +965 +00:45:25,960 --> 00:45:28,559 +Transformers + +966 +00:45:27,480 --> 00:45:32,040 +however Are + +967 +00:45:28,559 --> 00:45:34,640 +there specific model like parameter + +968 +00:45:32,040 --> 00:45:37,040 +averaging or model merging methods that + +969 +00:45:34,640 --> 00:45:38,599 +could improve accuracy by taking + +970 +00:45:37,040 --> 00:45:40,680 +advantage of the fact that Transformers + +971 +00:45:38,599 --> 00:45:42,480 +behaving a c certain way I think that's + +972 +00:45:40,680 --> 00:45:44,920 +totally possible and you know it would + +973 +00:45:42,480 --> 00:45:48,800 +be an interesting research Direction um + +974 +00:45:44,920 --> 00:45:51,680 +I'm not familiar enough with that + +975 +00:45:48,800 --> 00:45:53,359 +particular part myself to say oh I have + +976 +00:45:51,680 --> 00:45:55,160 +this great idea that you should work on + +977 +00:45:53,359 --> 00:45:55,920 +but I think if you're interested in it + +978 +00:45:55,160 --> 00:45:58,160 +you + +979 +00:45:55,920 --> 00:46:00,280 +definitely + +980 +00:45:58,160 --> 00:46:05,240 +cool anything + +981 +00:46:00,280 --> 00:46:08,920 +El okay so there's also the idea of uh + +982 +00:46:05,240 --> 00:46:12,440 +task vectors and um basically task + +983 +00:46:08,920 --> 00:46:15,280 +vectors here we are just merging + +984 +00:46:12,440 --> 00:46:17,280 +together two models by taking the + +985 +00:46:15,280 --> 00:46:18,280 +parameters of the models and averaging + +986 +00:46:17,280 --> 00:46:22,079 +them + +987 +00:46:18,280 --> 00:46:24,480 +together task vectors and other related + +988 +00:46:22,079 --> 00:46:26,040 +works specifically take advantage of the + +989 +00:46:24,480 --> 00:46:27,640 +fact that we're looking at different + +990 +00:46:26,040 --> 00:46:29,160 +fine-tuned models + +991 +00:46:27,640 --> 00:46:31,480 +and so these are models where we have a + +992 +00:46:29,160 --> 00:46:33,920 +base model and we know that uh that we + +993 +00:46:31,480 --> 00:46:35,760 +fine-tuned from this base model and the + +994 +00:46:33,920 --> 00:46:38,480 +basic idea is that we have our base + +995 +00:46:35,760 --> 00:46:40,319 +model here and the task Vector is the + +996 +00:46:38,480 --> 00:46:43,280 +difference between the base models + +997 +00:46:40,319 --> 00:46:45,559 +Vector uh parameters and the uh fine + +998 +00:46:43,280 --> 00:46:49,480 +tune models parameters so that's what + +999 +00:46:45,559 --> 00:46:52,720 +they Define as a task Vector um what + +1000 +00:46:49,480 --> 00:46:56,000 +does this allow us to do this allows us + +1001 
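+With that definition, task arithmetic is a few dictionary operations: subtract the shared base from a fine-tuned checkpoint to get a task vector, then add scaled task vectors back onto the base, where a negative scale attempts to remove a behavior. A sketch with hypothetical checkpoints:
+
+    def task_vector(base_sd, finetuned_sd):
+        # The parameter-space difference introduced by fine-tuning
+        # from the shared base model.
+        return {k: finetuned_sd[k] - base_sd[k] for k in base_sd}
+
+    def apply_task_vectors(base_sd, taus, scales):
+        # theta = theta_base + sum_i scale_i * tau_i; a positive scale
+        # adds a task, a negative scale tries to subtract one out.
+        out = {k: v.clone() for k, v in base_sd.items()}
+        for tau, s in zip(taus, scales):
+            for k in out:
+                out[k] += s * tau[k]
+        return out
+
+    # hypothetical usage: add a medical task, remove a toxicity "task"
+    # merged = apply_task_vectors(base, [tau_medical, tau_toxic], [1.0, -1.0])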
+00:46:52,720 --> 00:46:58,040 +to do a number of interesting things um + +1002 +00:46:56,000 --> 00:47:02,359 +the first one + +1003 +00:46:58,040 --> 00:47:05,119 +is that we can actually subtract out uh + +1004 +00:47:02,359 --> 00:47:08,960 +quote unquote tasks that we don't want + +1005 +00:47:05,119 --> 00:47:11,559 +so like let's say we had a model that + +1006 +00:47:08,960 --> 00:47:13,440 +was trained on lots of toxic text or we + +1007 +00:47:11,559 --> 00:47:15,760 +had a model that was trained on lots of + +1008 +00:47:13,440 --> 00:47:18,760 +private text or something like that we + +1009 +00:47:15,760 --> 00:47:22,040 +could actually subtract out the task + +1010 +00:47:18,760 --> 00:47:24,240 +Vector from this and basically attempt + +1011 +00:47:22,040 --> 00:47:27,480 +to remove the model's ability to uh do + +1012 +00:47:24,240 --> 00:47:31,240 +that sort of things um you can also + +1013 +00:47:27,480 --> 00:47:36,040 +take two task vectors and combine them + +1014 +00:47:31,240 --> 00:47:39,280 +together and uh like get the model uh + +1015 +00:47:36,040 --> 00:47:42,200 +from the combination of the two um this + +1016 +00:47:39,280 --> 00:47:44,280 +isn't exactly the same as averaging the + +1017 +00:47:42,200 --> 00:47:45,440 +parameters because if you average the + +1018 +00:47:44,280 --> 00:47:47,400 +parameters you would probably get + +1019 +00:47:45,440 --> 00:47:49,160 +something in the middle right here but + +1020 +00:47:47,400 --> 00:47:50,440 +if you average the two vectors or add + +1021 +00:47:49,160 --> 00:47:52,040 +the two vectors together you would get + +1022 +00:47:50,440 --> 00:47:53,760 +something over here actually sorry if + +1023 +00:47:52,040 --> 00:47:56,520 +you average the vectors maybe it's the + +1024 +00:47:53,760 --> 00:47:58,119 +same so you could like add together the + +1025 +00:47:56,520 --> 00:47:59,480 +two vectors and and that would be + +1026 +00:47:58,119 --> 00:48:01,640 +something different than taking the + +1027 +00:47:59,480 --> 00:48:05,280 +average so it gives you a little bit + +1028 +00:48:01,640 --> 00:48:07,720 +more flexibility about things to do + +1029 +00:48:05,280 --> 00:48:09,599 +um and another thing this allows you to + +1030 +00:48:07,720 --> 00:48:12,920 +do is this allows you to try to resolve + +1031 +00:48:09,599 --> 00:48:15,400 +conflicts between um vectors of + +1032 +00:48:12,920 --> 00:48:19,720 +different tasks and so this is an + +1033 +00:48:15,400 --> 00:48:22,480 +illustration of of this method here + +1034 +00:48:19,720 --> 00:48:25,680 +and this has three tasks basically it + +1035 +00:48:22,480 --> 00:48:27,720 +has model one model two model three and + +1036 +00:48:25,680 --> 00:48:29,920 +each of them has vectors and you'll see + +1037 +00:48:27,720 --> 00:48:32,880 +that in some cases these vectors + +1038 +00:48:29,920 --> 00:48:34,599 +conflict so we have like pink going up + +1039 +00:48:32,880 --> 00:48:36,079 +we have yellow and purple going down we + +1040 +00:48:34,599 --> 00:48:37,800 +have yellow going up we have pink and + +1041 +00:48:36,079 --> 00:48:40,720 +purple going down etc + +1042 +00:48:37,800 --> 00:48:43,040 +etc and what this does is this + +1043 +00:48:40,720 --> 00:48:45,960 +identifies the vectors that are uh + +1044 +00:48:43,040 --> 00:48:48,040 +pointing the most strongly in particular + +1045 +00:48:45,960 --> 00:48:50,440 +directions and then it resolves + +1046 +00:48:48,040 --> 00:48:52,240 +conflicts between them and comes up with + +1047 +00:48:50,440 
--> 00:48:54,559 +a vector that tries to move in a + +1048 +00:48:52,240 --> 00:48:55,920 +direction that improves all of the tasks + +1049 +00:48:54,559 --> 00:48:59,319 +at the same time and they demonstrate + +1050 +00:48:55,920 --> 00:49:01,480 +that this is better method for um kind + +1051 +00:48:59,319 --> 00:49:04,599 +of improving the ability to do all of + +1052 +00:49:01,480 --> 00:49:09,599 +the tasks compared to just averaging + +1053 +00:49:04,599 --> 00:49:09,599 +things together so yeah first + +1054 +00:49:11,920 --> 00:49:15,559 +exle like it just + +1055 +00:49:16,880 --> 00:49:23,640 +add yeah so this is + +1056 +00:49:20,680 --> 00:49:25,760 +um yeah you could move it more in that + +1057 +00:49:23,640 --> 00:49:27,319 +direction it there's obviously no + +1058 +00:49:25,760 --> 00:49:29,720 +guarantee that it would make it better + +1059 +00:49:27,319 --> 00:49:32,319 +but it might make it more extreme at + +1060 +00:49:29,720 --> 00:49:35,760 +least so uh + +1061 +00:49:32,319 --> 00:49:35,760 +yeah any other + +1062 +00:49:36,680 --> 00:49:39,960 +questions all + +1063 +00:49:55,640 --> 00:49:58,640 +yes + +1064 +00:50:25,640 --> 00:50:28,640 +one + +1065 +00:50:32,319 --> 00:50:37,240 +yeah yeah so this is a a great question + +1066 +00:50:35,599 --> 00:50:38,760 +um I can explain a little bit I'm not + +1067 +00:50:37,240 --> 00:50:40,760 +going to talk about Metal learning + +1068 +00:50:38,760 --> 00:50:42,680 +extensively in this class but just to + +1069 +00:50:40,760 --> 00:50:46,040 +give a very quick primer for people who + +1070 +00:50:42,680 --> 00:50:46,040 +don't know about it + +1071 +00:50:55,640 --> 00:50:58,640 +um + +1072 +00:51:00,359 --> 00:51:06,040 +this is an example of a paper on metal + +1073 +00:51:03,319 --> 00:51:09,559 +learning for low resource machine + +1074 +00:51:06,040 --> 00:51:12,680 +translation um I you can take a look at + +1075 +00:51:09,559 --> 00:51:16,200 +this paper um or not take a look at this + +1076 +00:51:12,680 --> 00:51:17,760 +paper um uh but the reason why I wanted + +1077 +00:51:16,200 --> 00:51:20,799 +to look at this paper is because it has + +1078 +00:51:17,760 --> 00:51:25,160 +a good um uh it has a good illustration + +1079 +00:51:20,799 --> 00:51:27,200 +of what metal learning is and basically + +1080 +00:51:25,160 --> 00:51:29,160 +um if we + +1081 +00:51:27,200 --> 00:51:33,839 +are doing transfer learning from a + +1082 +00:51:29,160 --> 00:51:35,880 +single task what we do is we have like a + +1083 +00:51:33,839 --> 00:51:37,960 +Spanish English machine translation + +1084 +00:51:35,880 --> 00:51:41,839 +system and then we fine-tune it to try + +1085 +00:51:37,960 --> 00:51:45,280 +to hit like to try to be a good Romanian + +1086 +00:51:41,839 --> 00:51:48,680 +uh English or latan English system if + +1087 +00:51:45,280 --> 00:51:50,400 +we're doing multitask learning um or + +1088 +00:51:48,680 --> 00:51:53,079 +which also could be equivalent to like + +1089 +00:51:50,400 --> 00:51:55,680 +instruction tuning for example we have + +1090 +00:51:53,079 --> 00:51:57,680 +uh French uh Spanish and Portuguese we + +1091 +00:51:55,680 --> 00:52:03,319 +train on all the then we + +1092 +00:51:57,680 --> 00:52:06,520 +fine-tune to uh to be a good Romanian uh + +1093 +00:52:03,319 --> 00:52:09,240 +translator latan trans uh + +1094 +00:52:06,520 --> 00:52:10,760 +translator whereas metal learning what + +1095 +00:52:09,240 --> 00:52:12,119 +it's trying to do is it's trying to + +1096 +00:52:10,760 --> 
00:52:14,680
+learn a good
+
+1097
+00:52:12,119 --> 00:52:17,480
+initialization that makes it easy to
+
+1098
+00:52:14,680 --> 00:52:21,280
+fine-tune to try to come up with a model
+
+1099
+00:52:17,480 --> 00:52:23,839
+that is good uh for fine-tuning into new
+
+1100
+00:52:21,280 --> 00:52:29,040
+tasks
+
+1101
+00:52:23,839 --> 00:52:32,200
+um the way you do this is basically um
+
+1102
+00:52:29,040 --> 00:52:36,599
+you have two
+
+1103
+00:52:32,200 --> 00:52:39,400
+steps um of gradient descent and so you
+
+1104
+00:52:36,599 --> 00:52:42,400
+have a first step where you uh train the
+
+1105
+00:52:39,400 --> 00:52:42,400
+model
+
+1106
+00:52:42,599 --> 00:52:50,160
+um where you have an update on like data
+
+1107
+00:52:47,119 --> 00:52:50,160
+from French for
+
+1108
+00:52:55,440 --> 00:53:02,400
+example
+
+1109
+00:52:57,920 --> 00:53:02,400
+and then you have another
+
+1110
+00:53:04,640 --> 00:53:10,599
+update um where you train on like black
+
+1111
+00:53:07,880 --> 00:53:10,599
+or something like
+
+1112
+00:53:12,559 --> 00:53:17,040
+this and this is a very informal very
+
+1113
+00:53:15,599 --> 00:53:18,200
+informal description there's a lot of
+
+1114
+00:53:17,040 --> 00:53:19,599
+stuff we could talk about here I could
+
+1115
+00:53:18,200 --> 00:53:22,119
+have a whole class on this but we're not
+
+1116
+00:53:19,599 --> 00:53:27,200
+going to um I don't have one planned at
+
+1117
+00:53:22,119 --> 00:53:28,559
+the moment um and so you uh you up once
+
+1118
+00:53:27,200 --> 00:53:30,319
+and then you update again and you
+
+1119
+00:53:28,559 --> 00:53:33,400
+differentiate through this update
+
+1120
+00:53:30,319 --> 00:53:35,160
+process uh so that this becomes like
+
+1121
+00:53:33,400 --> 00:53:37,440
+essentially a good initialization for
+
+1122
+00:53:35,160 --> 00:53:40,640
+training on other languages or for other
+
+1123
+00:53:37,440 --> 00:53:43,000
+tasks or things like that
+
+1124
+00:53:40,640 --> 00:53:44,920
+um now going back to the original
+
+1125
+00:53:43,000 --> 00:53:46,240
+question the original question is is
+
+1126
+00:53:44,920 --> 00:53:50,000
+there a connection between meta-
+
+1127
+00:53:46,240 --> 00:53:50,000
+learning and these uh task
+
+1128
+00:53:54,720 --> 00:53:58,440
+vectors I'm not
+
+1129
+00:53:59,079 --> 00:54:03,720
+100% sure about that because I think
+
+1130
+00:54:01,760 --> 00:54:06,599
+these task vectors are generally created
+
+1131
+00:54:03,720 --> 00:54:08,480
+post hoc and so they're not like there's
+
+1132
+00:54:06,599 --> 00:54:12,680
+no explicit learning step to try to make
+
+1133
+00:54:08,480 --> 00:54:14,440
+them uh you know generalize well um one
+
+1134
+00:54:12,680 --> 00:54:15,960
+one thing that maybe might be
+
+1135
+00:54:14,440 --> 00:54:18,559
+interesting to people this is a paper
+
+1136
+00:54:15,960 --> 00:54:23,040
+that we like literally just put on
+
+1137
+00:54:18,559 --> 00:54:23,040
+arXiv about last week
+
+1138
+00:54:25,359 --> 00:54:28,359
+um
+
+1139
+00:54:34,520 --> 00:54:39,880
+and we didn't actually use meta-
+
+1140
+00:54:36,400 --> 00:54:41,960
+learning in this uh in this paper um
+
+1141
+00:54:39,880 --> 00:54:44,520
+just because meta-learning actually is
+
+1142
+00:54:41,960 --> 00:54:46,160
+hard to implement uh because you need to
+
+1143
+00:54:44,520 --> 00:54:48,680
+do this kind of double differentiation
+
+1144
+00:54:46,160 --> 00:54:50,720
+and can become very very expensive for
+
+1145
+00:54:48,680 --> 00:54:52,839
+large
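+To make the "update, then update again" description slightly more concrete, here is a first-order sketch in the spirit of MAML. It deliberately skips the second-order term (differentiating through the inner update) that the full method uses, since that is exactly the expensive part noted here; the two task-loss callables are placeholders:
+
+    import copy
+    import torch
+
+    def meta_step(model, loss_on_task_a, loss_on_task_b,
+                  inner_lr=1e-2, outer_lr=1e-3):
+        # Inner update: adapt a clone of the model on task A
+        # (e.g. a batch of French->English data).
+        fast = copy.deepcopy(model)
+        inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
+        loss_on_task_a(fast).backward()
+        inner_opt.step()
+
+        # Outer update: evaluate the adapted clone on task B and move
+        # the ORIGINAL parameters along that gradient -- a first-order
+        # stand-in for backpropagating through the inner update.
+        fast.zero_grad()
+        loss_b = loss_on_task_b(fast)
+        loss_b.backward()
+        with torch.no_grad():
+            for p, q in zip(model.parameters(), fast.parameters()):
+                p -= outer_lr * q.grad
+        return loss_b.item()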
models but we did something a + +1146 +00:54:50,720 --> 00:54:55,920 +little bit motivated by + +1147 +00:54:52,839 --> 00:54:59,680 +um uh by metal learning and what we did + +1148 +00:54:55,920 --> 00:55:01,280 +is we took a pre-trained LM and normally + +1149 +00:54:59,680 --> 00:55:04,359 +what you do is something like continued + +1150 +00:55:01,280 --> 00:55:06,799 +pre-training on new documents to learn + +1151 +00:55:04,359 --> 00:55:10,160 +knowledge from the new documents or + +1152 +00:55:06,799 --> 00:55:12,200 +maybe um instruction tuning including + +1153 +00:55:10,160 --> 00:55:15,960 +instruction tuning on data on documents + +1154 +00:55:12,200 --> 00:55:17,520 +about the kind of uh data that you would + +1155 +00:55:15,960 --> 00:55:18,880 +want to be answering questions about so + +1156 +00:55:17,520 --> 00:55:20,640 +like let's say you're trying to train a + +1157 +00:55:18,880 --> 00:55:23,000 +medical language model you might train + +1158 +00:55:20,640 --> 00:55:26,680 +on lots of medical documents but what we + +1159 +00:55:23,000 --> 00:55:29,839 +did here is we had a step where we train + +1160 +00:55:26,680 --> 00:55:33,720 +in advance to + +1161 +00:55:29,839 --> 00:55:38,079 +get on question answer Pairs and + +1162 +00:55:33,720 --> 00:55:40,400 +documents from another domain and then + +1163 +00:55:38,079 --> 00:55:43,359 +we have a step after that where we train + +1164 +00:55:40,400 --> 00:55:46,400 +on documents from the domain we want to + +1165 +00:55:43,359 --> 00:55:48,400 +answer on so like we might train on + +1166 +00:55:46,400 --> 00:55:51,079 +Wikipedia question answer Pairs and + +1167 +00:55:48,400 --> 00:55:52,559 +Wikipedia documents and then in the + +1168 +00:55:51,079 --> 00:55:54,079 +second step we would train on medical + +1169 +00:55:52,559 --> 00:55:56,680 +documents and we demonstrate that + +1170 +00:55:54,079 --> 00:55:58,880 +basically this allows the model to do a + +1171 +00:55:56,680 --> 00:56:00,880 +better job of question answering over + +1172 +00:55:58,880 --> 00:56:03,640 +these uh documents that we find tune on + +1173 +00:56:00,880 --> 00:56:05,000 +over here and so kind of going back to + +1174 +00:56:03,640 --> 00:56:06,760 +the metal learning paper that I talked + +1175 +00:56:05,000 --> 00:56:08,359 +about before the metal learning paper + +1176 +00:56:06,760 --> 00:56:10,640 +tries to get the parameters in a good + +1177 +00:56:08,359 --> 00:56:12,559 +space so that after you find ton on + +1178 +00:56:10,640 --> 00:56:15,520 +another data set you do a good job of + +1179 +00:56:12,559 --> 00:56:17,799 +that in this paper our motivation is + +1180 +00:56:15,520 --> 00:56:20,359 +that the model kind of learns that when + +1181 +00:56:17,799 --> 00:56:22,039 +you train on documents you should be + +1182 +00:56:20,359 --> 00:56:24,079 +able to answer questions about those + +1183 +00:56:22,039 --> 00:56:25,480 +documents and so when you get a new set + +1184 +00:56:24,079 --> 00:56:27,200 +of documents it's kind of in a good part + +1185 +00:56:25,480 --> 00:56:31,079 +of the parameter space to make that easy + +1186 +00:56:27,200 --> 00:56:33,520 +to do so um if that if metal learning is + +1187 +00:56:31,079 --> 00:56:34,640 +interesting um there are tutorials on + +1188 +00:56:33,520 --> 00:56:37,119 +metal learning that I could probably + +1189 +00:56:34,640 --> 00:56:39,599 +share and then um if you're interested + +1190 +00:56:37,119 --> 00:56:42,599 +in kind of like learning Knowledge from + +1191 +00:56:39,599 
--> 00:56:45,039
+uh learning Knowledge
+
+1192
+00:56:42,599 --> 00:56:46,079
+from continued pre-training or something
+
+1193
+00:56:45,039 --> 00:56:47,400
+like that you could take a look at this
+
+1194
+00:56:46,079 --> 00:56:49,920
+right there as
+
+1195
+00:56:47,400 --> 00:56:54,480
+well uh
+
+1196
+00:56:49,920 --> 00:56:54,480
+cool any questions about that
+
+1197
+00:56:55,240 --> 00:57:00,880
+or
+
+1198
+00:56:57,599 --> 00:57:02,480
+okay cool I I'll jump on this so anyway
+
+1199
+00:57:00,880 --> 00:57:05,520
+um I talked about several methods for
+
+1200
+00:57:02,480 --> 00:57:07,520
+merging models together um there's a
+
+1201
+00:57:05,520 --> 00:57:09,440
+popular toolkit called MergeKit that
+
+1202
+00:57:07,520 --> 00:57:10,960
+makes it relatively easy to do this it
+
+1203
+00:57:09,440 --> 00:57:13,280
+implements a lot of the models that I
+
+1204
+00:57:10,960 --> 00:57:17,160
+talked about here including uh the
+
+1205
+00:57:13,280 --> 00:57:19,880
+linear methods um uh the task arithmetic
+
+1206
+00:57:17,160 --> 00:57:23,079
+method and TIES uh so I talked about
+
+1207
+00:57:19,880 --> 00:57:25,480
+these there is kind of like an expansion
+
+1208
+00:57:23,079 --> 00:57:27,240
+on this so if you want to merge together
+
+1209
+00:57:25,480 --> 00:57:28,760
+models it's relatively easy to do from a
+
+1210
+00:57:27,240 --> 00:57:30,760
+software standpoint so you can
+
+1211
+00:57:28,760 --> 00:57:35,119
+take a look at
+
+1212
+00:57:30,760 --> 00:57:38,000
+that um another really simple thing uh
+
+1213
+00:57:35,119 --> 00:57:39,880
+is uh distilling ensembles and so we
+
+1214
+00:57:38,000 --> 00:57:43,039
+already talked about distillation the
+
+1215
+00:57:39,880 --> 00:57:45,599
+idea is simple um
+
+1216
+00:57:43,039 --> 00:57:47,680
+you so parameter averaging only really
+
+1217
+00:57:45,599 --> 00:57:49,200
+works for models within the same run uh
+
+1218
+00:57:47,680 --> 00:57:51,760
+same model architecture same
+
+1219
+00:57:49,200 --> 00:57:54,280
+initialization so knowledge distillation
+
+1220
+00:57:51,760 --> 00:57:55,559
+uh trains a model to copy The Ensemble
+
+1221
+00:57:54,280 --> 00:57:57,359
+and so it tries to match the
+
+1222
+00:57:55,559 --> 00:57:59,119
+distribution over the predicted words
+
+1223
+00:57:57,359 --> 00:58:00,760
+for an
+
+1224
+00:57:59,119 --> 00:58:05,319
+ensemble
+
+1225
+00:58:00,760 --> 00:58:07,799
+um and so this allows the model to make
+
+1226
+00:58:05,319 --> 00:58:09,079
+the same you know good predictions as
+
+1227
+00:58:07,799 --> 00:58:11,079
+The Ensemble make the same bad
+
+1228
+00:58:09,079 --> 00:58:12,799
+predictions as the Ensemble it just allows
+
+1229
+00:58:11,079 --> 00:58:14,799
+you to learn more efficiently just like
+
+1230
+00:58:12,799 --> 00:58:16,680
+distillation does in general and they
+
+1231
+00:58:14,799 --> 00:58:18,960
+actually model distillation the original
+
+1232
+00:58:16,680 --> 00:58:22,240
+motivation for it when Jeff Hinton
+
+1233
+00:58:18,960 --> 00:58:24,599
+proposed it in 2015 in in this paper was
+
+1234
+00:58:22,240 --> 00:58:25,680
+to copy an ensemble now we use it for a
+
+1235
+00:58:24,599 --> 00:58:27,039
+lot of other things like in the
+
+1236
+00:58:25,680 --> 00:58:31,160
+distillation
+
+1237
+00:58:27,039 --> 00:58:31,160
+like we did in the class but that was the
+
+1238
+00:58:34,119 --> 00:58:39,599
+original
+
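+The training objective for distilling an ensemble is compact: average the teachers' predicted distributions and train the student against the soft targets. A sketch, with student and teacher logits as placeholders over a shared vocabulary:
+
+    import torch
+    import torch.nn.functional as F
+
+    def ensemble_distillation_loss(student_logits, teacher_logits_list):
+        # Soft target: the linear ensemble of the teachers' distributions.
+        with torch.no_grad():
+            teacher_probs = torch.stack(
+                [F.softmax(t, dim=-1) for t in teacher_logits_list]).mean(dim=0)
+        # Cross-entropy against the soft targets (equal to the KL
+        # divergence up to the teachers' constant entropy term).
+        return -(teacher_probs
+                 * F.log_softmax(student_logits, dim=-1)).sum(-1).mean()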
+1239 +00:58:35,760 --> 00:58:42,640 +um next I'll move on to sparse mixture + +1240 +00:58:39,599 --> 00:58:44,960 +of experts models and this is really + +1241 +00:58:42,640 --> 00:58:47,599 +important uh this is used in a lot of + +1242 +00:58:44,960 --> 00:58:51,319 +modern models it's allegedly used in GPT + +1243 +00:58:47,599 --> 00:58:53,160 +4 um and it is uh definitely used in + +1244 +00:58:51,319 --> 00:58:55,280 +Mixtral uh which is kind of one of the + +1245 +00:58:53,160 --> 00:58:58,039 +state-of-the-art open models so I think + +1246 +00:58:55,280 --> 00:58:58,039 +it's a good thing to know + +1247 +00:58:59,880 --> 00:59:05,720 +um what these do is they take advantage + +1248 +00:59:02,680 --> 00:59:08,160 +of sparse computation so if you think + +1249 +00:59:05,720 --> 00:59:09,359 +about what happens when you do a scalar + +1250 +00:59:08,160 --> 00:59:12,760 +tensor + +1251 +00:59:09,359 --> 00:59:14,720 +multiply where the scalar is zero + +1252 +00:59:12,760 --> 00:59:17,160 +basically the result of the entire + +1253 +00:59:14,720 --> 00:59:19,680 +resulting tensor is guaranteed to be + +1254 +00:59:17,160 --> 00:59:21,440 +zero and so you don't even need to do + +1255 +00:59:19,680 --> 00:59:25,440 +the computation you don't need to even + +1256 +00:59:21,440 --> 00:59:27,520 +bother um and so this manifests itself + +1257 +00:59:25,440 --> 00:59:30,240 +in a bunch of different places in modern + +1258 +00:59:27,520 --> 00:59:35,000 +models um the first one could be single + +1259 +00:59:30,240 --> 00:59:38,400 +rows in a matrix multiply so um if you + +1260 +00:59:35,000 --> 00:59:40,480 +have a big matrix multiply like + +1261 +00:59:38,400 --> 00:59:44,240 +this + +1262 +00:59:40,480 --> 00:59:47,880 +um or matrix vector multiply like this + +1263 +00:59:44,240 --> 00:59:50,200 +um and some of the rows are zero then uh + +1264 +00:59:47,880 --> 00:59:54,559 +that that's one place where it + +1265 +00:59:50,200 --> 00:59:58,200 +happens um you can also uh do this + +1266 +00:59:54,559 --> 01:00:00,119 +with zeros in not just rows but + +1267 +00:59:58,200 --> 01:00:02,200 +also larger + +1268 +01:00:00,119 --> 01:00:05,799 +tensors um and you can even do it in + +1269 +01:00:02,200 --> 01:00:07,599 +whole models in an ensemble so um the + +1270 +01:00:05,799 --> 01:00:10,799 +first one this can be optimized + +1271 +01:00:07,599 --> 01:00:13,880 +automatically by the GPU um the second one + +1272 +01:00:10,799 --> 01:00:15,400 +this often occurs in uh sparse mixture + +1273 +01:00:13,880 --> 01:00:18,000 +of experts + +1274 +01:00:15,400 --> 01:00:19,400 +models and the final one uh basically + +1275 +01:00:18,000 --> 01:00:21,880 +you just don't need to even use the + +1276 +01:00:19,400 --> 01:00:24,119 +model in the ensemble so if you somehow + +1277 +01:00:21,880 --> 01:00:25,640 +optimize an ensemble and it turns out + +1278 +01:00:24,119 --> 01:00:27,599 +that the probability of one of the + +1279 +01:00:25,640 --> 01:00:29,680 +models is zero you just can throw it out + +1280 +01:00:27,599 --> 01:00:33,640 +and not use it at + +1281 +01:00:29,680 --> 01:00:36,839 +all so um GPU level sparsity + +1282 +01:00:33,640 --> 01:00:39,839 +support uh NVIDIA GPUs support a bunch + +1283 +01:00:36,839 --> 01:00:42,559 +of different types of sparsity and uh + +1284 +01:00:39,839 --> 01:00:44,599 +the people the wonderful people at + +1285 +01:00:42,559 --> 01:00:48,280 +NVIDIA have worked hard to make the + +1286 +01:00:44,599 --> 01:00:51,319 +support uh work to some extent anyway + +1287 +01:00:48,280 --> 01:00:53,119 +and uh there's a library called cuSPARSE and + +1288 +01:00:51,319 --> 01:00:56,119 +this is used in PyTorch and all these + +1289 +01:00:53,119 --> 01:00:58,280 +other things as well
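To see the zero-skipping idea in a couple of lines, here is a toy PyTorch check (the sizes and variable names are made up for illustration): when an activation vector is mostly zeros, only the matching rows of the weight matrix can contribute to the product.

```python
import torch

d_in, d_out = 8, 4
W = torch.randn(d_in, d_out)
x = torch.relu(torch.randn(d_in))   # ReLU zeroes out many entries
nz = x.nonzero().squeeze(-1)        # indices of the active entries

dense = x @ W                       # full computation
sparse = x[nz] @ W[nz]              # only the rows that can matter
assert torch.allclose(dense, sparse, atol=1e-6)
```

Real GPU sparsity support does the equivalent skipping at the kernel level rather than with explicit indexing like this.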
and just to give + +1290 +01:00:56,119 --> 01:01:01,240 +example a vector matrix multiply with a + +1291 +01:00:58,280 --> 01:01:03,240 +sparse vector um such as one that comes + +1292 +01:01:01,240 --> 01:01:06,160 +from a ReLU activation basically what + +1293 +01:01:03,240 --> 01:01:09,319 +happens is let's say you only have three + +1294 +01:01:06,160 --> 01:01:11,799 +uh parts of this vector that are active + +1295 +01:01:09,319 --> 01:01:15,240 +um you actually just don't need to + +1296 +01:01:11,799 --> 01:01:18,200 +uh calculate any of the columns here so + +1297 +01:01:15,240 --> 01:01:19,720 +that makes your life relatively + +1298 +01:01:18,200 --> 01:01:22,880 +easy + +1299 +01:01:19,720 --> 01:01:24,480 +um but the specific thing that I wanted + +1300 +01:01:22,880 --> 01:01:26,640 +to talk about is a sparsely gated + +1301 +01:01:24,480 --> 01:01:29,799 +mixture of experts layer because this is + +1302 +01:01:26,640 --> 01:01:33,960 +uh what is used in Mixtral and probably uh + +1303 +01:01:29,799 --> 01:01:38,200 +the GPT models as well and what you do + +1304 +01:01:33,960 --> 01:01:41,760 +is you have a feed forward network and + +1305 +01:01:38,200 --> 01:01:41,760 +normally a feed forward network in a + +1306 +01:01:43,640 --> 01:01:52,119 +Transformer is this like really wide + +1307 +01:01:49,319 --> 01:01:57,240 +thing this huge wide feed forward + +1308 +01:01:52,119 --> 01:01:59,359 +network um that you use to extract a + +1309 +01:01:57,240 --> 01:02:00,520 +whole bunch of features at each layer + +1310 +01:01:59,359 --> 01:02:02,640 +and that's where a lot of the + +1311 +01:02:00,520 --> 01:02:05,799 +computation in a Transformer + +1312 +01:02:02,640 --> 01:02:10,079 +happens um and what sparsely gated + +1313 +01:02:05,799 --> 01:02:13,079 +mixture of uh experts layers do is they + +1314 +01:02:10,079 --> 01:02:15,640 +first have this gating network here + +1315 +01:02:13,079 --> 01:02:17,880 +where it calculates uh mixture + +1316 +01:02:15,640 --> 01:02:21,119 +probability but the mixture probability + +1317 +01:02:17,880 --> 01:02:23,039 +is zero for many or most of the + +1318 +01:02:21,119 --> 01:02:26,880 +parts of this feed forward + +1319 +01:02:23,039 --> 01:02:28,760 +network and so for the ones where it's + +1320 +01:02:26,880 --> 01:02:31,319 +zero you just don't calculate + +1321 +01:02:28,760 --> 01:02:34,319 +it um and then when you mix them + +1322 +01:02:31,319 --> 01:02:37,359 +together you use the mixture rates and + +1323 +01:02:34,319 --> 01:02:39,520 +this is actually really simple um it's + +1324 +01:02:37,359 --> 01:02:42,400 +like several lines of PyTorch code maybe + +1325 +01:02:39,520 --> 01:02:45,319 +like seven or eight lines of PyTorch + +1326 +01:02:42,400 --> 01:02:48,720 +code but the basic uh idea here is you + +1327 +01:02:45,319 --> 01:02:50,599 +have um this gating function where you + +1328 +01:02:48,720 --> 01:02:52,799 +calculate the gating function based on + +1329 +01:02:50,599 --> 01:02:53,640 +the input and then you have this keep + +1330 +01:02:52,799 --> 01:02:56,720 +top + +1331 +01:02:53,640 --> 01:02:58,319 +K uh operation and then you take the + +1332 +01:02:56,720 --> 01:03:02,559 +softmax over + +1333 +01:02:58,319 --> 01:03:04,359 +this and the keep top K operation is if + +1334 +01:03:02,559 --> 01:03:06,160 +the value is within the top K you just + +1335 +01:03:04,359 --> 01:03:07,319 +keep it and if it's not in the top K you + +1336 +01:03:06,160 --> 01:03:11,960 +don't keep + +1337 +01:03:07,319 --> 01:03:13,119 +it
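Those several lines of PyTorch might look roughly like the sketch below. This is an illustrative assumption, not the actual Mixtral implementation referenced on the course website: the class name, shapes, and the explicit loops are chosen for clarity, and real implementations batch tokens per expert instead.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    # sparsely gated mixture-of-experts feed-forward layer: each token is
    # routed to only the top-k experts chosen by a learned gating network
    def __init__(self, d_model, d_hidden, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)])

    def forward(self, x):                          # x: (n_tokens, d_model)
        scores = self.gate(x)                      # gating function
        topk, idx = scores.topk(self.k, dim=-1)    # keep-top-K operation
        weights = F.softmax(topk, dim=-1)          # softmax over kept scores
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):  # run only chosen experts
            for j in range(self.k):
                mask = idx[:, j] == e
                if mask.any():
                    out[mask] += weights[mask, j:j+1] * expert(x[mask])
        return out
```

With n_experts=8 and k=2 each token touches only a quarter of the feed-forward compute, which is the roughly four-times saving mentioned next.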
so that that's all basically but what + +1338 +01:03:11,960 --> 01:03:14,760 +what's great about this is then you + +1339 +01:03:13,119 --> 01:03:17,799 +don't have to calculate like many of + +1340 +01:03:14,760 --> 01:03:20,119 +them and so for example um uh if you + +1341 +01:03:17,799 --> 01:03:22,640 +keep the top two out of eight you reduce + +1342 +01:03:20,119 --> 01:03:26,760 +your uh your computation by four + +1343 +01:03:22,640 --> 01:03:30,000 +times for this part so + +1344 +01:03:26,760 --> 01:03:33,000 +um any any questions + +1345 +01:03:30,000 --> 01:03:33,000 +here + +1346 +01:03:54,720 --> 01:03:57,720 +yeah + +1347 +01:04:03,160 --> 01:04:07,039 +um sorry what what exactly do you mean + +1348 +01:04:05,559 --> 01:04:09,400 +by easy to parallelize are you talking + +1349 +01:04:07,039 --> 01:04:12,400 +about like a GPU can calculate lots of + +1350 +01:04:09,400 --> 01:04:15,680 +things at the same time yeah so I think + +1351 +01:04:12,400 --> 01:04:17,720 +if you have a very small model um you're + +1352 +01:04:15,680 --> 01:04:21,680 +actually not going to get as much from + +1353 +01:04:17,720 --> 01:04:25,079 +this uh because you're not you're + +1354 +01:04:21,680 --> 01:04:26,359 +essentially not bound by computation uh + +1355 +01:04:25,079 --> 01:04:27,880 +like you're bound more by memory + +1356 +01:04:26,359 --> 01:04:29,079 +movement in the GPU and other stuff + +1357 +01:04:27,880 --> 01:04:30,520 +like that but once you start getting up + +1358 +01:04:29,079 --> 01:04:32,920 +to the bigger models you actually are + +1359 +01:04:30,520 --> 01:04:34,640 +bound by computation so reducing your + +1360 +01:04:32,920 --> 01:04:37,039 +computation by four actually is a big + +1361 +01:04:34,640 --> 01:04:42,559 +one so it's a really really good + +1362 +01:04:37,039 --> 01:04:42,559 +question um any any other questions + +1363 +01:04:44,039 --> 01:04:50,520 +yeah so so this will + +1364 +01:04:48,240 --> 01:04:53,160 +um probably + +1365 +01:04:50,520 --> 01:04:56,039 +be + +1366 +01:04:53,160 --> 01:04:59,279 +just oh sorry I I don't have this here + +1367 +01:04:56,039 --> 01:05:01,760 +but this will often be a linear layer + +1368 +01:04:59,279 --> 01:05:01,760 +followed by a + +1369 +01:05:03,039 --> 01:05:08,000 +softmax um or or actually no it doesn't + +1370 +01:05:06,359 --> 01:05:10,520 +even need to be followed by a softmax it + +1371 +01:05:08,000 --> 01:05:10,520 +could just be a + +1372 +01:05:12,520 --> 01:05:17,920 +linear and I think actually I didn't put + +1373 +01:05:14,960 --> 01:05:19,680 +it on this slide but I have in the + +1374 +01:05:17,920 --> 01:05:21,359 +references on the website I have the + +1375 +01:05:19,680 --> 01:05:22,760 +actual implementation in Mixtral you + +1376 +01:05:21,359 --> 01:05:25,279 +can go in and look at it it's really + +1377 +01:05:22,760 --> 01:05:27,160 +simple um one thing I didn't put on here + +1378 +01:05:25,279 --> 01:05:31,000 +um which actually uh relates to the + +1379 +01:05:27,160 --> 01:05:32,920 +question before is hardware wise this + +1380 +01:05:31,000 --> 01:05:34,799 +implementation is tricky if you do + +1381 +01:05:32,920 --> 01:05:37,599 +batching um and the reason why it's + +1382 +01:05:34,799 --> 01:05:39,480 +tricky if you do batching is because um + +1383 +01:05:37,599 --> 01:05:43,000 +different experts will be active for + +1384
+01:05:39,480 --> 01:05:45,240 +different like parts of the batch so if + +1385 +01:05:43,000 --> 01:05:48,559 +you do that you need to do some tricky + +1386 +01:05:45,240 --> 01:05:48,559 +stuff uh there's + +1387 +01:05:54,640 --> 01:05:57,640 +this + +1388 +01:06:03,240 --> 01:06:12,039 +like so much of AI research nowadays uh + +1389 +01:06:08,200 --> 01:06:12,039 +the best resource for this is social + +1390 +01:06:13,680 --> 01:06:20,000 +media so this is uh there's a kind of + +1391 +01:06:16,880 --> 01:06:23,240 +interesting discussion of + +1392 +01:06:20,000 --> 01:06:25,359 +this um if you search for like gpk Fast + +1393 +01:06:23,240 --> 01:06:28,400 +mixed r on Twitter it it talks about + +1394 +01:06:25,359 --> 01:06:30,200 +this but basically there's a bunch of uh + +1395 +01:06:28,400 --> 01:06:32,680 +little little things you need to pay + +1396 +01:06:30,200 --> 01:06:34,760 +attention to um and ways that you can do + +1397 +01:06:32,680 --> 01:06:36,960 +tricks to make this work fast on GPU + +1398 +01:06:34,760 --> 01:06:40,000 +which also kind of uh addresses the + +1399 +01:06:36,960 --> 01:06:42,359 +concern so you can look for Horus H's + +1400 +01:06:40,000 --> 01:06:44,200 +discussion + +1401 +01:06:42,359 --> 01:06:46,680 +this + +1402 +01:06:44,200 --> 01:06:49,000 +cool + +1403 +01:06:46,680 --> 01:06:50,799 +um so the final thing I'd like to talk + +1404 +01:06:49,000 --> 01:06:52,480 +about in the last 10 minutes is pipeline + +1405 +01:06:50,799 --> 01:06:55,359 +systems + +1406 +01:06:52,480 --> 01:06:57,039 +um and pipeline systems are systems + +1407 +01:06:55,359 --> 01:07:00,279 +where we + +1408 +01:06:57,039 --> 01:07:02,319 +have models that basically the output of + +1409 +01:07:00,279 --> 01:07:05,319 +one model becomes the input of another + +1410 +01:07:02,319 --> 01:07:05,319 +model + +1411 +01:07:05,599 --> 01:07:10,359 +and to give an example of this a + +1412 +01:07:08,200 --> 01:07:13,480 +cascaded system is basically a system + +1413 +01:07:10,359 --> 01:07:15,119 +like this where you uh take the output + +1414 +01:07:13,480 --> 01:07:16,960 +of one system and then you feed it into + +1415 +01:07:15,119 --> 01:07:19,640 +the input of another system so a very + +1416 +01:07:16,960 --> 01:07:22,880 +stereotypical example of This is speech + +1417 +01:07:19,640 --> 01:07:25,559 +translation um where you run speech and + +1418 +01:07:22,880 --> 01:07:27,720 +then you uh do speech recognition into + +1419 +01:07:25,559 --> 01:07:29,319 +text and then text you do machine + +1420 +01:07:27,720 --> 01:07:32,160 +translation into another + +1421 +01:07:29,319 --> 01:07:33,920 +language + +1422 +01:07:32,160 --> 01:07:36,440 +and + +1423 +01:07:33,920 --> 01:07:39,039 +um one of the frustrating things about + +1424 +01:07:36,440 --> 01:07:43,000 +speech translation is these systems are + +1425 +01:07:39,039 --> 01:07:45,799 +stubbornly better uh for a long time + +1426 +01:07:43,000 --> 01:07:47,680 +than many systems that try to do end to + +1427 +01:07:45,799 --> 01:07:49,960 +end like speech to text in another + +1428 +01:07:47,680 --> 01:07:52,160 +language there's a couple reasons for + +1429 +01:07:49,960 --> 01:07:54,440 +this does anyone have an idea why what + +1430 +01:07:52,160 --> 01:07:57,039 +one of those reasons might + +1431 +01:07:54,440 --> 01:07:58,839 +be + +1432 +01:07:57,039 --> 01:08:01,559 +yeah the + +1433 +01:07:58,839 --> 01:08:05,279 +data + +1434 +01:08:01,559 --> 01:08:08,680 +anying exactly so data data availability + 
+1435 +01:08:05,279 --> 01:08:10,920 +is way better for speech to text in the + +1436 +01:08:08,680 --> 01:08:13,319 +same language and text to text in + +1437 +01:08:10,920 --> 01:08:15,720 +another language than it is for uh + +1438 +01:08:13,319 --> 01:08:17,759 +Speech to te text in another language + +1439 +01:08:15,720 --> 01:08:19,319 +because there just aren't large data + +1440 +01:08:17,759 --> 01:08:21,679 +sets that have speech and text in many + +1441 +01:08:19,319 --> 01:08:25,719 +languages so there's a bunch of tricks + +1442 +01:08:21,679 --> 01:08:31,759 +that you can do uh to you know fix this + +1443 +01:08:25,719 --> 01:08:34,239 +but still it it's uh you know uh tricky + +1444 +01:08:31,759 --> 01:08:36,120 +and there's a couple other reasons + +1445 +01:08:34,239 --> 01:08:38,159 +another reason is like actually speech + +1446 +01:08:36,120 --> 01:08:39,319 +to text in the same language is just a + +1447 +01:08:38,159 --> 01:08:42,520 +much more + +1448 +01:08:39,319 --> 01:08:45,359 +straightforward task um and so it's a + +1449 +01:08:42,520 --> 01:08:47,839 +bit easier to learn another thing is + +1450 +01:08:45,359 --> 01:08:50,839 +interpretability and the reason why + +1451 +01:08:47,839 --> 01:08:52,120 +interpretability is important is + +1452 +01:08:50,839 --> 01:08:54,920 +basically + +1453 +01:08:52,120 --> 01:08:56,640 +like if I'm talking to you in a + +1454 +01:08:54,920 --> 01:08:58,000 +different language like you speak a + +1455 +01:08:56,640 --> 01:09:00,319 +different language I'm talking to you + +1456 +01:08:58,000 --> 01:09:02,679 +through a speech translation system I + +1457 +01:09:00,319 --> 01:09:05,799 +actually want to know if the speech + +1458 +01:09:02,679 --> 01:09:07,600 +recognition worked because I know if the + +1459 +01:09:05,799 --> 01:09:08,920 +speech recognition didn't work then I'll + +1460 +01:09:07,600 --> 01:09:10,440 +I'm pretty sure that the translation + +1461 +01:09:08,920 --> 01:09:11,920 +didn't work either right and I can + +1462 +01:09:10,440 --> 01:09:14,880 +verify the speech recognition but I + +1463 +01:09:11,920 --> 01:09:16,199 +can't verify the transation so um + +1464 +01:09:14,880 --> 01:09:18,279 +there's other reasons why you might want + +1465 +01:09:16,199 --> 01:09:20,239 +a Cascade system other than just like + +1466 +01:09:18,279 --> 01:09:22,440 +accuracy or or other things like that + +1467 +01:09:20,239 --> 01:09:25,880 +but this is a thing we definitely + +1468 +01:09:22,440 --> 01:09:29,120 +do um there's another idea of stacking + +1469 +01:09:25,880 --> 01:09:32,560 +and stacking is um very similar to cast + +1470 +01:09:29,120 --> 01:09:34,560 +skating but it allows you to take two + +1471 +01:09:32,560 --> 01:09:37,120 +different models for the same task but + +1472 +01:09:34,560 --> 01:09:39,400 +with predictions in different ways so + +1473 +01:09:37,120 --> 01:09:41,120 +just taking another um + +1474 +01:09:39,400 --> 01:09:43,600 +example + +1475 +01:09:41,120 --> 01:09:45,040 +uh actually maybe maybe ignore the + +1476 +01:09:43,600 --> 01:09:47,159 +example I have here but we could just + +1477 +01:09:45,040 --> 01:09:50,679 +take the example of speech uh + +1478 +01:09:47,159 --> 01:09:53,000 +translation um the speech translation + +1479 +01:09:50,679 --> 01:09:55,760 +model uh we would first do speech + +1480 +01:09:53,000 --> 01:09:57,520 +recognition into like let's say English + +1481 +01:09:55,760 --> 01:09:59,640 +and then we would do translation and the + +1482 +01:09:57,520 
--> 01:10:03,840 +input to the translation model would be + +1483 +01:09:59,640 --> 01:10:05,560 +speech in English um text in English and + +1484 +01:10:03,840 --> 01:10:07,320 +we would generate the output in Japanese + +1485 +01:10:05,560 --> 01:10:10,080 +so it would take both the speech and the + +1486 +01:10:07,320 --> 01:10:12,920 +text uh when it was doing translation + +1487 +01:10:10,080 --> 01:10:14,840 +and that would allow it to number one + +1488 +01:10:12,920 --> 01:10:17,719 +basically get a second opinion about + +1489 +01:10:14,840 --> 01:10:21,080 +whether the transcription was correct + +1490 +01:10:17,719 --> 01:10:23,800 +but also like let's say there was + +1491 +01:10:21,080 --> 01:10:26,440 +some unique information that only + +1492 +01:10:23,800 --> 01:10:29,480 +appeared in the + +1493 +01:10:26,440 --> 01:10:31,679 +um uh that only appeared in the speech + +1494 +01:10:29,480 --> 01:10:34,840 +so just to give an example I read the + +1495 +01:10:31,679 --> 01:10:37,040 +book I read the book are both + +1496 +01:10:34,840 --> 01:10:38,640 +transcribed exactly the same way and + +1497 +01:10:37,040 --> 01:10:41,679 +they're different translations obviously + +1498 +01:10:38,640 --> 01:10:42,920 +because one is uh you know present or + +1499 +01:10:41,679 --> 01:10:45,560 +present tense and the other is past + +1500 +01:10:42,920 --> 01:10:47,239 +tense so there are examples where uh + +1501 +01:10:45,560 --> 01:10:51,600 +adding a cascaded system would lose + +1502 +01:10:47,239 --> 01:10:51,600 +information and a stacked system would + +1503 +01:10:53,400 --> 01:10:57,679 +not another thing is of refinement I + +1504 +01:10:56,440 --> 01:10:59,480 +think this is actually really + +1505 +01:10:57,679 --> 01:11:01,000 +interesting because large language + +1506 +01:10:59,480 --> 01:11:03,920 +models have opened up a whole bunch of + +1507 +01:11:01,000 --> 01:11:05,640 +possibilities for us in this space um + +1508 +01:11:03,920 --> 01:11:07,760 +this is like cascading and stacking but + +1509 +01:11:05,640 --> 01:11:09,640 +it it can be done multiple times and it + +1510 +01:11:07,760 --> 01:11:12,960 +can be done multiple times with the same + +1511 +01:11:09,640 --> 01:11:15,040 +model so um we have an input we feed it + +1512 +01:11:12,960 --> 01:11:17,320 +into the model we get an output and then + +1513 +01:11:15,040 --> 01:11:19,360 +we feed the output back in and gradually + +1514 +01:11:17,320 --> 01:11:23,080 +refine it and make it better and + +1515 +01:11:19,360 --> 01:11:24,760 +better and the first time this was done + +1516 +01:11:23,080 --> 01:11:27,440 +in neural networks was through something + +1517 +01:11:24,760 --> 01:11:29,679 +called Del ation networks and basically + +1518 +01:11:27,440 --> 01:11:32,360 +deliberation networks what they do is + +1519 +01:11:29,679 --> 01:11:33,760 +they uh take in an output and then they + +1520 +01:11:32,360 --> 01:11:34,920 +just gradually refine it to make it + +1521 +01:11:33,760 --> 01:11:37,280 +better and better they used a + +1522 +01:11:34,920 --> 01:11:39,159 +reinforcement learning algorithm to do + +1523 +01:11:37,280 --> 01:11:41,159 +this where you generated the output and + +1524 +01:11:39,159 --> 01:11:43,600 +then um improved + +1525 +01:11:41,159 --> 01:11:46,719 +it another thing that's really popular + +1526 +01:11:43,600 --> 01:11:48,280 +nowadays is uh diffusion models and I + +1527 +01:11:46,719 --> 01:11:50,400 +haven't quite decided whether I'll have + +1528 +01:11:48,280 --> 
01:11:51,880 +time to cover diffusion models in depth + +1529 +01:11:50,400 --> 01:11:54,880 +but basically the way a diffusion model + +1530 +01:11:51,880 --> 01:11:55,880 +works is very similar you start out with + +1531 +01:11:54,880 --> 01:11:57,239 +nothing + +1532 +01:11:55,880 --> 01:11:59,840 +and then you gradually make it better + +1533 +01:11:57,239 --> 01:12:01,360 +and better um the key difference between + +1534 +01:11:59,840 --> 01:12:03,520 +deliberation networks and diffusion + +1535 +01:12:01,360 --> 01:12:05,520 +models is diffusion models um you can + +1536 +01:12:03,520 --> 01:12:08,600 +train from scratch by basically noising + +1537 +01:12:05,520 --> 01:12:10,600 +the input uh applying noise to the input + +1538 +01:12:08,600 --> 01:12:12,880 +um in training very efficiently and + +1539 +01:12:10,600 --> 01:12:15,639 +these are very widely used + +1540 +01:12:12,880 --> 01:12:18,199 +in image generation they're not super + +1541 +01:12:15,639 --> 01:12:20,120 +widely used in text just because regular + +1542 +01:12:18,199 --> 01:12:22,840 +autoregressive models are so good for + +1543 +01:12:20,120 --> 01:12:24,159 +text um but there are a few efforts to + +1544 +01:12:22,840 --> 01:12:26,880 +do + +1545 +01:12:24,159 --> 01:12:30,920 +that and then a final one is self- + +1546 +01:12:26,880 --> 01:12:35,120 +refine and the idea behind self-refine + +1547 +01:12:30,920 --> 01:12:39,400 +is you um actually maybe I can open the + +1548 +01:12:35,120 --> 01:12:39,400 +paper because the paper has a good + +1549 +01:12:54,120 --> 01:12:58,239 +figure + +1550 +01:12:56,280 --> 01:13:02,679 +actually I thought it had a good + +1551 +01:12:58,239 --> 01:13:05,600 +figure um yeah so maybe this is a figure + +1552 +01:13:02,679 --> 01:13:08,639 +um so basically uh what you do is you + +1553 +01:13:05,600 --> 01:13:10,639 +feed in the input you generate an output + +1554 +01:13:08,639 --> 01:13:12,679 +and then you ask the model to give you + +1555 +01:13:10,639 --> 01:13:15,520 +feedback on the output and say yes this + +1556 +01:13:12,679 --> 01:13:16,760 +output is good or um like let's say + +1557 +01:13:15,520 --> 01:13:19,679 +you're doing code generation it could + +1558 +01:13:16,760 --> 01:13:21,920 +say no this output has an error in it um + +1559 +01:13:19,679 --> 01:13:24,719 +this is a problem with your output and + +1560 +01:13:21,920 --> 01:13:27,840 +then you feed in both the output and the + +1561 +01:13:24,719 --> 01:13:29,480 +feedback back uh and ask the model to + +1562 +01:13:27,840 --> 01:13:32,239 +refine its output and you do this over + +1563 +01:13:29,480 --> 01:13:35,280 +and over again and this allows you to uh + +1564 +01:13:32,239 --> 01:13:36,840 +improve the output and uh this has + +1565 +01:13:35,280 --> 01:13:39,600 +ended up being pretty effective in a + +1566 +01:13:36,840 --> 01:13:41,159 +pretty wide number of tasks one caveat + +1567 +01:13:39,600 --> 01:13:44,040 +about this is your model has to be + +1568 +01:13:41,159 --> 01:13:47,000 +really good for this to work so um only + +1569 +01:13:44,040 --> 01:13:49,239 +models kind of on the level of GPT-4 not + +1570 +01:13:47,000 --> 01:13:52,000 +on the level of GPT-3.5 have the ability + +1571 +01:13:49,239 --> 01:13:54,040 +to do this pretty consistently so it is + +1572 +01:13:52,000 --> 01:13:57,040 +something you need to be aware + +1573 +01:13:54,040 --> 01:13:57,040 +of
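In pseudocode, the self-refine loop just described is only a few lines. This sketch assumes a generic llm text-in/text-out callable and made-up prompt wording, not the actual Self-Refine prompts from the paper.

```python
def self_refine(llm, task_input, max_iters=3):
    # initial attempt
    output = llm(f"Solve the following task:\n{task_input}")
    for _ in range(max_iters):
        # ask the same model to critique its own output
        feedback = llm(f"Task: {task_input}\nProposed answer: {output}\n"
                       "Point out any problems, or reply DONE if it is good.")
        if "DONE" in feedback:
            break
        # feed both the output and the feedback back in for a revision
        output = llm(f"Task: {task_input}\nPrevious answer: {output}\n"
                     f"Feedback: {feedback}\nWrite an improved answer.")
    return output
```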
+1574 +01:13:59,760 --> 01:14:03,600 +cool yep that's all I I had for today + +1575 +01:14:02,400 --> 01:14:06,600 +I'm happy + +1576 +01:14:03,600 --> 01:14:06,600 +to + +1577 +01:14:07,159 --> 01:14:10,159 +take + +1578 +01:14:20,600 --> 01:14:27,320 +yep yep that this is a great question so + +1579 +01:14:23,920 --> 01:14:28,840 +if stacking has the potential to address + +1580 +01:14:27,320 --> 01:14:32,120 +information loss why would we ever + +1581 +01:14:28,840 --> 01:14:33,840 +choose a cascade model I think basically + +1582 +01:14:32,120 --> 01:14:37,440 +there's potentially two reasons one + +1583 +01:14:33,840 --> 01:14:39,199 +reason is um data availability so in + +1584 +01:14:37,440 --> 01:14:42,639 +order to train a stacked model you + +1585 +01:14:39,199 --> 01:14:43,430 +obviously need the outputs I guess you + +1586 +01:14:42,639 --> 01:14:44,639 +could + +1587 +01:14:43,430 --> 01:14:48,440 +[Music] + +1588 +01:14:44,639 --> 01:14:50,880 +um yeah I guess you could run + +1589 +01:14:48,440 --> 01:14:53,199 +the first model and generate outputs for every + +1590 +01:14:50,880 --> 01:14:54,840 +training example you have um but you + +1591 +01:14:53,199 --> 01:14:55,840 +would need to do that so you would need + +1592 +01:14:54,840 --> 01:14:58,639 +to + +1593 +01:14:55,840 --> 01:14:59,920 +run speech recognition for every example + +1594 +01:14:58,639 --> 01:15:02,760 +and you also + +1595 +01:14:59,920 --> 01:15:05,199 +couldn't you couldn't use any examples + +1596 +01:15:02,760 --> 01:15:07,600 +where you don't have the original input + +1597 +01:15:05,199 --> 01:15:10,320 +so you couldn't use text to text + +1598 +01:15:07,600 --> 01:15:12,239 +examples unless you like synthesize + +1599 +01:15:10,320 --> 01:15:14,159 +speech from text for machine translation + +1600 +01:15:12,239 --> 01:15:15,840 +for example so it makes it a little bit + +1601 +01:15:14,159 --> 01:15:17,360 +more tricky due to the data requirements + +1602 +01:15:15,840 --> 01:15:19,239 +but that's not + +1603 +01:15:17,360 --> 01:15:22,560 +insurmountable the second reason is + +1604 +01:15:19,239 --> 01:15:24,400 +complexity and efficiency so you know + +1605 +01:15:22,560 --> 01:15:27,920 +you do have to come up with a model that + +1606 +01:15:24,400 --> 01:15:29,520 +takes in speech and text and runs it and + +1607 +01:15:27,920 --> 01:15:30,920 +it might be easier just to hook together + +1608 +01:15:29,520 --> 01:15:34,719 +a speech recognizer with a + +1609 +01:15:30,920 --> 01:15:37,920 +translation model so but like I think overall + +1610 +01:15:34,719 --> 01:15:39,639 +I I like these methods I I think these + +1611 +01:15:37,920 --> 01:15:41,159 +are good methods to use if you're if + +1612 +01:15:39,639 --> 01:15:42,480 +you're thinking about using a cascade + +1613 +01:15:41,159 --> 01:15:44,199 +system you should definitely consider + +1614 +01:15:42,480 --> 01:15:47,199 +using a stacked system + +1615 +01:15:44,199 --> 01:15:47,199 +instead + +1616 +01:15:52,080 --> 01:15:56,960 +yeah yeah can you measure the + +1617 +01:15:55,159 --> 01:15:59,400 +contribution of each component to an + +1618 +01:15:56,960 --> 01:16:00,639 +ensemble um the very very easy way to do + +1619 +01:15:59,400 --> 01:16:02,199 +that is look at the interpolation + +1620 +01:16:00,639 --> 01:16:05,360 +coefficients if you train the + +1621 +01:16:02,199 --> 01:16:06,800 +interpolation coefficients um otherwise + +1622 +01:16:05,360 --> 01:16:08,920 +I guess it depends on what you mean by + +1623 +01:16:06,800 --> 01:16:10,480 +each contribution but I you know looking + +1624 +01:16:08,920 --> 01:16:12,280 +at the interpolation
coefficients is a + +1625 +01:16:10,480 --> 01:16:16,320 +pretty good way to do + +1626 +01:16:12,280 --> 01:16:16,320 +it also just how much did the + +1627 +01:16:21,480 --> 01:16:27,400 +accuracy is iterative refinement the + +1628 +01:16:24,159 --> 01:16:30,199 +same idea as boosting in traditional + +1629 +01:16:27,400 --> 01:16:30,199 +like machine Learning + +1630 +01:16:30,320 --> 01:16:34,920 +Systems I think it's a little bit + +1631 +01:16:32,920 --> 01:16:36,520 +different um because iterative + +1632 +01:16:34,920 --> 01:16:38,920 +refinement what I'm talking about here + +1633 +01:16:36,520 --> 01:16:41,120 +it's usually taking in the output like + +1634 +01:16:38,920 --> 01:16:43,320 +rather complex output of a system and + +1635 +01:16:41,120 --> 01:16:44,920 +modifying it so you're not just + +1636 +01:16:43,320 --> 01:16:47,080 +modifying the + +1637 +01:16:44,920 --> 01:16:49,880 +probabilities of like a single + +1638 +01:16:47,080 --> 01:16:53,080 +classifier you're modifying the actual + +1639 +01:16:49,880 --> 01:16:55,960 +outputs that were generated then from + +1640 +01:16:53,080 --> 01:16:59,560 +the point of view of a boosting + +1641 +01:16:55,960 --> 01:17:02,560 +model over a single categorical output + +1642 +01:16:59,560 --> 01:17:04,520 +it might actually be similar or the same + +1643 +01:17:02,560 --> 01:17:06,480 +but this is more like uh you you + +1644 +01:17:04,520 --> 01:17:08,159 +generated a textual output and then you + +1645 +01:17:06,480 --> 01:17:10,400 +feed in the textual output to the other + +1646 +01:17:08,159 --> 01:17:12,120 +model and refine like generated a new + +1647 +01:17:10,400 --> 01:17:14,239 +textual output so I feel like it's a lot + +1648 +01:17:12,120 --> 01:17:18,639 +more + +1649 +01:17:14,239 --> 01:17:18,639 +complex cool okay thank thanks a lot + +1650 +01:17:18,840 --> 01:17:21,840 +everyone \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/transcript.vtt b/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..0ce95be43c24b05be729fc1de32a37af4620f2d4 --- /dev/null +++ b/CMU Advanced NLP 2024 (14) Ensembling and Mixture of Experts/transcript.vtt @@ -0,0 +1,4951 @@ +WEBVTT + +00:00:00.760 --> 00:00:07.240 +he everyone so I'd like to get + +00:00:03.279 --> 00:00:09.320 +started the first thing is that um I + +00:00:07.240 --> 00:00:11.160 +heard from the adws people that they + +00:00:09.320 --> 00:00:14.440 +started the + +00:00:11.160 --> 00:00:17.840 +process of + +00:00:14.440 --> 00:00:19.400 +getting things issued on the 26th which + +00:00:17.840 --> 00:00:21.480 +is three days ago so you should be + +00:00:19.400 --> 00:00:23.560 +getting it soon uh for reference I + +00:00:21.480 --> 00:00:25.599 +submitted the form about seven days + +00:00:23.560 --> 00:00:28.359 +before that so they're moving very + +00:00:25.599 --> 00:00:29.599 +slowly but I think you should have AWS + +00:00:28.359 --> 00:00:31.920 +credits by the end of the week if you + +00:00:29.599 --> 00:00:35.120 +need them to run uh GPU machines or + +00:00:31.920 --> 00:00:37.960 +stuff like that the moment you get AWS + +00:00:35.120 --> 00:00:39.960 +credits or maybe even before you get AWS + +00:00:37.960 --> 00:00:43.320 +credits I might suggest that you try to + +00:00:39.960 --> 00:00:46.760 +start uh a GPU machine like a P2 machine + +00:00:43.320 --> 00:00:49.160 +or something like that because um + 
+00:00:46.760 --> 00:00:51.760 +sometimes you need to file for a limit + +00:00:49.160 --> 00:00:53.640 +increase uh to get a P2 machine and that + +00:00:51.760 --> 00:00:55.879 +also takes a little bit of time so I I + +00:00:53.640 --> 00:00:59.160 +would suggest that you uh you take a + +00:00:55.879 --> 00:01:01.160 +look at doing that um so you go to like + +00:00:59.160 --> 00:01:02.800 +if you're using AWS if you're not using + +00:01:01.160 --> 00:01:05.119 +AWS it doesn't matter but if you're + +00:01:02.800 --> 00:01:08.119 +using AWS you can go to launch instance + +00:01:05.119 --> 00:01:11.520 +and try to launch a p2x large machine um + +00:01:08.119 --> 00:01:13.159 +or something like that so uh but yeah + +00:01:11.520 --> 00:01:14.920 +anyway hopefully that will be done soon + +00:01:13.159 --> 00:01:16.600 +I'm sorry about the delay on this they + +00:01:14.920 --> 00:01:21.400 +said it would take seven days and it's + +00:01:16.600 --> 00:01:24.280 +taken almost twice at now so um my + +00:01:21.400 --> 00:01:26.439 +apologies any other uh things before we + +00:01:24.280 --> 00:01:26.439 +get + +00:01:28.759 --> 00:01:34.520 +started um okay I I don't see any so + +00:01:31.920 --> 00:01:37.280 +I'll go ahead with this um I have + +00:01:34.520 --> 00:01:39.240 +slightly fewer slides today so I might + +00:01:37.280 --> 00:01:40.960 +go a little bit off the slides and talk + +00:01:39.240 --> 00:01:44.759 +about papers and stuff or we might + +00:01:40.960 --> 00:01:46.920 +finish early uh either way so um but + +00:01:44.759 --> 00:01:48.439 +what I would like to talk about is um + +00:01:46.920 --> 00:01:53.320 +combining multiple + +00:01:48.439 --> 00:01:55.479 +models and this is uh really important + +00:01:53.320 --> 00:01:57.520 +and useful if you want to get like an + +00:01:55.479 --> 00:02:00.719 +extra few points of + +00:01:57.520 --> 00:02:03.159 +accuracy uh for anything basically + +00:02:00.719 --> 00:02:04.039 +because it's a pretty reliable way to + +00:02:03.159 --> 00:02:06.960 +get + +00:02:04.039 --> 00:02:08.879 +improvements um and there's a a bunch of + +00:02:06.960 --> 00:02:11.239 +different kind of related but different + +00:02:08.879 --> 00:02:13.680 +topics that I'm going to talk about + +00:02:11.239 --> 00:02:15.519 +today but anyway the the basic + +00:02:13.680 --> 00:02:19.239 +background is that we have many models + +00:02:15.519 --> 00:02:22.920 +uh that exist and the reason why we have + +00:02:19.239 --> 00:02:25.840 +many models that exist is multiple fold + +00:02:22.920 --> 00:02:28.160 +number one we could have different model + +00:02:25.840 --> 00:02:30.080 +architectures um and we could also have + +00:02:28.160 --> 00:02:34.440 +different initializations of those model + +00:02:30.080 --> 00:02:37.879 +architectures so um normally you know if + +00:02:34.440 --> 00:02:40.319 +we do initialization we will initial + +00:02:37.879 --> 00:02:42.360 +initialize our model architecture like + +00:02:40.319 --> 00:02:44.680 +let's say we initialize a llama + +00:02:42.360 --> 00:02:45.920 +architecture uh we start out with random + +00:02:44.680 --> 00:02:49.319 +7B + +00:02:45.920 --> 00:02:52.879 +parameters and then we train and we get + +00:02:49.319 --> 00:02:53.840 +llama 7B for uh our pre-training or + +00:02:52.879 --> 00:02:57.280 +llama + +00:02:53.840 --> 00:02:58.599 +27b um we might initialize another model + +00:02:57.280 --> 00:03:00.599 +this could be you know the same + +00:02:58.599 --> 00:03:02.360 
+architecture different architecture uh + +00:03:00.599 --> 00:03:04.840 +train it on the same data or different + +00:03:02.360 --> 00:03:07.000 +data and get something like Mistral + +00:03:04.840 --> 00:03:08.599 +Mistral 7B in this case actually maybe + +00:03:07.000 --> 00:03:10.080 +these are I should have indicated that + +00:03:08.599 --> 00:03:11.680 +these are different architectures but + +00:03:10.080 --> 00:03:13.879 +you know we get a different pre-trained + +00:03:11.680 --> 00:03:15.599 +model and of course uh we could also + +00:03:13.879 --> 00:03:18.640 +make it bigger or smaller or whatever + +00:03:15.599 --> 00:03:21.720 +else and then we get Llama 2 70B over + +00:03:18.640 --> 00:03:23.519 +here and then after we do that there's a + +00:03:21.720 --> 00:03:25.319 +lot of fine tuning that goes on + +00:03:23.519 --> 00:03:29.360 +according to different strategies so we + +00:03:25.319 --> 00:03:32.640 +have um you know Llama 2 7B Instruct uh + +00:03:29.360 --> 00:03:37.760 +Vicuna 7B uh version + +00:03:32.640 --> 00:03:41.000 +1.5 um Mistral 7B Instruct uh Nous uh + +00:03:37.760 --> 00:03:45.239 +Hermes 2 Mistral 7B or Llama 2 70B + +00:03:41.000 --> 00:03:47.239 +Instruct so we have um a variety of + +00:03:45.239 --> 00:03:49.400 +architectures a variety of random + +00:03:47.239 --> 00:03:51.480 +initializations of those architectures a + +00:03:49.400 --> 00:03:54.799 +variety of pre-trained models due to + +00:03:51.480 --> 00:03:57.439 +pre-training data or base models and + +00:03:54.799 --> 00:03:58.920 +then a variety of fine-tuned models um and + +00:03:57.439 --> 00:04:01.120 +so we have this kind of like branching + +00:03:58.920 --> 00:04:02.959 +tree basically + +00:04:01.120 --> 00:04:04.319 +um the reason why this is important is + +00:04:02.959 --> 00:04:06.680 +because when we're combining multiple + +00:04:04.319 --> 00:04:08.400 +models together some of the methods are + +00:04:06.680 --> 00:04:09.959 +applicable to completely different + +00:04:08.400 --> 00:04:12.439 +models some of the methods are only + +00:04:09.959 --> 00:04:15.000 +applicable to models that share the same + +00:04:12.439 --> 00:04:16.720 +architecture and some of them are only + +00:04:15.000 --> 00:04:19.199 +applicable to models that share the same + +00:04:16.720 --> 00:04:20.959 +initialization and training trajectory + +00:04:19.199 --> 00:04:23.680 +and so I'll try to distinguish between + +00:04:20.959 --> 00:04:23.680 +those as we go + +00:04:24.040 --> 00:04:27.919 +forward + +00:04:25.560 --> 00:04:29.960 +cool so the first thing I I'll talk + +00:04:27.919 --> 00:04:32.600 +about is model ensembling and and + +00:04:29.960 --> 00:04:34.320 +ensembling is kind of a very general + +00:04:32.600 --> 00:04:37.600 +technique that you can use in a lot of + +00:04:34.320 --> 00:04:39.360 +different uh ways but it has its + +00:04:37.600 --> 00:04:43.039 +disadvantages as + +00:04:39.360 --> 00:04:47.199 +well so basically ensembling is combining + +00:04:43.039 --> 00:04:50.320 +the predictions from multiple models + +00:04:47.199 --> 00:04:52.400 +and the easiest way to do this ignore + +00:04:50.320 --> 00:04:53.800 +the LSTM here this is just any sequence + +00:04:52.400 --> 00:04:56.320 +modeling thing it's because the slides + +00:04:53.800 --> 00:05:00.120 +are old but like let's say this is a + +00:04:56.320 --> 00:05:03.360 +Transformer it is calculating the + +00:05:00.120 --> 00:05:05.600 +current decoder state and you make a + +00:05:03.360 --> 00:05:07.600 +prediction
um this is calculating a + +00:05:05.600 --> 00:05:09.199 +current decoder State and make uh + +00:05:07.600 --> 00:05:11.560 +current decoders sayate in making a + +00:05:09.199 --> 00:05:13.039 +prediction and based on some combination + +00:05:11.560 --> 00:05:17.120 +of the two predictions you decide what + +00:05:13.039 --> 00:05:17.120 +you actually want to Output at the next + +00:05:17.680 --> 00:05:23.840 +step so why would we want to do this um + +00:05:22.080 --> 00:05:25.880 +does anyone have any ideas why we want + +00:05:23.840 --> 00:05:28.639 +to use two models instead of using one + +00:05:25.880 --> 00:05:31.639 +model or just using the best + +00:05:28.639 --> 00:05:31.639 +model + +00:05:32.319 --> 00:05:36.440 +or maybe in what situations we would + +00:05:34.520 --> 00:05:39.440 +want to do + +00:05:36.440 --> 00:05:39.440 +this + +00:05:45.400 --> 00:05:50.319 +yeah and what what's the advantage of + +00:05:47.960 --> 00:05:50.319 +doing + +00:05:51.600 --> 00:05:57.000 +that yeah it reduces a bias kind kind of + +00:05:54.800 --> 00:05:57.000 +yeah + +00:05:58.639 --> 00:06:01.639 +sure + +00:06:28.560 --> 00:06:31.560 +m + +00:06:35.400 --> 00:06:40.360 +yeah so um I I'll repeat all of these I + +00:06:38.599 --> 00:06:43.960 +think all of these are correct so number + +00:06:40.360 --> 00:06:47.479 +one um it reduces the bias uh caused by + +00:06:43.960 --> 00:06:49.199 +a single model uh number two it was it's + +00:06:47.479 --> 00:06:52.199 +kind of like a beian perspective which + +00:06:49.199 --> 00:06:54.000 +I'll talk about in a second and then + +00:06:52.199 --> 00:06:56.039 +number three we have different models + +00:06:54.000 --> 00:06:58.520 +and models are better at some things and + +00:06:56.039 --> 00:07:00.400 +worse at other things + +00:06:58.520 --> 00:07:02.720 +um + +00:07:00.400 --> 00:07:05.960 +so talking about the better at some + +00:07:02.720 --> 00:07:08.319 +things and worse at other things um the + +00:07:05.960 --> 00:07:10.960 +basic idea behind embling is that the + +00:07:08.319 --> 00:07:14.240 +errors that model m models make tend to + +00:07:10.960 --> 00:07:15.840 +not be consistent it not tend to not be + +00:07:14.240 --> 00:07:21.520 +as consistent as when the model is + +00:07:15.840 --> 00:07:24.800 +getting it correct so we might have um + +00:07:21.520 --> 00:07:26.160 +we might have one model that says uh + +00:07:24.800 --> 00:07:28.199 +like let's say we just have really + +00:07:26.160 --> 00:07:30.680 +really bad models this is kind of a + +00:07:28.199 --> 00:07:31.720 +really um + +00:07:30.680 --> 00:07:35.960 +obvious + +00:07:31.720 --> 00:07:38.440 +example but we have like the dog the dog + +00:07:35.960 --> 00:07:42.639 +barks and then + +00:07:38.440 --> 00:07:46.039 +runs and then uh Dives or something like + +00:07:42.639 --> 00:07:49.000 +that and we have uh one one model that + +00:07:46.039 --> 00:07:50.560 +just had tons of stuff about diving in + +00:07:49.000 --> 00:07:52.120 +its training data another model that had + +00:07:50.560 --> 00:07:54.240 +tons of stuff about running in its + +00:07:52.120 --> 00:07:56.560 +training data or or marathons or + +00:07:54.240 --> 00:08:00.039 +something staining data so we'll get + +00:07:56.560 --> 00:08:01.800 +model one and model one we'll to give + +00:08:00.039 --> 00:08:06.240 +like a probability of like + +00:08:01.800 --> 00:08:08.280 +0.3 maybe 0.4 and + +00:08:06.240 --> 00:08:10.360 +0.05 and then we'll have another one + +00:08:08.280 --> 
00:08:13.039 +over here that's like + +00:08:10.360 --> 00:08:17.319 +0.32 + +00:08:13.039 --> 00:08:19.759 +0.41 and 0 sorry + +00:08:17.319 --> 00:08:23.039 +0.05 and + +00:08:19.759 --> 00:08:25.759 +0.41 or something like this and so when + +00:08:23.039 --> 00:08:27.639 +you average the two together you tend to + +00:08:25.759 --> 00:08:29.240 +get the right answer more often because + +00:08:27.639 --> 00:08:31.720 +kind of the mistakes that they make tend + +00:08:29.240 --> 00:08:33.479 +to less correlated than the probability + +00:08:31.720 --> 00:08:35.880 +of getting and of course it's not + +00:08:33.479 --> 00:08:38.200 +perfect because unbled models are not + +00:08:35.880 --> 00:08:39.880 +perfect but this is a a general tendency + +00:08:38.200 --> 00:08:42.240 +that we see a lot in + +00:08:39.880 --> 00:08:45.959 +models + +00:08:42.240 --> 00:08:47.720 +um and um it's because of this it kind + +00:08:45.959 --> 00:08:52.320 +of Smooths over the idiosyncrasies of + +00:08:47.720 --> 00:08:54.800 +the models you can even um gist Ensemble + +00:08:52.320 --> 00:08:57.519 +models from different checkpoints and + +00:08:54.800 --> 00:08:58.959 +that still gives you improvements and so + +00:08:57.519 --> 00:09:00.560 +when you Ensemble models from different + +00:08:58.959 --> 00:09:02.600 +checkpoints it's basically just what + +00:09:00.560 --> 00:09:05.920 +data did they see most recently and that + +00:09:02.600 --> 00:09:07.839 +also Smooths over you know uh the fact + +00:09:05.920 --> 00:09:10.600 +that like this model happened to see + +00:09:07.839 --> 00:09:13.000 +some data more recently and so it's less + +00:09:10.600 --> 00:09:16.120 +uh you know it's biased towards doing + +00:09:13.000 --> 00:09:18.440 +that so uh this is a a pretty effective + +00:09:16.120 --> 00:09:20.079 +method this is one of the few methods + +00:09:18.440 --> 00:09:21.959 +that I know is going to improve my + +00:09:20.079 --> 00:09:25.120 +accuracy almost every time like there's + +00:09:21.959 --> 00:09:27.880 +a bunch of methods that you can apply um + +00:09:25.120 --> 00:09:29.680 +and I ensembling it's very rare for me + +00:09:27.880 --> 00:09:31.959 +to Ensemble two models together not get + +00:09:29.680 --> 00:09:34.839 +a boost in accuracy in some way so it's + +00:09:31.959 --> 00:09:34.839 +a good thing to + +00:09:35.600 --> 00:09:41.040 +that there's two main ways to combine + +00:09:38.680 --> 00:09:42.560 +models together and both of them are + +00:09:41.040 --> 00:09:45.800 +useful in different + +00:09:42.560 --> 00:09:48.079 +situations the first one is linear + +00:09:45.800 --> 00:09:49.600 +interpolation and when you do linear + +00:09:48.079 --> 00:09:51.240 +interpolation basically what you're + +00:09:49.600 --> 00:09:53.720 +doing is you're taking the weighted + +00:09:51.240 --> 00:09:56.839 +average of model + +00:09:53.720 --> 00:10:00.360 +probabilities and the way that looks + +00:09:56.839 --> 00:10:04.040 +mathematically is like this um this is a + +00:10:00.360 --> 00:10:05.680 +probability according to the model M so + +00:10:04.040 --> 00:10:08.000 +this is just you know the probability of + +00:10:05.680 --> 00:10:11.720 +the next token according to model M this + +00:10:08.000 --> 00:10:13.200 +is the probability of selecting model M + +00:10:11.720 --> 00:10:18.040 +so you talked a little bit about the + +00:10:13.200 --> 00:10:19.920 +basian approach uh to this and this is + +00:10:18.040 --> 00:10:23.519 +basically saying what is the 
probability + +00:10:19.920 --> 00:10:26.519 +that the parameters of model M + +00:10:23.519 --> 00:10:30.320 +are the ones that we want to be choosing + +00:10:26.519 --> 00:10:32.680 +in this at this particular time step and + +00:10:30.320 --> 00:10:34.640 +then we will we will calculate this and + +00:10:32.680 --> 00:10:38.120 +so then you take the sum over this and + +00:10:34.640 --> 00:10:38.120 +this gives you the next + +00:10:39.560 --> 00:10:44.800 +probability for the second term you can + +00:10:42.639 --> 00:10:47.120 +do this in two ways the most common way + +00:10:44.800 --> 00:10:51.800 +to do this is just to have this be a + +00:10:47.120 --> 00:10:55.279 +constant so you you basically + +00:10:51.800 --> 00:10:55.279 +Define mixture + +00:10:55.920 --> 00:11:01.240 +weights uh which are like um + +00:11:08.480 --> 00:11:13.480 +where the sum of the mixture weights is + +00:11:10.760 --> 00:11:16.160 +equal to one and this is always between + +00:11:13.480 --> 00:11:18.639 +zero and one and so if you do this then + +00:11:16.160 --> 00:11:21.000 +this is just constant and you can uh + +00:11:18.639 --> 00:11:23.519 +interpolate them together constantly but + +00:11:21.000 --> 00:11:25.680 +you can also actually explicitly model + +00:11:23.519 --> 00:11:27.240 +this probability and say oh I'm + +00:11:25.680 --> 00:11:30.279 +currently in a situation where I really + +00:11:27.240 --> 00:11:31.880 +think model M will do a good job of uh + +00:11:30.279 --> 00:11:33.440 +you know predicting the probability so I + +00:11:31.880 --> 00:11:36.160 +want to put most of my probability on + +00:11:33.440 --> 00:11:39.000 +model M so you can actually learn this + +00:11:36.160 --> 00:11:40.079 +dynamically as well um and so if you + +00:11:39.000 --> 00:11:44.360 +have + +00:11:40.079 --> 00:11:45.920 +uh this actually um is rather practical + +00:11:44.360 --> 00:11:47.120 +and easy to do because what you can do + +00:11:45.920 --> 00:11:48.920 +is you can just calculate the + +00:11:47.120 --> 00:11:51.399 +probability according to each model at + +00:11:48.920 --> 00:11:53.120 +each time step and train this model + +00:11:51.399 --> 00:11:55.519 +separately without loading these models + +00:11:53.120 --> 00:11:59.399 +into memory at at the time of training + +00:11:55.519 --> 00:12:00.959 +those models so uh yeah this is um some + +00:11:59.399 --> 00:12:04.800 +you can do as + +00:12:00.959 --> 00:12:04.800 +well any questions about + +00:12:06.680 --> 00:12:11.920 +this + +00:12:08.519 --> 00:12:14.000 +Okay cool so the other option is log + +00:12:11.920 --> 00:12:15.800 +linear interpolation and so linear + +00:12:14.000 --> 00:12:18.680 +interpolation you're taking a linear + +00:12:15.800 --> 00:12:22.040 +combination of the probabilities of each + +00:12:18.680 --> 00:12:24.959 +model log linear interpolation you're + +00:12:22.040 --> 00:12:26.079 +combining together the log probabilities + +00:12:24.959 --> 00:12:29.519 +of each + +00:12:26.079 --> 00:12:32.639 +model and then renormalizing so so that + +00:12:29.519 --> 00:12:34.920 +you get um that you get an actual + +00:12:32.639 --> 00:12:37.760 +probabilistic output so basically what + +00:12:34.920 --> 00:12:40.720 +you do is you have this uh interpolation + +00:12:37.760 --> 00:12:44.040 +coefficient like I had before but you're + +00:12:40.720 --> 00:12:44.040 +combining together the log + +00:12:44.639 --> 00:12:49.639 +probabilities and so here we need to + +00:12:47.680 --> 00:12:51.320 +take the soft + 
+00:12:49.639 --> 00:12:53.760 +Max + +00:12:51.320 --> 00:12:55.760 +um thinking back here I didn't take the + +00:12:53.760 --> 00:12:58.120 +softmax does anyone have an idea why I + +00:12:55.760 --> 00:13:02.000 +didn't take the soft + +00:12:58.120 --> 00:13:02.000 +Max or why I didn't need + +00:13:08.160 --> 00:13:12.199 +to why why I need to + +00:13:21.600 --> 00:13:27.680 +here yeah + +00:13:23.639 --> 00:13:30.440 +so this probability is guaranteed to be between zero and + +00:13:27.680 --> 00:13:32.240 +one and add up to one this probability + +00:13:30.440 --> 00:13:33.760 +is also guaranteed to be between zero and one + +00:13:32.240 --> 00:13:35.680 +and add up to one and then when you + +00:13:33.760 --> 00:13:37.120 +multiply those together uh you can do a + +00:13:35.680 --> 00:13:39.160 +little bit of math and demonstrate that + +00:13:37.120 --> 00:13:41.440 +the resulting thing will be between zero + +00:13:39.160 --> 00:13:42.839 +and one and add up to one that's not the + +00:13:41.440 --> 00:13:44.399 +case anymore when we start doing things + +00:13:42.839 --> 00:13:47.639 +in log space because it's just not a + +00:13:44.399 --> 00:13:50.160 +linear function anyway so um you need to + +00:13:47.639 --> 00:13:51.959 +renormalize like this luckily this is + +00:13:50.160 --> 00:13:54.920 +super easy like anything else you do in + +00:13:51.959 --> 00:13:56.959 +PyTorch you just add things together + +00:13:54.920 --> 00:13:59.320 +and take a softmax and you'll you'll + +00:13:56.959 --> 00:14:02.519 +get an output but you do need to do this + +00:13:59.320 --> 00:14:05.279 +otherwise you're going to get something + +00:14:02.519 --> 00:14:07.279 +weird um the interpolation coefficient + +00:14:05.279 --> 00:14:09.639 +here also can be set to a constant so + +00:14:07.279 --> 00:14:12.759 +you can you could learn it uh kind of + +00:14:09.639 --> 00:14:15.320 +dynamically or it could be + +00:14:12.759 --> 00:14:17.720 +separate
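Both combination rules fit in a few lines of PyTorch. The sketch below uses toy distributions over a three-word vocabulary purely for illustration (the numbers and function names are made up, not from the lecture):

```python
import torch
import torch.nn.functional as F

def linear_interpolate(probs, weights):
    # weighted average of per-model probability distributions
    return sum(w * p for w, p in zip(weights, probs))

def log_linear_interpolate(logprobs, weights):
    # weighted sum of log probabilities, then a softmax to renormalize
    # so the result is a proper distribution again
    return F.softmax(sum(w * lp for w, lp in zip(weights, logprobs)), dim=-1)

# toy next-token distributions from two models
p1 = torch.tensor([0.30, 0.40, 0.30])
p2 = torch.tensor([0.10, 0.50, 0.40])
print(linear_interpolate([p1, p2], [0.5, 0.5]))
print(log_linear_interpolate([p1.log(), p2.log()], [0.5, 0.5]))
```

The two rules favor different outputs, which is exactly the linear-versus-log-linear distinction discussed next.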
cool and these actually have + +00:14:15.320 --> 00:14:19.639 +different meaning oh sorry go ahead you + +00:14:17.720 --> 00:14:23.880 +T on + +00:14:19.639 --> 00:14:26.759 +the Yeah Yeah so basically the + +00:14:23.880 --> 00:14:29.880 +way the way you would do this is you + +00:14:26.759 --> 00:14:32.399 +would have either + +00:14:29.880 --> 00:14:33.920 +the same model you you would either take + +00:14:32.399 --> 00:14:35.279 +representations from one of these + +00:14:33.920 --> 00:14:37.480 +language models or you would take + +00:14:35.279 --> 00:14:38.440 +representations from another model and + +00:14:37.480 --> 00:14:41.639 +you would + +00:14:38.440 --> 00:14:43.959 +just have a model that + +00:14:41.639 --> 00:14:46.480 +predicts uh what this interpolation + +00:14:43.959 --> 00:14:48.279 +coefficient would be and the + +00:14:46.480 --> 00:14:49.720 +optimization objective for that + +00:14:48.279 --> 00:14:52.759 +interpolation coefficient is just + +00:14:49.720 --> 00:14:56.120 +maximizing the probability + +00:14:52.759 --> 00:14:59.600 +whatever so this could also be good um + +00:14:56.120 --> 00:15:01.839 +because this interpolation coefficient + +00:14:59.600 --> 00:15:07.160 +only like let's say you're interpolating + +00:15:01.839 --> 00:15:09.399 +two models together it has one degree of + +00:15:07.160 --> 00:15:13.320 +freedom at each time step right because + +00:15:09.399 --> 00:15:15.320 +you're only predicting a probability um + +00:15:13.320 --> 00:15:17.839 +if you have uh if you have five models + +00:15:15.320 --> 00:15:20.240 +you have uh you basically would be doing + +00:15:17.839 --> 00:15:24.199 +a softmax over + +00:15:20.240 --> 00:15:25.519 +five five outputs and that's a lot fewer + +00:15:24.199 --> 00:15:27.600 +that's a lot fewer than the whole + +00:15:25.519 --> 00:15:29.880 +vocabulary right and so + +00:15:27.600 --> 00:15:31.639 +learning a good interpolation + +00:15:29.880 --> 00:15:34.160 +coefficient is relatively easy compared + +00:15:31.639 --> 00:15:35.800 +to learning what word to predict next + +00:15:34.160 --> 00:15:36.880 +and because of this you could actually + +00:15:35.800 --> 00:15:39.759 +tune + +00:15:36.880 --> 00:15:42.880 +this um sorry you could tune this + +00:15:39.759 --> 00:15:44.600 +probability on a very small data set and + +00:15:42.880 --> 00:15:46.959 +you could even have it be context + +00:15:44.600 --> 00:15:48.480 +independent so you could just be you + +00:15:46.959 --> 00:15:51.399 +know + +00:15:48.480 --> 00:15:55.880 +calculating literally five five + +00:15:51.399 --> 00:15:57.399 +parameters here um and so because of + +00:15:55.880 --> 00:16:00.319 +that like let's say you have a special + +00:15:57.399 --> 00:16:02.639 +domain or a special task where you have + +00:16:00.319 --> 00:16:04.920 +like 50 training examples or something + +00:16:02.639 --> 00:16:07.399 +like that or you know 100 training + +00:16:04.920 --> 00:16:08.959 +examples you can learn this + +00:16:07.399 --> 00:16:12.480 +interpolation coefficient very + +00:16:08.959 --> 00:16:15.880 +effectively uh on just a very + +00:16:12.480 --> 00:16:18.120 +small number of training examples um but + +00:16:15.880 --> 00:16:20.000 +like it could be very useful because + +00:16:18.120 --> 00:16:23.920 +like let's say you have a special domain + +00:16:20.000 --> 00:16:25.639 +medical language model that's 1.3 + +00:16:23.920 --> 00:16:27.759 +billion parameters that you trained + +00:16:25.639 --> 00:16:29.639 +yourself and then you have a 70 billion + +00:16:27.759 --> 00:16:31.079 +parameter language model + +00:16:29.639 --> 00:16:33.680 +that's like really good at modeling + +00:16:31.079 --> 00:16:35.399 +general English um so then you could + +00:16:33.680 --> 00:16:39.120 +learn the interpolation coefficient + +00:16:35.399 --> 00:16:40.600 +between those two such that um the large + +00:16:39.120 --> 00:16:41.800 +general purpose language model will be + +00:16:40.600 --> 00:16:43.959 +generating all of the kind of + +00:16:41.800 --> 00:16:46.360 +grammatical stuff but whenever you + +00:16:43.959 --> 00:16:48.480 +switch over to modeling technical terms + +00:16:46.360 --> 00:16:50.040 +from the medical domain then it learns + +00:16:48.480 --> 00:16:52.480 +to upweight the medical language model + +00:16:50.040 --> 00:16:54.199 +or something so this can be quite uh + +00:16:52.480 --> 00:16:57.000 +this can be quite effective if you have + +00:16:54.199 --> 00:17:00.839 +a limited amount of data that you want + +00:16:57.000 --> 00:17:00.839 +to use to tune this + +00:17:01.240 --> 00:17:05.600 +um any other questions about that + +00:17:09.079 --> 00:17:14.880 +yeah yeah I'm just gonna talk about that + +00:17:11.760 --> 00:17:17.640 +next so linear versus log linear you can + +00:17:14.880 --> 00:17:20.880 +actually think of this in logic um and + +00:17:17.640 --> 00:17:23.640 +what I mean by that is um linear is kind + +00:17:20.880 --> 00:17:26.640 +of like a logical or it tries to come up + +00:17:23.640 --> 00:17:29.600 +with examples where either one
of the + +00:17:26.640 --> 00:17:31.679 +two assigns a high probability so we + +00:17:29.600 --> 00:17:36.200 +have the example of like bark + +00:17:31.679 --> 00:17:36.200 +run um bark run + +00:17:55.640 --> 00:18:03.840 +diet so if we take the average of these + +00:18:00.360 --> 00:18:03.840 +two in linear + +00:18:04.120 --> 00:18:10.240 +space this would be + +00:18:07.159 --> 00:18:13.679 +0.2 this would be + +00:18:10.240 --> 00:18:17.240 +0.26 and this would + +00:18:13.679 --> 00:18:17.240 +be um + +00:18:17.400 --> 00:18:26.280 +0.21 and so a a linear combination + +00:18:21.480 --> 00:18:28.600 +between the two will find run to be the + +00:18:26.280 --> 00:18:30.600 +highest scoring one because on the left + +00:18:28.600 --> 00:18:32.280 +side we have one model that really likes + +00:18:30.600 --> 00:18:33.159 +this output and we have another model + +00:18:32.280 --> 00:18:35.159 +that + +00:18:33.159 --> 00:18:39.280 +doesn't + +00:18:35.159 --> 00:18:42.159 +um this is this can be good at using + +00:18:39.280 --> 00:18:44.440 +models that capture uh different traits + +00:18:42.159 --> 00:18:47.679 +or it can also be useful if like for + +00:18:44.440 --> 00:18:49.840 +example you have a you have a small + +00:18:47.679 --> 00:18:52.320 +model that you really that really + +00:18:49.840 --> 00:18:53.840 +captures like very specific vocabulary + +00:18:52.320 --> 00:18:55.520 +and you want to upgrate that specific + +00:18:53.840 --> 00:18:56.799 +vocabulary that gets a really low + +00:18:55.520 --> 00:18:57.720 +probability according to a general + +00:18:56.799 --> 00:19:01.360 +purpose + +00:18:57.720 --> 00:19:03.200 +model um this is also necessary when any + +00:19:01.360 --> 00:19:04.520 +model can assign zero probabilities so + +00:19:03.200 --> 00:19:06.720 +if you have like an example of + +00:19:04.520 --> 00:19:10.080 +vocabulary that isn't included in the + +00:19:06.720 --> 00:19:11.159 +the like vocabulary of another model or + +00:19:10.080 --> 00:19:14.280 +you have models with different + +00:19:11.159 --> 00:19:17.200 +vocabularies it's necessary to do this + +00:19:14.280 --> 00:19:19.200 +log linear is more like logical and um + +00:19:17.200 --> 00:19:22.240 +so the interpolated model only likes + +00:19:19.200 --> 00:19:23.799 +choices where all the models agree and + +00:19:22.240 --> 00:19:25.640 +this is particularly good when you want + +00:19:23.799 --> 00:19:27.440 +to restrict possible answers like you + +00:19:25.640 --> 00:19:29.280 +want to have one model be able to say no + +00:19:27.440 --> 00:19:32.080 +I really don't like this so never output + +00:19:29.280 --> 00:19:34.200 +it so um for example if you wanted to + +00:19:32.080 --> 00:19:37.360 +train a model that you knew was very + +00:19:34.200 --> 00:19:38.919 +adverse to toxic language and prevent uh + +00:19:37.360 --> 00:19:42.600 +the model from outputting toxic language + +00:19:38.919 --> 00:19:45.200 +you could use log linear mod so I I + +00:19:42.600 --> 00:19:47.559 +can't unfortunately uh calculate logs + +00:19:45.200 --> 00:19:50.080 +and exponents in my head well enough to + +00:19:47.559 --> 00:19:51.600 +uh to decide this but I'm sure that a + +00:19:50.080 --> 00:19:53.840 +linear + +00:19:51.600 --> 00:19:56.840 +model the linear model would pick the + +00:19:53.840 --> 00:19:59.600 +first one here and the log linear + +00:19:56.840 --> 00:20:01.679 +model would pick the second one because + +00:19:59.600 --> 00:20:05.640 +the second one has a very low score 
here + +00:20:01.679 --> 00:20:08.640 +so that would be downrated um + +00:20:05.640 --> 00:20:08.640 +by + +00:20:16.919 --> 00:20:20.640 +yeah yeah so + +00:20:25.840 --> 00:20:31.000 +if yeah and if there's any chance of + +00:20:28.760 --> 00:20:34.159 +assigning zero probability according to + +00:20:31.000 --> 00:20:36.520 +a language model then really you can't + +00:20:34.159 --> 00:20:38.200 +even test that language model on that on + +00:20:36.520 --> 00:20:42.120 +that test set + +00:20:38.200 --> 00:20:43.640 +um so the issue becomes like let's say + +00:20:42.120 --> 00:20:45.559 +you have two models with different + +00:20:43.640 --> 00:20:47.080 +vocabulary if you have two models with + +00:20:45.559 --> 00:20:49.080 +different vocabulary it becomes very + +00:20:47.080 --> 00:20:50.559 +tricky how to reconcile those two but + +00:20:49.080 --> 00:20:53.440 +you could do linear interpolation + +00:20:50.559 --> 00:20:55.200 +between them like match the vocab the + +00:20:53.440 --> 00:20:57.559 +output vocabularies that they do have + +00:20:55.200 --> 00:21:00.120 +and then just not worry about the fact + +00:20:57.559 --> 00:21:02.760 +that the vocabularies are dis jointed + +00:21:00.120 --> 00:21:05.039 +and because one will assign a zero + +00:21:02.760 --> 00:21:07.280 +probability to those vocabulary items + +00:21:05.039 --> 00:21:12.240 +but the other one is fine so you can + +00:21:07.280 --> 00:21:14.919 +just do that but if you're in general it + +00:21:12.240 --> 00:21:16.480 +will be very tricky to try to get two + +00:21:14.919 --> 00:21:18.559 +models with different vocabularies to + +00:21:16.480 --> 00:21:21.480 +play together nicely so I I would + +00:21:18.559 --> 00:21:22.919 +suggest um thinking about thinking + +00:21:21.480 --> 00:21:25.600 +seriously about whether you need to do + +00:21:22.919 --> 00:21:31.360 +that or not before you start out but + +00:21:25.600 --> 00:21:31.360 +yeah um uh yes there any + +00:21:35.559 --> 00:21:40.960 +other + +00:21:38.039 --> 00:21:43.360 +um you could definitely so the question + +00:21:40.960 --> 00:21:45.000 +is are there any other types of + +00:21:43.360 --> 00:21:47.760 +interpolation that have other types of + +00:21:45.000 --> 00:21:50.159 +logical components like exor or nor um + +00:21:47.760 --> 00:21:52.840 +you could definitely come up with one uh + +00:21:50.159 --> 00:21:55.440 +I I am struggling a little bit to think + +00:21:52.840 --> 00:21:57.520 +about when you would want to do that but + +00:21:55.440 --> 00:22:02.840 +I'm sure + +00:21:57.520 --> 00:22:05.840 +you is is the inherent that the + +00:22:02.840 --> 00:22:05.840 +err + +00:22:09.120 --> 00:22:14.480 +not so what what if the errors are not + +00:22:12.640 --> 00:22:15.919 +what if the errors are correlated so + +00:22:14.480 --> 00:22:18.200 +think about what happens if the errors + +00:22:15.919 --> 00:22:20.000 +are perfectly correlated um which is + +00:22:18.200 --> 00:22:25.840 +when you're using the same model in two + +00:22:20.000 --> 00:22:25.840 +parts of the uh like on top so you + +00:22:27.000 --> 00:22:30.520 +literally uh these + +00:22:29.159 --> 00:22:32.679 +model one and model two are the same + +00:22:30.520 --> 00:22:36.720 +model if that's the case nothing happens + +00:22:32.679 --> 00:22:39.200 +it doesn't get worse um and + +00:22:36.720 --> 00:22:43.039 +so of course because this is machine + +00:22:39.200 --> 00:22:45.080 +learning there's no guarantee like you + +00:22:43.039 --> 00:22:47.559 +know 
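A quick check on why decorrelated errors help, under the idealized assumption of fully independent mistakes:

```python
from math import comb

# Three classifiers, each 70% accurate, combined by majority vote.
p = 0.7
majority = sum(comb(3, k) * p**k * (1 - p) ** (3 - k) for k in (2, 3))
print(round(majority, 3))  # 0.784 -- better than any single model
# With perfectly correlated errors (e.g. the same model three times) the
# vote stays at 0.7: ensembling gains nothing, but it doesn't get worse.
```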
unless we make some assumptions + +00:22:45.080 --> 00:22:49.200 +about the relationship between like the + +00:22:47.559 --> 00:22:52.279 +training set and the test set or the + +00:22:49.200 --> 00:22:53.760 +models errors in the test set um you can + +00:22:52.279 --> 00:22:57.039 +always do something that will make your + +00:22:53.760 --> 00:22:59.240 +accuracy worse um like let's say we flip + +00:22:57.039 --> 00:23:00.360 +the labels of a binary class + +00:22:59.240 --> 00:23:03.120 +no matter what you do you're going to + +00:23:00.360 --> 00:23:06.320 +make your accuracy worse but + +00:23:03.120 --> 00:23:09.000 +um no matter what the normal thing you + +00:23:06.320 --> 00:23:10.640 +would do is it would make your if it + +00:23:09.000 --> 00:23:12.480 +would improve accuracy normally it would + +00:23:10.640 --> 00:23:14.760 +decrease your accuracy but like under + +00:23:12.480 --> 00:23:16.080 +pretty reasonable assumptions it's + +00:23:14.760 --> 00:23:20.400 +mostly going to be the case that errors + +00:23:16.080 --> 00:23:22.320 +are deated to some extent um + +00:23:20.400 --> 00:23:25.559 +so + +00:23:22.320 --> 00:23:30.440 +yeah you and because of that ensembly + +00:23:25.559 --> 00:23:30.440 +usually helps yeah + +00:23:36.120 --> 00:23:42.019 +um about which one + +00:23:38.760 --> 00:23:42.019 +[Music] + +00:23:53.559 --> 00:24:01.240 +which let me make sure I didn't mess it + +00:23:55.640 --> 00:24:01.240 +up on sides okay so in my + +00:24:06.960 --> 00:24:13.120 +example yeah yeah + +00:24:09.640 --> 00:24:13.120 +yeah sorry about + +00:24:14.360 --> 00:24:19.320 +that because this is this is where the + +00:24:17.039 --> 00:24:21.840 +average is higher and then this is + +00:24:19.320 --> 00:24:27.200 +one take + +00:24:21.840 --> 00:24:29.039 +you uh cool any other any other + +00:24:27.200 --> 00:24:31.840 +questions okay + +00:24:29.039 --> 00:24:34.440 +okay so + +00:24:31.840 --> 00:24:36.320 +um another thing I should point out is + +00:24:34.440 --> 00:24:39.600 +that we don't + +00:24:36.320 --> 00:24:41.840 +necessarily need to use models only as + +00:24:39.600 --> 00:24:44.080 +positive evidence so if you're using log + +00:24:41.840 --> 00:24:46.039 +linear interpolation actually your + +00:24:44.080 --> 00:24:49.919 +interpolation coefficients do not need + +00:24:46.039 --> 00:24:52.520 +to be positive they can also be negative + +00:24:49.919 --> 00:24:55.360 +and you can have uh things where you + +00:24:52.520 --> 00:24:57.840 +penalize the probabilities given by a + +00:24:55.360 --> 00:24:59.679 +particular model and this has actually + +00:24:57.840 --> 00:25:01.520 +been used for a long time it was + +00:24:59.679 --> 00:25:04.440 +actually used in machine translation + +00:25:01.520 --> 00:25:08.840 +since like uh 2005 or something like + +00:25:04.440 --> 00:25:11.480 +this but the basic idea is um that you + +00:25:08.840 --> 00:25:13.600 +have some models that serve as negative + +00:25:11.480 --> 00:25:15.559 +evidence so you have kind of a core + +00:25:13.600 --> 00:25:17.880 +model this might be your really strong + +00:25:15.559 --> 00:25:21.520 +general purpose language model you have + +00:25:17.880 --> 00:25:23.080 +a positive uh model which is the model + +00:25:21.520 --> 00:25:25.240 +that you want to kind of boost up and + +00:25:23.080 --> 00:25:27.320 +improve and a negative model which you + +00:25:25.240 --> 00:25:31.159 +want to + +00:25:27.320 --> 00:25:33.679 +decrease and um one example of this is + 
+00:25:31.159 --> 00:25:36.760 +in uh a paper that we did in + +00:25:33.679 --> 00:25:40.159 +2019 um the core was a machine + +00:25:36.760 --> 00:25:42.960 +translation model and the negative model + +00:25:40.159 --> 00:25:44.880 +is an outof domain language model and + +00:25:42.960 --> 00:25:46.960 +the positive model is an in domain + +00:25:44.880 --> 00:25:51.039 +language model and so the idea behind + +00:25:46.960 --> 00:25:53.880 +this is a machine translation model um + +00:25:51.039 --> 00:25:55.600 +you have to train it on machine + +00:25:53.880 --> 00:25:58.320 +translation data and machine translation + +00:25:55.600 --> 00:26:00.640 +data is not very easy to get for + +00:25:58.320 --> 00:26:02.360 +particular domains for example um you + +00:26:00.640 --> 00:26:03.880 +might only have machine translation data + +00:26:02.360 --> 00:26:06.919 +in the news domain and you actually want + +00:26:03.880 --> 00:26:09.240 +to be uh doing uh translation in the + +00:26:06.919 --> 00:26:12.720 +medical domain or something so what you + +00:26:09.240 --> 00:26:14.640 +do is you have your positive model here + +00:26:12.720 --> 00:26:17.600 +this could be a new this is a machine + +00:26:14.640 --> 00:26:19.919 +translation model this could be a news + +00:26:17.600 --> 00:26:21.320 +domain or sorry this could be a medical + +00:26:19.919 --> 00:26:22.919 +domain language model and this could be + +00:26:21.320 --> 00:26:24.360 +a news domain language model so you're + +00:26:22.919 --> 00:26:25.840 +subtracting out the news domain + +00:26:24.360 --> 00:26:27.600 +probabilities and adding in medical + +00:26:25.840 --> 00:26:30.240 +domain probabilities move it in that + +00:26:27.600 --> 00:26:30.240 +direction + +00:26:30.440 --> 00:26:36.799 +um another example of this is uh + +00:26:32.919 --> 00:26:40.000 +something called uh D experts um or + +00:26:36.799 --> 00:26:43.440 +dexperts and the idea here is here you + +00:26:40.000 --> 00:26:46.120 +have a strong language model as your + +00:26:43.440 --> 00:26:48.320 +core and then as negative you have a + +00:26:46.120 --> 00:26:50.240 +weak toxic language model so it was + +00:26:48.320 --> 00:26:52.760 +trained on lot lots of like bad texts + +00:26:50.240 --> 00:26:55.799 +that you don't want to be generating and + +00:26:52.760 --> 00:26:57.159 +the positive is a weak non-toxic + +00:26:55.799 --> 00:26:59.279 +language model that was trained on lots + +00:26:57.159 --> 00:27:03.200 +of like inocua + +00:26:59.279 --> 00:27:04.399 +posts so that would help you detoxify + +00:27:03.200 --> 00:27:06.679 +the outputs of the + +00:27:04.399 --> 00:27:09.799 +language so there's lots of examples of + +00:27:06.679 --> 00:27:09.799 +things like this that you can do + +00:27:10.720 --> 00:27:15.880 +through + +00:27:12.880 --> 00:27:15.880 +yeah + +00:27:19.320 --> 00:27:25.880 +yeah um so the positive in the machine + +00:27:22.840 --> 00:27:27.679 +translation example this is a so this is + +00:27:25.880 --> 00:27:31.760 +a machine translation model where the + +00:27:27.679 --> 00:27:34.080 +input is is like in um English and out + +00:27:31.760 --> 00:27:37.880 +is in Japanese something like + +00:27:34.080 --> 00:27:39.679 +that this is only trained on Japanese + +00:27:37.880 --> 00:27:42.919 +but it's trained on like medical + +00:27:39.679 --> 00:27:44.440 +Japanese for example Med the domain one + +00:27:42.919 --> 00:27:48.480 +this is a language model that was + +00:27:44.440 --> 00:27:50.600 +trained on like news 
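A sketch of the negative-evidence idea just described (a DExperts-style log-linear combination with a negative coefficient). It assumes all three models share a vocabulary; `alpha` and the function name are illustrative.

```python
import torch

def contrastive_logits(core_logits, expert_logits, anti_logits, alpha=1.0):
    # Boost tokens the positive expert likes (e.g. in-domain / non-toxic LM)
    # and penalize tokens the negative model likes (e.g. out-of-domain /
    # toxic LM); the core model carries the general-purpose distribution.
    return core_logits + alpha * (expert_logits - anti_logits)

# Decoding then samples from softmax(contrastive_logits(...)) as usual.
```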
domain um Japanese + +00:27:48.480 --> 00:27:54.039 +or it could even literally just be + +00:27:50.600 --> 00:27:56.360 +trained on the side of the machine + +00:27:54.039 --> 00:28:00.120 +trans um so it's trying to remove out + +00:27:56.360 --> 00:28:00.120 +the language modeling component from the + +00:28:03.720 --> 00:28:06.720 +cool + +00:28:06.880 --> 00:28:11.480 +okay so another thing that I should + +00:28:09.880 --> 00:28:14.720 +point out I didn't actually put it on + +00:28:11.480 --> 00:28:18.399 +the slides is um there's a lot of other + +00:28:14.720 --> 00:28:19.640 +ways to get multiple models and um I + +00:28:18.399 --> 00:28:22.600 +think a lot of people are probably + +00:28:19.640 --> 00:28:23.559 +familiar with Dropout um it's a method + +00:28:22.600 --> 00:28:27.120 +for + +00:28:23.559 --> 00:28:29.080 +regularizing um it's a method for + +00:28:27.120 --> 00:28:31.120 +regularizing + +00:28:29.080 --> 00:28:33.760 +neural networks or deep learning models + +00:28:31.120 --> 00:28:37.279 +in general and basically the idea is + +00:28:33.760 --> 00:28:41.840 +every once in a while um during training + +00:28:37.279 --> 00:28:45.720 +you drop out some portion of the uh like + +00:28:41.840 --> 00:28:48.919 +nodes in the neural network model and + +00:28:45.720 --> 00:28:51.320 +you can actually drop + +00:28:48.919 --> 00:28:52.640 +out and normally what you do is at test + +00:28:51.320 --> 00:28:53.919 +time then you just don't drop out + +00:28:52.640 --> 00:28:56.039 +anything and you use the whole neural + +00:28:53.919 --> 00:28:59.960 +network model but another thing you can + +00:28:56.039 --> 00:29:02.559 +do is you can drop out a test time drop + +00:28:59.960 --> 00:29:04.679 +out five times and combine those + +00:29:02.559 --> 00:29:06.600 +different models together through ensom + +00:29:04.679 --> 00:29:10.600 +and that's actually something uh that + +00:29:06.600 --> 00:29:14.480 +people tried in the uh in the Dropout + +00:29:10.600 --> 00:29:17.600 +paper and this is one way to get + +00:29:14.480 --> 00:29:19.640 +multiple models uh and actually you can + +00:29:17.600 --> 00:29:21.919 +demonstrate that this helps the original + +00:29:19.640 --> 00:29:24.519 +motivation behind Dropout was precisely + +00:29:21.919 --> 00:29:26.279 +coming from this idea of + +00:29:24.519 --> 00:29:29.080 +ensembling + +00:29:26.279 --> 00:29:31.399 +another method + +00:29:29.080 --> 00:29:34.799 +that has been around for a very long + +00:29:31.399 --> 00:29:37.760 +time it's another embling method is + +00:29:34.799 --> 00:29:41.919 +bagging and basically the way bagging + +00:29:37.760 --> 00:29:41.919 +works is you have a data + +00:29:44.000 --> 00:29:50.159 +set like this and you just resample the + +00:29:47.519 --> 00:29:52.919 +data set so you sample all of the output + +00:29:50.159 --> 00:29:55.200 +with uh replacement and you get another + +00:29:52.919 --> 00:29:57.799 +data set of equal size and then you + +00:29:55.200 --> 00:29:58.559 +train on this but you do that like 10 + +00:29:57.799 --> 00:30:00.120 +times + +00:29:58.559 --> 00:30:02.679 +and you train 10 different models and + +00:30:00.120 --> 00:30:04.360 +then you emble those models together and + +00:30:02.679 --> 00:30:06.000 +so this is another way to get multiple + +00:30:04.360 --> 00:30:07.519 +models and both of these still improve + +00:30:06.000 --> 00:30:09.640 +your robustness because they basically + +00:30:07.519 --> 00:30:11.440 +get a different view on the data so 
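A sketch of the test-time dropout ensembling mentioned above. Note that `model.train()` also toggles layers like batch norm, so a careful implementation would switch only the dropout modules to train mode.

```python
import torch

def mc_dropout_predict(model, x, n_samples=5):
    # Keep dropout active at inference and average several stochastic passes:
    # each pass is a different "thinned" network, giving a cheap ensemble.
    model.train()  # crude: enables dropout (and, caveat, batch norm updates)
    with torch.no_grad():
        probs = [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    model.eval()
    return torch.stack(probs).mean(dim=0)
```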
they + +00:30:09.640 --> 00:30:13.440 +smooth over some of the + +00:30:11.440 --> 00:30:15.360 +idiosyncrasies um and as I mentioned + +00:30:13.440 --> 00:30:17.960 +before you can also get multiple models + +00:30:15.360 --> 00:30:20.120 +from different checkpoints and then uh + +00:30:17.960 --> 00:30:22.159 +put them together and all of these + +00:30:20.120 --> 00:30:24.159 +methods are pretty related both of them + +00:30:22.159 --> 00:30:25.960 +basically what they're doing is they're + +00:30:24.159 --> 00:30:28.279 +taking advantage of the fact that you + +00:30:25.960 --> 00:30:29.919 +have particular models that saw + +00:30:28.279 --> 00:30:32.760 +different data or saw data in a + +00:30:29.919 --> 00:30:34.120 +different order or different nodes saw + +00:30:32.760 --> 00:30:35.679 +different parts of the data because you + +00:30:34.120 --> 00:30:37.799 +dropped out some of the nodes when they + +00:30:35.679 --> 00:30:41.840 +were back propping on particular + +00:30:37.799 --> 00:30:44.840 +varieties of the data so um even things + +00:30:41.840 --> 00:30:46.720 +like this can give you models that are + +00:30:44.840 --> 00:30:49.760 +different enough that to help uh when + +00:30:46.720 --> 00:30:49.760 +you're onbling or + +00:30:52.559 --> 00:30:59.360 +combining and then of course um you can + +00:30:56.919 --> 00:31:00.799 +also + +00:30:59.360 --> 00:31:02.480 +then of course you can also combine + +00:31:00.799 --> 00:31:06.960 +together like very different models like + +00:31:02.480 --> 00:31:06.960 +this and that also works in different + +00:31:07.240 --> 00:31:11.159 +ways + +00:31:09.000 --> 00:31:13.039 +cool part of the reason why I wanted to + +00:31:11.159 --> 00:31:15.320 +mention that Dropout though in + +00:31:13.039 --> 00:31:17.120 +particular is there's also other + +00:31:15.320 --> 00:31:19.240 +efficient methods for using multiple + +00:31:17.120 --> 00:31:22.000 +models so the big problem with + +00:31:19.240 --> 00:31:25.399 +ensembling is the cost + +00:31:22.000 --> 00:31:27.159 +and simple ensembling is very expensive + +00:31:25.399 --> 00:31:29.240 +because it requires you to run multiple + +00:31:27.159 --> 00:31:30.519 +models at test test time at inference + +00:31:29.240 --> 00:31:33.720 +time and this is something you don't + +00:31:30.519 --> 00:31:35.279 +want to be doing if you're you know + +00:31:33.720 --> 00:31:38.679 +deploying a service or something because + +00:31:35.279 --> 00:31:41.080 +it like linearly increases your cost by + +00:31:38.679 --> 00:31:45.200 +um the amount of bottles that you're + +00:31:41.080 --> 00:31:47.799 +running and it requires both end times + +00:31:45.200 --> 00:31:50.120 +of computation and end times of memory + +00:31:47.799 --> 00:31:51.720 +and memory is actually probably the + +00:31:50.120 --> 00:31:54.279 +worst thing because you need to deploy + +00:31:51.720 --> 00:31:58.159 +extra GPU machines and other stuff like + +00:31:54.279 --> 00:31:59.880 +that so um the question is is there any + +00:31:58.159 --> 00:32:03.279 +way we can get some of the benefits of + +00:31:59.880 --> 00:32:06.519 +embling without having to create + +00:32:03.279 --> 00:32:07.320 +multiple models and luckily the answer + +00:32:06.519 --> 00:32:09.240 +is + +00:32:07.320 --> 00:32:11.919 +yes + +00:32:09.240 --> 00:32:13.960 +the method the easiest method for doing + +00:32:11.919 --> 00:32:16.600 +so is something called parameter + +00:32:13.960 --> 00:32:18.399 +averaging and basically what you do is + 
+00:32:16.600 --> 00:32:21.960
+you just average the parameters of

00:32:18.399 --> 00:32:26.039
+multiple models together um this only

00:32:21.960 --> 00:32:29.200
+works under certain conditions so does

00:32:26.039 --> 00:32:31.120
+anyone um does anyone know what these

00:32:29.200 --> 00:32:33.320
+conditions might be there's a few

00:32:31.120 --> 00:32:35.919
+obvious ones and maybe a few slightly

00:32:33.320 --> 00:32:35.919
+less obvious

00:32:36.039 --> 00:32:40.799
+ones so like first question do you think

00:32:38.799 --> 00:32:41.919
+you could combine together do you think

00:32:40.799 --> 00:32:45.880
+you could average together the

00:32:41.919 --> 00:32:45.880
+parameters of Llama 7B and Llama

00:32:46.440 --> 00:32:52.639
+70B

00:32:48.480 --> 00:32:52.639
+no the answer is no but why

00:32:54.480 --> 00:32:58.440
+not I mean what does that even mean in

00:32:56.760 --> 00:33:00.480
+the first place right like they have

00:32:58.440 --> 00:33:02.799
+totally different numbers of parameters

00:33:00.480 --> 00:33:05.840
+uh you wouldn't be able to find a one

00:33:02.799 --> 00:33:07.840
+to-one association between like 7

00:33:05.840 --> 00:33:12.320
+billion parameters and 70 billion

00:33:07.840 --> 00:33:16.880
+parameters um what about averaging

00:33:12.320 --> 00:33:19.399
+together uh let's let's say Llama 7B and

00:33:16.880 --> 00:33:19.399
+Mistral

00:33:23.080 --> 00:33:29.760
+7B yes no yeah I'm guessing that like for

00:33:27.440 --> 00:33:29.760
+the

00:33:33.760 --> 00:33:38.120
+yeah for different architectures the um

00:33:36.760 --> 00:33:41.799
+the parameters could mean different

00:33:38.120 --> 00:33:44.159
+things and even if the architecture is

00:33:41.799 --> 00:33:45.880
+exactly the same even if your random

00:33:44.159 --> 00:33:49.880
+initialization is different then that

00:33:45.880 --> 00:33:52.360
+would be disastrous because basically

00:33:49.880 --> 00:33:54.760
+in neural networks there's no inherent

00:33:52.360 --> 00:33:58.559
+meaning to like parameter number one

00:33:54.760 --> 00:34:01.919
+right um and there's the idea of

00:33:58.559 --> 00:34:06.679
+permutation invariance which is

00:34:01.919 --> 00:34:07.639
+um you could like randomly swap all of

00:34:06.679 --> 00:34:10.280
+the

00:34:07.639 --> 00:34:12.079
+dimensions uh within a neural

00:34:10.280 --> 00:34:14.760
+network and get exactly the same

00:34:12.079 --> 00:34:17.919
+function

00:34:14.760 --> 00:34:22.560
+uh as long as kind

00:34:17.919 --> 00:34:24.839
+of in layer number one you swap and then

00:34:22.560 --> 00:34:30.359
+also swap the inputs in the next layer

00:34:24.839 --> 00:34:30.359
+also so um you know as long

00:34:30.960 --> 00:34:36.399
+as if you have a weight matrix that

00:34:33.679 --> 00:34:40.800
+results in the um in the outputs being

00:34:36.399 --> 00:34:49.639
+ordered like 1 2 3 4

00:34:40.800 --> 00:34:54.159
+5 or like 2 1 3 5 4 as long as

00:34:49.639 --> 00:34:55.720
+you also swap the input

00:34:54.159 --> 00:34:58.400
+dimensions of this weight matrix you get

00:34:55.720 --> 00:35:01.520
+exactly the same function because they're

00:34:58.400 --> 00:35:04.200
+linear combinations of the parameters

00:35:01.520 --> 00:35:06.480
+together so neural networks have this

00:35:04.200 --> 00:35:08.599
+feature of permutation invariance so
+00:35:06.480 --> 00:35:11.800
+models that were trained from like

00:35:08.599 --> 00:35:13.280
+different uh different initializations

00:35:11.800 --> 00:35:15.040
+won't be able to be combined together in

00:35:13.280 --> 00:35:18.320
+this

00:35:15.040 --> 00:35:20.079
+way um but the good thing

00:35:18.320 --> 00:35:21.359
+is actually we have a whole bunch of

00:35:20.079 --> 00:35:25.320
+models that come from the same

00:35:21.359 --> 00:35:26.720
+pre-trained model right uh so we we have

00:35:25.320 --> 00:35:28.640
+this initialization here this

00:35:26.720 --> 00:35:31.280
+initialization was used to train Llama

00:35:28.640 --> 00:35:32.920
+2 7B but now we have like hundreds

00:35:31.280 --> 00:35:34.440
+hundreds of models that are derived from

00:35:32.920 --> 00:35:37.400
+Llama 2 we have hundreds of models that

00:35:34.440 --> 00:35:39.599
+are derived from Mistral and there all of the

00:35:37.400 --> 00:35:40.920
+dimensions actually mean the same thing

00:35:39.599 --> 00:35:43.280
+because they're derived from the same

00:35:40.920 --> 00:35:46.680
+parameters in the first place so those

00:35:43.280 --> 00:35:48.119
+ones we can average together and um

00:35:46.680 --> 00:35:50.359
+there's basically two ways that we can

00:35:48.119 --> 00:35:53.520
+do this uh one is by averaging together

00:35:50.359 --> 00:35:55.240
+multiple checkpoints during training so

00:35:53.520 --> 00:35:57.960
+originally this was the big thing that

00:35:55.240 --> 00:36:00.359
+people did uh like you would train model

00:35:57.960 --> 00:36:02.119
+from scratch for a really long time but

00:36:00.359 --> 00:36:03.920
+then you would take the final five

00:36:02.119 --> 00:36:07.520
+checkpoints and you would just average

00:36:03.920 --> 00:36:09.280
+them together and this helps reduce some

00:36:07.520 --> 00:36:11.040
+of the noise that you get from

00:36:09.280 --> 00:36:13.839
+stochastic gradient descent and can

00:36:11.040 --> 00:36:15.520
+improve your overall accuracy if you're

00:36:13.839 --> 00:36:17.280
+fine-tuning any models this is something

00:36:15.520 --> 00:36:18.680
+you can do also uh because you're

00:36:17.280 --> 00:36:19.800
+probably going to be saving checkpoints

00:36:18.680 --> 00:36:21.160
+you can just take the best five

00:36:19.800 --> 00:36:23.079
+checkpoints and average them together

00:36:21.160 --> 00:36:27.280
+and that actually can improve your

00:36:23.079 --> 00:36:28.160
+accuracy quite a bit um another thing is

00:36:27.280 --> 00:36:31.520
+fine-

00:36:28.160 --> 00:36:32.880
+tuned model merging so you fine-tune um in

00:36:31.520 --> 00:36:35.000
+several ways and then merge them

00:36:32.880 --> 00:36:39.079
+together and so for example we might

00:36:35.000 --> 00:36:41.240
+take Llama 2 7B Instruct and um Vicuna 7B

00:36:39.079 --> 00:36:44.760
+1.5 and merge them together with some

00:36:41.240 --> 00:36:47.599
+weights and uh we could you

00:36:44.760 --> 00:36:50.319
+know smooth over their idiosyncrasies

00:36:47.599 --> 00:36:52.520
+and get better results

00:36:50.319 --> 00:36:56.280
+too

00:36:52.520 --> 00:36:56.280
+cool uh any questions

00:36:56.520 --> 00:36:59.520
+here

00:37:00.920 --> 00:37:03.119
+oh

00:37:04.680 --> 00:37:11.920
+yeah want to so I just

00:37:09.680 --> 00:37:14.079
+came

00:37:11.920 --> 00:37:19.040
+non I

00:37:14.079 --> 00:37:19.040
+use like
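A sketch of the uniform parameter averaging just described. As above, this is only meaningful when all models share the architecture and the same initialization lineage (e.g. the last few checkpoints of one run, or several fine-tunes of one base model).

```python
import torch

def average_checkpoints(state_dicts):
    # Element-wise mean of every parameter tensor across the checkpoints.
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
            for k in state_dicts[0]}
```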
those different chain and + +00:37:19.640 --> 00:37:23.319 +just + +00:37:21.160 --> 00:37:26.640 +I pretty + +00:37:23.319 --> 00:37:29.520 +efficient because on the same model you + +00:37:26.640 --> 00:37:29.520 +get + +00:37:35.640 --> 00:37:40.839 +yeah so would this would this parameter + +00:37:38.000 --> 00:37:46.119 +averaging be a good method for U making + +00:37:40.839 --> 00:37:49.839 +a model less toxic for example the + +00:37:46.119 --> 00:37:53.200 +answer is a little bit trickier there I + +00:37:49.839 --> 00:37:56.119 +guess because um I I feel like this is + +00:37:53.200 --> 00:37:58.160 +good for mixing two models together so + +00:37:56.119 --> 00:38:01.400 +if you're mixing your + +00:37:58.160 --> 00:38:03.359 +like non-toxicity tuned model or your + +00:38:01.400 --> 00:38:06.079 +safety tuned model with the original + +00:38:03.359 --> 00:38:07.520 +base model that was not uh safety tuned + +00:38:06.079 --> 00:38:08.800 +or something like that then you might + +00:38:07.520 --> 00:38:11.240 +get something in the middle so you might + +00:38:08.800 --> 00:38:13.319 +get something that's less safe than the + +00:38:11.240 --> 00:38:18.720 +uh like the model that was tuned to not + +00:38:13.319 --> 00:38:21.400 +be toxic so it might be uh yeah I'm not + +00:38:18.720 --> 00:38:23.920 +sure but like let's say you let's say + +00:38:21.400 --> 00:38:26.240 +you have a model that somebody + +00:38:23.920 --> 00:38:28.640 +else did like a really good job + +00:38:26.240 --> 00:38:31.359 +instruction tuning for you + +00:38:28.640 --> 00:38:33.640 +um and anytime you start using safety + +00:38:31.359 --> 00:38:35.560 +tuning on it you like hurt the + +00:38:33.640 --> 00:38:38.680 +instruction tuning like the model gets + +00:38:35.560 --> 00:38:40.560 +worse I could see a world where you take + +00:38:38.680 --> 00:38:43.000 +the base model the same base model you + +00:38:40.560 --> 00:38:45.280 +take llama 27b you train like a less + +00:38:43.000 --> 00:38:47.480 +toxic version of llama 27d and then do + +00:38:45.280 --> 00:38:51.319 +parameter averaging with the like well + +00:38:47.480 --> 00:38:53.160 +instruction tuned model um that might + +00:38:51.319 --> 00:38:55.359 +work that might make something that's + +00:38:53.160 --> 00:38:57.560 +more safe and like not much worse + +00:38:55.359 --> 00:39:01.440 +instruction to so there's definitely I + +00:38:57.560 --> 00:39:01.440 +think creative things that you can do + +00:39:01.520 --> 00:39:08.400 +that um maybe I'll go directly into the + +00:39:04.960 --> 00:39:11.480 +methods um + +00:39:08.400 --> 00:39:13.240 +so uh there's a few uh recent papers on + +00:39:11.480 --> 00:39:16.000 +this like this method has been around + +00:39:13.240 --> 00:39:17.880 +for a long time since at least 1996 but + +00:39:16.000 --> 00:39:20.880 +uh recently people have examined it a + +00:39:17.880 --> 00:39:24.800 +lot in the context of uh kind of modern + +00:39:20.880 --> 00:39:27.400 +networks and uh this paper model soup uh + +00:39:24.800 --> 00:39:29.000 +examines two strategies the first one is + +00:39:27.400 --> 00:39:31.400 +uniform averaging where you just average + +00:39:29.000 --> 00:39:33.560 +all the parameters together uh like as + +00:39:31.400 --> 00:39:35.480 +you would expect but they also have a + +00:39:33.560 --> 00:39:38.319 +greedy averaging method and basically + +00:39:35.480 --> 00:39:40.240 +what they do here is they add one model + +00:39:38.319 --> 00:39:42.119 +and check if the whole 
like averaged + +00:39:40.240 --> 00:39:43.680 +model improves and then only if the + +00:39:42.119 --> 00:39:45.760 +whole averaged model improves do they + +00:39:43.680 --> 00:39:49.040 +keep that model otherwise they throw it + +00:39:45.760 --> 00:39:52.960 +out and then they um they don't uh use + +00:39:49.040 --> 00:39:54.520 +it so what they demonstrate uh this is a + +00:39:52.960 --> 00:39:57.560 +little bit small but basically the + +00:39:54.520 --> 00:40:00.520 +purple star here is uh when the use + +00:39:57.560 --> 00:40:02.480 +greedy averaging and then the blue + +00:40:00.520 --> 00:40:05.119 +circle here is when they use the uniform + +00:40:02.480 --> 00:40:08.280 +averaging and then green is all of the + +00:40:05.119 --> 00:40:09.960 +models that they they put into this + +00:40:08.280 --> 00:40:12.560 +average + +00:40:09.960 --> 00:40:16.680 +and what they found + +00:40:12.560 --> 00:40:18.480 +is this is average uh accuracy on image + +00:40:16.680 --> 00:40:22.400 +net which is the thing that they they + +00:40:18.480 --> 00:40:25.160 +used in deciding which models to merge + +00:40:22.400 --> 00:40:26.920 +in greedily and then this is on + +00:40:25.160 --> 00:40:28.640 +distribution shifts so this is on other + +00:40:26.920 --> 00:40:31.119 +data sets other than the ones they use + +00:40:28.640 --> 00:40:33.040 +specifically for training and what you + +00:40:31.119 --> 00:40:34.720 +can see is the greedy averaging method + +00:40:33.040 --> 00:40:38.720 +does + +00:40:34.720 --> 00:40:40.839 +better um than the best single model on + +00:40:38.720 --> 00:40:42.319 +the data set that they used to decide + +00:40:40.839 --> 00:40:44.800 +that greedy + +00:40:42.319 --> 00:40:46.560 +average the uniform average actually + +00:40:44.800 --> 00:40:48.359 +does worse than the best model so you + +00:40:46.560 --> 00:40:50.960 +would actually be better off for image + +00:40:48.359 --> 00:40:52.960 +net accuracy to just use the best model + +00:40:50.960 --> 00:40:56.000 +but it's more robust so on the + +00:40:52.960 --> 00:40:57.319 +distribution shift like data set it + +00:40:56.000 --> 00:41:00.000 +actually does better than any of them + +00:40:57.319 --> 00:41:02.280 +models so um you can see that there's + +00:41:00.000 --> 00:41:04.720 +kind of trade-offs between choosing + +00:41:02.280 --> 00:41:06.480 +those + +00:41:04.720 --> 00:41:09.319 +essentially + +00:41:06.480 --> 00:41:12.040 +um whoops that's a that's a typo that + +00:41:09.319 --> 00:41:15.760 +should be ensembling but um they also + +00:41:12.040 --> 00:41:18.440 +demonstrate that um averaging is + +00:41:15.760 --> 00:41:22.720 +correlated with ensembling so this is + +00:41:18.440 --> 00:41:25.200 +the um image accuracy of the parameter + +00:41:22.720 --> 00:41:27.000 +average model this is image not accuracy + +00:41:25.200 --> 00:41:30.200 +of the Ensemble so this is actually I + +00:41:27.000 --> 00:41:33.720 +think really interesting figure um what + +00:41:30.200 --> 00:41:36.440 +it shows is that there's a pretty strong + +00:41:33.720 --> 00:41:38.760 +correlation between the two averaging is + +00:41:36.440 --> 00:41:41.400 +almost never better than ensembling the + +00:41:38.760 --> 00:41:44.800 +two together but it's faster of course + +00:41:41.400 --> 00:41:48.119 +so it's better because it's faster and + +00:41:44.800 --> 00:41:50.000 +there are situations where the Ensemble + +00:41:48.119 --> 00:41:51.680 +is much better than the average model so + +00:41:50.000 --> 
00:41:55.720 +like the average model hurts the + +00:41:51.680 --> 00:41:58.560 +averaging hurts um onbling does not hurt + +00:41:55.720 --> 00:42:01.319 +so what this shows you is parameter + +00:41:58.560 --> 00:42:03.119 +averaging is is safe and it nearly + +00:42:01.319 --> 00:42:04.359 +approximates model on samping most of + +00:42:03.119 --> 00:42:06.720 +the time but there are cases where it + +00:42:04.359 --> 00:42:08.119 +doesn't so you do need to be a little + +00:42:06.720 --> 00:42:11.720 +bit careful and it might hurt your + +00:42:08.119 --> 00:42:11.720 +accuracy in some cases + +00:42:16.680 --> 00:42:21.520 +yeah oh yeah sorry very good point yes + +00:42:19.280 --> 00:42:21.520 +it's + +00:42:22.319 --> 00:42:29.119 +paralel yeah + +00:42:26.119 --> 00:42:29.119 +this + +00:42:36.480 --> 00:42:41.520 +um how do you know + +00:42:39.400 --> 00:42:45.720 +it's + +00:42:41.520 --> 00:42:48.280 +particular yeah so notably all of these + +00:42:45.720 --> 00:42:48.280 +are + +00:42:48.800 --> 00:42:52.240 +initialized it's been a little while + +00:42:50.800 --> 00:42:54.079 +since I read this but I know all of + +00:42:52.240 --> 00:42:56.520 +these were initialized from a model that + +00:42:54.079 --> 00:42:58.160 +was already pretty good on image that + +00:42:56.520 --> 00:43:01.760 +and then they were tuned in different + +00:42:58.160 --> 00:43:03.800 +ways I guess and so this I think this + +00:43:01.760 --> 00:43:05.319 +might be initialized with a model that + +00:43:03.800 --> 00:43:09.160 +was trained on a different data set or + +00:43:05.319 --> 00:43:10.160 +something like that um and so they are + +00:43:09.160 --> 00:43:12.480 +all starting from the same + +00:43:10.160 --> 00:43:14.480 +initialization so parameter U + +00:43:12.480 --> 00:43:16.599 +permutation inv variance is not an issue + +00:43:14.480 --> 00:43:19.200 +there because they're starting from the + +00:43:16.599 --> 00:43:23.480 +pre um but despite the fact that it's + +00:43:19.200 --> 00:43:26.520 +not a problem there are there are cases + +00:43:23.480 --> 00:43:29.119 +where like averaging is detrimental + +00:43:26.520 --> 00:43:29.119 +compared to + +00:43:32.839 --> 00:43:37.559 +um okay so + +00:43:42.800 --> 00:43:45.800 +yeah + +00:43:51.720 --> 00:43:54.720 +yep + +00:43:56.040 --> 00:43:59.040 +y + +00:44:07.079 --> 00:44:10.079 +okay + +00:44:26.040 --> 00:44:29.040 +y + +00:44:46.319 --> 00:44:52.520 +yeah so that's a great question um I'll + +00:44:48.240 --> 00:44:54.920 +just repeat it which is um the these + +00:44:52.520 --> 00:44:57.520 +experiments were done on CNN's or image + +00:44:54.920 --> 00:44:59.280 +net like uh CNN based image that + +00:44:57.520 --> 00:45:01.119 +classifiers is there something different + +00:44:59.280 --> 00:45:04.040 +than Transformers particularly because + +00:45:01.119 --> 00:45:06.240 +Transformer representations tend to be + +00:45:04.040 --> 00:45:09.000 +uh like very concentrated in particular + +00:45:06.240 --> 00:45:11.359 +parts of the space that's an excellent + +00:45:09.000 --> 00:45:14.040 +question um what I do know is a lot of + +00:45:11.359 --> 00:45:15.319 +people do merge together Transformer + +00:45:14.040 --> 00:45:18.319 +models in fact if you look at the + +00:45:15.319 --> 00:45:20.079 +hugging face leaderboard there's like + +00:45:18.319 --> 00:45:22.240 +something and something merg together + +00:45:20.079 --> 00:45:24.200 +like all over the leader board and it + +00:45:22.240 --> 00:45:25.960 +does tend to 
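A sketch of the greedy soup procedure described above, reusing `average_checkpoints` from the earlier sketch. `held_out_acc` is a hypothetical helper that loads an averaged state dict and evaluates it on held-out data; candidates are assumed pre-sorted by individual accuracy, best first.

```python
def greedy_soup(state_dicts, held_out_acc):
    soup = [state_dicts[0]]
    best = held_out_acc(average_checkpoints(soup))
    for sd in state_dicts[1:]:
        trial = held_out_acc(average_checkpoints(soup + [sd]))
        if trial >= best:           # keep a model only if the soup improves
            soup.append(sd)
            best = trial
    return average_checkpoints(soup)
```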
improve accuracy so I I + +00:45:24.200 --> 00:45:27.480 +know it is definitely effective for + +00:45:25.960 --> 00:45:28.559 +Transformers + +00:45:27.480 --> 00:45:32.040 +however Are + +00:45:28.559 --> 00:45:34.640 +there specific model like parameter + +00:45:32.040 --> 00:45:37.040 +averaging or model merging methods that + +00:45:34.640 --> 00:45:38.599 +could improve accuracy by taking + +00:45:37.040 --> 00:45:40.680 +advantage of the fact that Transformers + +00:45:38.599 --> 00:45:42.480 +behaving a c certain way I think that's + +00:45:40.680 --> 00:45:44.920 +totally possible and you know it would + +00:45:42.480 --> 00:45:48.800 +be an interesting research Direction um + +00:45:44.920 --> 00:45:51.680 +I'm not familiar enough with that + +00:45:48.800 --> 00:45:53.359 +particular part myself to say oh I have + +00:45:51.680 --> 00:45:55.160 +this great idea that you should work on + +00:45:53.359 --> 00:45:55.920 +but I think if you're interested in it + +00:45:55.160 --> 00:45:58.160 +you + +00:45:55.920 --> 00:46:00.280 +definitely + +00:45:58.160 --> 00:46:05.240 +cool anything + +00:46:00.280 --> 00:46:08.920 +El okay so there's also the idea of uh + +00:46:05.240 --> 00:46:12.440 +task vectors and um basically task + +00:46:08.920 --> 00:46:15.280 +vectors here we are just merging + +00:46:12.440 --> 00:46:17.280 +together two models by taking the + +00:46:15.280 --> 00:46:18.280 +parameters of the models and averaging + +00:46:17.280 --> 00:46:22.079 +them + +00:46:18.280 --> 00:46:24.480 +together task vectors and other related + +00:46:22.079 --> 00:46:26.040 +works specifically take advantage of the + +00:46:24.480 --> 00:46:27.640 +fact that we're looking at different + +00:46:26.040 --> 00:46:29.160 +fine-tuned models + +00:46:27.640 --> 00:46:31.480 +and so these are models where we have a + +00:46:29.160 --> 00:46:33.920 +base model and we know that uh that we + +00:46:31.480 --> 00:46:35.760 +fine-tuned from this base model and the + +00:46:33.920 --> 00:46:38.480 +basic idea is that we have our base + +00:46:35.760 --> 00:46:40.319 +model here and the task Vector is the + +00:46:38.480 --> 00:46:43.280 +difference between the base models + +00:46:40.319 --> 00:46:45.559 +Vector uh parameters and the uh fine + +00:46:43.280 --> 00:46:49.480 +tune models parameters so that's what + +00:46:45.559 --> 00:46:52.720 +they Define as a task Vector um what + +00:46:49.480 --> 00:46:56.000 +does this allow us to do this allows us + +00:46:52.720 --> 00:46:58.040 +to do a number of interesting things um + +00:46:56.000 --> 00:47:02.359 +the first one + +00:46:58.040 --> 00:47:05.119 +is that we can actually subtract out uh + +00:47:02.359 --> 00:47:08.960 +quote unquote tasks that we don't want + +00:47:05.119 --> 00:47:11.559 +so like let's say we had a model that + +00:47:08.960 --> 00:47:13.440 +was trained on lots of toxic text or we + +00:47:11.559 --> 00:47:15.760 +had a model that was trained on lots of + +00:47:13.440 --> 00:47:18.760 +private text or something like that we + +00:47:15.760 --> 00:47:22.040 +could actually subtract out the task + +00:47:18.760 --> 00:47:24.240 +Vector from this and basically attempt + +00:47:22.040 --> 00:47:27.480 +to remove the model's ability to uh do + +00:47:24.240 --> 00:47:31.240 +that sort of things um you can also + +00:47:27.480 --> 00:47:36.040 +take two task vectors and combine them + +00:47:31.240 --> 00:47:39.280 +together and uh like get the model uh + +00:47:36.040 --> 00:47:42.200 +from the combination of the two 
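Task-vector arithmetic in sketch form: the vector is simply the fine-tuning delta, and coefficients can be positive (add an ability) or negative (attempt to subtract one, e.g. a vector from toxic or private training data).

```python
def task_vector(base_sd, finetuned_sd):
    # The task vector is the parameter delta produced by fine-tuning.
    return {k: finetuned_sd[k] - base_sd[k] for k in base_sd}

def apply_task_vectors(base_sd, vectors, coeffs):
    # coeff > 0 adds a task; coeff < 0 negates one; combining several
    # vectors composes abilities (conflict resolution, as in TIES, is
    # a refinement on top of this simple addition).
    out = {k: v.clone() for k, v in base_sd.items()}
    for vec, c in zip(vectors, coeffs):
        for k in out:
            out[k] = out[k] + c * vec[k]
    return out
```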
um this + +00:47:39.280 --> 00:47:44.280 +isn't exactly the same as averaging the + +00:47:42.200 --> 00:47:45.440 +parameters because if you average the + +00:47:44.280 --> 00:47:47.400 +parameters you would probably get + +00:47:45.440 --> 00:47:49.160 +something in the middle right here but + +00:47:47.400 --> 00:47:50.440 +if you average the two vectors or add + +00:47:49.160 --> 00:47:52.040 +the two vectors together you would get + +00:47:50.440 --> 00:47:53.760 +something over here actually sorry if + +00:47:52.040 --> 00:47:56.520 +you average the vectors maybe it's the + +00:47:53.760 --> 00:47:58.119 +same so you could like add together the + +00:47:56.520 --> 00:47:59.480 +two vectors and and that would be + +00:47:58.119 --> 00:48:01.640 +something different than taking the + +00:47:59.480 --> 00:48:05.280 +average so it gives you a little bit + +00:48:01.640 --> 00:48:07.720 +more flexibility about things to do + +00:48:05.280 --> 00:48:09.599 +um and another thing this allows you to + +00:48:07.720 --> 00:48:12.920 +do is this allows you to try to resolve + +00:48:09.599 --> 00:48:15.400 +conflicts between um vectors of + +00:48:12.920 --> 00:48:19.720 +different tasks and so this is an + +00:48:15.400 --> 00:48:22.480 +illustration of of this method here + +00:48:19.720 --> 00:48:25.680 +and this has three tasks basically it + +00:48:22.480 --> 00:48:27.720 +has model one model two model three and + +00:48:25.680 --> 00:48:29.920 +each of them has vectors and you'll see + +00:48:27.720 --> 00:48:32.880 +that in some cases these vectors + +00:48:29.920 --> 00:48:34.599 +conflict so we have like pink going up + +00:48:32.880 --> 00:48:36.079 +we have yellow and purple going down we + +00:48:34.599 --> 00:48:37.800 +have yellow going up we have pink and + +00:48:36.079 --> 00:48:40.720 +purple going down etc + +00:48:37.800 --> 00:48:43.040 +etc and what this does is this + +00:48:40.720 --> 00:48:45.960 +identifies the vectors that are uh + +00:48:43.040 --> 00:48:48.040 +pointing the most strongly in particular + +00:48:45.960 --> 00:48:50.440 +directions and then it resolves + +00:48:48.040 --> 00:48:52.240 +conflicts between them and comes up with + +00:48:50.440 --> 00:48:54.559 +a vector that tries to move in a + +00:48:52.240 --> 00:48:55.920 +direction that improves all of the tasks + +00:48:54.559 --> 00:48:59.319 +at the same time and they demonstrate + +00:48:55.920 --> 00:49:01.480 +that this is better method for um kind + +00:48:59.319 --> 00:49:04.599 +of improving the ability to do all of + +00:49:01.480 --> 00:49:09.599 +the tasks compared to just averaging + +00:49:04.599 --> 00:49:09.599 +things together so yeah first + +00:49:11.920 --> 00:49:15.559 +exle like it just + +00:49:16.880 --> 00:49:23.640 +add yeah so this is + +00:49:20.680 --> 00:49:25.760 +um yeah you could move it more in that + +00:49:23.640 --> 00:49:27.319 +direction it there's obviously no + +00:49:25.760 --> 00:49:29.720 +guarantee that it would make it better + +00:49:27.319 --> 00:49:32.319 +but it might make it more extreme at + +00:49:29.720 --> 00:49:35.760 +least so uh + +00:49:32.319 --> 00:49:35.760 +yeah any other + +00:49:36.680 --> 00:49:39.960 +questions all + +00:49:55.640 --> 00:49:58.640 +yes + +00:50:25.640 --> 00:50:28.640 +one + +00:50:32.319 --> 00:50:37.240 +yeah yeah so this is a a great question + +00:50:35.599 --> 00:50:38.760 +um I can explain a little bit I'm not + +00:50:37.240 --> 00:50:40.760 +going to talk about Metal learning + +00:50:38.760 --> 00:50:42.680 
+extensively in this class but just to + +00:50:40.760 --> 00:50:46.040 +give a very quick primer for people who + +00:50:42.680 --> 00:50:46.040 +don't know about it + +00:50:55.640 --> 00:50:58.640 +um + +00:51:00.359 --> 00:51:06.040 +this is an example of a paper on metal + +00:51:03.319 --> 00:51:09.559 +learning for low resource machine + +00:51:06.040 --> 00:51:12.680 +translation um I you can take a look at + +00:51:09.559 --> 00:51:16.200 +this paper um or not take a look at this + +00:51:12.680 --> 00:51:17.760 +paper um uh but the reason why I wanted + +00:51:16.200 --> 00:51:20.799 +to look at this paper is because it has + +00:51:17.760 --> 00:51:25.160 +a good um uh it has a good illustration + +00:51:20.799 --> 00:51:27.200 +of what metal learning is and basically + +00:51:25.160 --> 00:51:29.160 +um if we + +00:51:27.200 --> 00:51:33.839 +are doing transfer learning from a + +00:51:29.160 --> 00:51:35.880 +single task what we do is we have like a + +00:51:33.839 --> 00:51:37.960 +Spanish English machine translation + +00:51:35.880 --> 00:51:41.839 +system and then we fine-tune it to try + +00:51:37.960 --> 00:51:45.280 +to hit like to try to be a good Romanian + +00:51:41.839 --> 00:51:48.680 +uh English or latan English system if + +00:51:45.280 --> 00:51:50.400 +we're doing multitask learning um or + +00:51:48.680 --> 00:51:53.079 +which also could be equivalent to like + +00:51:50.400 --> 00:51:55.680 +instruction tuning for example we have + +00:51:53.079 --> 00:51:57.680 +uh French uh Spanish and Portuguese we + +00:51:55.680 --> 00:52:03.319 +train on all the then we + +00:51:57.680 --> 00:52:06.520 +fine-tune to uh to be a good Romanian uh + +00:52:03.319 --> 00:52:09.240 +translator latan trans uh + +00:52:06.520 --> 00:52:10.760 +translator whereas metal learning what + +00:52:09.240 --> 00:52:12.119 +it's trying to do is it's trying to + +00:52:10.760 --> 00:52:14.680 +learn a good + +00:52:12.119 --> 00:52:17.480 +initialization that makes it easy to + +00:52:14.680 --> 00:52:21.280 +fine-tune to try to come up with a model + +00:52:17.480 --> 00:52:23.839 +that is good uh for fine-tuning into new + +00:52:21.280 --> 00:52:29.040 +tasks + +00:52:23.839 --> 00:52:32.200 +um the way you do this is basically um + +00:52:29.040 --> 00:52:36.599 +you have two + +00:52:32.200 --> 00:52:39.400 +steps um of gradient descent and so you + +00:52:36.599 --> 00:52:42.400 +have a first step where you uh train the + +00:52:39.400 --> 00:52:42.400 +model + +00:52:42.599 --> 00:52:50.160 +um where you have an update on like data + +00:52:47.119 --> 00:52:50.160 +from French for + +00:52:55.440 --> 00:53:02.400 +example + +00:52:57.920 --> 00:53:02.400 +and then you have another + +00:53:04.640 --> 00:53:10.599 +update um where you train on like black + +00:53:07.880 --> 00:53:10.599 +or something like + +00:53:12.559 --> 00:53:17.040 +this and this is a very informal very + +00:53:15.599 --> 00:53:18.200 +informal description there's a lot of + +00:53:17.040 --> 00:53:19.599 +stuff we could talk about here I could + +00:53:18.200 --> 00:53:22.119 +have a whole class on this but we're not + +00:53:19.599 --> 00:53:27.200 +going to um I don't have one planned at + +00:53:22.119 --> 00:53:28.559 +the moment um and so you uh you up once + +00:53:27.200 --> 00:53:30.319 +and then you update again and you + +00:53:28.559 --> 00:53:33.400 +differentiate through this update + +00:53:30.319 --> 00:53:35.160 +process uh so that this becomes like + +00:53:33.400 --> 00:53:37.440 
+essentially a good initialization for + +00:53:35.160 --> 00:53:40.640 +training on other languages or for other + +00:53:37.440 --> 00:53:43.000 +tasks or things like that + +00:53:40.640 --> 00:53:44.920 +um now going back to the original + +00:53:43.000 --> 00:53:46.240 +question the original question is is + +00:53:44.920 --> 00:53:50.000 +there a connection between metal + +00:53:46.240 --> 00:53:50.000 +learning in these uh task + +00:53:54.720 --> 00:53:58.440 +vectors I'm not + +00:53:59.079 --> 00:54:03.720 +100% sure about that because I think + +00:54:01.760 --> 00:54:06.599 +these test backs are generally created + +00:54:03.720 --> 00:54:08.480 +post Haw and so they're not like there's + +00:54:06.599 --> 00:54:12.680 +no explicit learning step to try to make + +00:54:08.480 --> 00:54:14.440 +them uh you know generalize well um one + +00:54:12.680 --> 00:54:15.960 +one thing that maybe might be + +00:54:14.440 --> 00:54:18.559 +interesting to people this is a paper + +00:54:15.960 --> 00:54:23.040 +that we like literally just put on + +00:54:18.559 --> 00:54:23.040 +archive about last week + +00:54:25.359 --> 00:54:28.359 +um + +00:54:34.520 --> 00:54:39.880 +and we didn't actually use metal + +00:54:36.400 --> 00:54:41.960 +learning in this uh in this paper um + +00:54:39.880 --> 00:54:44.520 +just because metal learning actually is + +00:54:41.960 --> 00:54:46.160 +hard to implement uh because you need to + +00:54:44.520 --> 00:54:48.680 +do this kind of double differentiation + +00:54:46.160 --> 00:54:50.720 +and can become very very expensive for + +00:54:48.680 --> 00:54:52.839 +large models but we did something a + +00:54:50.720 --> 00:54:55.920 +little bit motivated by + +00:54:52.839 --> 00:54:59.680 +um uh by metal learning and what we did + +00:54:55.920 --> 00:55:01.280 +is we took a pre-trained LM and normally + +00:54:59.680 --> 00:55:04.359 +what you do is something like continued + +00:55:01.280 --> 00:55:06.799 +pre-training on new documents to learn + +00:55:04.359 --> 00:55:10.160 +knowledge from the new documents or + +00:55:06.799 --> 00:55:12.200 +maybe um instruction tuning including + +00:55:10.160 --> 00:55:15.960 +instruction tuning on data on documents + +00:55:12.200 --> 00:55:17.520 +about the kind of uh data that you would + +00:55:15.960 --> 00:55:18.880 +want to be answering questions about so + +00:55:17.520 --> 00:55:20.640 +like let's say you're trying to train a + +00:55:18.880 --> 00:55:23.000 +medical language model you might train + +00:55:20.640 --> 00:55:26.680 +on lots of medical documents but what we + +00:55:23.000 --> 00:55:29.839 +did here is we had a step where we train + +00:55:26.680 --> 00:55:33.720 +in advance to + +00:55:29.839 --> 00:55:38.079 +get on question answer Pairs and + +00:55:33.720 --> 00:55:40.400 +documents from another domain and then + +00:55:38.079 --> 00:55:43.359 +we have a step after that where we train + +00:55:40.400 --> 00:55:46.400 +on documents from the domain we want to + +00:55:43.359 --> 00:55:48.400 +answer on so like we might train on + +00:55:46.400 --> 00:55:51.079 +Wikipedia question answer Pairs and + +00:55:48.400 --> 00:55:52.559 +Wikipedia documents and then in the + +00:55:51.079 --> 00:55:54.079 +second step we would train on medical + +00:55:52.559 --> 00:55:56.680 +documents and we demonstrate that + +00:55:54.079 --> 00:55:58.880 +basically this allows the model to do a + +00:55:56.680 --> 00:56:00.880 +better job of question answering over + +00:55:58.880 --> 00:56:03.640 +these uh 
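A rough sketch of the two-step meta-learning update just described: take one simulated gradient step on task A, then backpropagate the task-B loss through that step (the "double differentiation" mentioned above). `inner_loss` and `outer_loss` are hypothetical task-specific callables, and real MAML-style implementations involve considerably more care.

```python
import torch
from torch.func import functional_call

def meta_step(model, inner_loss, outer_loss, inner_lr=1e-2):
    # inner_loss(model): loss on, say, French data
    # outer_loss(forward_fn): loss on a second task/language
    names, params = zip(*model.named_parameters())
    grads = torch.autograd.grad(inner_loss(model), params, create_graph=True)
    fast = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}
    loss_b = outer_loss(lambda *args: functional_call(model, fast, args))
    loss_b.backward()  # gradients flow back through the simulated inner step
```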
documents that we find tune on + +00:56:00.880 --> 00:56:05.000 +over here and so kind of going back to + +00:56:03.640 --> 00:56:06.760 +the metal learning paper that I talked + +00:56:05.000 --> 00:56:08.359 +about before the metal learning paper + +00:56:06.760 --> 00:56:10.640 +tries to get the parameters in a good + +00:56:08.359 --> 00:56:12.559 +space so that after you find ton on + +00:56:10.640 --> 00:56:15.520 +another data set you do a good job of + +00:56:12.559 --> 00:56:17.799 +that in this paper our motivation is + +00:56:15.520 --> 00:56:20.359 +that the model kind of learns that when + +00:56:17.799 --> 00:56:22.039 +you train on documents you should be + +00:56:20.359 --> 00:56:24.079 +able to answer questions about those + +00:56:22.039 --> 00:56:25.480 +documents and so when you get a new set + +00:56:24.079 --> 00:56:27.200 +of documents it's kind of in a good part + +00:56:25.480 --> 00:56:31.079 +of the parameter space to make that easy + +00:56:27.200 --> 00:56:33.520 +to do so um if that if metal learning is + +00:56:31.079 --> 00:56:34.640 +interesting um there are tutorials on + +00:56:33.520 --> 00:56:37.119 +metal learning that I could probably + +00:56:34.640 --> 00:56:39.599 +share and then um if you're interested + +00:56:37.119 --> 00:56:42.599 +in kind of like learning Knowledge from + +00:56:39.599 --> 00:56:45.039 +uh learning Knowledge + +00:56:42.599 --> 00:56:46.079 +from continued pre-training or something + +00:56:45.039 --> 00:56:47.400 +like that you could take a look at this + +00:56:46.079 --> 00:56:49.920 +right there as + +00:56:47.400 --> 00:56:54.480 +well uh + +00:56:49.920 --> 00:56:54.480 +cool any questions about that + +00:56:55.240 --> 00:57:00.880 +or + +00:56:57.599 --> 00:57:02.480 +okay cool I I'll jump on this so anyway + +00:57:00.880 --> 00:57:05.520 +um I talked about several methods for + +00:57:02.480 --> 00:57:07.520 +merging models together um there's a + +00:57:05.520 --> 00:57:09.440 +popular toolkit called merge kit that + +00:57:07.520 --> 00:57:10.960 +makes it relatively easy to do this it + +00:57:09.440 --> 00:57:13.280 +implements a lot of the models that I + +00:57:10.960 --> 00:57:17.160 +talked about here including uh the + +00:57:13.280 --> 00:57:19.880 +linear methods um uh the task arithmetic + +00:57:17.160 --> 00:57:23.079 +method and ties uh so I talked about + +00:57:19.880 --> 00:57:25.480 +these there is kind of like a expansion + +00:57:23.079 --> 00:57:27.240 +on this so if you want to merge together + +00:57:25.480 --> 00:57:28.760 +models it's Rel easy to do from a + +00:57:27.240 --> 00:57:30.760 +software standpoint as so so you can + +00:57:28.760 --> 00:57:35.119 +take a look at + +00:57:30.760 --> 00:57:38.000 +that um another really simple thing uh + +00:57:35.119 --> 00:57:39.880 +is uh distilling ensembles and so we + +00:57:38.000 --> 00:57:43.039 +already talked about distillation the + +00:57:39.880 --> 00:57:45.599 +idea is simple um + +00:57:43.039 --> 00:57:47.680 +you so parameter averaging only really + +00:57:45.599 --> 00:57:49.200 +works for models within the same run uh + +00:57:47.680 --> 00:57:51.760 +same model architecture same + +00:57:49.200 --> 00:57:54.280 +initialization so knowledge distillation + +00:57:51.760 --> 00:57:55.559 +uh trains a model to copy The Ensemble + +00:57:54.280 --> 00:57:57.359 +and so it tries to match the + +00:57:55.559 --> 00:57:59.119 +distribution over the predicted words + +00:57:57.359 --> 00:58:00.760 +for an + +00:57:59.119 --> 
00:58:05.319 +on + +00:58:00.760 --> 00:58:07.799 +um and so this allows the model to make + +00:58:05.319 --> 00:58:09.079 +the same you know good predictions as + +00:58:07.799 --> 00:58:11.079 +The Ensemble make the same bad + +00:58:09.079 --> 00:58:12.799 +predictions as Ensemble it just allows + +00:58:11.079 --> 00:58:14.799 +you to learn more efficiently just like + +00:58:12.799 --> 00:58:16.680 +distillation does in general and they + +00:58:14.799 --> 00:58:18.960 +actually model distillation the original + +00:58:16.680 --> 00:58:22.240 +motivation for it when Jeff Hinton + +00:58:18.960 --> 00:58:24.599 +proposed it in 2015 in in this paper was + +00:58:22.240 --> 00:58:25.680 +to copy an ensemble now we use it for a + +00:58:24.599 --> 00:58:27.039 +lot of other things like in the + +00:58:25.680 --> 00:58:31.160 +distillation + +00:58:27.039 --> 00:58:31.160 +like weed the class but was the + +00:58:34.119 --> 00:58:39.599 +original + +00:58:35.760 --> 00:58:42.640 +um next I'll move on to sparse mixture + +00:58:39.599 --> 00:58:44.960 +of experts models and this is really + +00:58:42.640 --> 00:58:47.599 +important uh this is used in a lot of + +00:58:44.960 --> 00:58:51.319 +modern models it's allegedly used in GPD + +00:58:47.599 --> 00:58:53.160 +4 um and it is uh definitely used in + +00:58:51.319 --> 00:58:55.280 +mixl uh which is kind of one of the + +00:58:53.160 --> 00:58:58.039 +state-ofthe-art open models so I think + +00:58:55.280 --> 00:58:58.039 +it's a good thing to know + +00:58:59.880 --> 00:59:05.720 +um what these do is they take advantage + +00:59:02.680 --> 00:59:08.160 +of sparse computation so if you think + +00:59:05.720 --> 00:59:09.359 +about what happens when you do a scalar + +00:59:08.160 --> 00:59:12.760 +tensor + +00:59:09.359 --> 00:59:14.720 +multiply where the scaler is zero and + +00:59:12.760 --> 00:59:17.160 +basically the result of the entire + +00:59:14.720 --> 00:59:19.680 +resulting tensor is guaranteed to be + +00:59:17.160 --> 00:59:21.440 +zero and so you don't even need to do + +00:59:19.680 --> 00:59:25.440 +the computation you don't need to even + +00:59:21.440 --> 00:59:27.520 +bother um and so this manifests itself + +00:59:25.440 --> 00:59:30.240 +in a bunch of different places in modern + +00:59:27.520 --> 00:59:35.000 +models um the first one could be single + +00:59:30.240 --> 00:59:38.400 +rows in a matrix multiply so um if you + +00:59:35.000 --> 00:59:40.480 +have a big Matrix multiply like + +00:59:38.400 --> 00:59:44.240 +this + +00:59:40.480 --> 00:59:47.880 +um or Matrix Vector multiply like this + +00:59:44.240 --> 00:59:50.200 +um and some of the rows are zero then uh + +00:59:47.880 --> 00:59:54.559 +that that's one place where it + +00:59:50.200 --> 00:59:58.200 +happens um you can also uh do this + +00:59:54.559 --> 01:00:00.119 +between zero and in not just rows but + +00:59:58.200 --> 01:00:02.200 +also larger + +01:00:00.119 --> 01:00:05.799 +tensors um and you can even do it in + +01:00:02.200 --> 01:00:07.599 +whole models in an ensemble so um the + +01:00:05.799 --> 01:00:10.799 +first one this can be optimized + +01:00:07.599 --> 01:00:13.880 +automatically by GPU um the second one + +01:00:10.799 --> 01:00:15.400 +this often occurs in uh sparse mixture + +01:00:13.880 --> 01:00:18.000 +of experts + +01:00:15.400 --> 01:00:19.400 +models and the final one uh basically + +01:00:18.000 --> 01:00:21.880 +you just don't need to even use the + +01:00:19.400 --> 01:00:24.119 +model in emble so if you somehow 
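A sketch of distilling an ensemble into a single student: train the student to match the teachers' averaged distribution over predicted words, per the original motivation for distillation mentioned above.

```python
import torch
import torch.nn.functional as F

def ensemble_distillation_loss(student_logits, teacher_logits_list):
    # Average the teachers' probability distributions, then minimize the
    # KL divergence from the student's distribution to that average.
    teacher = torch.stack(
        [F.softmax(t, dim=-1) for t in teacher_logits_list]).mean(dim=0)
    return F.kl_div(F.log_softmax(student_logits, dim=-1),
                    teacher, reduction="batchmean")
```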
+ +01:00:21.880 --> 01:00:25.640 +optimize an ensemble and it turns out + +01:00:24.119 --> 01:00:27.599 +that the probability of one of the + +01:00:25.640 --> 01:00:29.680 +models is zero you just can throw it out + +01:00:27.599 --> 01:00:33.640 +and not use it at + +01:00:29.680 --> 01:00:36.839 +all so um GPU level sparsity + +01:00:33.640 --> 01:00:39.839 +support uh Nvidia gpus support a bunch + +01:00:36.839 --> 01:00:42.559 +of different types of sparsity and uh + +01:00:39.839 --> 01:00:44.599 +the people the wonderful people at + +01:00:42.559 --> 01:00:48.280 +Nvidia have worked hard to make the + +01:00:44.599 --> 01:00:51.319 +support uh work to some extent anyway + +01:00:48.280 --> 01:00:53.119 +and uh there's a library called cpar and + +01:00:51.319 --> 01:00:56.119 +this is used in pytorch and all these + +01:00:53.119 --> 01:00:58.280 +other things as well and just to give + +01:00:56.119 --> 01:01:01.240 +example a vector Matrix multiply with a + +01:00:58.280 --> 01:01:03.240 +sparse Vector um such as one that comes + +01:01:01.240 --> 01:01:06.160 +from a relu activation basically what + +01:01:03.240 --> 01:01:09.319 +happens is let's say you only have three + +01:01:06.160 --> 01:01:11.799 +uh parts of this Vector that are active + +01:01:09.319 --> 01:01:15.240 +um you actually just don't need to cop + +01:01:11.799 --> 01:01:18.200 +uh calculate any of the columns here so + +01:01:15.240 --> 01:01:19.720 +that makes your life relatively + +01:01:18.200 --> 01:01:22.880 +easy + +01:01:19.720 --> 01:01:24.480 +um but the specific thing that I wanted + +01:01:22.880 --> 01:01:26.640 +to talk about is a sparsely gated + +01:01:24.480 --> 01:01:29.799 +mixture of experts layer because this is + +01:01:26.640 --> 01:01:33.960 +uh what is used in mixol and probably uh + +01:01:29.799 --> 01:01:38.200 +the GPT models as well and what you do + +01:01:33.960 --> 01:01:41.760 +is you have a feed forward Network and + +01:01:38.200 --> 01:01:41.760 +normally a feed forward Network in a + +01:01:43.640 --> 01:01:52.119 +Transformer is this like really wide + +01:01:49.319 --> 01:01:57.240 +thing this huge wide feed forward + +01:01:52.119 --> 01:01:59.359 +Network um that you use to extract a + +01:01:57.240 --> 01:02:00.520 +whole bunch of features at each layer + +01:01:59.359 --> 01:02:02.640 +and that's where a lot of the + +01:02:00.520 --> 01:02:05.799 +computation and Transformer + +01:02:02.640 --> 01:02:10.079 +happens um and what sparsely gated + +01:02:05.799 --> 01:02:13.079 +mixture of uh experts layers do is they + +01:02:10.079 --> 01:02:15.640 +first have this gating Network here + +01:02:13.079 --> 01:02:17.880 +where it calculates uh mixture + +01:02:15.640 --> 01:02:21.119 +probability but the mixture probability + +01:02:17.880 --> 01:02:23.039 +is zero and for many or most of the + +01:02:21.119 --> 01:02:26.880 +parts of this feed forward + +01:02:23.039 --> 01:02:28.760 +Network and so for the ones where it's + +01:02:26.880 --> 01:02:31.319 +zero you just don't calculate + +01:02:28.760 --> 01:02:34.319 +it um and then when you mix them + +01:02:31.319 --> 01:02:37.359 +together you use the mixture rates and + +01:02:34.319 --> 01:02:39.520 +this is actually really simple um it's + +01:02:37.359 --> 01:02:42.400 +like several lines of pytorch code maybe + +01:02:39.520 --> 01:02:45.319 +like seven or eight lines of P torch + +01:02:42.400 --> 01:02:48.720 +code but the basic uh idea here is you + +01:02:45.319 --> 01:02:50.599 +have um this gating 
function, where you
+
+01:02:48.720 --> 01:02:52.799
+calculate the gating function based on
+
+01:02:50.599 --> 01:02:53.640
+the input, and then you have this keep-
+
+01:02:52.799 --> 01:02:56.720
+top-
+
+01:02:53.640 --> 01:02:58.319
+K, uh, operation, and then you take the
+
+01:02:56.720 --> 01:03:02.559
+softmax over
+
+01:02:58.319 --> 01:03:04.359
+this. And the keep-top-K operation is: if
+
+01:03:02.559 --> 01:03:06.160
+the value is within the top K you just
+
+01:03:04.359 --> 01:03:07.319
+keep it, and if it's not in the top K you
+
+01:03:06.160 --> 01:03:11.960
+don't keep
+
+01:03:07.319 --> 01:03:13.119
+it. So that's all, basically. But
+
+01:03:11.960 --> 01:03:14.760
+what's great about this is then you
+
+01:03:13.119 --> 01:03:17.799
+don't have to calculate, like, many of
+
+01:03:14.760 --> 01:03:20.119
+them, and so for example, um, uh, if you
+
+01:03:17.799 --> 01:03:22.640
+keep the top two out of eight you reduce
+
+01:03:20.119 --> 01:03:26.760
+your computation by four
+
+01:03:22.640 --> 01:03:30.000
+times for this part. So,
+
+01:03:26.760 --> 01:03:33.000
+um, any questions
+
+01:03:30.000 --> 01:03:33.000
+here?
+
+01:03:54.720 --> 01:03:57.720
+Yeah?
+
+01:04:03.160 --> 01:04:07.039
+Um, sorry, what exactly do you mean
+
+01:04:05.559 --> 01:04:09.400
+by easy to parallelize? Are you talking
+
+01:04:07.039 --> 01:04:12.400
+about, like, a GPU can calculate lots of
+
+01:04:09.400 --> 01:04:15.680
+things at the same time? Yeah, so I think
+
+01:04:12.400 --> 01:04:17.720
+if you have a very small model, um, you're
+
+01:04:15.680 --> 01:04:21.680
+actually not going to get as much from
+
+01:04:17.720 --> 01:04:25.079
+this, uh, because you're
+
+01:04:21.680 --> 01:04:26.359
+essentially not bound by computation, uh,
+
+01:04:25.079 --> 01:04:27.880
+like, you're bound more by memory
+
+01:04:26.359 --> 01:04:29.079
+movement on the GPU and other stuff
+
+01:04:27.880 --> 01:04:30.520
+like that. But once you start getting up
+
+01:04:29.079 --> 01:04:32.920
+to the bigger models you actually are
+
+01:04:30.520 --> 01:04:34.640
+bound by computation, so reducing your
+
+01:04:32.920 --> 01:04:37.039
+computation by four actually is a big
+
+01:04:34.640 --> 01:04:42.559
+one. So it's a really, really good
+
+01:04:37.039 --> 01:04:42.559
+question. Um, any other questions?
+
+01:04:44.039 --> 01:04:50.520
+Yeah, so, so this will,
+
+01:04:48.240 --> 01:04:53.160
+um, probably
+
+01:04:50.520 --> 01:04:56.039
+be
+
+01:04:53.160 --> 01:04:59.279
+just... oh, sorry, I don't have this here,
+
+01:04:56.039 --> 01:05:01.760
+but this will often be a linear layer
+
+01:04:59.279 --> 01:05:01.760
+followed by a
+
+01:05:03.039 --> 01:05:08.000
+softmax. Um, or actually, no, it doesn't
+
+01:05:06.359 --> 01:05:10.520
+even need to be followed by a softmax, it
+
+01:05:08.000 --> 01:05:10.520
+could just be a
+
+01:05:12.520 --> 01:05:17.920
+linear. And I think, actually, I didn't put
+
+01:05:14.960 --> 01:05:19.680
+it on this slide, but in the
+
+01:05:17.920 --> 01:05:21.359
+references on the website I have the
+
+01:05:19.680 --> 01:05:22.760
+actual implementation in Mixtral, you
+
+01:05:21.359 --> 01:05:25.279
+can go in and look at it, it's really
+
+01:05:22.760 --> 01:05:27.160
+simple. Um, one thing I didn't put on here,
+
+01:05:25.279 --> 01:05:31.000
+um, which actually relates to the
+
+01:05:27.160 --> 01:05:32.920
+question before, is hardware-wise this
+
+01:05:31.000 --> 01:05:34.799
+implementation is tricky if you do
+
+01:05:32.920 --> 01:05:37.599
+batching.
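Since the lecture says the layer is only a handful of lines of PyTorch, here is a minimal sketch of the gating just described — a linear gate, keep-top-K, then a softmax over the kept scores. This is a simplified illustration, not the actual Mixtral implementation linked in the course references:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparselyGatedFFN(nn.Module):
    """Minimal sketch of a sparsely gated mixture-of-experts feed-forward layer."""
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)  # the gating function (just a linear layer)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (d_model,), one token for clarity
        scores = self.gate(x)                    # one score per expert
        top = torch.topk(scores, self.k)         # "keep top-K", drop everything else
        weights = F.softmax(top.values, dim=-1)  # softmax over only the kept scores
        out = torch.zeros_like(x)
        for w, i in zip(weights, top.indices):   # only k of n_experts are ever evaluated
            out = out + w * self.experts[int(i)](x)
        return out

layer = SparselyGatedFFN()
y = layer(torch.randn(512))  # k=2 of 8 experts: ~4x less feed-forward compute
```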
+
+01:05:34.799 --> 01:05:39.480
+Um, and the reason why it's tricky is because
+
+01:05:37.599 --> 01:05:43.000
+different experts will be active for
+
+01:05:39.480 --> 01:05:45.240
+different, like, parts of the batch, so if
+
+01:05:43.000 --> 01:05:48.559
+you do that you need to do some tricky
+
+01:05:45.240 --> 01:05:48.559
+stuff. Uh, there's
+
+01:05:54.640 --> 01:05:57.640
+this...
+
+01:06:03.240 --> 01:06:12.039
+like so much of AI research nowadays, uh,
+
+01:06:08.200 --> 01:06:12.039
+the best resource for this is social
+
+01:06:13.680 --> 01:06:20.000
+media. So there's a kind of
+
+01:06:16.880 --> 01:06:23.240
+interesting discussion of
+
+01:06:20.000 --> 01:06:25.359
+this. Um, if you search for, like, "gpt-fast
+
+01:06:23.240 --> 01:06:28.400
+Mixtral" on Twitter, it talks about
+
+01:06:25.359 --> 01:06:30.200
+this, but basically there's a bunch of, uh,
+
+01:06:28.400 --> 01:06:32.680
+little things you need to pay
+
+01:06:30.200 --> 01:06:34.760
+attention to, um, and ways that you can do
+
+01:06:32.680 --> 01:06:36.960
+tricks to make this work fast on GPU,
+
+01:06:34.760 --> 01:06:40.000
+which also kind of, uh, addresses the
+
+01:06:36.960 --> 01:06:42.359
+concern. So you can look for Horace He's
+
+01:06:40.000 --> 01:06:44.200
+discussion of
+
+01:06:42.359 --> 01:06:46.680
+this.
+
+01:06:44.200 --> 01:06:49.000
+Cool.
+
+01:06:46.680 --> 01:06:50.799
+Um, so the final thing I'd like to talk
+
+01:06:49.000 --> 01:06:52.480
+about in the last 10 minutes is pipeline
+
+01:06:50.799 --> 01:06:55.359
+systems.
+
+01:06:52.480 --> 01:06:57.039
+Um, and pipeline systems are systems
+
+01:06:55.359 --> 01:07:00.279
+where we
+
+01:06:57.039 --> 01:07:02.319
+have models where basically the output of
+
+01:07:00.279 --> 01:07:05.319
+one model becomes the input of another
+
+01:07:02.319 --> 01:07:05.319
+model.
+
+01:07:05.599 --> 01:07:10.359
+And to give an example of this, a
+
+01:07:08.200 --> 01:07:13.480
+cascaded system is basically a system
+
+01:07:10.359 --> 01:07:15.119
+like this, where you, uh, take the output
+
+01:07:13.480 --> 01:07:16.960
+of one system and then you feed it into
+
+01:07:15.119 --> 01:07:19.640
+the input of another system. So a very
+
+01:07:16.960 --> 01:07:22.880
+stereotypical example of this is speech
+
+01:07:19.640 --> 01:07:25.559
+translation, um, where you take speech and
+
+01:07:22.880 --> 01:07:27.720
+then you, uh, do speech recognition into
+
+01:07:25.559 --> 01:07:29.319
+text, and then on the text you do machine
+
+01:07:27.720 --> 01:07:32.160
+translation into another
+
+01:07:29.319 --> 01:07:33.920
+language.
+
+01:07:32.160 --> 01:07:36.440
+And,
+
+01:07:33.920 --> 01:07:39.039
+um, one of the frustrating things about
+
+01:07:36.440 --> 01:07:43.000
+speech translation is these systems were
+
+01:07:39.039 --> 01:07:45.799
+stubbornly better, uh, for a long time,
+
+01:07:43.000 --> 01:07:47.680
+than many systems that try to do end-to-
+
+01:07:45.799 --> 01:07:49.960
+end, like, speech to text in another
+
+01:07:47.680 --> 01:07:52.160
+language. There's a couple reasons for
+
+01:07:49.960 --> 01:07:54.440
+this. Does anyone have an idea what
+
+01:07:52.160 --> 01:07:57.039
+one of those reasons might
+
+01:07:54.440 --> 01:07:58.839
+be?
+
+01:07:57.039 --> 01:08:01.559
+Yeah, the
+
+01:07:58.839 --> 01:08:05.279
+data?
+
+01:08:01.559 --> 01:08:08.680
+Anything else? Exactly, so data availability.
+
+01:08:05.279 --> 01:08:10.920
+Data availability is way better.
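In code, the cascaded setup just described is literally function composition; a sketch with hypothetical wrappers (`asr_model` and `mt_model` are placeholder names, not a specific library's API):

```python
def cascaded_speech_translation(audio, asr_model, mt_model):
    transcript = asr_model.transcribe(audio)  # speech -> text, same language
    # The intermediate transcript is inspectable, which is where the
    # interpretability advantage discussed below comes from; its errors
    # also propagate into the second stage.
    return mt_model.translate(transcript)     # text -> text, another language
```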
+
+01:08:08.680 --> 01:08:13.319
+For speech-to-text in the same language
+
+01:08:10.920 --> 01:08:15.720
+and text-to-text in another language, than it is for, uh,
+
+01:08:13.319 --> 01:08:17.759
+speech-to-text in another language,
+
+01:08:15.720 --> 01:08:19.319
+because there just aren't large data
+
+01:08:17.759 --> 01:08:21.679
+sets that have speech and text in many
+
+01:08:19.319 --> 01:08:25.719
+languages. So there's a bunch of tricks
+
+01:08:21.679 --> 01:08:31.759
+that you can do, uh, to, you know, fix this,
+
+01:08:25.719 --> 01:08:34.239
+but still it's, uh, you know, tricky.
+
+01:08:31.759 --> 01:08:36.120
+And there's a couple other reasons.
+
+01:08:34.239 --> 01:08:38.159
+Another reason is, like, actually, speech
+
+01:08:36.120 --> 01:08:39.319
+to text in the same language is just a
+
+01:08:38.159 --> 01:08:42.520
+much more
+
+01:08:39.319 --> 01:08:45.359
+straightforward task, um, and so it's a
+
+01:08:42.520 --> 01:08:47.839
+bit easier to learn. Another thing is
+
+01:08:45.359 --> 01:08:50.839
+interpretability, and the reason why
+
+01:08:47.839 --> 01:08:52.120
+interpretability is important is
+
+01:08:50.839 --> 01:08:54.920
+basically,
+
+01:08:52.120 --> 01:08:56.640
+like, if I'm talking to you in a
+
+01:08:54.920 --> 01:08:58.000
+different language, like, you speak a
+
+01:08:56.640 --> 01:09:00.319
+different language, I'm talking to you
+
+01:08:58.000 --> 01:09:02.679
+through a speech translation system, I
+
+01:09:00.319 --> 01:09:05.799
+actually want to know if the speech
+
+01:09:02.679 --> 01:09:07.600
+recognition worked, because I know if the
+
+01:09:05.799 --> 01:09:08.920
+speech recognition didn't work, then
+
+01:09:07.600 --> 01:09:10.440
+I'm pretty sure that the translation
+
+01:09:08.920 --> 01:09:11.920
+didn't work either, right? And I can
+
+01:09:10.440 --> 01:09:14.880
+verify the speech recognition, but I
+
+01:09:11.920 --> 01:09:16.199
+can't verify the translation. So, um,
+
+01:09:14.880 --> 01:09:18.279
+there's other reasons why you might want
+
+01:09:16.199 --> 01:09:20.239
+a cascaded system other than just, like,
+
+01:09:18.279 --> 01:09:22.440
+accuracy or other things like that,
+
+01:09:20.239 --> 01:09:25.880
+but this is a thing we definitely
+
+01:09:22.440 --> 01:09:29.120
+do. Um, there's another idea of stacking,
+
+01:09:25.880 --> 01:09:32.560
+and stacking is, um, very similar to
+
+01:09:29.120 --> 01:09:34.560
+cascading, but it allows you to take two
+
+01:09:32.560 --> 01:09:37.120
+different models for the same task that
+
+01:09:34.560 --> 01:09:39.400
+make predictions in different ways. So
+
+01:09:37.120 --> 01:09:41.120
+just taking another, um,
+
+01:09:39.400 --> 01:09:43.600
+example,
+
+01:09:41.120 --> 01:09:45.040
+uh, actually, maybe ignore the
+
+01:09:43.600 --> 01:09:47.159
+example I have here, but we could just
+
+01:09:45.040 --> 01:09:50.679
+take the example of speech, uh,
+
+01:09:47.159 --> 01:09:53.000
+translation. Um, for the speech translation
+
+01:09:50.679 --> 01:09:55.760
+model, uh, we would first do speech
+
+01:09:53.000 --> 01:09:57.520
+recognition into, like, let's say English,
+
+01:09:55.760 --> 01:09:59.640
+and then we would do translation, and the
+
+01:09:57.520 --> 01:10:03.840
+input to the translation model would be
+
+01:09:59.640 --> 01:10:05.560
+the speech, um, plus the text in English, and
+
+01:10:03.840 --> 01:10:07.320
+we would generate the output in Japanese.
+
+01:10:05.560 --> 01:10:10.080
+So it would take both the speech and the
+
+01:10:07.320 --> 01:10:12.920
+text, uh, when it was doing translation.
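A stacked system differs from the cascade only in that the second model also sees the first model's input; sketched with the same hypothetical wrappers as above (`stacked_mt_model` is an assumed translation model that accepts both speech and text):

```python
def stacked_speech_translation(audio, asr_model, stacked_mt_model):
    transcript = asr_model.transcribe(audio)
    # Conditioning on the audio as well means information that the text
    # transcript collapses is not lost, and the translator effectively
    # gets a second opinion on the transcription.
    return stacked_mt_model.translate(audio=audio, text=transcript)
```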
+
+01:10:10.080 --> 01:10:14.840
+And that would allow it to, number one,
+
+01:10:12.920 --> 01:10:17.719
+basically get a second opinion about
+
+01:10:14.840 --> 01:10:21.080
+whether the transcription was correct,
+
+01:10:17.719 --> 01:10:23.800
+but also, like, let's say there was
+
+01:10:21.080 --> 01:10:26.440
+some unique information that only
+
+01:10:23.800 --> 01:10:29.480
+appeared in the,
+
+01:10:26.440 --> 01:10:31.679
+um, uh, that only appeared in the speech.
+
+01:10:29.480 --> 01:10:34.840
+So just to give an example: "I read the
+
+01:10:31.679 --> 01:10:37.040
+book" and "I read the book" are both
+
+01:10:34.840 --> 01:10:38.640
+transcribed exactly the same way, and
+
+01:10:37.040 --> 01:10:41.679
+they're different translations, obviously,
+
+01:10:38.640 --> 01:10:42.920
+because one is, uh, you know,
+
+01:10:41.679 --> 01:10:45.560
+present tense and the other is past
+
+01:10:42.920 --> 01:10:47.239
+tense. So there are examples where, uh,
+
+01:10:45.560 --> 01:10:51.600
+a cascaded system would lose
+
+01:10:47.239 --> 01:10:51.600
+information and a stacked system would
+
+01:10:53.400 --> 01:10:57.679
+not. Another idea is refinement. I
+
+01:10:56.440 --> 01:10:59.480
+think this is actually really
+
+01:10:57.679 --> 01:11:01.000
+interesting, because large language
+
+01:10:59.480 --> 01:11:03.920
+models have opened up a whole bunch of
+
+01:11:01.000 --> 01:11:05.640
+possibilities for us in this space. Um,
+
+01:11:03.920 --> 01:11:07.760
+this is like cascading and stacking, but
+
+01:11:05.640 --> 01:11:09.640
+it can be done multiple times, and it
+
+01:11:07.760 --> 01:11:12.960
+can be done multiple times with the same
+
+01:11:09.640 --> 01:11:15.040
+model. So, um, we have an input, we feed it
+
+01:11:12.960 --> 01:11:17.320
+into the model, we get an output, and then
+
+01:11:15.040 --> 01:11:19.360
+we feed the output back in and gradually
+
+01:11:17.320 --> 01:11:23.080
+refine it and make it better and
+
+01:11:19.360 --> 01:11:24.760
+better. And the first time this was done
+
+01:11:23.080 --> 01:11:27.440
+in neural networks was through something
+
+01:11:24.760 --> 01:11:29.679
+called deliberation networks, and basically,
+
+01:11:27.440 --> 01:11:32.360
+deliberation networks, what they do is
+
+01:11:29.679 --> 01:11:33.760
+they, uh, take in an output and then they
+
+01:11:32.360 --> 01:11:34.920
+just gradually refine it to make it
+
+01:11:33.760 --> 01:11:37.280
+better and better. They used a
+
+01:11:34.920 --> 01:11:39.159
+reinforcement learning algorithm to do
+
+01:11:37.280 --> 01:11:41.159
+this, where you generated the output and
+
+01:11:39.159 --> 01:11:43.600
+then, um, improved
+
+01:11:41.159 --> 01:11:46.719
+it. Another thing that's really popular
+
+01:11:43.600 --> 01:11:48.280
+nowadays is, uh, diffusion models, and I
+
+01:11:46.719 --> 01:11:50.400
+haven't quite decided whether I'll have
+
+01:11:48.280 --> 01:11:51.880
+time to cover diffusion models in depth,
+
+01:11:50.400 --> 01:11:54.880
+but basically the way a diffusion model
+
+01:11:51.880 --> 01:11:55.880
+works is very similar: you start out with
+
+01:11:54.880 --> 01:11:57.239
+nothing
+
+01:11:55.880 --> 01:11:59.840
+and then you gradually make it better
+
+01:11:57.239 --> 01:12:01.360
+and better. Um, the key difference between
+
+01:11:59.840 --> 01:12:03.520
+deliberation networks and diffusion
+
+01:12:01.360 --> 01:12:05.520
+models is, diffusion models, um, you can
+
+01:12:03.520 --> 01:12:08.600
+train from scratch by basically noising
+
+01:12:05.520 --> 01:12:10.600
+the input, uh,
applying noise to the input + +01:12:08.600 --> 01:12:12.880 +um in training very efficiently and + +01:12:10.600 --> 01:12:15.639 +these are very widely used + +01:12:12.880 --> 01:12:18.199 +in image generation they're not super + +01:12:15.639 --> 01:12:20.120 +widely used in text just because regular + +01:12:18.199 --> 01:12:22.840 +autor regressive models are so good for + +01:12:20.120 --> 01:12:24.159 +text um but there are a few efforts to + +01:12:22.840 --> 01:12:26.880 +do + +01:12:24.159 --> 01:12:30.920 +that and then a final one is self- + +01:12:26.880 --> 01:12:35.120 +refine and the idea behind self- refine + +01:12:30.920 --> 01:12:39.400 +is you um actually maybe I can open the + +01:12:35.120 --> 01:12:39.400 +paper because the paper has a good + +01:12:54.120 --> 01:12:58.239 +figure + +01:12:56.280 --> 01:13:02.679 +actually I thought it had a good + +01:12:58.239 --> 01:13:05.600 +figure um yeah so maybe this is a figure + +01:13:02.679 --> 01:13:08.639 +um so basically uh what you do is you + +01:13:05.600 --> 01:13:10.639 +feed in the input you generate an output + +01:13:08.639 --> 01:13:12.679 +and then you ask the model to give you + +01:13:10.639 --> 01:13:15.520 +feedback on the output and say yes this + +01:13:12.679 --> 01:13:16.760 +output is good or um like let's say + +01:13:15.520 --> 01:13:19.679 +you're doing code generation it could + +01:13:16.760 --> 01:13:21.920 +say no this output has an error in it um + +01:13:19.679 --> 01:13:24.719 +this is a problem with your output and + +01:13:21.920 --> 01:13:27.840 +then you feed in both the output and the + +01:13:24.719 --> 01:13:29.480 +feedback back uh and ask the model to + +01:13:27.840 --> 01:13:32.239 +refine its output and you do this over + +01:13:29.480 --> 01:13:35.280 +and over again and this allows you to uh + +01:13:32.239 --> 01:13:36.840 +improve the output and uh this is has + +01:13:35.280 --> 01:13:39.600 +ended up being pretty effective in a + +01:13:36.840 --> 01:13:41.159 +pretty wide number of tasks one caveat + +01:13:39.600 --> 01:13:44.040 +about this is your model has to be + +01:13:41.159 --> 01:13:47.000 +really good for this to work so um only + +01:13:44.040 --> 01:13:49.239 +models kind of on the level of GPT 4 not + +01:13:47.000 --> 01:13:52.000 +on the level of GPT 3.5 have the ability + +01:13:49.239 --> 01:13:54.040 +to do this pretty consistently so it is + +01:13:52.000 --> 01:13:57.040 +something you need to be aware + +01:13:54.040 --> 01:13:57.040 +of + +01:13:59.760 --> 01:14:03.600 +cool yep that's all I I had for today + +01:14:02.400 --> 01:14:06.600 +I'm happy + +01:14:03.600 --> 01:14:06.600 +to + +01:14:07.159 --> 01:14:10.159 +take + +01:14:20.600 --> 01:14:27.320 +yep yep that this is a great question so + +01:14:23.920 --> 01:14:28.840 +if sta has the potential to address + +01:14:27.320 --> 01:14:32.120 +information loss why would we ever + +01:14:28.840 --> 01:14:33.840 +choose a Cascade model I think basically + +01:14:32.120 --> 01:14:37.440 +there's potentially two reasons one + +01:14:33.840 --> 01:14:39.199 +reason is um data availability so in + +01:14:37.440 --> 01:14:42.639 +order to train a stacked model you + +01:14:39.199 --> 01:14:43.430 +obviously need the outputs I guess you + +01:14:42.639 --> 01:14:44.639 +could + +01:14:43.430 --> 01:14:48.440 +[Music] + +01:14:44.639 --> 01:14:50.880 +um yeah I guess you could run + +01:14:48.440 --> 01:14:53.199 +the and generate outputs for every + +01:14:50.880 --> 01:14:54.840 +training example 
you have um but you + +01:14:53.199 --> 01:14:55.840 +would need to do that so you would need + +01:14:54.840 --> 01:14:58.639 +to to + +01:14:55.840 --> 01:14:59.920 +run speech recognition for every example + +01:14:58.639 --> 01:15:02.760 +and you also + +01:14:59.920 --> 01:15:05.199 +couldn't you couldn't use any examples + +01:15:02.760 --> 01:15:07.600 +where you don't have the original input + +01:15:05.199 --> 01:15:10.320 +so you couldn't use text to text + +01:15:07.600 --> 01:15:12.239 +examples unless you like synthesize + +01:15:10.320 --> 01:15:14.159 +speech from text for machine translation + +01:15:12.239 --> 01:15:15.840 +for example so makes it a little bit + +01:15:14.159 --> 01:15:17.360 +more tricky due to the data requirements + +01:15:15.840 --> 01:15:19.239 +but that's not + +01:15:17.360 --> 01:15:22.560 +insurmountable the second reason is + +01:15:19.239 --> 01:15:24.400 +complexity and efficiency so you know + +01:15:22.560 --> 01:15:27.920 +you do have to come up with a model that + +01:15:24.400 --> 01:15:29.520 +takes in speed and text and run set and + +01:15:27.920 --> 01:15:30.920 +it might be easier just to hook together + +01:15:29.520 --> 01:15:34.719 +a speech recognitional with a + +01:15:30.920 --> 01:15:37.920 +translation so but like I think overall + +01:15:34.719 --> 01:15:39.639 +I I like these methods I I think these + +01:15:37.920 --> 01:15:41.159 +are good methods to use if you're if + +01:15:39.639 --> 01:15:42.480 +you're thinking about using a Cascade + +01:15:41.159 --> 01:15:44.199 +system you should definitely consider + +01:15:42.480 --> 01:15:47.199 +using a stack system in + +01:15:44.199 --> 01:15:47.199 +sense + +01:15:52.080 --> 01:15:56.960 +yeah yeah can you measure the + +01:15:55.159 --> 01:15:59.400 +contribution of each component to an + +01:15:56.960 --> 01:16:00.639 +ensemble um the very very easy way to do + +01:15:59.400 --> 01:16:02.199 +that is look at the interpolation + +01:16:00.639 --> 01:16:05.360 +coefficients if you train the + +01:16:02.199 --> 01:16:06.800 +interpolation coefficients um otherwise + +01:16:05.360 --> 01:16:08.920 +I guess it depends on what you mean by + +01:16:06.800 --> 01:16:10.480 +each contribution but I you know looking + +01:16:08.920 --> 01:16:12.280 +at the interpolation coefficients is a + +01:16:10.480 --> 01:16:16.320 +pretty good way to do + +01:16:12.280 --> 01:16:16.320 +it also just how much did the + +01:16:21.480 --> 01:16:27.400 +accuracy is iterative refinement the + +01:16:24.159 --> 01:16:30.199 +same idea as boosting in traditional + +01:16:27.400 --> 01:16:30.199 +like machine Learning + +01:16:30.320 --> 01:16:34.920 +Systems I think it's a little bit + +01:16:32.920 --> 01:16:36.520 +different um because iterative + +01:16:34.920 --> 01:16:38.920 +refinement what I'm talking about here + +01:16:36.520 --> 01:16:41.120 +it's usually taking in the output like + +01:16:38.920 --> 01:16:43.320 +rather complex output of a system and + +01:16:41.120 --> 01:16:44.920 +modifying it so you're not just + +01:16:43.320 --> 01:16:47.080 +modifying the + +01:16:44.920 --> 01:16:49.880 +probabilities of like a single + +01:16:47.080 --> 01:16:53.080 +classifier you're modifying the actual + +01:16:49.880 --> 01:16:55.960 +outputs that were generated then from + +01:16:53.080 --> 01:16:59.560 +the point of view of a boosting + +01:16:55.960 --> 01:17:02.560 +model over a single categorical output + +01:16:59.560 --> 01:17:04.520 +it might actually be similar or the same + 
+01:17:02.560 --> 01:17:06.480 +but this is more like uh you you + +01:17:04.520 --> 01:17:08.159 +generated a textual output and then you + +01:17:06.480 --> 01:17:10.400 +feed in the textual output to the other + +01:17:08.159 --> 01:17:12.120 +model and refine like generated a new + +01:17:10.400 --> 01:17:14.239 +textual output so I feel like it's a lot + +01:17:12.120 --> 01:17:18.639 +more + +01:17:14.239 --> 01:17:18.639 +complex cool okay thank thanks a lot + +01:17:18.840 --> 01:17:21.840 +everyone diff --git a/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models.mp4 b/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5854522039d28f060a98f81a5b813e72b25154d5 --- /dev/null +++ b/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5c3f854f59933275f3273f0b77753dee254d48e166d37f6f3190a5423767201 +size 79708142 diff --git a/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/metadata.json b/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..e94891bed7e3b6b51b2eb367caf3dc7c4c4588a0 --- /dev/null +++ b/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/metadata.json @@ -0,0 +1,4 @@ +{ + "url": "https://www.youtube.com/watch?v=2rOSrDtg7HQ", + "title": "CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models" +} \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/transcript.srt b/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/transcript.srt new file mode 100644 index 0000000000000000000000000000000000000000..869701daec0158c16ec71ac8cebbc0e45091b4ef --- /dev/null +++ b/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/transcript.srt @@ -0,0 +1,7079 @@ +1 +00:00:00,280 --> 00:00:08,320 +can everyone hear Al set okay great so + +2 +00:00:05,400 --> 00:00:09,840 +um today I'll be talking about a tour of + +3 +00:00:08,320 --> 00:00:13,960 +modern uh + +4 +00:00:09,840 --> 00:00:16,600 +llms and basically the idea here is that + +5 +00:00:13,960 --> 00:00:18,600 +there is many many large language models + +6 +00:00:16,600 --> 00:00:20,480 +available nowadays but I wanted to go + +7 +00:00:18,600 --> 00:00:22,760 +through some of the ones that are + +8 +00:00:20,480 --> 00:00:25,880 +particularly interesting for various + +9 +00:00:22,760 --> 00:00:26,880 +reasons either because they disclose a + +10 +00:00:25,880 --> 00:00:29,519 +lot of + +11 +00:00:26,880 --> 00:00:31,119 +information uh you know about exactly + +12 +00:00:29,519 --> 00:00:34,120 +how they were trains so we can get an + +13 +00:00:31,119 --> 00:00:35,559 +idea about what is involved in training + +14 +00:00:34,120 --> 00:00:39,120 +uh a kind of state-ofthe-art large + +15 +00:00:35,559 --> 00:00:40,640 +language model or because they're kind + +16 +00:00:39,120 --> 00:00:43,200 +of the strongest models that you can + +17 +00:00:40,640 --> 00:00:45,160 +download and use on your own um like the + +18 +00:00:43,200 --> 00:00:47,360 +best open weights language models that + +19 +00:00:45,160 --> 00:00:49,559 +are available 
or because they're + +20 +00:00:47,360 --> 00:00:51,879 +specialized to some particular topic or + +21 +00:00:49,559 --> 00:00:53,480 +because they're the best closed uh + +22 +00:00:51,879 --> 00:00:56,399 +language models but I'm going to + +23 +00:00:53,480 --> 00:00:58,640 +particularly focus on the first two um + +24 +00:00:56,399 --> 00:01:00,640 +just so like everybody has an idea about + +25 +00:00:58,640 --> 00:01:03,239 +you know what what is going into all the + +26 +00:01:00,640 --> 00:01:07,519 +models that you're using for whatever uh + +27 +00:01:03,239 --> 00:01:07,519 +you know tasks that you're trying to + +28 +00:01:09,119 --> 00:01:14,159 +solve so one important thing is uh what + +29 +00:01:12,240 --> 00:01:18,080 +makes a model so we talk about you know + +30 +00:01:14,159 --> 00:01:21,680 +like llama 2 or M roll or mix roll or + +31 +00:01:18,080 --> 00:01:23,320 +whatever else and I think you know this + +32 +00:01:21,680 --> 00:01:24,479 +already but it's worth reiterating again + +33 +00:01:23,320 --> 00:01:27,320 +here because I'm going to talk about it + +34 +00:01:24,479 --> 00:01:29,320 +a lot today but it's basically the model + +35 +00:01:27,320 --> 00:01:31,280 +architecture so what architecture do you + +36 +00:01:29,320 --> 00:01:33,799 +decide to use + +37 +00:01:31,280 --> 00:01:35,840 +um what data do you decide to use and + +38 +00:01:33,799 --> 00:01:39,759 +what training algorithm or Training + +39 +00:01:35,840 --> 00:01:42,520 +Method do you decide to use and all of + +40 +00:01:39,759 --> 00:01:46,040 +these are important um and there was + +41 +00:01:42,520 --> 00:01:49,320 +actually uh a Twitter thread with Tom + +42 +00:01:46,040 --> 00:01:52,399 +Wolf who's I guess CSO or CTO or + +43 +00:01:49,320 --> 00:01:54,840 +something like that at hugging face um + +44 +00:01:52,399 --> 00:01:56,840 +and basically what he was saying is uh a + +45 +00:01:54,840 --> 00:01:59,240 +lot of people don't realize that the + +46 +00:01:56,840 --> 00:02:01,039 +data is actually one of the most + +47 +00:01:59,240 --> 00:02:04,320 +important parts + +48 +00:02:01,039 --> 00:02:07,680 +um and the architectures are a lot less + +49 +00:02:04,320 --> 00:02:10,920 +important nowadays and I think that + +50 +00:02:07,680 --> 00:02:14,280 +there's some truth to that there's also + +51 +00:02:10,920 --> 00:02:15,879 +some you know a counterargument to that + +52 +00:02:14,280 --> 00:02:17,920 +uh the truth to that which you'll see + +53 +00:02:15,879 --> 00:02:19,760 +today is that almost all of the models + +54 +00:02:17,920 --> 00:02:21,360 +that we're using use very similar + +55 +00:02:19,760 --> 00:02:23,120 +architectures like almost all of the + +56 +00:02:21,360 --> 00:02:26,879 +models use an architecture that's very + +57 +00:02:23,120 --> 00:02:28,760 +similar Dilma um but despite the fact + +58 +00:02:26,879 --> 00:02:31,280 +that they use very similar architectures + +59 +00:02:28,760 --> 00:02:33,599 +they're um accuracy is vastly different + +60 +00:02:31,280 --> 00:02:36,080 +or their their abilities are vastly + +61 +00:02:33,599 --> 00:02:38,519 +different so that must come from the + +62 +00:02:36,080 --> 00:02:40,040 +data or the training decisions right so + +63 +00:02:38,519 --> 00:02:41,640 +that's an argument for the fact that + +64 +00:02:40,040 --> 00:02:44,040 +architecture decisions are a lot less + +65 +00:02:41,640 --> 00:02:48,000 +important my counterargument to that is + +66 +00:02:44,040 --> 00:02:49,840 +we spent N9 to 10 years 
fine-tuning and + +67 +00:02:48,000 --> 00:02:51,560 +finding the Llama architecture so now we + +68 +00:02:49,840 --> 00:02:53,120 +have the Llama architecture which is a + +69 +00:02:51,560 --> 00:02:55,480 +really good architecture it works really + +70 +00:02:53,120 --> 00:02:57,640 +well when training very large models on + +71 +00:02:55,480 --> 00:02:59,239 +lots of data and so now we don't need to + +72 +00:02:57,640 --> 00:03:01,360 +use another architecture because the + +73 +00:02:59,239 --> 00:03:02,920 +architecture using is good but if we + +74 +00:03:01,360 --> 00:03:06,200 +were trying to do the same thing with + +75 +00:03:02,920 --> 00:03:07,640 +the like lstm from 2014 uh then none of + +76 +00:03:06,200 --> 00:03:09,440 +the stuff we're doing today would work + +77 +00:03:07,640 --> 00:03:11,760 +so that's an argument in favor of you + +78 +00:03:09,440 --> 00:03:13,560 +know architectures being also + +79 +00:03:11,760 --> 00:03:16,920 +architectures can make things faster and + +80 +00:03:13,560 --> 00:03:16,920 +that's included in s decisions + +81 +00:03:17,280 --> 00:03:21,280 +that + +82 +00:03:19,040 --> 00:03:22,640 +so um the first thing I'd like to talk + +83 +00:03:21,280 --> 00:03:25,280 +about before I get into any of the + +84 +00:03:22,640 --> 00:03:28,000 +actual details is um open versus closed + +85 +00:03:25,280 --> 00:03:30,480 +access uh this is not like modeling + +86 +00:03:28,000 --> 00:03:31,760 +stuff but I think it's important and + +87 +00:03:30,480 --> 00:03:35,599 +also helps you understand the + +88 +00:03:31,760 --> 00:03:39,519 +environment a little bit so um there's a + +89 +00:03:35,599 --> 00:03:42,200 +nice blog by pyang and others uh at + +90 +00:03:39,519 --> 00:03:45,560 +which is also in the reference and they + +91 +00:03:42,200 --> 00:03:47,720 +discuss several different varieties of + +92 +00:03:45,560 --> 00:03:50,599 +like openness of release of language + +93 +00:03:47,720 --> 00:03:52,560 +models in advanced AI systems and there + +94 +00:03:50,599 --> 00:03:55,200 +are some things that we can talk about + +95 +00:03:52,560 --> 00:03:59,000 +we can talk about the weights being open + +96 +00:03:55,200 --> 00:04:01,439 +um described or closed inference uh code + +97 +00:03:59,000 --> 00:04:03,319 +being open or inference methods being + +98 +00:04:01,439 --> 00:04:04,959 +described or it being fully closed + +99 +00:04:03,319 --> 00:04:08,120 +training being open described or closed + +100 +00:04:04,959 --> 00:04:13,040 +and data being open described or closed + +101 +00:04:08,120 --> 00:04:14,760 +and um in general uh we have like the + +102 +00:04:13,040 --> 00:04:16,519 +open weights models that are on hugging + +103 +00:04:14,760 --> 00:04:19,040 +face that might just mean the weights + +104 +00:04:16,519 --> 00:04:20,600 +are open the inference code also needs + +105 +00:04:19,040 --> 00:04:21,919 +to be open because otherwise you can't + +106 +00:04:20,600 --> 00:04:24,160 +do inference on them if they're on + +107 +00:04:21,919 --> 00:04:25,800 +hugging face but that doesn't mean that + +108 +00:04:24,160 --> 00:04:28,120 +the training code is open it also + +109 +00:04:25,800 --> 00:04:32,479 +doesn't mean that the data is open um + +110 +00:04:28,120 --> 00:04:34,280 +and so there's various degrees of + +111 +00:04:32,479 --> 00:04:37,320 +openness + +112 +00:04:34,280 --> 00:04:40,919 +um and then of course there are things + +113 +00:04:37,320 --> 00:04:42,520 +like uh GPT for or GPT models where + +114 
+00:04:40,919 --> 00:04:45,560 +basically all of this is closed and we + +115 +00:04:42,520 --> 00:04:48,880 +don't know anything about it or know + +116 +00:04:45,560 --> 00:04:50,560 +very little about it another thing is + +117 +00:04:48,880 --> 00:04:52,600 +about licenses and + +118 +00:04:50,560 --> 00:04:54,199 +permissiveness and this is kind of + +119 +00:04:52,600 --> 00:04:56,880 +important if you want to do a research + +120 +00:04:54,199 --> 00:05:01,240 +project to know because + +121 +00:04:56,880 --> 00:05:04,080 +it means it it an impact on the things + +122 +00:05:01,240 --> 00:05:05,520 +that you legally can do or can't do in + +123 +00:05:04,080 --> 00:05:08,039 +universities I mean we should be + +124 +00:05:05,520 --> 00:05:09,479 +following the law but we're maybe people + +125 +00:05:08,039 --> 00:05:10,720 +think about this a little bit less if + +126 +00:05:09,479 --> 00:05:12,240 +you're in a big company this is + +127 +00:05:10,720 --> 00:05:14,919 +something that becomes really important + +128 +00:05:12,240 --> 00:05:17,199 +so it's uh it's important to think + +129 +00:05:14,919 --> 00:05:20,039 +about so I'm going to go through several + +130 +00:05:17,199 --> 00:05:21,440 +degrees of licenses uh that if you've + +131 +00:05:20,039 --> 00:05:25,759 +done anything in open source you + +132 +00:05:21,440 --> 00:05:27,600 +probably know but um the or you probably + +133 +00:05:25,759 --> 00:05:29,919 +know a lot of these the first one is + +134 +00:05:27,600 --> 00:05:31,479 +public domain or cc0 + +135 +00:05:29,919 --> 00:05:33,440 +and this basically means you can do + +136 +00:05:31,479 --> 00:05:37,240 +anything with it like I could I could + +137 +00:05:33,440 --> 00:05:39,280 +download it and um this includes the + +138 +00:05:37,240 --> 00:05:41,680 +download it and redistribute it not give + +139 +00:05:39,280 --> 00:05:44,560 +you any credit uh modify it in any way I + +140 +00:05:41,680 --> 00:05:47,720 +want and this includes things like old + +141 +00:05:44,560 --> 00:05:49,600 +copyrighted works and products of the US + +142 +00:05:47,720 --> 00:05:51,400 +government workers so if you work for + +143 +00:05:49,600 --> 00:05:53,240 +the US government in some capacities + +144 +00:05:51,400 --> 00:05:58,560 +anything you generate becomes public + +145 +00:05:53,240 --> 00:06:01,000 +domain um so old copyrighted Works um + +146 +00:05:58,560 --> 00:06:04,560 +How how old do you think they need to be + +147 +00:06:01,000 --> 00:06:04,560 +before they become uh + +148 +00:06:04,720 --> 00:06:12,280 +uncopyrighted + +149 +00:06:07,000 --> 00:06:12,280 +yeah uh I think that's pretty close + +150 +00:06:14,319 --> 00:06:21,280 +so it's uh 70 years I + +151 +00:06:18,520 --> 00:06:23,680 +guess oh sorry the life of the author + +152 +00:06:21,280 --> 00:06:25,120 +plus an additional 70 years so like + +153 +00:06:23,680 --> 00:06:28,479 +after the after the person has passed + +154 +00:06:25,120 --> 00:06:30,720 +away 70 years I guess it says um does + +155 +00:06:28,479 --> 00:06:34,520 +anyone know a work that just become + +156 +00:06:30,720 --> 00:06:37,520 +became non-copyrighted yeah uh Mickey + +157 +00:06:34,520 --> 00:06:43,199 +Mouse is still copyrighted + +158 +00:06:37,520 --> 00:06:45,199 +yeah SBO uh did did it I okay so that + +159 +00:06:43,199 --> 00:06:48,400 +that's some new news some other new news + +160 +00:06:45,199 --> 00:06:50,759 +is wi the Poo um so Winnie the Poo just + +161 +00:06:48,400 --> 00:06:54,199 +became non-copyrighted 
and actually I + +162 +00:06:50,759 --> 00:06:55,840 +just heard uh last week that somebody + +163 +00:06:54,199 --> 00:06:59,680 +made a horror movie where Winnie the + +164 +00:06:55,840 --> 00:07:01,479 +Pooh was a a killer and that one uh a + +165 +00:06:59,680 --> 00:07:04,960 +whole bunch of like bad movie awards in + +166 +00:07:01,479 --> 00:07:06,639 +2023 so um that's the kind of things + +167 +00:07:04,960 --> 00:07:09,080 +that can happen to your copyrighted + +168 +00:07:06,639 --> 00:07:11,479 +works if they are released cc0 somebody + +169 +00:07:09,080 --> 00:07:12,960 +can do anything they want with them uh + +170 +00:07:11,479 --> 00:07:14,400 +you know so you need to be a little bit + +171 +00:07:12,960 --> 00:07:18,080 +careful about + +172 +00:07:14,400 --> 00:07:20,000 +that um next are MIT and bstd these are + +173 +00:07:18,080 --> 00:07:22,400 +very common software licenses you'll see + +174 +00:07:20,000 --> 00:07:25,720 +them on a lot of research projects these + +175 +00:07:22,400 --> 00:07:27,400 +have very few restrictions um other than + +176 +00:07:25,720 --> 00:07:29,319 +maybe maintaining the copyright notice + +177 +00:07:27,400 --> 00:07:31,840 +for BC but that's about it you can do + +178 +00:07:29,319 --> 00:07:33,840 +just about anything you want with it um + +179 +00:07:31,840 --> 00:07:35,599 +actually I'm not sure if people know + +180 +00:07:33,840 --> 00:07:39,599 +this but the Mac operating system is + +181 +00:07:35,599 --> 00:07:42,199 +based on an old BSD Opera uh operating + +182 +00:07:39,599 --> 00:07:44,280 +system where they uh took the they took + +183 +00:07:42,199 --> 00:07:46,080 +the code they made it private they + +184 +00:07:44,280 --> 00:07:49,560 +forked it made it private and now it's + +185 +00:07:46,080 --> 00:07:51,919 +the proprietary Mac operating system so + +186 +00:07:49,560 --> 00:07:53,720 +uh that's something you can do with an m + +187 +00:07:51,919 --> 00:07:57,840 +m or BSD + +188 +00:07:53,720 --> 00:08:00,000 +licensed um there's also a Pachi and CC + +189 +00:07:57,840 --> 00:08:02,560 +by um + +190 +00:08:00,000 --> 00:08:05,039 +here you must acknowledge the owner of + +191 +00:08:02,560 --> 00:08:07,840 +the uh the original creators so you need + +192 +00:08:05,039 --> 00:08:08,960 +to say this person actually created uh + +193 +00:08:07,840 --> 00:08:11,520 +this + +194 +00:08:08,960 --> 00:08:14,680 +originally + +195 +00:08:11,520 --> 00:08:17,319 +um Apachi is also kind of interesting + +196 +00:08:14,680 --> 00:08:21,759 +because they will give you a license to + +197 +00:08:17,319 --> 00:08:25,960 +use that code and any patents that are + +198 +00:08:21,759 --> 00:08:29,599 +associated with that code unless you sue + +199 +00:08:25,960 --> 00:08:32,159 +the company who released it so um just + +200 +00:08:29,599 --> 00:08:34,039 +Give an example let's say uh Google + +201 +00:08:32,159 --> 00:08:36,279 +released their code under the Apache + +202 +00:08:34,039 --> 00:08:38,919 +license and that code implements + +203 +00:08:36,279 --> 00:08:42,680 +Transformers and Google has a patent on + +204 +00:08:38,919 --> 00:08:45,760 +Transformers so if you use uh kind of + +205 +00:08:42,680 --> 00:08:48,200 +jacks or tensorflow a Jack or tensorflow + +206 +00:08:45,760 --> 00:08:50,120 +implementation of Transformers uh that + +207 +00:08:48,200 --> 00:08:51,720 +was created by Google you're okay you're + +208 +00:08:50,120 --> 00:08:54,640 +safe to use that because they've + +209 +00:08:51,720 --> 
00:08:57,360 +released it under uh under that license + +210 +00:08:54,640 --> 00:08:59,560 +but if you sue Google uh for anything + +211 +00:08:57,360 --> 00:09:01,760 +related to intellectual property Google + +212 +00:08:59,560 --> 00:09:04,480 +could say uh don't you can't use + +213 +00:09:01,760 --> 00:09:06,040 +Transformers anymore um and so like if + +214 +00:09:04,480 --> 00:09:08,279 +open AI ever sues Google for + +215 +00:09:06,040 --> 00:09:09,680 +intellectual property infringement + +216 +00:09:08,279 --> 00:09:12,120 +Google will say okay you can't use + +217 +00:09:09,680 --> 00:09:15,959 +Transformers or word embeddings good + +218 +00:09:12,120 --> 00:09:17,640 +luck uh open so um there's this + +219 +00:09:15,959 --> 00:09:20,760 +interesting thing where all of these uh + +220 +00:09:17,640 --> 00:09:22,760 +tech companies now are using patented um + +221 +00:09:20,760 --> 00:09:24,440 +patented things a lot of it apachi + +222 +00:09:22,760 --> 00:09:26,040 +license software and so none of them can + +223 +00:09:24,440 --> 00:09:28,959 +sue each other for patents so patents + +224 +00:09:26,040 --> 00:09:30,560 +have become basically mostly worthless + +225 +00:09:28,959 --> 00:09:35,320 +uh in big + +226 +00:09:30,560 --> 00:09:36,360 +te um moving on um there's also a g GPL + +227 +00:09:35,320 --> 00:09:39,360 +in + +228 +00:09:36,360 --> 00:09:42,800 +ccbsa these are licenses where if you + +229 +00:09:39,360 --> 00:09:45,680 +use them you need to reshare under that + +230 +00:09:42,800 --> 00:09:47,839 +license um and so like if you create + +231 +00:09:45,680 --> 00:09:49,440 +some software it's GPL licensed and you + +232 +00:09:47,839 --> 00:09:52,160 +build on it and build something new you + +233 +00:09:49,440 --> 00:09:54,839 +need to release it under the GPL license + +234 +00:09:52,160 --> 00:09:58,160 +so a lot of companies will not + +235 +00:09:54,839 --> 00:09:59,640 +use um will not use GPL software because + +236 +00:09:58,160 --> 00:10:01,920 +that would mean that if they incorporate + +237 +00:09:59,640 --> 00:10:04,959 +into their system their whole system + +238 +00:10:01,920 --> 00:10:06,720 +like for example Google uh like all of + +239 +00:10:04,959 --> 00:10:10,240 +Google would have to be GPL licensed in + +240 +00:10:06,720 --> 00:10:11,720 +Rel EAS uh so um and I'm kind of + +241 +00:10:10,240 --> 00:10:14,800 +simplifying these licenses I'm just + +242 +00:10:11,720 --> 00:10:17,519 +giving you the gist CC BSA and sorry CC + +243 +00:10:14,800 --> 00:10:20,640 +licenses are more for data so MIT BSC + +244 +00:10:17,519 --> 00:10:22,640 +Apachi and GPL are more for software CC + +245 +00:10:20,640 --> 00:10:27,640 +Creative Commons licenses are for data + +246 +00:10:22,640 --> 00:10:29,640 +so um for example Wikipedia is CC by SAA + +247 +00:10:27,640 --> 00:10:33,560 +I believe + +248 +00:10:29,640 --> 00:10:33,560 +let me make sure that I'm not lying + +249 +00:10:41,839 --> 00:10:48,240 +there yeah CC bys and so that means that + +250 +00:10:46,040 --> 00:10:52,200 +if you make any derivative work of + +251 +00:10:48,240 --> 00:10:54,160 +Wikipedia you need to share it um the + +252 +00:10:52,200 --> 00:10:57,040 +same way that Wikipedia is uh so you + +253 +00:10:54,160 --> 00:10:59,760 +need to give it the same + +254 +00:10:57,040 --> 00:11:01,560 +license there's also um cre of Commons + +255 +00:10:59,760 --> 00:11:03,240 +non-commercial licenses or software + +256 +00:11:01,560 --> 00:11:05,519 +non-commercial licenses you say 
you + +257 +00:11:03,240 --> 00:11:07,079 +can't use them for commercial purposes + +258 +00:11:05,519 --> 00:11:09,279 +all the ones above you can use for + +259 +00:11:07,079 --> 00:11:11,519 +commercial purposes once you start + +260 +00:11:09,279 --> 00:11:13,440 +getting down here this is no often no + +261 +00:11:11,519 --> 00:11:15,279 +longer called open source so the open + +262 +00:11:13,440 --> 00:11:16,959 +source initiative says anything with a + +263 +00:11:15,279 --> 00:11:19,839 +restriction on the way that you can use + +264 +00:11:16,959 --> 00:11:22,639 +it is no longer open source and so that + +265 +00:11:19,839 --> 00:11:25,360 +means if you say you can't use this for + +266 +00:11:22,639 --> 00:11:27,720 +commercial purposes or you can't use + +267 +00:11:25,360 --> 00:11:29,639 +this in military systems for example + +268 +00:11:27,720 --> 00:11:32,320 +which some language models say that + +269 +00:11:29,639 --> 00:11:33,680 +nowadays those are no longer called open + +270 +00:11:32,320 --> 00:11:37,040 +source according to the open source + +271 +00:11:33,680 --> 00:11:40,320 +initiative so that's a thing to know + +272 +00:11:37,040 --> 00:11:42,920 +about then separately uh there are these + +273 +00:11:40,320 --> 00:11:45,279 +licenses that a lot of people like meta + +274 +00:11:42,920 --> 00:11:48,160 +or hugging face come up with for their + +275 +00:11:45,279 --> 00:11:50,360 +um for their models recently so the + +276 +00:11:48,160 --> 00:11:51,320 +Llama license um how many people are + +277 +00:11:50,360 --> 00:11:54,200 +using + +278 +00:11:51,320 --> 00:11:56,519 +llama in your projects how many people + +279 +00:11:54,200 --> 00:11:56,519 +read the + +280 +00:11:57,000 --> 00:12:00,880 +license so um are you sure you can use + +281 +00:11:59,639 --> 00:12:04,959 +it in your + +282 +00:12:00,880 --> 00:12:06,839 +project uh so you're you're probably in + +283 +00:12:04,959 --> 00:12:09,000 +luck in your project if you're using it + +284 +00:12:06,839 --> 00:12:11,560 +the Lama license you can read into it to + +285 +00:12:09,000 --> 00:12:13,519 +see what it actually allows but it has + +286 +00:12:11,560 --> 00:12:16,399 +um the original llama license has some + +287 +00:12:13,519 --> 00:12:18,440 +interesting uh things number one you + +288 +00:12:16,399 --> 00:12:21,079 +cannot use llama to train any language + +289 +00:12:18,440 --> 00:12:23,000 +model that is not derived from llama so + +290 +00:12:21,079 --> 00:12:26,120 +you can't generate data from llama in + +291 +00:12:23,000 --> 00:12:30,040 +train M that's not allowed according to + +292 +00:12:26,120 --> 00:12:32,440 +the r Li um another thing is uh you + +293 +00:12:30,040 --> 00:12:34,680 +can't use it for military purposes so + +294 +00:12:32,440 --> 00:12:36,160 +you can't use it um in building a + +295 +00:12:34,680 --> 00:12:37,639 +missile system or something like that + +296 +00:12:36,160 --> 00:12:41,440 +hopefully none of you are doing that for + +297 +00:12:37,639 --> 00:12:42,920 +your project um and you also need to get + +298 +00:12:41,440 --> 00:12:45,399 +a license from meta if you have + +299 +00:12:42,920 --> 00:12:48,000 +something more than 300 million active + +300 +00:12:45,399 --> 00:12:53,800 +user asign your social network service + +301 +00:12:48,000 --> 00:12:56,079 +so if you're Google or um you know X or + +302 +00:12:53,800 --> 00:12:57,680 +Twitter or you know whatever else you + +303 +00:12:56,079 --> 00:13:00,519 +need to get a license for meta before + 
+304 +00:12:57,680 --> 00:13:02,079 +you can start using one so + +305 +00:13:00,519 --> 00:13:03,240 +basically they created that license so + +306 +00:13:02,079 --> 00:13:06,720 +their competitors don't take their + +307 +00:13:03,240 --> 00:13:08,959 +language model and just use it for free + +308 +00:13:06,720 --> 00:13:11,000 +um and then the final thing is no + +309 +00:13:08,959 --> 00:13:13,240 +license so like let's say you have some + +310 +00:13:11,000 --> 00:13:15,560 +code that you upload to GitHub and you + +311 +00:13:13,240 --> 00:13:17,839 +don't put a license on your code this + +312 +00:13:15,560 --> 00:13:20,880 +means that you have only agreed to the + +313 +00:13:17,839 --> 00:13:23,360 +GitHub licensing terms which means that + +314 +00:13:20,880 --> 00:13:26,199 +actually nobody can use their code they + +315 +00:13:23,360 --> 00:13:30,079 +can view it possibly but they can't you + +316 +00:13:26,199 --> 00:13:31,720 +download it use it they can't like um + +317 +00:13:30,079 --> 00:13:34,160 +they can't incorporate it into their own + +318 +00:13:31,720 --> 00:13:36,000 +system so actually if you release + +319 +00:13:34,160 --> 00:13:39,120 +research code I would highly encourage + +320 +00:13:36,000 --> 00:13:41,120 +you to use MIT or BSD um or one of these + +321 +00:13:39,120 --> 00:13:43,040 +permissive licenses so other people can + +322 +00:13:41,120 --> 00:13:45,720 +use it and follow up and your code can + +323 +00:13:43,040 --> 00:13:46,920 +be effectful so um this is an important + +324 +00:13:45,720 --> 00:13:49,040 +thing to know about there's obviously + +325 +00:13:46,920 --> 00:13:52,959 +lots more to know + +326 +00:13:49,040 --> 00:13:56,440 +about um so then my question my next + +327 +00:13:52,959 --> 00:13:57,360 +question is uh what is most of the text + +328 +00:13:56,440 --> 00:13:59,560 +on the + +329 +00:13:57,360 --> 00:14:01,160 +internet the majority of the text on the + +330 +00:13:59,560 --> 00:14:04,839 +internet falls into one of these + +331 +00:14:01,160 --> 00:14:04,839 +categories any idea which + +332 +00:14:05,120 --> 00:14:12,759 +one so Wikipedia is CC bya what what + +333 +00:14:09,040 --> 00:14:12,759 +about uh Mo most of the text + +334 +00:14:14,199 --> 00:14:18,959 +on yeah it's not maybe not no license + +335 +00:14:16,880 --> 00:14:21,680 +but all rights reserved so basically you + +336 +00:14:18,959 --> 00:14:23,079 +can't use it without having permission + +337 +00:14:21,680 --> 00:14:27,639 +from the copyright + +338 +00:14:23,079 --> 00:14:30,639 +holders and so because of that + +339 +00:14:27,639 --> 00:14:33,800 +um the idea of fair use becomes very + +340 +00:14:30,639 --> 00:14:35,320 +important this is a us specific thing + +341 +00:14:33,800 --> 00:14:36,880 +and the rules in other countries are + +342 +00:14:35,320 --> 00:14:39,199 +different they're not the same as the us + +343 +00:14:36,880 --> 00:14:41,680 +but in the US uh we have rules about + +344 +00:14:39,199 --> 00:14:44,600 +where you can use particular types of + +345 +00:14:41,680 --> 00:14:46,279 +data so the US fair use Doctrine is + +346 +00:14:44,600 --> 00:14:50,240 +basically that you can use copyrighted + +347 +00:14:46,279 --> 00:14:52,920 +material in some cases so + +348 +00:14:50,240 --> 00:14:56,279 +um as a gross + +349 +00:14:52,920 --> 00:15:01,800 +simplification um quoting a small amount + +350 +00:14:56,279 --> 00:15:04,320 +of material in like a textbook or slides + +351 +00:15:01,800 --> 00:15:07,079 +or something like this 
this is likely + +352 +00:15:04,320 --> 00:15:10,040 +okay um there are going to be very few + +353 +00:15:07,079 --> 00:15:11,399 +cases where this is not going to um you + +354 +00:15:10,040 --> 00:15:12,720 +know where you're going to get in + +355 +00:15:11,399 --> 00:15:15,600 +trouble for + +356 +00:15:12,720 --> 00:15:18,000 +this another important uh judgment + +357 +00:15:15,600 --> 00:15:19,600 +criteria for whether this is fair use is + +358 +00:15:18,000 --> 00:15:22,440 +that it doesn't diminish the value of + +359 +00:15:19,600 --> 00:15:25,120 +the original work so if I quote + +360 +00:15:22,440 --> 00:15:27,759 +something in my like let's say I quoted + +361 +00:15:25,120 --> 00:15:30,839 +all of Harry Potter in a textbook and + +362 +00:15:27,759 --> 00:15:32,600 +then I sold my textbook for $3 anybody + +363 +00:15:30,839 --> 00:15:34,279 +could take my textbook and read all of + +364 +00:15:32,600 --> 00:15:35,800 +Harry Potter for $3 and the money + +365 +00:15:34,279 --> 00:15:37,480 +wouldn't go to JK rolling and that would + +366 +00:15:35,800 --> 00:15:41,040 +not be fair use because it's diminishing + +367 +00:15:37,480 --> 00:15:42,920 +the value of similarly if I create a big + +368 +00:15:41,040 --> 00:15:44,319 +Corpus of books and I upload them to a + +369 +00:15:42,920 --> 00:15:46,079 +site where anyone can browse them that + +370 +00:15:44,319 --> 00:15:48,319 +would also probably not be for use + +371 +00:15:46,079 --> 00:15:49,160 +because the authors would not get paid + +372 +00:15:48,319 --> 00:15:52,319 +for + +373 +00:15:49,160 --> 00:15:54,480 +it another judgment Criterion is whether + +374 +00:15:52,319 --> 00:15:57,399 +it's for non commercial purposes or not + +375 +00:15:54,480 --> 00:15:59,639 +so like in universities we're actually + +376 +00:15:57,399 --> 00:16:01,120 +held to a probably held to a more + +377 +00:15:59,639 --> 00:16:03,000 +lenient standard of fa use if we're + +378 +00:16:01,120 --> 00:16:06,120 +doing non-commercial research compared + +379 +00:16:03,000 --> 00:16:08,519 +to a company that's doing it + +380 +00:16:06,120 --> 00:16:11,480 +so um most data on the Internet is + +381 +00:16:08,519 --> 00:16:13,279 +copyrighted so right now most model + +382 +00:16:11,480 --> 00:16:16,240 +training not all model training but most + +383 +00:16:13,279 --> 00:16:18,680 +model training is done um assuming fair + +384 +00:16:16,240 --> 00:16:21,800 +use which means that training an AI + +385 +00:16:18,680 --> 00:16:25,800 +model on copyrighted + +386 +00:16:21,800 --> 00:16:29,480 +data is number one it cannot reproduce + +387 +00:16:25,800 --> 00:16:32,240 +the material easily so it's instead of + +388 +00:16:29,480 --> 00:16:33,600 +quoting material directly it's kind of + +389 +00:16:32,240 --> 00:16:35,880 +combining the material together to + +390 +00:16:33,600 --> 00:16:37,519 +create a new thing they're saying it + +391 +00:16:35,880 --> 00:16:40,639 +doesn't diminish the commercial value of + +392 +00:16:37,519 --> 00:16:42,360 +the original uh data um and then the + +393 +00:16:40,639 --> 00:16:44,839 +non-commercial purposes is maybe a + +394 +00:16:42,360 --> 00:16:47,240 +secondary concern since the first two + +395 +00:16:44,839 --> 00:16:50,600 +hold um but there are lawsuits about + +396 +00:16:47,240 --> 00:16:52,360 +this and so um this is a clip from The + +397 +00:16:50,600 --> 00:16:55,560 +New York Times where the New York Times + +398 +00:16:52,360 --> 00:16:58,279 +is suing open AI in Microsoft over uh + 
+399 +00:16:55,560 --> 00:16:59,759 +them training on New York Times articles + +400 +00:16:58,279 --> 00:17:02,040 +and they did do a lot of things like + +401 +00:16:59,759 --> 00:17:05,799 +they demonstrate that you can get uh gp4 + +402 +00:17:02,040 --> 00:17:08,319 +to reproduce uh like um New York Times + +403 +00:17:05,799 --> 00:17:11,480 +articles and they also argue that people + +404 +00:17:08,319 --> 00:17:12,880 +are using this gp4 as a source of news + +405 +00:17:11,480 --> 00:17:14,079 +instead of going to the New York Times + +406 +00:17:12,880 --> 00:17:15,959 +site so they're losing money from + +407 +00:17:14,079 --> 00:17:19,199 +advertising and like other other things + +408 +00:17:15,959 --> 00:17:21,679 +like that um another example is GitHub + +409 +00:17:19,199 --> 00:17:24,000 +co-pilot was sued by people who uh + +410 +00:17:21,679 --> 00:17:26,439 +uploaded software to GitHub and said + +411 +00:17:24,000 --> 00:17:29,039 +that uh basically GitHub didn't have the + +412 +00:17:26,439 --> 00:17:32,400 +right to use it to profit from it and + +413 +00:17:29,039 --> 00:17:34,799 +diminish their uh you know their money + +414 +00:17:32,400 --> 00:17:37,520 +so notably uh on this slide I'm using + +415 +00:17:34,799 --> 00:17:42,039 +fair use I don't know if you've noticed + +416 +00:17:37,520 --> 00:17:44,679 +like I copy I copy pasted an image from + +417 +00:17:42,039 --> 00:17:46,360 +somebody's uh you know website and used + +418 +00:17:44,679 --> 00:17:48,520 +it here that's copyrighted material but + +419 +00:17:46,360 --> 00:17:49,640 +I'm using it because I'm quoting a small + +420 +00:17:48,520 --> 00:17:52,440 +amount of material and I'm not + +421 +00:17:49,640 --> 00:17:54,360 +diminishing the ostial values so um like + +422 +00:17:52,440 --> 00:17:56,320 +fair use is very ubiquitous it's very + +423 +00:17:54,360 --> 00:17:58,480 +important so we can do things like this + +424 +00:17:56,320 --> 00:18:00,840 +but also um it's currently under thep + +425 +00:17:58,480 --> 00:18:00,840 +with this + +426 +00:18:01,280 --> 00:18:07,799 +models so then another question is why + +427 +00:18:04,360 --> 00:18:12,520 +restrict model access why do we number + +428 +00:18:07,799 --> 00:18:14,320 +one make models closed number two um you + +429 +00:18:12,520 --> 00:18:16,159 +know maybe not even describe what we did + +430 +00:18:14,320 --> 00:18:18,880 +in our models and I think there's three + +431 +00:18:16,159 --> 00:18:21,360 +main reasons the first reason is + +432 +00:18:18,880 --> 00:18:23,480 +commercial concerns and so they want to + +433 +00:18:21,360 --> 00:18:25,760 +make money from the models so open AI + +434 +00:18:23,480 --> 00:18:27,520 +makes money from the open AI API Gemini + +435 +00:18:25,760 --> 00:18:29,480 +makes uh sorry Google makes money from + +436 +00:18:27,520 --> 00:18:31,799 +the Gemini API + +437 +00:18:29,480 --> 00:18:33,720 +um and anthropic makes money from the + +438 +00:18:31,799 --> 00:18:34,760 +CLA API these are all models that I'm + +439 +00:18:33,720 --> 00:18:37,640 +going to talk + +440 +00:18:34,760 --> 00:18:39,440 +about number two safety I I think there + +441 +00:18:37,640 --> 00:18:41,640 +are very legitimate concerns where if + +442 +00:18:39,440 --> 00:18:43,840 +you release strong models people might + +443 +00:18:41,640 --> 00:18:47,200 +use them for bad things so you know + +444 +00:18:43,840 --> 00:18:49,120 +creating fake content online or uh doing + +445 +00:18:47,200 --> 00:18:50,720 +spear fishing 
attacks against people and + +446 +00:18:49,120 --> 00:18:52,600 +trying to you know scam them out of + +447 +00:18:50,720 --> 00:18:55,600 +money or things like that so I think + +448 +00:18:52,600 --> 00:18:57,240 +there are legitimate concerns about this + +449 +00:18:55,600 --> 00:18:58,880 +and then the final one is legal + +450 +00:18:57,240 --> 00:19:01,520 +liability so training models on + +451 +00:18:58,880 --> 00:19:03,640 +copyrighted data is a legal gray area as + +452 +00:19:01,520 --> 00:19:05,159 +I just mentioned so they don't want to + +453 +00:19:03,640 --> 00:19:07,159 +say what data they trained on because if + +454 +00:19:05,159 --> 00:19:10,240 +they say what data they trained on then + +455 +00:19:07,159 --> 00:19:11,960 +they might get sued so these are the + +456 +00:19:10,240 --> 00:19:14,960 +three main + +457 +00:19:11,960 --> 00:19:17,960 +concerns so + +458 +00:19:14,960 --> 00:19:19,480 +um anyway this this is a preface and + +459 +00:19:17,960 --> 00:19:23,360 +then I want to go into like the actual + +460 +00:19:19,480 --> 00:19:23,360 +models but are there any questions about + +461 +00:19:24,679 --> 00:19:30,280 +this so if any of you + +462 +00:19:27,280 --> 00:19:31,720 +are working at a company or starting a + +463 +00:19:30,280 --> 00:19:33,120 +company thinking about working at a + +464 +00:19:31,720 --> 00:19:35,440 +company or starting a company this is + +465 +00:19:33,120 --> 00:19:37,320 +something you should be aware of um you + +466 +00:19:35,440 --> 00:19:39,720 +should also be aware of the fact that + +467 +00:19:37,320 --> 00:19:42,360 +you know open AI has been doing sketchy + +468 +00:19:39,720 --> 00:19:46,640 +things for a long time and look where + +469 +00:19:42,360 --> 00:19:48,440 +they are so you know it it's uh like + +470 +00:19:46,640 --> 00:19:51,400 +this is very much a legal gray area and + +471 +00:19:48,440 --> 00:19:53,880 +people are are uh moving through that + +472 +00:19:51,400 --> 00:19:55,640 +gray area but anyway it's worth knowing + +473 +00:19:53,880 --> 00:19:59,480 +that so next I'm going to talk about + +474 +00:19:55,640 --> 00:20:00,679 +open models um so first bird's eye view + +475 +00:19:59,480 --> 00:20:02,600 +I'm going to talk about five different + +476 +00:20:00,679 --> 00:20:04,080 +models and I picked them for a reason + +477 +00:20:02,600 --> 00:20:06,440 +the first two are because they're open + +478 +00:20:04,080 --> 00:20:08,159 +source and fully reproducible namely + +479 +00:20:06,440 --> 00:20:10,360 +pipia + +480 +00:20:08,159 --> 00:20:11,919 +Ino and the reason why I want to talk + +481 +00:20:10,360 --> 00:20:13,120 +about these is we know everything about + +482 +00:20:11,919 --> 00:20:14,679 +them including what data they were + +483 +00:20:13,120 --> 00:20:16,799 +trained on um what their training + +484 +00:20:14,679 --> 00:20:19,080 +procedures are you can download all the + +485 +00:20:16,799 --> 00:20:21,000 +the stuff so you can kind of know uh + +486 +00:20:19,080 --> 00:20:24,840 +exactly what goes into making a strong + +487 +00:20:21,000 --> 00:20:26,520 +model um Pia is uh actually has many + +488 +00:20:24,840 --> 00:20:28,159 +sizes in checkpoints which is pretty + +489 +00:20:26,520 --> 00:20:30,919 +interesting Ando is maybe the strongest + +490 +00:20:28,159 --> 00:20:32,559 +reproduced model at the moment um then + +491 +00:20:30,919 --> 00:20:34,120 +we have open weights models and these + +492 +00:20:32,559 --> 00:20:35,520 +are models that aren't fully open they + 
+491
+00:20:30,919 --> 00:20:34,120
+then we have open-weights models and these
+
+492
+00:20:32,559 --> 00:20:35,520
+are models that aren't fully open they
+
+493
+00:20:34,120 --> 00:20:38,679
+don't disclose everything they don't
+
+494
+00:20:35,520 --> 00:20:40,760
+release their training data or
+
+495
+00:20:38,679 --> 00:20:43,799
+code um but I'm going to talk about
+
+496
+00:20:40,760 --> 00:20:46,520
+Llama 2 which is the most popular um
+
+497
+00:20:43,799 --> 00:20:48,280
+it's also heavily safety tuned, Mistral
+
+498
+00:20:46,520 --> 00:20:50,840
+and Mixtral which is a strong and fast
+
+499
+00:20:48,280 --> 00:20:53,200
+model um it's somewhat multilingual and
+
+500
+00:20:50,840 --> 00:20:55,200
+also Qwen which is a very strong
+
+501
+00:20:53,200 --> 00:20:57,520
+model it's more multilingual and
+
+502
+00:20:55,200 --> 00:21:00,600
+specifically it's good in English and
+
+503
+00:20:57,520 --> 00:21:03,440
+Chinese because it was trained on data like
+
+504
+00:21:00,600 --> 00:21:04,720
+that so first going into Pythia for each of
+
+505
+00:21:03,440 --> 00:21:06,159
+them I'm going to give an overview and
+
+506
+00:21:04,720 --> 00:21:08,880
+then talk about some interesting points
+
+507
+00:21:06,159 --> 00:21:12,320
+about them so Pythia was created by
+
+508
+00:21:08,880 --> 00:21:14,799
+EleutherAI, EleutherAI is one of the first
+
+509
+00:21:12,320 --> 00:21:16,279
+um kind of open-source AI organizations
+
+510
+00:21:14,799 --> 00:21:18,720
+they've created a huge number of really
+
+511
+00:21:16,279 --> 00:21:21,480
+useful things including training code,
+
+512
+00:21:18,720 --> 00:21:25,279
+models, training datasets, and also
+
+513
+00:21:21,480 --> 00:21:28,080
+evaluation that's used pretty widely um
+
+514
+00:21:25,279 --> 00:21:29,760
+the goal of Pythia was basically jointly
+
+515
+00:21:28,080 --> 00:21:32,159
+understanding model training dynamics
+
+516
+00:21:29,760 --> 00:21:36,320
+and scaling and so from that point of
+
+517
+00:21:32,159 --> 00:21:39,120
+view um they released eight model sizes
+
+518
+00:21:36,320 --> 00:21:41,880
+from 70 million parameters to 12 billion
+
+519
+00:21:39,120 --> 00:21:44,960
+parameters for each model size they have
+
+520
+00:21:41,880 --> 00:21:47,440
+154 checkpoints throughout the training
+
+521
+00:21:44,960 --> 00:21:52,880
+process um so they basically trained on
+
+522
+00:21:47,440 --> 00:21:55,960
+300 billion tokens
+
+523
+00:21:52,880 --> 00:21:57,400
+and did checkpoints, you know,
+
+524
+00:21:55,960 --> 00:21:59,000
+periodically during that training
+
+525
+00:21:57,400 --> 00:22:02,400
+process so you can do interesting things
+
+526
+00:21:59,000 --> 00:22:04,400
+like say how quickly do small models
+
+527
+00:22:02,400 --> 00:22:06,919
+learn things how quickly do large models
+
+528
+00:22:04,400 --> 00:22:09,480
+learn things and other stuff like
+
+529
+00:22:06,919 --> 00:22:10,760
+that in terms of the architecture as I
+
+530
+00:22:09,480 --> 00:22:12,760
+mentioned at the very beginning the
+
+531
+00:22:10,760 --> 00:22:14,799
+architectures are actually very similar
+
+532
+00:22:12,760 --> 00:22:17,840
+between them so it's almost easier to
+
+533
+00:22:14,799 --> 00:22:21,080
+point out their differences than
+
+534
+00:22:17,840 --> 00:22:22,559
+their, like, their similarities um
+
+535
+00:22:21,080 --> 00:22:25,400
+actually one thing that's not on the
+
+536
+00:22:22,559 --> 00:22:27,159
+slide is um I mainly focused on the
+
+537
+00:22:25,400 --> 00:22:29,080
+seven billion models because almost
+
+538
+00:22:27,159 --> 00:22:30,320
+everybody trains a seven billion model
+
+539
+00:22:29,080 --> 00:22:32,720
+it's just kind of like one of the
+
+540
+00:22:30,320 --> 00:22:34,640
+standard sizes it's the smallest size of
+
+541
+00:22:32,720 --> 00:22:36,559
+Llama, it's the
+
+542
+00:22:34,640 --> 00:22:40,240
+largest size of OLMo and one of the largest
+
+543
+00:22:36,559 --> 00:22:46,880
+sizes of Pythia. 7 billion models are
+
+544
+00:22:40,240 --> 00:22:52,880
+generally 4096 wide, 32
+
+545
+00:22:46,880 --> 00:22:52,880
+layers deep, with 32 attention heads, and
+
+546
+00:22:54,200 --> 00:23:01,159
+their um hidden layer size is
+
+547
+00:22:57,400 --> 00:23:04,400
+about 8/3 of the size of that
+
+548
+00:23:01,159 --> 00:23:07,360
+and this is kind of a standard Llama 7B
+
+549
+00:23:04,400 --> 00:23:09,240
+architecture um as you scale up to
+
+550
+00:23:07,360 --> 00:23:11,520
+larger sizes you just increase the
+
+551
+00:23:09,240 --> 00:23:13,880
+number of layers, you increase the
+
+552
+00:23:11,520 --> 00:23:16,080
+width, and other things like that so
+
+553
+00:23:13,880 --> 00:23:19,039
+that's very standard um the other
+
+554
+00:23:16,080 --> 00:23:21,320
+standard is everybody uses a Transformer
+
+555
+00:23:19,039 --> 00:23:24,440
+um everybody uses pre-layer norm like I
+
+556
+00:23:21,320 --> 00:23:27,120
+talked about before, everybody uses RoPE
+
+557
+00:23:24,440 --> 00:23:29,520
+embeddings, um almost everybody uses a
+
+558
+00:23:27,120 --> 00:23:30,919
+SwiGLU activation so this is just kind of
+
+559
+00:23:29,520 --> 00:23:31,880
+the standard recipe that almost
+
+560
+00:23:30,919 --> 00:23:35,120
+everybody
+
+561
+00:23:31,880 --> 00:23:37,000
+uses um where things start to change a
+
+562
+00:23:35,120 --> 00:23:38,559
+little bit between the architectures,
+
+563
+00:23:37,000 --> 00:23:40,559
+which arguably might not be very
+
+564
+00:23:38,559 --> 00:23:44,679
+important, is how long the context
+
+565
+00:23:40,559 --> 00:23:48,320
+length is so um Pythia is 2K context
+
+566
+00:23:44,679 --> 00:23:51,360
+compared to Llama 2's 4K context
+
+567
+00:23:48,320 --> 00:23:55,000
+um actually, or sorry,
+
+568
+00:23:51,360 --> 00:24:00,000
+Llama 1 is 2K
+
+569
+00:23:55,000 --> 00:24:02,120
+context and Llama 2 is 4K context um
+
+570
+00:24:00,000 --> 00:24:03,880
+another thing is where do they put
+
+571
+00:24:02,120 --> 00:24:06,240
+biases in the model, most people don't
+
+572
+00:24:03,880 --> 00:24:08,200
+use biases anywhere but sometimes
+
+573
+00:24:06,240 --> 00:24:09,840
+they put them in various places the
+
+574
+00:24:08,200 --> 00:24:11,919
+other thing is the variety of layer norm
+
+575
+00:24:09,840 --> 00:24:13,559
+that people use and Pythia was using
+
+576
+00:24:11,919 --> 00:24:16,240
+standard parametric layer norm but
+
+577
+00:24:13,559 --> 00:24:18,000
+gradually people are stepping back from
+
+578
+00:24:16,240 --> 00:24:21,360
+that and they're using, like, RMSNorm or
+
+579
+00:24:18,000 --> 00:24:22,880
+even non-parametric layer norms so um small
+
+580
+00:24:21,360 --> 00:24:25,559
+architecture differences but almost
+
+581
+00:24:22,880 --> 00:24:29,240
+everybody uses something pretty similar
+
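+As a rough illustration of the "standard recipe" just described, here is a back-of-the-envelope parameter count for a Llama-style 7B configuration. The exact FFN width (8/3 times the model width, usually rounded to a hardware-friendly value such as 11008) and the vocabulary size are illustrative assumptions, not any one model's published numbers:
+
+from dataclasses import dataclass
+
+@dataclass
+class Config7B:
+    d_model: int = 4096
+    n_layers: int = 32
+    n_heads: int = 32
+    d_ffn: int = 11008        # roughly (8/3) * 4096, rounded
+    vocab_size: int = 32000   # Llama-style; Qwen uses ~150K instead
+
+def approx_params(c: Config7B) -> int:
+    attn = 4 * c.d_model * c.d_model   # Q, K, V, O projections
+    ffn = 3 * c.d_model * c.d_ffn      # SwiGLU has gate/up/down matrices
+    embed = c.vocab_size * c.d_model   # token embeddings
+    return c.n_layers * (attn + ffn) + embed
+
+print(approx_params(Config7B()) / 1e9)  # ~6.6 billion, i.e. "7B class"
+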
+582
+00:24:25,559 --> 00:24:31,960
+um the data this was trained on
+
+583
+00:24:29,240 --> 00:24:34,600
+300 billion tokens of the Pile which
+
+584
+00:24:31,960 --> 00:24:37,440
+is on the next slide but one interesting
+
+585
+00:24:34,600 --> 00:24:39,000
+thing is that they also did a deduplicated
+
+586
+00:24:37,440 --> 00:24:43,320
+training run on
+
+587
+00:24:39,000 --> 00:24:47,679
+207
+
+588
+00:24:43,320 --> 00:24:50,559
+billion tokens and um the idea is that
+
+589
+00:24:47,679 --> 00:24:53,039
+they wanted to test how
+
+590
+00:24:50,559 --> 00:24:54,919
+important it is to deduplicate, how much do
+
+591
+00:24:53,039 --> 00:24:56,279
+you gain by deduplicating in terms of
+
+592
+00:24:54,919 --> 00:24:59,559
+training
+
+593
+00:24:56,279 --> 00:25:01,520
+efficiency and um
+
+594
+00:24:59,559 --> 00:25:04,760
+they have different learning rates for
+
+595
+00:25:01,520 --> 00:25:08,640
+different model sizes, the 7B model is
+
+596
+00:25:04,760 --> 00:25:11,760
+1.2e-4, in contrast Llama is
+
+597
+00:25:08,640 --> 00:25:13,120
+3e-4, so this is a potentially big
+
+598
+00:25:11,760 --> 00:25:16,840
+change because the learning rate is
+
+599
+00:25:13,120 --> 00:25:18,880
+actually less than half the size here um for the
+
+600
+00:25:16,840 --> 00:25:20,559
+batch size they use 2 million tokens and
+
+601
+00:25:18,880 --> 00:25:23,600
+actually Llama 2 uses four million
+
+602
+00:25:20,559 --> 00:25:26,520
+tokens for the batch size so um there
+
+603
+00:25:23,600 --> 00:25:29,000
+are some small differences
+
+604
+00:25:26,520 --> 00:25:31,480
+there so next I'd like to talk
+
+605
+00:25:29,000 --> 00:25:33,760
+about the Pile um this is kind of the
+
+606
+00:25:31,480 --> 00:25:36,279
+original open dataset for training
+
+607
+00:25:33,760 --> 00:25:37,960
+large language models um that being said
+
+608
+00:25:36,279 --> 00:25:42,159
+it's a really nice dataset made out of
+
+609
+00:25:37,960 --> 00:25:47,039
+lots of different types of data and
+
+610
+00:25:42,159 --> 00:25:49,960
+namely it contains academic data so
+
+611
+00:25:47,039 --> 00:25:52,559
+that includes things like PubMed, arXiv,
+
+612
+00:25:49,960 --> 00:25:55,240
+FreeLaw, the US Patent Office, other
+
+613
+00:25:52,559 --> 00:25:57,000
+stuff like that, it also has
+
+614
+00:25:55,240 --> 00:26:00,080
+internet data so this is data that's
+
+615
+00:25:57,000 --> 00:26:02,840
+just scraped from parts of the internet
+
+616
+00:26:00,080 --> 00:26:05,799
+but also Stack Exchange and
+
+617
+00:26:02,840 --> 00:26:09,480
+Wikipedia um it also has some prose so
+
+618
+00:26:05,799 --> 00:26:12,200
+these are, like, book datasets, it has
+
+619
+00:26:09,480 --> 00:26:15,640
+some code datasets, and it has some, like,
+
+620
+00:26:12,200 --> 00:26:18,799
+subtitle dialogue datasets in it so this
+
+621
+00:26:15,640 --> 00:26:22,399
+overall is 800 gigabytes or about 300
+
+622
+00:26:18,799 --> 00:26:22,399
+billion tokens according to the tokenizer
+
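+A quick sanity check on those Pile numbers: 800 gigabytes of raw text at about 300 billion tokens works out to roughly 2.7 bytes per token, which is in the usual ballpark for BPE-style tokenizers on English-heavy corpora:
+
+corpus_bytes = 800e9   # ~800 GB of raw text
+corpus_tokens = 300e9  # ~300B tokens, per the talk
+print(corpus_bytes / corpus_tokens)  # ~2.67 bytes per token
+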
+623
+00:26:23,360 --> 00:26:28,080
+so some of the findings from the
+
+624
+00:26:25,760 --> 00:26:30,919
+Pythia paper in addition to just being
+
+625
+00:26:28,080 --> 00:26:33,399
+like one of the original strong open
+
+626
+00:26:30,919 --> 00:26:36,279
+language models is they have some
+
+627
+00:26:33,399 --> 00:26:38,600
+interesting analysis into um model
+
+628
+00:26:36,279 --> 00:26:40,960
+memorization and how quickly models
+
+629
+00:26:38,600 --> 00:26:44,080
+learn based on the number of tokens
+
+630
+00:26:40,960 --> 00:26:45,520
+that you show them and this graph is
+
+631
+00:26:44,080 --> 00:26:47,520
+maybe a little bit hard to see from the
+
+632
+00:26:45,520 --> 00:26:49,440
+back so I'll interpret it, the left side
+
+633
+00:26:47,520 --> 00:26:50,840
+is one of their smaller models, 160
+
+634
+00:26:49,440 --> 00:26:54,880
+million, the right side is their biggest
+
+635
+00:26:50,840 --> 00:26:57,799
+model, 12 billion um the different lines
+
+636
+00:26:54,880 --> 00:26:58,840
+here are different steps of the training
+
+637
+00:26:57,799 --> 00:27:03,120
+process
+
+638
+00:26:58,840 --> 00:27:09,640
+so like 13,000 steps,
+
+639
+00:27:03,120 --> 00:27:13,840
+39,000 steps, etc. etc. and
+
+640
+00:27:09,640 --> 00:27:18,240
+the x-axis here is the frequency of a
+
+641
+00:27:13,840 --> 00:27:21,679
+fact in the
+
+642
+00:27:18,240 --> 00:27:24,640
+training data and the y-axis is question
+
+643
+00:27:21,679 --> 00:27:29,159
+answering accuracy about that fact and
+
+644
+00:27:24,640 --> 00:27:30,919
+so what this is basically showing is
+
+645
+00:27:29,159 --> 00:27:35,679
+as you scale up the
+
+646
+00:27:30,919 --> 00:27:38,520
+model um the larger models learn faster
+
+647
+00:27:35,679 --> 00:27:41,120
+um up to a point so like right here you
+
+648
+00:27:38,520 --> 00:27:44,519
+see the 2.8 billion model is about the
+
+649
+00:27:41,120 --> 00:27:46,080
+same as the 12 billion model at earlier
+
+650
+00:27:44,519 --> 00:27:48,080
+parts of the training
+
+651
+00:27:46,080 --> 00:27:51,000
+process but as you get later in the
+
+652
+00:27:48,080 --> 00:27:54,200
+training process the 12 billion model is
+
+653
+00:27:51,000 --> 00:27:57,279
+like memorizing and being able to recall
+
+654
+00:27:54,200 --> 00:27:58,840
+more facts uh so like right at the very
+
+655
+00:27:57,279 --> 00:28:02,519
+beginning you need to scale up to about
+
+656
+00:27:58,840 --> 00:28:05,840
+2.8 billion to learn efficiently uh but
+
+657
+00:28:02,519 --> 00:28:07,799
+at the end this model is like better
+
+658
+00:28:05,840 --> 00:28:10,399
+further on
+
+659
+00:28:07,799 --> 00:28:12,000
+so this is really nice, all of this, all
+
+660
+00:28:10,399 --> 00:28:14,240
+of these checkpoints and all this data is
+
+661
+00:28:12,000 --> 00:28:15,840
+open, they even made the data loaders so
+
+662
+00:28:14,240 --> 00:28:17,360
+it's reproducible so you can look at the
+
+663
+00:28:15,840 --> 00:28:19,559
+actual data that the model was trained
+
+664
+00:28:17,360 --> 00:28:21,000
+on um at each of the checkpoints so if
+
+665
+00:28:19,559 --> 00:28:24,320
+you want to do this sort of analysis
+
+666
+00:28:21,000 --> 00:28:27,120
+this is a good set of models to look
+
+667
+00:28:24,320 --> 00:28:28,720
+at um another thing that they did is
+
+668
+00:28:27,120 --> 00:28:31,120
+they actually did interventions on the
+
+669
+00:28:28,720 --> 00:28:35,640
+data so they um tried to intervene on
+
+670
+00:28:31,120 --> 00:28:37,279
+the data to modify it because uh male or
+
+671
+00:28:35,640 --> 00:28:38,840
+masculine pronouns were much more
+
+672
+00:28:37,279 --> 00:28:42,000
+frequent than feminine pronouns in the
+
+673
+00:28:38,840 --> 00:28:43,919
+data so they intervened on the data um
+
+674
+00:28:42,000 --> 00:28:45,559
+to try to balance out the distribution
+
+675
+00:28:43,919 --> 00:28:48,000
+of masculine and feminine pronouns and
+
+676
+00:28:45,559 --> 00:28:49,559
+demonstrated that the model became less
+
+677
+00:28:48,000 --> 00:28:52,080
+biased towards generating masculine
+
+678
+00:28:49,559 --> 00:28:55,480
+pronouns later so they also were able to
+
+679
+00:28:52,080 --> 00:28:55,480
+do those sorts of intervention
+
+680
+00:28:55,919 --> 00:29:00,039
+studies um any questions about
+
+681
+00:29:00,519 --> 00:29:07,919
+Pythia? Okay um next I want to go to OLMo. OLMo is
+
+682
+00:29:04,720 --> 00:29:10,279
+a more recent model um Pythia I think
+
+683
+00:29:07,919 --> 00:29:13,200
+came out around a year ago, OLMo is very
+
+684
+00:29:10,279 --> 00:29:15,440
+recent, about a month ago, and um this was
+
+685
+00:29:13,200 --> 00:29:18,360
+created by AI2, the Allen Institute for
+
+686
+00:29:15,440 --> 00:29:20,440
+AI, one thing you'll notice is the two um
+
+687
+00:29:18,360 --> 00:29:22,279
+completely open models that I'm talking
+
+688
+00:29:20,440 --> 00:29:24,799
+about both came from nonprofit
+
+689
+00:29:22,279 --> 00:29:28,640
+organizations um so EleutherAI is
+
+690
+00:29:24,799 --> 00:29:30,039
+nonprofit, AI2 is nonprofit so
+
+691
+00:29:28,640 --> 00:29:31,519
+they're maybe a little bit less worried
+
+692
+00:29:30,039 --> 00:29:34,919
+about people trying to sue them for lots
+
+693
+00:29:31,519 --> 00:29:36,720
+of money for fair use violations uh so
+
+694
+00:29:34,919 --> 00:29:38,120
+uh that's the cynical point of view, the
+
+695
+00:29:36,720 --> 00:29:39,679
+non-cynical point of view is they
+
+696
+00:29:38,120 --> 00:29:42,279
+have nothing to lose by creating a
+
+697
+00:29:39,679 --> 00:29:44,240
+better model, uh, by having other people
+
+698
+00:29:42,279 --> 00:29:47,039
+create a better model so um they're
+
+699
+00:29:44,240 --> 00:29:50,840
+willing to do this for open, uh, good
+
+700
+00:29:47,039 --> 00:29:54,080
+science um their goal is better science
+
+701
+00:29:50,840 --> 00:29:55,880
+of state-of-the-art LMs and uh some of the
+
+702
+00:29:54,080 --> 00:29:57,600
+unique features are top performance of a
+
+703
+00:29:55,880 --> 00:29:59,840
+fully documented model and they also
+
+704
+00:29:57,600 --> 00:30:02,960
+have instruction-tuned models,
+
+705
+00:29:59,840 --> 00:30:04,960
+etc. looking at the parameters um
+
+706
+00:30:02,960 --> 00:30:06,240
+basically similar to Llama, the one big
+
+707
+00:30:04,960 --> 00:30:08,440
+difference is they're using
+
+708
+00:30:06,240 --> 00:30:10,440
+non-parametric layer norm instead of RMS
+
+709
+00:30:08,440 --> 00:30:13,640
+Norm so this is basically layer norm
+
+710
+00:30:10,440 --> 00:30:15,960
+with no parameters whatsoever um they
+
+711
+00:30:13,640 --> 00:30:18,880
+didn't super clearly justify why
+
+712
+00:30:15,960 --> 00:30:21,760
+they decided to do this, one difference
+
+713
+00:30:18,880 --> 00:30:25,519
+from Pythia, uh this was actually trained on
+
+714
+00:30:21,760 --> 00:30:29,559
+2.46 trillion tokens uh so compare this
+
+715
+00:30:25,519 --> 00:30:32,600
+to Pythia which was trained on 300
+
+716
+00:30:29,559 --> 00:30:34,480
+billion tokens and so they basically
+
+717
+00:30:32,600 --> 00:30:36,120
+trained it for a lot longer, they trained
+
+718
+00:30:34,480 --> 00:30:37,960
+it on something called the Dolma corpus
+
+719
+00:30:36,120 --> 00:30:41,480
+which they also created at
+
+720
+00:30:37,960 --> 00:30:44,279
+AI2 um actually I think this might be
+
+721
+00:30:41,480 --> 00:30:47,279
+wrong uh so just ignore that, that was a
+
+722
+00:30:44,279 --> 00:30:49,760
+copy-paste typo so um they
+
+723
+00:30:47,279 --> 00:30:52,039
+always use 3e-4 as the
+
+724
+00:30:49,760 --> 00:30:53,679
+learning rate, which is the same as
+
+725
+00:30:52,039 --> 00:30:56,039
+Llama, and the batch size is 4 million
+
+726
+00:30:53,679 --> 00:30:59,960
+tokens which is also the same as
+
+727
+00:30:56,039 --> 00:31:02,000
+Llama
+
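+A minimal illustration of the normalization variants mentioned above (not OLMo's actual code): "non-parametric" layer norm is just layer norm with no learnable scale or offset, while RMSNorm drops the mean-centering and keeps only a learnable scale. In PyTorch terms:
+
+import torch
+import torch.nn as nn
+
+d = 4096
+ln_parametric = nn.LayerNorm(d)                               # Pythia-style
+ln_nonparametric = nn.LayerNorm(d, elementwise_affine=False)  # OLMo-style
+
+class RMSNorm(nn.Module):  # Llama-style
+    def __init__(self, d, eps=1e-6):
+        super().__init__()
+        self.weight = nn.Parameter(torch.ones(d))
+        self.eps = eps
+    def forward(self, x):
+        rms = x.pow(2).mean(-1, keepdim=True).add(self.eps).rsqrt()
+        return self.weight * x * rms
+
+x = torch.randn(2, 8, d)
+print(ln_nonparametric(x).shape, RMSNorm(d)(x).shape)
+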
+728
+00:30:59,960 --> 00:31:04,320
+so the Dolma corpus that they created is actually pretty similar to the Pile
+
+729
+00:31:02,000 --> 00:31:07,320
+but it's a larger corpus, it's three
+
+730
+00:31:04,320 --> 00:31:09,240
+trillion tokens, this is also fully open
+
+731
+00:31:07,320 --> 00:31:11,480
+so you can download it from Hugging Face
+
+732
+00:31:09,240 --> 00:31:15,399
+uh if you can find some disks to put
+
+733
+00:31:11,480 --> 00:31:19,200
+three trillion tokens on um
+
+734
+00:31:15,399 --> 00:31:21,080
+so uh another thing is that they have a
+
+735
+00:31:19,200 --> 00:31:23,360
+data processing pipeline of language
+
+736
+00:31:21,080 --> 00:31:26,240
+filtering, quality filtering, content
+
+737
+00:31:23,360 --> 00:31:28,399
+filtering, deduplication, multi-source
+
+738
+00:31:26,240 --> 00:31:31,440
+mixing, and tokenization
+
+739
+00:31:28,399 --> 00:31:33,279
+and so the nice thing about this is a
+
+740
+00:31:31,440 --> 00:31:35,639
+lot of this stuff is usually proprietary
+
+741
+00:31:33,279 --> 00:31:38,240
+for most language model creators so
+
+742
+00:31:35,639 --> 00:31:39,600
+if you want to see all of the, like, data
+
+743
+00:31:38,240 --> 00:31:41,039
+processing pipeline that goes into
+
+744
+00:31:39,600 --> 00:31:42,799
+training a model this is a pretty good
+
+745
+00:31:41,039 --> 00:31:45,320
+example of
+
+746
+00:31:42,799 --> 00:31:48,120
+that um the document types that are
+
+747
+00:31:45,320 --> 00:31:51,080
+included are the Common Crawl and so the
+
+748
+00:31:48,120 --> 00:31:53,919
+Common Crawl is just um data crawled
+
+749
+00:31:51,080 --> 00:31:56,760
+from the Internet, it's about 2.2
+
+750
+00:31:53,919 --> 00:32:00,039
+trillion tokens uh they also have the
+
+751
+00:31:56,760 --> 00:32:03,399
+Stack which is um lots of code, about 400
+
+752
+00:32:00,039 --> 00:32:09,120
+billion tokens of code um C4 which is
+
+753
+00:32:03,399 --> 00:32:13,039
+also uh web data uh Reddit um STEM
+
+754
+00:32:09,120 --> 00:32:16,960
+papers, books, and uh Wikipedia
+
+755
+00:32:13,039 --> 00:32:19,039
+and encyclopedic text so um you can see that it
+
+756
+00:32:16,960 --> 00:32:21,440
+has a fairly large amount of coverage
+
+757
+00:32:19,039 --> 00:32:24,480
+although mostly in English
+
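+A hedged sketch of what a pipeline with those Dolma stages can look like, with one function per stage named in the talk; every function here is a made-up placeholder for illustration, not Dolma's actual tooling:
+
+import hashlib
+
+def language_filter(doc): return doc if doc.get("lang") == "en" else None
+def quality_filter(doc):  return doc if len(doc["text"].split()) > 50 else None
+def content_filter(doc):  return doc if "BAD_WORD" not in doc["text"] else None
+
+seen_hashes = set()
+def dedupe(doc):  # exact-match dedup; real pipelines also do fuzzy dedup
+    h = hashlib.md5(doc["text"].encode()).hexdigest()
+    if h in seen_hashes:
+        return None
+    seen_hashes.add(h)
+    return doc
+
+def pipeline(docs, stages=(language_filter, quality_filter, content_filter, dedupe)):
+    for doc in docs:
+        for stage in stages:
+            doc = stage(doc)
+            if doc is None:
+                break  # document was filtered out by this stage
+        else:
+            yield doc  # survived every stage
+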
+758
+00:32:21,440 --> 00:32:26,799
+um so some findings from OLMo
+
+759
+00:32:24,480 --> 00:32:29,440
+that I found interesting um number one
+
+760
+00:32:26,799 --> 00:32:31,279
+it has competitive average performance
+
+761
+00:32:29,440 --> 00:32:34,320
+so as I mentioned I think this is the
+
+762
+00:32:31,279 --> 00:32:38,519
+first fully open and documented language
+
+763
+00:32:34,320 --> 00:32:40,639
+model in the 7 billion range that is
+
+764
+00:32:38,519 --> 00:32:43,360
+competitive with all the other uh kind
+
+765
+00:32:40,639 --> 00:32:47,080
+of like less open models in this range
+
+766
+00:32:43,360 --> 00:32:49,200
+so uh for example uh Llama 2 is 70.5
+
+767
+00:32:47,080 --> 00:32:51,840
+average on all of the datasets that
+
+768
+00:32:49,200 --> 00:32:53,960
+they're evaluating on, Falcon is
+
+769
+00:32:51,840 --> 00:32:58,000
+70.3, MPT is
+
+770
+00:32:53,960 --> 00:33:00,000
+69.8, and OLMo is 69.3 so it's not a
+
+771
+00:32:58,000 --> 00:33:04,639
+slouch with respect to accuracy compared
+
+772
+00:33:00,000 --> 00:33:06,399
+to Pythia which had 63 um much of the
+
+773
+00:33:04,639 --> 00:33:09,120
+issue with Pythia could just be that they
+
+774
+00:33:06,399 --> 00:33:12,080
+didn't train for long enough and some
+
+775
+00:33:09,120 --> 00:33:15,039
+evidence of this is
+
+776
+00:33:12,080 --> 00:33:17,000
+um where they measured performance
+
+777
+00:33:15,039 --> 00:33:18,880
+constantly as they train for longer so
+
+778
+00:33:17,000 --> 00:33:21,440
+the left side is training on 500 billion
+
+779
+00:33:18,880 --> 00:33:24,080
+tokens which is already more than what
+
+780
+00:33:21,440 --> 00:33:25,840
+Pythia trained on, the right side is
+
+781
+00:33:24,080 --> 00:33:30,360
+around
+
+782
+00:33:25,840 --> 00:33:32,679
+2.4 or 2.5 trillion tokens and you can see
+
+783
+00:33:30,360 --> 00:33:34,440
+interestingly that the numbers are just
+
+784
+00:33:32,679 --> 00:33:36,760
+continuing to increase as they train for
+
+785
+00:33:34,440 --> 00:33:39,480
+longer so it seems that training for
+
+786
+00:33:36,760 --> 00:33:43,679
+longer and longer just kind of
+
+787
+00:33:39,480 --> 00:33:47,000
+helps um one question is whether they're
+
+788
+00:33:43,679 --> 00:33:48,679
+like overfitting to the dataset, like
+
+789
+00:33:47,000 --> 00:33:52,000
+is any of the test data included in
+
+790
+00:33:48,679 --> 00:33:53,799
+their training data here um they did do
+
+791
+00:33:52,000 --> 00:33:57,440
+deduplication to some extent to try to
+
+792
+00:33:53,799 --> 00:33:59,320
+remove the test data so um I think
+
+793
+00:33:57,440 --> 00:34:00,919
+it's quite probable that these are
+
+794
+00:33:59,320 --> 00:34:02,720
+real gains and if they train for longer
+
+795
+00:34:00,919 --> 00:34:07,559
+they might get an even better model but
+
+796
+00:34:02,720 --> 00:34:07,559
+um I'm not, you know, 100% sure about
+
+797
+00:34:07,679 --> 00:34:12,639
+that cool
+
+798
+00:34:10,480 --> 00:34:14,359
+um yeah one other thing that I
+
+799
+00:34:12,639 --> 00:34:16,119
+noticed which might be a
+
+800
+00:34:14,359 --> 00:34:18,119
+little bit interesting is um all of
+
+801
+00:34:16,119 --> 00:34:20,240
+these, which I didn't mention here, all
+
+802
+00:34:18,119 --> 00:34:21,760
+of these have a learning rate schedule
+
+803
+00:34:20,240 --> 00:34:23,679
+and typically they have a learning rate
+
+804
+00:34:21,760 --> 00:34:25,760
+schedule where they do this standard
+
+805
+00:34:23,679 --> 00:34:29,159
+warmup where they increase and then they
+
+806
+00:34:25,760 --> 00:34:30,960
+decrease but they stop decreasing at a
+
+807
+00:34:29,159 --> 00:34:34,040
+floor and usually that floor is about
+
+808
+00:34:30,960 --> 00:34:36,720
+one-tenth the size of the um original
+
+809
+00:34:34,040 --> 00:34:38,520
+learning rate so if they start out at 3e-4
+
+810
+00:34:36,720 --> 00:34:41,919
+they'll decrease it but
+
+811
+00:34:38,520 --> 00:34:43,960
+only to 3e-5 and then they're constant so
+
+812
+00:34:41,919 --> 00:34:46,079
+that might be another good thing to point
+
+813
+00:34:43,960 --> 00:34:46,079
+out
+
+814
+00:34:46,480 --> 00:34:51,240
+cool any questions about this
+
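+A sketch of the schedule just described: linear warmup, then a cosine decay that bottoms out at a floor of about one-tenth the peak learning rate (so a peak of 3e-4 decays to 3e-5 and then stays constant). The warmup length and step counts are illustrative assumptions:
+
+import math
+
+def lr_at(step, total_steps, warmup=2000, peak=3e-4, floor_ratio=0.1):
+    floor = peak * floor_ratio
+    if step < warmup:
+        return peak * step / warmup            # linear warmup
+    progress = (step - warmup) / max(1, total_steps - warmup)
+    if progress >= 1.0:
+        return floor                           # constant after decay ends
+    cosine = 0.5 * (1 + math.cos(math.pi * progress))
+    return floor + (peak - floor) * cosine     # cosine decay toward the floor
+
+print(lr_at(0, 100_000), lr_at(2_000, 100_000), lr_at(100_000, 100_000))
+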
+815
+00:34:51,320 --> 00:34:58,599
+okay um so now I'll get into Llama 2 um
+
+816
+00:34:56,560 --> 00:35:00,200
+Llama 2, you know, is a model that
+
+817
+00:34:58,599 --> 00:35:04,400
+probably most people have heard about, it
+
+818
+00:35:00,200 --> 00:35:07,599
+was created by Meta um it's one of the
+
+819
+00:35:04,400 --> 00:35:09,480
+uh strongest open language models now
+
+820
+00:35:07,599 --> 00:35:10,839
+although arguably there might be
+
+821
+00:35:09,480 --> 00:35:15,000
+stronger open language
+
+822
+00:35:10,839 --> 00:35:18,400
+models and the goal is a strong and safe
+
+823
+00:35:15,000 --> 00:35:21,320
+open LM and they have base and chat
+
+824
+00:35:18,400 --> 00:35:23,400
+versions of it and some unique features
+
+825
+00:35:21,320 --> 00:35:24,680
+are, I think this is the open model with
+
+826
+00:35:23,400 --> 00:35:30,119
+the strongest
+
+827
+00:35:24,680 --> 00:35:30,119
+safety safeguards so
+
+828
+00:35:30,200 --> 00:35:35,079
+if I were to pick one model that I
+
+829
+00:35:33,079 --> 00:35:37,200
+wanted to use in an actual system that
+
+830
+00:35:35,079 --> 00:35:39,599
+was directly conversing with users I
+
+831
+00:35:37,200 --> 00:35:41,920
+would probably pick this one over
+
+832
+00:35:39,599 --> 00:35:43,760
+something like uh Mistral even though
+
+833
+00:35:41,920 --> 00:35:46,599
+Mistral shows superior performance some
+
+834
+00:35:43,760 --> 00:35:48,680
+of the time um it might say things that
+
+835
+00:35:46,599 --> 00:35:52,000
+you don't want it to be saying to, like,
+
+836
+00:35:48,680 --> 00:35:55,520
+users so I think that's one of the
+
+837
+00:35:52,000 --> 00:35:56,880
+nice things about Llama 2 so I've been
+
+838
+00:35:55,520 --> 00:35:58,280
+comparing everything else to it so
+
+839
+00:35:56,880 --> 00:36:00,560
+that's pretty normal
+
+840
+00:35:58,280 --> 00:36:03,160
+um one thing about the data is the data
+
+841
+00:36:00,560 --> 00:36:04,520
+is not open, they didn't say what data
+
+842
+00:36:03,160 --> 00:36:06,960
+they trained on for reasons that I
+
+843
+00:36:04,520 --> 00:36:08,960
+talked about before um what they did say
+
+844
+00:36:06,960 --> 00:36:12,400
+is it was trained on public sources,
+
+845
+00:36:08,960 --> 00:36:14,240
+upsampling the most factual sources so
+
+846
+00:36:12,400 --> 00:36:17,640
+um that's what they
+
+847
+00:36:14,240 --> 00:36:19,240
+said the Llama 1 paper has more
+
+848
+00:36:17,640 --> 00:36:20,760
+information and so I'll talk about what
+
+849
+00:36:19,240 --> 00:36:22,400
+they did in the Llama 1 paper and we
+
+850
+00:36:20,760 --> 00:36:24,920
+can maybe extrapolate that they did
+
+851
+00:36:22,400 --> 00:36:26,560
+something similar in the Llama 2 paper
+
+852
+00:36:24,920 --> 00:36:28,200
+um and then the total training amount is
+
+853
+00:36:26,560 --> 00:36:30,079
+2 trillion tokens so that's actually
+
+854
+00:36:28,200 --> 00:36:32,680
+less
+
+855
+00:36:30,079 --> 00:36:34,520
+than OLMo's um so if we look at the Llama 1
+
+856
+00:36:32,680 --> 00:36:36,319
+training data it looks a little bit like,
+
+857
+00:36:34,520 --> 00:36:38,839
+it looks very much like the OLMo training
+
+858
+00:36:36,319 --> 00:36:41,200
+data, it's Common Crawl, C4, GitHub,
+
+859
+00:36:38,839 --> 00:36:45,160
+Wikipedia, books, arXiv, Stack
+
+860
+00:36:41,200 --> 00:36:46,400
+Exchange um and one thing you'll notice
+
+861
+00:36:45,160 --> 00:36:49,200
+is that they
+
+862
+00:36:46,400 --> 00:36:51,599
+upsampled Wikipedia and books and
+
+863
+00:36:49,200 --> 00:36:53,319
+downsampled GitHub compared
+
+864
+00:36:51,599 --> 00:36:57,000
+to the amount of data that they actually
+
+865
+00:36:53,319 --> 00:37:00,760
+had and so they did 2.4 epochs over
+
+866
+00:36:57,000 --> 00:37:03,040
+Wikipedia, 2.2 epochs over books, and only
+
+867
+00:37:00,760 --> 00:37:05,880
+one epoch over, like, the standard web
+
+868
+00:37:03,040 --> 00:37:08,240
+data and arXiv and Stack Exchange and
+
+869
+00:37:05,880 --> 00:37:09,760
+0.6 epochs over the GitHub data that they
+
+870
+00:37:08,240 --> 00:37:11,520
+had so
+
+871
+00:37:09,760 --> 00:37:13,800
+obviously
+
+872
+00:37:11,520 --> 00:37:15,520
+they thought that this Wikipedia and
+
+873
+00:37:13,800 --> 00:37:17,040
+books data was more valuable for some
+
+874
+00:37:15,520 --> 00:37:20,560
+reason and they really wanted the model
+
+875
+00:37:17,040 --> 00:37:22,319
+to learn well from it so I think um
+
+876
+00:37:20,560 --> 00:37:24,240
+when they say that they upsampled
+
+877
+00:37:22,319 --> 00:37:27,960
+factual data I'm assuming that that's
+
+878
+00:37:24,240 --> 00:37:27,960
+also what they did in Llama 2
+
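+To see what those epoch counts mean as a sampling mix, the effective share of each source in the training stream is its raw size times its epoch count, normalized. The raw sizes below are made-up placeholders just to show the arithmetic, not Llama's actual numbers:
+
+raw_gb = {"web": 3000, "github": 300, "wikipedia": 80, "books": 85}  # hypothetical
+epochs = {"web": 1.0, "github": 0.6, "wikipedia": 2.4, "books": 2.2}
+
+effective = {k: raw_gb[k] * epochs[k] for k in raw_gb}
+total = sum(effective.values())
+for k, v in effective.items():
+    print(f"{k}: {100 * v / total:.1f}% of training tokens")
+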
+879
+00:37:29,440 --> 00:37:33,640
+so the next thing um that's
+
+880
+00:37:35,960 --> 00:37:43,160
+yeah uh what does it need to have
+
+881
+00:37:40,280 --> 00:37:45,400
+like... oh um yeah actually that's a really
+
+882
+00:37:43,160 --> 00:37:47,960
+good question so why are epochs not integer
+
+883
+00:37:45,400 --> 00:37:50,240
+values, there's actually no reason at all
+
+884
+00:37:47,960 --> 00:37:52,040
+that you should do, you know, an integer
+
+885
+00:37:50,240 --> 00:37:54,760
+value of epochs, you can always save out a
+
+886
+00:37:52,040 --> 00:37:57,560
+checkpoint every, you know, 10,000 steps
+
+887
+00:37:54,760 --> 00:37:59,200
+or something so I'd actually encourage
+
+888
+00:37:57,560 --> 00:38:02,040
+people to get away from saving out
+
+889
+00:37:59,200 --> 00:38:03,640
+checkpoints every epoch because that
+
+890
+00:38:02,040 --> 00:38:05,319
+kind of discourages you from making your
+
+891
+00:38:03,640 --> 00:38:07,160
+training data larger because if you make
+
+892
+00:38:05,319 --> 00:38:09,359
+your training data larger
+
+893
+00:38:07,160 --> 00:38:11,760
+you'll think oh training takes forever
+
+894
+00:38:09,359 --> 00:38:13,480
+um because it takes forever to do an
+
+895
+00:38:11,760 --> 00:38:16,599
+epoch but in reality you can just save
+
+896
+00:38:13,480 --> 00:38:18,760
+out, you know, periodically and um
+
+897
+00:38:16,599 --> 00:38:21,319
+keep the checkpoints from earlier
+
+898
+00:38:18,760 --> 00:38:22,680
+so many language models don't train on
+
+899
+00:38:21,319 --> 00:38:24,480
+all the data on the web because it would
+
+900
+00:38:22,680 --> 00:38:25,800
+just be too expensive to do so despite
+
+901
+00:38:24,480 --> 00:38:27,640
+the fact that they have all the data on
+
+902
+00:38:25,800 --> 00:38:29,079
+the web
+
+903
+00:38:27,640 --> 00:38:31,000
+but very good question though,
+
+904
+00:38:29,079 --> 00:38:34,560
+that's an important point
+
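+The step-based checkpointing habit suggested here, as a minimal sketch; `train_step` and the Hugging Face-style `save_pretrained` call are assumptions about your training setup:
+
+SAVE_EVERY = 10_000  # optimizer steps, independent of dataset size
+
+def train(model, train_batches, train_step, total_steps):
+    for step, batch in enumerate(train_batches, start=1):
+        train_step(model, batch)
+        if step % SAVE_EVERY == 0:
+            # keep every checkpoint rather than overwriting the last one
+            model.save_pretrained(f"checkpoints/step{step}")
+        if step >= total_steps:
+            break
+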
+905
+00:38:31,000 --> 00:38:36,280
+um okay so now I'd like to talk a
+
+906
+00:38:34,560 --> 00:38:39,440
+little bit about the safety tuning that
+
+907
+00:38:36,280 --> 00:38:42,359
+goes into uh the Llama models, I might
+
+908
+00:38:39,440 --> 00:38:45,640
+talk a little bit more about this um
+
+909
+00:38:42,359 --> 00:38:48,960
+later but I think I'll talk
+
+910
+00:38:45,640 --> 00:38:51,480
+about it now um basically the Llama 2
+
+911
+00:38:48,960 --> 00:38:54,200
+developers put a lot of effort into
+
+912
+00:38:51,480 --> 00:38:56,400
+training the model to be safe because um
+
+913
+00:38:54,200 --> 00:38:59,599
+you know they're a big company and they
+
+914
+00:38:56,400 --> 00:39:01,200
+don't want any PR disasters um
+
+915
+00:38:59,599 --> 00:39:02,680
+and also you know they want an actual
+
+916
+00:39:01,200 --> 00:39:04,960
+safe model that they can use in
+
+917
+00:39:02,680 --> 00:39:08,240
+their products so I think they have the
+
+918
+00:39:04,960 --> 00:39:10,880
+dual, you know, dual motivation
+
+919
+00:39:08,240 --> 00:39:13,200
+there the first thing that they did was
+
+920
+00:39:10,880 --> 00:39:15,960
+they collected lots of data for reward
+
+921
+00:39:13,200 --> 00:39:17,520
+modeling, and reward modeling, what
+
+922
+00:39:15,960 --> 00:39:19,720
+they're calling reward modeling
+
+923
+00:39:17,520 --> 00:39:23,720
+is basically preference modeling so they
+
+924
+00:39:19,720 --> 00:39:26,359
+have, you know, multiple outputs where the
+
+925
+00:39:23,720 --> 00:39:28,359
+two outputs are somehow ranked for
+
+926
+00:39:26,359 --> 00:39:29,960
+preferences and I talked about this when
+
+927
+00:39:28,359 --> 00:39:31,839
+I was talking about DPO in the
+
+928
+00:39:29,960 --> 00:39:35,720
+reinforcement learning class for
+
+929
+00:39:31,839 --> 00:39:38,480
+example um a lot of these actually exist
+
+930
+00:39:35,720 --> 00:39:41,920
+so there's um, like, the Anthropic helpful
+
+931
+00:39:38,480 --> 00:39:45,599
+and harmless datasets, uh these OpenAI
+
+932
+00:39:41,920 --> 00:39:48,200
+datasets from WebGPT, Stack Exchange,
+
+933
+00:39:45,599 --> 00:39:50,160
+on Stack Exchange they have um helpful
+
+934
+00:39:48,200 --> 00:39:52,240
+answers and not helpful answers so ones
+
+935
+00:39:50,160 --> 00:39:57,720
+that you give thumbs up and thumbs down
+
+936
+00:39:52,240 --> 00:39:59,839
+to, and um the Stanford uh human
+
+937
+00:39:57,720 --> 00:40:03,040
+preferences dataset, I forget what S
+
+938
+00:39:59,839 --> 00:40:05,800
+stands for, human preferences dataset,
+
+939
+00:40:03,040 --> 00:40:09,400
+basically this is um where they tried to
+
+940
+00:40:05,800 --> 00:40:11,599
+find Reddit posts, I think Reddit posts
+
+941
+00:40:09,400 --> 00:40:13,720
+that got more upvotes despite the fact
+
+942
+00:40:11,599 --> 00:40:16,400
+that they were posted later than a
+
+943
+00:40:13,720 --> 00:40:18,720
+previous one so the idea is, like, usually
+
+944
+00:40:16,400 --> 00:40:21,359
+the first posts get more upvotes
+
+945
+00:40:18,720 --> 00:40:22,880
+so if you get more upvotes for a later
+
+946
+00:40:21,359 --> 00:40:25,240
+post that indicates that it's probably
+
+947
+00:40:22,880 --> 00:40:27,640
+more valuable than the earlier post so
+
+948
+00:40:25,240 --> 00:40:30,880
+kind of a clever way of creating
+
+949
+00:40:27,640 --> 00:40:33,680
+data um I'm actually not sure what the
+
+950
+00:40:30,880 --> 00:40:36,240
+synthetic GPT-J one was, I didn't look at that,
+
+951
+00:40:33,680 --> 00:40:37,640
+and then separately from that um Meta
+
+952
+00:40:36,240 --> 00:40:39,599
+collected a very large amount of
+
+953
+00:40:37,640 --> 00:40:42,400
+internal data that they didn't release
+
+954
+00:40:39,599 --> 00:40:44,319
+uh for tuning Llama and they did this
+
+955
+00:40:42,400 --> 00:40:46,760
+through various iterations so basically
+
+956
+00:40:44,319 --> 00:40:49,839
+what they did is they created a first
+
+957
+00:40:46,760 --> 00:40:53,240
+version of the model um they let it
+
+958
+00:40:49,839 --> 00:40:55,599
+loose on users, they also did some
+
+959
+00:40:53,240 --> 00:40:56,960
+data collection with people who
+
+960
+00:40:55,599 --> 00:40:59,720
+were actually trying to break the model
+
+961
+00:40:56,960 --> 00:41:01,200
+and get it to say bad things,
+
+962
+00:40:59,720 --> 00:41:02,760
+they collected preference data from
+
+963
+00:41:01,200 --> 00:41:04,599
+these people and then they iterated over
+
+964
+00:41:02,760 --> 00:41:06,960
+and over again to collect more and more
+
+965
+00:41:04,599 --> 00:41:09,720
+of this data on various uh versions of
+
+966
+00:41:06,960 --> 00:41:11,280
+the model so as the model gets
+
+967
+00:41:09,720 --> 00:41:14,079
+better, you know, it's going to be harder
+
+968
+00:41:11,280 --> 00:41:16,240
+to collect this data but um they want to
+
+969
+00:41:14,079 --> 00:41:17,920
+try to improve the current model that
+
+970
+00:41:16,240 --> 00:41:20,599
+they
+
+971
+00:41:17,920 --> 00:41:22,680
+have so the next step that they did was
+
+972
+00:41:20,599 --> 00:41:26,079
+they trained a model to follow these
+
+973
+00:41:22,680 --> 00:41:27,920
+preferences and so they trained a model
+
+974
+00:41:26,079 --> 00:41:32,560
+that basically can predict human
+
+975
+00:41:27,920 --> 00:41:35,119
+preference given two uh language
+
+976
+00:41:32,560 --> 00:41:37,680
+model outputs and this is a hard problem,
+
+977
+00:41:35,119 --> 00:41:40,440
+right, because these are language model
+
+978
+00:41:37,680 --> 00:41:42,760
+outputs and the language model thought
+
+979
+00:41:40,440 --> 00:41:45,480
+it was a good output regardless because
+
+980
+00:41:42,760 --> 00:41:47,319
+otherwise it wouldn't have sampled it, and so
+
+981
+00:41:45,480 --> 00:41:49,720
+you need to distinguish between two very
+
+982
+00:41:47,319 --> 00:41:52,240
+fluent-looking outputs where one is
+
+983
+00:41:49,720 --> 00:41:56,880
+preferred and one is not preferred so
+
+984
+00:41:52,240 --> 00:41:58,359
+even kind of strong models, like, um oh by
+
+985
+00:41:56,880 --> 00:42:00,319
+the way there are some open reward
+
+986
+00:41:58,359 --> 00:42:02,119
+models like this, the OpenAssistant reward
+
+987
+00:42:00,319 --> 00:42:03,839
+model is publicly available and you can
+
+988
+00:42:02,119 --> 00:42:08,520
+just go and download it if you want
+
+989
+00:42:03,839 --> 00:42:10,920
+it um but if you evaluate
+
+990
+00:42:08,520 --> 00:42:14,720
+it on this Anthropic uh helpful and
+
+991
+00:42:10,920 --> 00:42:16,160
+harmless dataset um it gets about 67
+
+992
+00:42:14,720 --> 00:42:18,760
+or 68
+
+993
+00:42:16,160 --> 00:42:24,680
+accuracy
+
+994
+00:42:18,760 --> 00:42:27,200
+um but if you evaluate it on um this
+
+995
+00:42:24,680 --> 00:42:29,480
+like OpenAssistant dataset, or sorry, if
+
+996
+00:42:27,200 --> 00:42:33,359
+you evaluate the public models including
+
+997
+00:42:29,480 --> 00:42:36,079
+GPT-4 on the Meta dataset actually it's
+
+998
+00:42:33,359 --> 00:42:38,720
+pretty hard for them to distinguish
+
+999
+00:42:36,079 --> 00:42:41,319
+between the things and here they're
+
+1000
+00:42:38,720 --> 00:42:44,720
+evaluating both helpful and harmless or
+
+1001
+00:42:41,319 --> 00:42:47,400
+helpful and safety and the reason why is
+
+1002
+00:42:44,720 --> 00:42:49,119
+because, like, it's very easy to create a
+
+1003
+00:42:47,400 --> 00:42:51,119
+very safe but not helpful at all model
+
+1004
+00:42:49,119 --> 00:42:53,640
+by saying I don't know all the time, and
+
+1005
+00:42:51,119 --> 00:42:55,480
+it's relatively easy to create a
+
+1006
+00:42:53,640 --> 00:42:57,880
+helpful model that's very unsafe, like it
+
+1007
+00:42:55,480 --> 00:42:59,480
+will do anything you want, and so they
+
+1008
+00:42:57,880 --> 00:43:01,599
+want a balance between the two and they
+
+1009
+00:42:59,480 --> 00:43:03,480
+evaluate them separately, they also
+
+1010
+00:43:01,599 --> 00:43:05,280
+created two different separate reward
+
+1011
+00:43:03,480 --> 00:43:07,880
+models so they created one reward model
+
+1012
+00:43:05,280 --> 00:43:10,079
+to distinguish safety and another reward
+
+1013
+00:43:07,880 --> 00:43:13,440
+model to distinguish helpfulness and
+
+1014
+00:43:10,079 --> 00:43:14,760
+they used these separately to uh to train
+
+1015
+00:43:13,440 --> 00:43:17,359
+the model and you can see that the
+
+1016
+00:43:14,760 --> 00:43:18,920
+helpfulness model does a lot better on
+
+1017
+00:43:17,359 --> 00:43:20,640
+discriminating between helpful things
+
+1018
+00:43:18,920 --> 00:43:22,319
+and the safety model does a lot better,
+
+1019
+00:43:20,640 --> 00:43:23,760
+or does a little better,
+
+1020
+00:43:22,319 --> 00:43:25,960
+on discriminating between safe and
+
+1021
+00:43:23,760 --> 00:43:28,480
+unsafe
+
+1022
+00:43:25,960 --> 00:43:29,920
+things um
+
+1023
+00:43:28,480 --> 00:43:33,640
+actually I didn't include this in the
+
+1024
+00:43:29,920 --> 00:43:35,400
+slides but they also have an interesting
+
+1025
+00:43:33,640 --> 00:43:38,920
+graph that
+
+1026
+00:43:35,400 --> 00:43:41,119
+demonstrates um how good the reward
+
+1027
+00:43:38,920 --> 00:43:42,640
+models are based on their size and it
+
+1028
+00:43:41,119 --> 00:43:44,359
+turns out that this is a place where
+
+1029
+00:43:42,640 --> 00:43:47,559
+it's really really important to use a
+
+1030
+00:43:44,359 --> 00:43:49,760
+large and powerful language model to
+
+1031
+00:43:47,559 --> 00:43:51,319
+determine your reward because they
+
+1032
+00:43:49,760 --> 00:43:52,680
+demonstrate that the 70 billion
+
+1033
+00:43:51,319 --> 00:43:55,280
+parameter model that they used is
+
+1034
+00:43:52,680 --> 00:43:57,359
+actually far better than the um the
+
+1035
+00:43:55,280 --> 00:44:00,079
+smaller models that they used at
+
+1036
+00:43:57,359 --> 00:44:00,079
+predicting this
+
+1037
+00:44:01,359 --> 00:44:07,760
+reward so this is um a graph of their
+
+1038
+00:44:05,200 --> 00:44:10,480
+incremental training process for safety
+
+1039
+00:44:07,760 --> 00:44:12,640
+tuning and um you can see they have
+
+1040
+00:44:10,480 --> 00:44:15,920
+their first supervised fine-tuned model,
+
+1041
+00:44:12,640 --> 00:44:19,440
+this is with no um like RL or anything
+
+1042
+00:44:15,920 --> 00:44:22,240
+like this, this is the second model
+
+1043
+00:44:19,440 --> 00:44:24,760
+um and uh it improves a lot with respect
+
+1044
+00:44:22,240 --> 00:44:28,119
+to helpfulness and then they do more and
+
+1045
+00:44:24,760 --> 00:44:30,400
+more RLHF uh where they start with the
+
+1046
+00:44:28,119 --> 00:44:33,200
+like supervised fine-tuned model and
+
+1047
+00:44:30,400 --> 00:44:36,079
+gradually um add more reward data,
+
+1048
+00:44:33,200 --> 00:44:38,200
+train with a better reward model, and get
+
+1049
+00:44:36,079 --> 00:44:39,800
+to the end where they finally have the
+
+1050
+00:44:38,200 --> 00:44:41,359
+best model, and I believe this is
+
+1051
+00:44:39,800 --> 00:44:43,200
+the one that they actually released so
+
+1052
+00:44:41,359 --> 00:44:45,000
+you can see that they really put a lot
+
+1053
+00:44:43,200 --> 00:44:46,520
+of effort into making this model, you
+
+1054
+00:44:45,000 --> 00:44:49,800
+know, safe and that's one of the main
+
+1055
+00:44:46,520 --> 00:44:49,800
+points of the paper that they had here
+
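+A minimal sketch of the preference-modeling objective this implies: the reward model scores the chosen and the rejected response, and is trained so the chosen one scores higher via a Bradley-Terry-style pairwise loss (the Llama 2 paper also adds a margin term). `reward_model` is assumed to map a batch of token IDs to one scalar score per sequence:
+
+import torch
+import torch.nn.functional as F
+
+def preference_loss(reward_model, chosen_ids, rejected_ids, margin=0.0):
+    r_chosen = reward_model(chosen_ids)      # shape: (batch,)
+    r_rejected = reward_model(rejected_ids)  # shape: (batch,)
+    # -log sigmoid(r_c - r_r - margin): pushes the chosen score above the rejected one
+    return -F.logsigmoid(r_chosen - r_rejected - margin).mean()
+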
+1056
+00:44:51,319 --> 00:44:57,920
+um another interesting part of the
+
+1057
+00:44:55,119 --> 00:45:02,319
+Llama 2 paper is how they got it to
+
+1058
+00:44:57,920 --> 00:45:05,280
+follow chat instructions and so um I
+
+1059
+00:45:02,319 --> 00:45:06,640
+think you're all familiar from the class
+
+1060
+00:45:05,280 --> 00:45:10,040
+where I talked about
+
+1061
+00:45:06,640 --> 00:45:13,000
+prompting, where basically they um
+
+1062
+00:45:10,040 --> 00:45:16,119
+prompt the language model using a system
+
+1063
+00:45:13,000 --> 00:45:20,359
+message and um a user message and an
+
+1064
+00:45:16,119 --> 00:45:23,160
+assistant message and so um the
+
+1065
+00:45:20,359 --> 00:45:25,000
+characteristic of the system message is
+
+1066
+00:45:23,160 --> 00:45:28,240
+this is something that you want to be
+
+1067
+00:45:25,000 --> 00:45:32,319
+obeyed throughout the um entire
+
+1068
+00:45:28,240 --> 00:45:34,599
+conversation, right, and
+
+1069
+00:45:32,319 --> 00:45:36,760
+so in order to get this obeyed
+
+1070
+00:45:34,599 --> 00:45:38,079
+throughout the entire conversation you
+
+1071
+00:45:36,760 --> 00:45:39,760
+need a model that's good at
+
+1072
+00:45:38,079 --> 00:45:40,760
+paying particular attention to
+
+1073
+00:45:39,760 --> 00:45:43,160
+the system
+
+1074
+00:45:40,760 --> 00:45:45,319
+message um in this example I'm saying
+
+1075
+00:45:43,160 --> 00:45:46,880
+write in only emojis so no matter
+
+1076
+00:45:45,319 --> 00:45:48,720
+how long this conversation gets you want
+
+1077
+00:45:46,880 --> 00:45:50,599
+your model to continue writing in emojis
+
+1078
+00:45:48,720 --> 00:45:53,440
+and models don't do this
+
+1079
+00:45:50,599 --> 00:45:56,559
+spontaneously so what they did here, and
+
+1080
+00:45:53,440 --> 00:45:58,359
+I'm 90%, 95% certain that my
+
+1081
+00:45:56,559 --> 00:45:59,800
+interpretation of the paper is correct, the
+
+1082
+00:45:58,359 --> 00:46:03,319
+paper is a little bit hard to understand
+
+1083
+00:45:59,800 --> 00:46:06,720
+with respect to this, but um what
+
+1084
+00:46:03,319 --> 00:46:10,480
+I think they do is they take the
+
+1085
+00:46:06,720 --> 00:46:13,200
+system message and then they have a data
+
+1086
+00:46:10,480 --> 00:46:16,160
+generation step where they
+
+1087
+00:46:13,200 --> 00:46:19,079
+basically ask an existing model to write
+
+1088
+00:46:16,160 --> 00:46:21,400
+in only emojis and then say hello and
+
+1089
+00:46:19,079 --> 00:46:23,640
+then the model generates something and
+
+1090
+00:46:21,400 --> 00:46:26,599
+then they say again write in only emojis,
+
+1091
+00:46:23,640 --> 00:46:28,440
+how are you doing, and then they
+
+1092
+00:46:26,599 --> 00:46:29,599
+generate again and because this is so
+
+1093
+00:46:28,440 --> 00:46:32,680
+close in the
+
+1094
+00:46:29,599 --> 00:46:35,440
+context um the assistant will
+
+1095
+00:46:32,680 --> 00:46:36,760
+basically, you know, continue paying
+
+1096
+00:46:35,440 --> 00:46:39,119
+attention to these
+
+1097
+00:46:36,760 --> 00:46:40,599
+directions um and then after that now
+
+1098
+00:46:39,119 --> 00:46:42,640
+you have a dataset that you can train
+
+1099
+00:46:40,599 --> 00:46:44,280
+your model on, you can train your model
+
+1100
+00:46:42,640 --> 00:46:46,880
+on this generated dataset that looks
+
+1101
+00:46:44,280 --> 00:46:49,079
+like write in only emojis, say hello, uh
+
+1102
+00:46:46,880 --> 00:46:50,480
+how are you doing, and stuff like this
+
+1103
+00:46:49,079 --> 00:46:54,040
+and they try this with a whole bunch of
+
+1104
+00:46:50,480 --> 00:46:57,880
+rules, it's like, um, write as if
+
+1105
+00:46:54,040 --> 00:47:00,559
+you're explaining to a 5-year-old or um
+
+1106
+00:46:57,880 --> 00:47:02,720
+write in a very polite manner, write in a
+
+1107
+00:47:00,559 --> 00:47:03,960
+very informal manner, and stuff like that
+
+1108
+00:47:02,720 --> 00:47:06,480
+so they generate a whole bunch of this
+
+1109
+00:47:03,960 --> 00:47:08,480
+synthetic data and in doing this they
+
+1110
+00:47:06,480 --> 00:47:09,960
+basically are able to train the model to
+
+1111
+00:47:08,480 --> 00:47:11,559
+pay very close attention to the system
+
+1112
+00:47:09,960 --> 00:47:13,480
+message because it needs to do so in
+
+1113
+00:47:11,559 --> 00:47:17,319
+order to do better
+
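+A sketch of this data-generation trick under the interpretation given above: re-inject the system instruction before every user turn when generating the synthetic conversation, then keep only the single leading system message in the version you train on. `generate` is a stand-in for sampling from an existing instruction-following model:
+
+def synthesize(generate, system_msg, user_turns):
+    convo_for_training = [("system", system_msg)]
+    context = []
+    for user in user_turns:
+        # generation-time prompt repeats the instruction so it is obeyed
+        context.append(("user", f"{system_msg} {user}"))
+        reply = generate(context)
+        context.append(("assistant", reply))
+        # training data states the instruction only once, in the system message
+        convo_for_training += [("user", user), ("assistant", reply)]
+    return convo_for_training
+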
+1114
+00:47:13,480 --> 00:47:19,160
+so um yeah these are kind of the
+
+1115
+00:47:17,319 --> 00:47:20,599
+unique characteristics of Llama 2, I'd
+
+1116
+00:47:19,160 --> 00:47:21,960
+love to tell you more about its training
+
+1117
+00:47:20,599 --> 00:47:24,520
+data and all that other stuff but they
+
+1118
+00:47:21,960 --> 00:47:26,240
+didn't tell us uh like what they did
+
+1119
+00:47:24,520 --> 00:47:28,839
+with respect to that so we'll just have
+
+1120
+00:47:26,240 --> 00:47:28,839
+to infer
+
+1121
+00:47:28,960 --> 00:47:33,559
+cool uh any questions about
+
+1122
+00:47:33,800 --> 00:47:39,160
+this okay
+
+1123
+00:47:36,640 --> 00:47:40,839
+so next I want to go into Mistral and
+
+1124
+00:47:39,160 --> 00:47:42,599
+Mixtral, this is going to be a little bit
+
+1125
+00:47:40,839 --> 00:47:44,200
+short because I've kind of covered some
+
+1126
+00:47:42,599 --> 00:47:45,720
+of the stuff already and also they
+
+1127
+00:47:44,200 --> 00:47:48,240
+didn't tell you very much about the
+
+1128
+00:47:45,720 --> 00:47:52,240
+training process um basically it was
+
+1129
+00:47:48,240 --> 00:47:54,079
+created by Mistral AI, the company, and
+
+1130
+00:47:52,240 --> 00:47:56,839
+it's a strong and somewhat multilingual
+
+1131
+00:47:54,079 --> 00:47:59,400
+open language model um it has some
+
+1132
+00:47:56,839 --> 00:48:01,760
+unique features like speed optimizations,
+
+1133
+00:47:59,400 --> 00:48:03,200
+including grouped-query attention
+
+1134
+00:48:01,760 --> 00:48:06,200
+and mixture of
+
+1135
+00:48:03,200 --> 00:48:06,200
+experts
+
+1136
+00:48:06,599 --> 00:48:12,359
+um unlike the other ones it
+
+1137
+00:48:10,599 --> 00:48:14,599
+makes some actual architectural
+
+1138
+00:48:12,359 --> 00:48:17,599
+modifications including sliding window
+
+1139
+00:48:14,599 --> 00:48:19,160
+attention and um mixture of experts and
+
+1140
+00:48:17,599 --> 00:48:21,079
+I have actually talked about both of
+
+1141
+00:48:19,160 --> 00:48:23,640
+them so I'll just very briefly go
+
+1142
+00:48:21,079 --> 00:48:26,040
+through them here um the data as far as
+
+1143
+00:48:23,640 --> 00:48:28,559
+I could tell was not disclosed uh very
+
+1144
+00:48:26,040 --> 00:48:30,480
+completely but one important thing is it
+
+1145
+00:48:28,559 --> 00:48:32,160
+includes English and European languages
+
+1146
+00:48:30,480 --> 00:48:35,520
+so at least theoretically it should be
+
+1147
+00:48:32,160 --> 00:48:38,040
+better than Llama at this um one
+
+1148
+00:48:35,520 --> 00:48:39,559
+interesting thing about Llama is,
+
+1149
+00:48:38,040 --> 00:48:40,680
+if I remember correctly, the actual
+
+1150
+00:48:39,559 --> 00:48:42,880
+numbers are in the paper, but it's
+
+1151
+00:48:40,680 --> 00:48:47,920
+something like 85%
+
+1152
+00:48:42,880 --> 00:48:52,400
+English um 8% code and then like
+
+1153
+00:48:47,920 --> 00:48:54,559
+0.3% other languages, like, um looking at
+
+1154
+00:48:52,400 --> 00:48:57,280
+all the other languages it's like 0.3%
+
+1155
+00:48:54,559 --> 00:48:59,680
+so it's not very multilingual at all
+
+1156
+00:48:57,280 --> 00:49:01,319
+um and they were really only aiming to
+
+1157
+00:48:59,680 --> 00:49:04,799
+create a good uh English
+
+1158
+00:49:01,319 --> 00:49:06,200
+model um also the training uh details
+
+1159
+00:49:04,799 --> 00:49:08,280
+were not disclosed here, like I wasn't
+
+1160
+00:49:06,200 --> 00:49:12,400
+able to find the batch sizes as far as I know
+
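+A shapes-only sketch of grouped-query attention, one of the Mistral speed features mentioned above: fewer key/value heads than query heads, with each key/value head shared across a group of query heads. The head counts here are illustrative assumptions, not Mistral's published configuration:
+
+import torch
+
+def gqa_scores(x, wq, wk, n_heads=32, n_kv_heads=8, d_head=128):
+    b, t, _ = x.shape
+    q = (x @ wq).view(b, t, n_heads, d_head)
+    k = (x @ wk).view(b, t, n_kv_heads, d_head)
+    k = k.repeat_interleave(n_heads // n_kv_heads, dim=2)  # share each K head across a group
+    # (batch, heads, query_pos, key_pos) attention logits
+    return torch.einsum("bqhd,bkhd->bhqk", q, k) / d_head**0.5
+
+x = torch.randn(1, 16, 4096)
+wq = torch.randn(4096, 32 * 128)
+wk = torch.randn(4096, 8 * 128)   # K/V projections are 4x smaller than Q
+print(gqa_scores(x, wq, wk).shape)  # torch.Size([1, 32, 16, 16])
+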
+1161
+00:49:08,280 --> 00:49:15,119
+um so Mistral uses sliding window
+
+1162
+00:49:12,400 --> 00:49:18,200
+attention, with vanilla attention basically
+
+1163
+00:49:15,119 --> 00:49:21,440
+you always attend to all of the previous
+
+1164
+00:49:18,200 --> 00:49:24,880
+things in the sequence, what Mistral does
+
+1165
+00:49:21,440 --> 00:49:28,119
+is it attends to the previous n um
+
+1166
+00:49:24,880 --> 00:49:30,559
+tokens where n is equal to 4096 and
+
+1167
+00:49:28,119 --> 00:49:34,839
+because of this uh what this means is
+
+1168
+00:49:30,559 --> 00:49:37,200
+you can attend uh 4096 back and then in
+
+1169
+00:49:34,839 --> 00:49:39,280
+the next layer you can attend 4096 back,
+
+1170
+00:49:37,200 --> 00:49:41,599
+then you can attend 4096 back, so
+
+1171
+00:49:39,280 --> 00:49:44,400
+basically as many layers as you have
+
+1172
+00:49:41,599 --> 00:49:47,240
+times 4096, you can attend that many
+
+1173
+00:49:44,400 --> 00:49:49,000
+tokens back for a minimal training
+
+1174
+00:49:47,240 --> 00:49:50,760
+penalty because still the length of
+
+1175
+00:49:49,000 --> 00:49:55,079
+attention for any particular token is
+
+1176
+00:49:50,760 --> 00:49:57,440
+the same uh so that's one
+
+1177
+00:49:55,079 --> 00:50:00,400
+feature oh and then yeah sorry the other
+
+1178
+00:49:57,440 --> 00:50:01,920
+feature is Mixtral is using um a
+
+1179
+00:50:00,400 --> 00:50:05,920
+mixture of experts like we talked about
+
+1180
+00:50:01,920 --> 00:50:07,720
+last time so um despite these,
+
+1181
+00:50:05,920 --> 00:50:09,520
+uh, these are very strong models, they're
+
+1182
+00:50:07,720 --> 00:50:12,960
+generally stronger than Llama at a lot
+
+1183
+00:50:09,520 --> 00:50:15,480
+of things um and Mixtral is actually a lot
+
+1184
+00:50:12,960 --> 00:50:18,200
+faster and easier to deploy than Llama
+
+1185
+00:50:15,480 --> 00:50:20,680
+70B uh it's smaller, it only has 45
+
+1186
+00:50:18,200 --> 00:50:23,680
+billion parameters so it's definitely a
+
+1187
+00:50:20,680 --> 00:50:26,680
+good choice if you want to use it yeah
+
+1188
+00:50:23,680 --> 00:50:26,680
+[inaudible student question]
+
+1189
+00:50:28,720 --> 00:50:33,000
+yeah so it's attending to 4096
+
+1190
+00:50:33,520 --> 00:50:39,559
+so the context size
+
+1191
+00:50:37,720 --> 00:50:43,240
+typically, like, let's say you have a
+
+1192
+00:50:39,559 --> 00:50:45,240
+block of 4096 tokens here, typically that
+
+1193
+00:50:43,240 --> 00:50:48,079
+means that the first token attends to
+
+1194
+00:50:45,240 --> 00:50:51,200
+zero tokens, the second token attends to
+
+1195
+00:50:48,079 --> 00:50:54,640
+one token, and the third token attends to
+
+1196
+00:50:51,200 --> 00:50:58,920
+two tokens, here this is maybe a little
+
+1197
+00:50:54,640 --> 00:51:01,680
+bit uh misleading I guess, but if your
+
+1198
+00:50:58,920 --> 00:51:04,079
+context length is 4096 you actually get
+
+1199
+00:51:01,680 --> 00:51:07,760
+a block of twice that size, you get a
+
+1200
+00:51:04,079 --> 00:51:10,960
+block of 8192 tokens and so the first
+
+1201
+00:51:07,760 --> 00:51:15,839
+one attends to all of the previous
+
+1202
+00:51:10,960 --> 00:51:17,760
+ones so the first, uh, sorry, so
+
+1203
+00:51:15,839 --> 00:51:19,960
+the
+
+1204
+00:51:17,760 --> 00:51:22,280
+um, so the
+
+1205
+00:51:19,960 --> 00:51:26,760
+4097th
+
+1206
+00:51:22,280 --> 00:51:29,280
+token attends
+
+1207
+00:51:26,760 --> 00:51:32,280
+back to, um, all from
+
+1208
+00:51:29,280 --> 00:51:36,319
+token 1
+
+1209
+00:51:32,280 --> 00:51:36,319
+to
+
+1210
+00:51:41,160 --> 00:51:46,880
+4096 and
+
+1211
+00:51:43,839 --> 00:51:50,520
+so because of that when you move on to the very
+
+1212
+00:51:46,880 --> 00:51:50,520
+end then you have the 8192nd token
+
+1213
+00:51:50,880 --> 00:51:55,359
+attending from, like, 4097
+
+1214
+00:51:58,480 --> 00:52:01,920
+and so like every token is always
+
+1215
+00:52:00,319 --> 00:52:05,280
+attending to the previous ones and that
+
+1216
+00:52:01,920 --> 00:52:08,200
+allows you to kind of attend to
+
+1217
+00:52:05,280 --> 00:52:08,200
+things in the previous
+
+1218
+00:52:11,760 --> 00:52:18,520
+block. Uh, no, it's big, so that allows them to
+
+1219
+00:52:15,000 --> 00:52:22,000
+attend to a very large amount
+
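+A minimal sketch of the sliding-window causal mask being described: each position attends to at most the previous `window` tokens, and stacking layers grows the effective receptive field by one window per layer:
+
+import torch
+
+def sliding_window_mask(seq_len, window=4096):
+    i = torch.arange(seq_len).unsqueeze(1)  # query positions
+    j = torch.arange(seq_len).unsqueeze(0)  # key positions
+    return (j <= i) & (j > i - window)      # causal AND within the window
+
+m = sliding_window_mask(8, window=3)
+print(m.int())  # each row has at most 3 ones, ending at the diagonal
+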
+1220
+00:52:18,520 --> 00:52:24,599
+cool um so the next one I'd like to
+
+1221
+00:52:22,000 --> 00:52:26,559
+talk about is Qwen, this is one that, in
+
+1222
+00:52:24,599 --> 00:52:29,040
+the US at least, people maybe pay a
+
+1223
+00:52:26,559 --> 00:52:33,000
+little bit less attention to um but it
+
+1224
+00:52:29,040 --> 00:52:35,680
+was created by Alibaba and it's a strong
+
+1225
+00:52:33,000 --> 00:52:37,559
+um multilingual model, especially English
+
+1226
+00:52:35,680 --> 00:52:39,119
+and Chinese, but even in other
+
+1227
+00:52:37,559 --> 00:52:41,000
+languages as
+
+1228
+00:52:39,119 --> 00:52:43,480
+well
+
+1229
+00:52:41,000 --> 00:52:45,160
+and uh one of its defining
+
+1230
+00:52:43,480 --> 00:52:48,240
+characteristics, other than just being a
+
+1231
+00:52:45,160 --> 00:52:50,160
+strong model overall, is that it has a
+
+1232
+00:52:48,240 --> 00:52:51,799
+large vocabulary for multilingual
+
+1233
+00:52:50,160 --> 00:52:56,000
+support and strong
+
+1234
+00:52:51,799 --> 00:52:58,760
+performance um it comes in several sizes
+
+1235
+00:52:56,000 --> 00:53:01,880
+um I
+
+1236
+00:52:58,760 --> 00:53:04,799
+believe uh there's a 7B version and then
+
+1237
+00:53:01,880 --> 00:53:10,119
+there's also, like, a large, like, 70B
+
+1238
+00:53:04,799 --> 00:53:13,480
+version, 72B I think, and it's using very
+
+1239
+00:53:10,119 --> 00:53:15,319
+standard uh architecture things, the only
+
+1240
+00:53:13,480 --> 00:53:18,119
+small difference it has is it has a bias
+
+1241
+00:53:15,319 --> 00:53:19,920
+in the attention layer which doesn't
+
+1242
+00:53:18,119 --> 00:53:23,559
+uh exist in
+
+1243
+00:53:19,920 --> 00:53:25,880
+Llama um an important thing is it's
+
+1244
+00:53:23,559 --> 00:53:28,920
+actually trained on multilingual data
+
+1245
+00:53:25,880 --> 00:53:32,720
+and they use a large vocabulary um they
+
+1246
+00:53:28,920 --> 00:53:33,839
+use a vocabulary of 150K in contrast to
+
+1247
+00:53:32,720 --> 00:53:36,599
+Llama's
+
+1248
+00:53:33,839 --> 00:53:39,839
+32K, and that allows it to handle
+
+1249
+00:53:36,599 --> 00:53:41,720
+multilingual uh data relatively
+
+1250
+00:53:39,839 --> 00:53:47,079
+well
+
+1251
+00:53:41,720 --> 00:53:49,359
+and um we have the three uh similar, you
+
+1252
+00:53:47,079 --> 00:53:52,760
+know, training regimes so overall it's
+
+1253
+00:53:49,359 --> 00:53:55,559
+not very different from uh
+
+1254
+00:53:52,760 --> 00:53:57,040
+Llama, what might be different is data
+
+1255
+00:53:55,559 --> 00:53:59,319
+engineering
+
+1256
+00:53:57,040 --> 00:54:00,680
+uh and actually I expect the data
+
+1257
+00:53:59,319 --> 00:54:02,760
+engineering part is a bit different
+
+1258
+00:54:00,680 --> 00:54:06,400
+because overall it's a bit stronger than
+
+1259
+00:54:02,760 --> 00:54:09,920
+Llama 2 um and I think uh that has to
+
+1260
+00:54:06,400 --> 00:54:12,119
+do with data in various areas, one
+
+1261
+00:54:09,920 --> 00:54:16,920
+interesting piece from the paper that
+
+1262
+00:54:12,119 --> 00:54:18,280
+they have is, uh, if we think all the way
+
+1263
+00:54:16,920 --> 00:54:21,720
+back to when we talked about word and
+
+1264
+00:54:18,280 --> 00:54:23,839
+subword models and tokenization, we
+
+1265
+00:54:21,720 --> 00:54:27,760
+remember that subword models split up
+
+1266
+00:54:23,839 --> 00:54:29,920
+the input and they split up the input
+
+1267
+00:54:27,760 --> 00:54:31,799
+so that frequent words get longer
+
+1268
+00:54:29,920 --> 00:54:34,520
+tokens and infrequent words get
+
+1269
+00:54:31,799 --> 00:54:36,359
+shorter tokens so one of the problems,
+
+1270
+00:54:34,520 --> 00:54:40,559
+as I mentioned a long time ago when we
+
+1271
+00:54:36,359 --> 00:54:42,040
+covered this topic, is this causes issues
+
+1272
+00:54:40,559 --> 00:54:43,000
+if you're doing multilingual things
+
+1273
+00:54:42,040 --> 00:54:44,880
+because if you have very little
+
+1274
+00:54:43,000 --> 00:54:47,520
+multilingual data in your training data
+
+1275
+00:54:44,880 --> 00:54:49,040
+for the subword tokenization model um it
+
+1276
+00:54:47,520 --> 00:54:51,559
+will end up splitting all of the words
+
+1277
+00:54:49,040 --> 00:54:55,680
+into basically characters or even bytes
+
+1278
+00:54:51,559 --> 00:54:59,040
+so what this shows here is this is
+
+1279
+00:54:55,680 --> 00:55:00,960
+comparing the amount of subword
+
+1280
+00:54:59,040 --> 00:55:03,040
+tokenization that happens according to
+
+1281
+00:55:00,960 --> 00:55:05,520
+each of the LLMs'
+
+1282
+00:55:03,040 --> 00:55:08,599
+tokenizers with another explicitly
+
+1283
+00:55:05,520 --> 00:55:10,799
+multilingual model, XLM-R, so XLM-R is kind
+
+1284
+00:55:08,599 --> 00:55:12,760
+of their baseline here with respect to
+
+1285
+00:55:10,799 --> 00:55:16,319
+how much it tokenizes each
+
+1286
+00:55:12,760 --> 00:55:19,079
+language and on the very left we have
+
+1287
+00:55:16,319 --> 00:55:22,839
+Llama and so what we can see is that
+
+1288
+00:55:19,079 --> 00:55:26,599
+Llama tokenizes Thai
+
+1289
+00:55:22,839 --> 00:55:28,640
+3.7 times as much as XLM-R does so
+
+1290
+00:55:26,599 --> 00:55:30,359
+it's basically splitting Thai up
+
+1291
+00:55:28,640 --> 00:55:32,480
+into little tiny bits which makes it
+
+1292
+00:55:30,359 --> 00:55:35,440
+very expensive and ineffective to
+
+1293
+00:55:32,480 --> 00:55:38,039
+process uh let's find some other
+
+1294
+00:55:35,440 --> 00:55:41,599
+languages that we care about, we have
+
+1295
+00:55:38,039 --> 00:55:43,760
+Hebrew, Arabic,
+
+1296
+00:55:41,599 --> 00:55:47,079
+Korean, uh
+
+1297
+00:55:43,760 --> 00:55:49,559
+Japanese, uh Chinese, so all of these you
+
+1298
+00:55:47,079 --> 00:55:52,319
+can see are split up into many
+
+1299
+00:55:49,559 --> 00:55:55,440
+many different chunks by
+
+1300
+00:55:52,319 --> 00:55:56,799
+Llama and then we have a few other
+
+1301
+00:55:55,440 --> 00:55:58,359
+language models in the middle and then
+
+1302
+00:55:56,799 --> 00:56:01,440
+we have Qwen on the right side and what
+
+1303
+00:55:58,359 --> 00:56:04,039
+we can see is basically it's pretty
+
+1304
+00:56:01,440 --> 00:56:06,400
+comparable to XLM-R, maybe a little bit
+
+1305
+00:56:04,039 --> 00:56:09,520
+more than XLM-R, but pretty comparable to
+
+1306
+00:56:06,400 --> 00:56:12,839
+XLM-R on many languages, and then on code
+
+1307
+00:56:09,520 --> 00:56:15,000
+it actually um splits up code much less
+
+1308
+00:56:12,839 --> 00:56:17,039
+so we can see that, you know, its
+
+1309
+00:56:15,000 --> 00:56:18,960
+tokenizer is heavily multilingual
+
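+A sketch of the kind of tokenizer comparison shown on this slide: count the subword tokens each tokenizer produces for the same sentence in several languages (more tokens means more fragmentation). The sentences are arbitrary examples, and some of these Hugging Face repos are gated or require trust_remote_code:
+
+from transformers import AutoTokenizer
+
+sentences = {
+    "en": "The weather is nice today.",
+    "zh": "今天天气很好。",
+    "th": "วันนี้อากาศดีมาก",
+}
+for name in ["meta-llama/Llama-2-7b-hf", "Qwen/Qwen-7B", "xlm-roberta-base"]:
+    tok = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
+    counts = {lang: len(tok.encode(s)) for lang, s in sentences.items()}
+    print(name, counts)
+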
+1300
+00:55:52,319 --> 00:55:56,799
+and then we have a few other
+
+1301
+00:55:55,440 --> 00:55:58,359
+language models in the middle, and then
+
+1302
+00:55:56,799 --> 00:56:01,440
+we have Qwen on the right side, and what
+
+1303
+00:55:58,359 --> 00:56:04,039
+we can see is basically it's pretty
+
+1304
+00:56:01,440 --> 00:56:06,400
+comparable to XLM-R, maybe a little bit
+
+1305
+00:56:04,039 --> 00:56:09,520
+more than XLM-R, but pretty comparable to
+
+1306
+00:56:06,400 --> 00:56:12,839
+XLM-R on many languages, and then on code
+
+1307
+00:56:09,520 --> 00:56:15,000
+it actually, um, splits up code much less.
+
+1308
+00:56:12,839 --> 00:56:17,039
+so we can see that, you know, its
+
+1309
+00:56:15,000 --> 00:56:18,960
+tokenizer is heavily
+
+1310
+00:56:17,039 --> 00:56:22,640
+multilingual. um, another thing I'd like
+
+1311
+00:56:18,960 --> 00:56:24,640
+to point out is, um, I'm focusing
+
+1312
+00:56:22,640 --> 00:56:27,000
+on this particular language model for a
+
+1313
+00:56:24,640 --> 00:56:29,799
+number of reasons.
+
+1314
+00:56:27,000 --> 00:56:32,440
+um, the first one is multilinguality, and
+
+1315
+00:56:29,799 --> 00:56:36,599
+I like multilinguality, I hope other
+
+1316
+00:56:32,440 --> 00:56:39,039
+people like multilinguality too, um, but
+
+1317
+00:56:36,599 --> 00:56:43,799
+another motivation is just it has quite
+
+1318
+00:56:39,039 --> 00:56:45,680
+strong performance and it's, uh, topping
+
+1319
+00:56:43,799 --> 00:56:47,960
+the leaderboards in several
+
+1320
+00:56:45,680 --> 00:56:52,160
+different, uh,
+
+1321
+00:56:47,960 --> 00:56:57,640
+places. so if we look at the Open LLM
+
+1322
+00:56:52,160 --> 00:56:57,640
+Leaderboard, um, at least recently,
+
+1323
+00:56:59,480 --> 00:57:07,440
+this was a fine-tuned model by Abacus
+
+1324
+00:57:04,240 --> 00:57:09,440
+AI, which was, uh, originally based on Qwen,
+
+1325
+00:57:07,440 --> 00:57:11,079
+so you can see that this is like a
+
+1326
+00:57:09,440 --> 00:57:13,920
+strong foundation model that lots
+
+1327
+00:57:11,079 --> 00:57:16,440
+of people are using for fine-tuning things. so,
+
+1328
+00:57:13,920 --> 00:57:18,960
+um, I would definitely, uh, encourage you
+
+1329
+00:57:16,440 --> 00:57:20,240
+to take a look at that too. of course
+
+1330
+00:57:18,960 --> 00:57:22,520
+there's many, many different models that
+
+1331
+00:57:20,240 --> 00:57:24,880
+I didn't cover, because if I covered all
+
+1332
+00:57:22,520 --> 00:57:26,839
+of the general purpose models then we'd
+
+1333
+00:57:24,880 --> 00:57:29,599
+be here all day, but, um,
+
+1334
+00:57:26,839 --> 00:57:31,200
+that's a first start. so next I want to
+
+1335
+00:57:29,599 --> 00:57:33,200
+go into other kind of special purpose
+
+1336
+00:57:31,200 --> 00:57:36,839
+models, but are there any questions about,
+
+1337
+00:57:33,200 --> 00:57:36,839
+um, about the things I covered so
+
+1338
+00:57:38,000 --> 00:57:44,079
+far? cool, okay.
+
+1339
+00:57:41,440 --> 00:57:47,960
+um, so next I'd like to go into other
+
+1340
+00:57:44,079 --> 00:57:49,760
+models. um, first is code models, so code
+
+1341
+00:57:47,960 --> 00:57:52,680
+models are models that were specifically
+
+1342
+00:57:49,760 --> 00:57:55,280
+trained on code. actually, right now every
+
+1343
+00:57:52,680 --> 00:57:56,960
+model is a code model, um, like nobody
+
+1344
+00:57:55,280 --> 00:57:58,799
+pre-trains a large language model and is
+
+1345
+00:57:56,960 --> 00:58:01,720
+serious about it and doesn't train on
+
+1346
+00:57:58,799 --> 00:58:04,680
+code, because, um, generating code is a
+
+1347
+00:58:01,720 --> 00:58:06,680
+huge use case, and also, um, some work has
+
+1348
+00:58:04,680 --> 00:58:08,880
+demonstrated that training on code
+
+1349
+00:58:06,680 --> 00:58:13,720
+seems to improve reasoning abilities of
+
+1350
+00:58:08,880 --> 00:58:16,160
+language models as well. um, but, uh, these
+
+1351
+00:58:13,720 --> 00:58:19,319
+models were very heavily trained on code.
+
+1352
+00:58:16,160 --> 00:58:22,400
+so, um, we have StarCoder 2, this is a
+
+1353
+00:58:19,319 --> 00:58:24,079
+very recent, uh, entry. this is a fully
+
+1354
+00:58:22,400 --> 00:58:26,720
+open model, so you can see the data it
+
+1355
+00:58:24,079 --> 00:58:29,039
+was trained on, um, all the training
+
+1356
+00:58:26,720 --> 00:58:31,640
+details are released, and other stuff
+
+1357
+00:58:29,039 --> 00:58:36,760
+like that, so this is kind of in the
+
+1358
+00:58:31,640 --> 00:58:38,599
+Pythia, you know, category, but it's
+
+1359
+00:58:36,760 --> 00:58:41,240
+very, uh, it's actually a very strong
+
+1360
+00:58:38,599 --> 00:58:42,839
+model, very good model, so it's, uh, a good
+
+1361
+00:58:41,240 --> 00:58:46,480
+one to know
+
+1362
+00:58:42,839 --> 00:58:48,680
+about. um, separately there's Code Llama
+
+1363
+00:58:46,480 --> 00:58:52,520
+by meta, which is a code adaptation of
+
+1364
+00:58:48,680 --> 00:58:54,799
+llama, and, uh, it also gets quite
+
+1365
+00:58:52,520 --> 00:58:57,720
+good performance. there's also another
+
+1366
+00:58:54,799 --> 00:58:59,760
+model, uh, called DeepSeek Coder. I would say
+
+1367
+00:58:57,720 --> 00:59:01,720
+all three of these are topping some
+
+1368
+00:58:59,760 --> 00:59:03,119
+variety of leaderboard, where DeepSeek
+
+1369
+00:59:01,720 --> 00:59:04,640
+maybe is topping a few more
+
+1370
+00:59:03,119 --> 00:59:06,319
+leaderboards than the other ones are, but all
+
+1371
+00:59:04,640 --> 00:59:09,960
+of them are very competitive and might
+
+1372
+00:59:06,319 --> 00:59:11,680
+be the best in class for code things. um,
+
+1373
+00:59:09,960 --> 00:59:13,119
+I'm not talking very much about these
+
+1374
+00:59:11,680 --> 00:59:15,119
+because we're going to have a class on
+
+1375
+00:59:13,119 --> 00:59:18,280
+code generation and code related things
+
+1376
+00:59:15,119 --> 00:59:21,000
+later, so, um, I'm not going to go into a
+
+1377
+00:59:18,280 --> 00:59:21,000
+lot of detail
+
+1378
+00:59:21,319 --> 00:59:27,839
+here. another thing is about math models,
+
+1379
+00:59:24,680 --> 00:59:31,960
+and so, like, one thing is large language
+
+1380
+00:59:27,839 --> 00:59:35,480
+models are not particularly good at math.
+
+1381
+00:59:31,960 --> 00:59:38,839
+um, so there are quite a few models that
+
+1382
+00:59:35,480 --> 00:59:40,200
+were trained specifically for math. um,
+
+1383
+00:59:38,839 --> 00:59:45,160
+the first one is
+
+1384
+00:59:40,200 --> 00:59:47,280
+Llemma. um, yes, that is a pun, um, like
+
+1385
+00:59:45,160 --> 00:59:49,920
+llama and lemma from
+
+1386
+00:59:47,280 --> 00:59:51,160
+math. I'm not responsible for it,
+
+1387
+00:59:49,920 --> 00:59:55,240
+but I thought it was kind of funny
+
+1388
+00:59:51,160 --> 00:59:56,920
+anyway. um, so, uh, this was by EleutherAI, so
+
+1389
+00:59:55,240 --> 01:00:00,359
+because this was by EleutherAI, again, this is
+
+1390
+00:59:56,920 --> 01:00:03,640
+a fully open model, all the data is open,
+
+1391
+01:00:00,359 --> 01:00:05,960
+um, everything is known about it.
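+
+one nice thing about these open models, StarCoder 2, Code Llama, DeepSeek Coder, Llemma, is that they all load the same way with Hugging Face transformers; a minimal sketch (the model id here is illustrative, pick whichever checkpoint you want):
+
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+name = "bigcode/starcoder2-3b"  # illustrative; swap in another checkpoint
+tok = AutoTokenizer.from_pretrained(name)
+model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")
+
+prompt = "def fibonacci(n):"
+inputs = tok(prompt, return_tensors="pt").to(model.device)
+out = model.generate(**inputs, max_new_tokens=64)
+print(tok.decode(out[0], skip_special_tokens=True))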
+1392
+01:00:03,640 --> 01:00:08,480
+um, also, uh, our very own Sean Welleck was one of
+
+1393
+01:00:05,960 --> 01:00:10,559
+the contributors to it, uh, so if you want
+
+1394
+01:00:08,480 --> 01:00:13,839
+to know more about Llemma you can go bother
+
+1395
+01:00:10,559 --> 01:00:17,440
+Sean. so, uh, that's another thing that I
+
+1396
+01:00:13,839 --> 01:00:19,240
+should mention. um, another thing is DeepSeek,
+
+1397
+01:00:17,440 --> 01:00:20,839
+who made the DeepSeek Coder model,
+
+1398
+01:00:19,240 --> 01:00:23,480
+has also created a very strong math
+
+1399
+01:00:20,839 --> 01:00:26,200
+model, uh, that's competitive with GPT-4 on
+
+1400
+01:00:23,480 --> 01:00:28,160
+a lot of math things. uh, basically the
+
+1401
+01:00:26,200 --> 01:00:30,480
+way they did this was, they did this by,
+
+1402
+01:00:28,160 --> 01:00:32,559
+um, training a classifier to try to
+
+1403
+01:00:30,480 --> 01:00:34,640
+identify data on the web that is related
+
+1404
+01:00:32,559 --> 01:00:37,599
+to math, and scraping all of that data
+
+1405
+01:00:34,640 --> 01:00:39,960
+and fine-tuning on it. so, um, you can get
+
+1406
+01:00:37,599 --> 01:00:42,280
+gold standard data from, like, Proof Pile
+
+1407
+01:00:39,960 --> 01:00:44,359
+and a whole bunch of other sources, and
+
+1408
+01:00:42,280 --> 01:00:46,200
+so they trained a, like, math-or-not-math
+
+1409
+01:00:44,359 --> 01:00:48,400
+classifier and harvested a lot of
+
+1410
+01:00:46,200 --> 01:00:52,400
+math-related
+
+1411
+01:00:48,400 --> 01:00:52,400
+data. yeah?
+
+1412
+01:00:59,880 --> 01:01:04,920
+it's mostly, mostly datasets. um, I
+
+1413
+01:01:03,599 --> 01:01:07,119
+actually might be talking a little bit
+
+1414
+01:01:04,920 --> 01:01:10,039
+more about these in the reasoning class,
+
+1415
+01:01:07,119 --> 01:01:11,799
+and I did a lot of, uh, I did a lot of
+
+1416
+01:01:10,039 --> 01:01:13,599
+prep to create these slides and actually
+
+1417
+01:01:11,799 --> 01:01:15,680
+ran out of time to do the math stuff, so
+
+1418
+01:01:13,599 --> 01:01:17,200
+I might talk about it later. um, but I
+
+1419
+01:01:15,680 --> 01:01:18,480
+don't think they're really doing a lot
+
+1420
+01:01:17,200 --> 01:01:21,799
+of things, like, you could think of
+
+1421
+01:01:18,480 --> 01:01:23,440
+obvious things like doing RL or RLHF based
+
+1422
+01:01:21,799 --> 01:01:26,799
+on, like, whether it gets the answer right
+
+1423
+01:01:23,440 --> 01:01:28,559
+or not in the end. um, as far as I know,
+
+1424
+01:01:26,799 --> 01:01:30,359
+that's not a big ingredient here, but
+
+1425
+01:01:28,559 --> 01:01:31,920
+I'll be more sure of that when we talk
+
+1426
+01:01:30,359 --> 01:01:37,599
+about it
+
+1427
+01:01:31,920 --> 01:01:39,559
+later.
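+
+the actual pipeline they describe is more elaborate, classification over a huge crawl, iterated with seed data, but the core math-or-not-math idea can be sketched in a few lines (toy data; everything here is illustrative):
+
+from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.linear_model import LogisticRegression
+
+# Toy seed set: positives from a gold math corpus, negatives from generic web text.
+pages = [
+    "We prove the lemma by induction on n.",
+    "Let f be continuous on [a, b]; then f attains its maximum.",
+    "Top ten travel destinations for the summer.",
+    "The game ended 3-1 after a late goal.",
+]
+labels = [1, 1, 0, 0]  # 1 = math, 0 = not math
+
+vec = TfidfVectorizer(ngram_range=(1, 2))
+clf = LogisticRegression().fit(vec.fit_transform(pages), labels)
+
+# Score newly crawled pages; keep the high-probability ones for pretraining data.
+new_pages = ["By the Cauchy-Schwarz inequality we have ..."]
+print(clf.predict_proba(vec.transform(new_pages))[:, 1])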
+1428
+01:01:37,599 --> 01:01:43,200
+um, cool. and a final one, uh, it's
+not a 'sci' model, it's a science model,
+
+1429
+01:01:39,559 --> 01:01:45,920
+sorry for the typo. um, but, uh, this model,
+
+1430
+01:01:43,200 --> 01:01:49,160
+Galactica, um, was a model for science
+
+1431
+01:01:45,920 --> 01:01:51,799
+that was trained by meta.
+
+1432
+01:01:49,160 --> 01:01:54,359
+um, does anyone remember this model, or
+
+1433
+01:01:51,799 --> 01:01:58,079
+was anybody around when this model came
+
+1434
+01:01:54,359 --> 01:01:59,640
+out? no? there was a big, uh, a big PR
+
+1435
+01:01:58,079 --> 01:02:01,160
+disaster for meta when they released
+
+1436
+01:01:59,640 --> 01:02:03,480
+this model, because they said, this is a
+
+1437
+01:02:01,160 --> 01:02:05,520
+great model for math, use it in your, in
+
+1438
+01:02:03,480 --> 01:02:08,599
+writing your science paper, sorry, this is
+
+1439
+01:02:05,520 --> 01:02:10,480
+a great model for science, try using it
+
+1440
+01:02:08,599 --> 01:02:12,640
+in your science papers. and this came
+
+1441
+01:02:10,480 --> 01:02:14,839
+out about two years ago, and two years
+
+1442
+01:02:12,640 --> 01:02:16,640
+ago language models hallucinated all the
+
+1443
+01:02:14,839 --> 01:02:19,279
+time and came up with false scientific
+
+1444
+01:02:16,640 --> 01:02:22,039
+facts and stuff, and so basically, um, a
+
+1445
+01:02:19,279 --> 01:02:25,680
+lot of people kind of bashed this model,
+
+1446
+01:02:22,039 --> 01:02:27,440
+uh, in my mind kind of unfairly, because
+
+1447
+01:02:25,680 --> 01:02:31,200
+they actually have a lot of really
+
+1448
+01:02:27,440 --> 01:02:32,960
+interesting things in this paper. um, one
+
+1449
+01:02:31,200 --> 01:02:34,720
+interesting thing in this paper is they
+
+1450
+01:02:32,960 --> 01:02:37,000
+tried to create a general purpose model
+
+1451
+01:02:34,720 --> 01:02:38,960
+for science that's able to understand
+
+1452
+01:02:37,000 --> 01:02:41,960
+not only text but also various
+
+1453
+01:02:38,960 --> 01:02:47,720
+modalities of scientific data, and so
+
+1454
+01:02:41,960 --> 01:02:51,000
+that includes text, it includes LaTeX, um,
+
+1455
+01:02:47,720 --> 01:02:53,799
+you know, equations, it includes code, but
+
+1456
+01:02:51,000 --> 01:02:58,559
+it also included things like molecular
+
+1457
+01:02:53,799 --> 01:03:01,799
+structures and, uh, like proteins and DNA
+
+1458
+01:02:58,559 --> 01:03:04,160
+and stuff like this, so they tried to,
+
+1459
+01:03:01,799 --> 01:03:06,160
+like, model biology and other things like
+
+1460
+01:03:04,160 --> 01:03:08,079
+this as well. so I think it's really
+
+1461
+01:03:06,160 --> 01:03:10,640
+kind of too bad that this model got a
+
+1462
+01:03:08,079 --> 01:03:12,400
+bad rap, because I really like the, you
+
+1463
+01:03:10,640 --> 01:03:14,839
+know, the work that went into it, and I
+
+1464
+01:03:12,400 --> 01:03:16,359
+hope we'll see more of this, um, because
+
+1465
+01:03:14,839 --> 01:03:17,640
+language models for science is a really
+
+1466
+01:03:16,359 --> 01:03:19,880
+big topic that a lot of people are
+
+1467
+01:03:17,640 --> 01:03:19,880
+thinking
+
+1468
+01:03:20,760 --> 01:03:24,240
+about.
+
+1469
+01:03:22,400 --> 01:03:26,440
+cool.
+
+1470
+01:03:24,240 --> 01:03:28,000
+um, one thing I didn't talk about is
+
+1471
+01:03:26,440 --> 01:03:29,880
+multimodal models, but I hope to talk
+
+1472
+01:03:28,000 --> 01:03:32,440
+about multimodal models in a future
+
+1473
+01:03:29,880 --> 01:03:33,359
+class, so, um, I'll talk more about
+
+1474
+01:03:32,440 --> 01:03:38,680
+that
+
+1475
+01:03:33,359 --> 01:03:41,640
+soon. um, the next thing is closed models. um,
+
+1476
+01:03:38,680 --> 01:03:44,480
+so, closed models, we don't know a whole lot
+
+1477
+01:03:41,640 --> 01:03:46,880
+about them. uh, most of what we know about
+
+1478
+01:03:44,480 --> 01:03:49,480
+them, their training data and other
+
+1479
+01:03:46,880 --> 01:03:52,359
+things like that, is, uh, is
+
+1480
+01:03:49,480 --> 01:03:54,720
+conjecture. so the
+
+1481
+01:03:52,359 --> 01:03:57,839
+standard, the standard format for
+
+1482
+01:03:54,720 --> 01:03:59,599
+releasing a closed model, or not
+
+1483
+01:03:57,839 --> 01:04:02,160
+releasing, but, you know, publicizing a
+1484
+01:03:59,599 --> 01:04:04,279
+closed model, is people will write a blog
+
+1485
+01:04:02,160 --> 01:04:05,960
+post and they'll write a paper, and
+
+1486
+01:04:04,279 --> 01:04:07,720
+generally what the paper does is it only
+
+1487
+01:04:05,960 --> 01:04:09,559
+talks about evaluation, it only talks
+
+1488
+01:04:07,720 --> 01:04:12,039
+about, like, how good the model is on
+
+1489
+01:04:09,559 --> 01:04:13,799
+various things, how safe it is, how they
+
+1490
+01:04:12,039 --> 01:04:16,279
+put a lot of effort into red teaming the
+
+1491
+01:04:13,799 --> 01:04:17,680
+model, uh, so that it doesn't do bad
+
+1492
+01:04:16,279 --> 01:04:18,839
+things and stuff like that, and it tells
+
+1493
+01:04:17,680 --> 01:04:21,119
+you nothing about how they actually
+
+1494
+01:04:18,839 --> 01:04:23,279
+built the model. so mostly, like, what I
+
+1495
+01:04:21,119 --> 01:04:26,279
+can talk about are capabilities, as
+
+1496
+01:04:23,279 --> 01:04:28,520
+opposed to, um,
+
+1497
+01:04:26,279 --> 01:04:32,440
+as opposed
+
+1498
+01:04:28,520 --> 01:04:35,319
+to, like, what actually went into the
+
+1499
+01:04:32,440 --> 01:04:38,920
+model. so, um, there's
+
+1500
+01:04:35,319 --> 01:04:40,880
+GPT-4. um, GPT-4, I think everybody knows, it's
+
+1501
+01:04:38,920 --> 01:04:43,640
+kind of the de facto standard strong
+
+1502
+01:04:40,880 --> 01:04:45,680
+language model. it used to be the only
+
+1503
+01:04:43,640 --> 01:04:47,680
+strong language model, like it used to be,
+
+1504
+01:04:45,680 --> 01:04:50,079
+on its own, the strongest language model,
+
+1505
+01:04:47,680 --> 01:04:53,160
+and there were no real competitors to
+
+1506
+01:04:50,079 --> 01:04:55,000
+GPT-4 from that point of view. I think
+
+1507
+01:04:53,160 --> 01:04:56,680
+still, if I wanted a strong language
+
+1508
+01:04:55,000 --> 01:04:58,960
+model for just something that I'm
+
+1509
+01:04:56,680 --> 01:05:00,880
+going to do randomly, I still, I
+
+1510
+01:04:58,960 --> 01:05:03,680
+still trust GPT-4 more than anything else
+
+1511
+01:05:00,880 --> 01:05:05,240
+to give me a really good answer. um, but
+
+1512
+01:05:03,680 --> 01:05:08,480
+there are now other competitors I'd like
+
+1513
+01:05:05,240 --> 01:05:11,960
+to talk about. so, GPT-4, anyway, um, you know,
+
+1514
+01:05:08,480 --> 01:05:14,240
+it powers the pro version of ChatGPT, it
+
+1515
+01:05:11,960 --> 01:05:18,039
+was tuned to be good as a chat-based
+
+1516
+01:05:14,240 --> 01:05:20,440
+assistant, um, it accepts image inputs, uh,
+
+1517
+01:05:18,039 --> 01:05:22,279
+and it supports calling external tools
+
+1518
+01:05:20,440 --> 01:05:23,599
+through function calling, uh, through a
+
+1519
+01:05:22,279 --> 01:05:27,119
+function calling
+
+1520
+01:05:23,599 --> 01:05:28,720
+interface. um,
+
+1521
+01:05:27,119 --> 01:05:30,599
+I think people are generally
+
+1522
+01:05:28,720 --> 01:05:34,000
+familiar with this, but just in case
+
+1523
+01:05:30,599 --> 01:05:36,240
+you're not, um, I'd like to show a few
+
+1524
+01:05:34,000 --> 01:05:38,039
+things that I like to
+
+1525
+01:05:36,240 --> 01:05:39,640
+do.
+
+1526
+01:05:38,039 --> 01:05:42,760
+so let
+
+1527
+01:05:39,640 --> 01:05:42,760
+me,
+
+1528
+01:05:46,920 --> 01:05:52,480
+so I'll just randomly grab one of my
+
+1529
+01:05:50,440 --> 01:05:57,640
+papers from
+
+1530
+01:05:52,480 --> 01:05:57,640
+arXiv, um, my most recent paper,
+
+1531
+01:06:03,400 --> 01:06:07,559
+and I can copy paste
+1532
+01:06:13,200 --> 01:06:22,240
+this, and write, uh, "turn this into JSON
+
+1533
+01:06:19,240 --> 01:06:22,240
+format"
+
+1534
+01:06:27,960 --> 01:06:31,640
+and I drop it in
+
+1535
+01:06:29,880 --> 01:06:35,480
+here.
+
+1536
+01:06:31,640 --> 01:06:38,279
+and so this is an exhibit of its, like,
+
+1537
+01:06:35,480 --> 01:06:42,240
+multimodal abilities, because I can throw
+
+1538
+01:06:38,279 --> 01:06:44,359
+in a, uh, in a
+
+1539
+01:06:42,240 --> 01:06:48,400
+table and it basically turns it into
+
+1540
+01:06:44,359 --> 01:06:50,599
+JSON format. so, um, I actually turned
+
+1541
+01:06:48,400 --> 01:06:52,119
+a fair amount of data that I
+
+1542
+01:06:50,599 --> 01:06:53,960
+created in creating these slides into
+
+1543
+01:06:52,119 --> 01:06:56,039
+JSON format, so I can save it later for
+
+1544
+01:06:53,960 --> 01:06:59,079
+whatever I want it for, and I did it
+
+1545
+01:06:56,039 --> 01:07:01,720
+through, uh, this. so this is an example of
+
+1546
+01:06:59,079 --> 01:07:06,599
+the multimodal abilities. it can also tell
+
+1547
+01:07:01,720 --> 01:07:06,599
+you about images and stuff like that.
+
+1548
+01:07:07,000 --> 01:07:14,319
+um, so also, um, there was a famous article
+
+1549
+01:07:11,760 --> 01:07:16,760
+written by Gary Marcus that said deep
+
+1550
+01:07:14,319 --> 01:07:19,760
+learning is hitting a wall. um, it
+
+1551
+01:07:16,760 --> 01:07:22,880
+basically was written two years ago, and,
+
+1552
+01:07:19,760 --> 01:07:25,160
+uh, Gary Marcus was saying deep learning,
+
+1553
+01:07:22,880 --> 01:07:26,200
+uh, you know, is not the way for
+
+1554
+01:07:25,160 --> 01:07:27,760
+the future, sure, we're going to need
+
+1555
+01:07:26,200 --> 01:07:31,319
+things other than deep learning in order
+
+1556
+01:07:27,760 --> 01:07:34,559
+to, uh, you know, be able to, uh, make
+
+1557
+01:07:31,319 --> 01:07:36,400
+progress. and whether you believe
+
+1558
+01:07:34,559 --> 01:07:40,520
+that is true or not, I will leave you to
+
+1559
+01:07:36,400 --> 01:07:46,520
+your own opinion. um, but, uh, I could also
+
+1560
+01:07:40,520 --> 01:07:51,359
+say, uh, "create a picture of deep learning
+
+1561
+01:07:46,520 --> 01:07:55,400
+breaking through a brick wall", and it can
+
+1562
+01:07:51,359 --> 01:07:55,400
+generate images for you.
+
+1563
+01:08:02,599 --> 01:08:07,440
+of course, if you ever do a live demo, even
+
+1564
+01:08:05,319 --> 01:08:10,319
+if it's a live demo of an OpenAI product
+
+1565
+01:08:07,440 --> 01:08:13,559
+that a million people use, it will break
+
+1566
+01:08:10,319 --> 01:08:16,719
+when you try to do it. so, um, so this is
+
+1567
+01:08:13,559 --> 01:08:17,799
+another, uh, thing that it can do. so there
+
+1568
+01:08:16,719 --> 01:08:19,560
+we have a picture of deep learning
+
+1569
+01:08:17,799 --> 01:08:22,640
+breaking through a brick wall, and it can,
+
+1570
+01:08:19,560 --> 01:08:26,159
+you know, generate images and stuff. so
+
+1571
+01:08:22,640 --> 01:08:28,560
+these are, like, the kinds of things that
+
+1572
+01:08:26,159 --> 01:08:30,960
+I now
+
+1573
+01:08:28,560 --> 01:08:32,880
+expect. so it's not just, like, reasoning
+
+1574
+01:08:30,960 --> 01:08:35,839
+ability and other stuff like that, it's
+
+1575
+01:08:32,880 --> 01:08:39,199
+also multimodality, being able to
+
+1576
+01:08:35,839 --> 01:08:43,679
+generate code. um, another thing that's
+
+1577
+01:08:39,199 --> 01:08:46,719
+kind of nice, um, is "make a
+
+1578
+01:08:43,679 --> 01:08:49,440
+histogram of these
+
+1579
+01:08:46,719 --> 01:08:54,640
+numbers: one,
+
+1580
+01:08:49,440 --> 01:08:54,640
+two, one, two, four"
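+
+under the hood, ChatGPT answers this by writing and executing a little plotting script; the generated code looks roughly like this (a sketch, not the code it actually produced):
+
+import matplotlib.pyplot as plt
+
+numbers = [1, 2, 1, 2, 4]
+plt.hist(numbers, bins=range(min(numbers), max(numbers) + 2))
+plt.xlabel("value")
+plt.ylabel("count")
+plt.title("Histogram")
+plt.show()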
+1581
+01:08:57,600 --> 01:09:04,040
+so it can do code generation and
+
+1582
+01:08:59,719 --> 01:09:05,560
+display the results for you. um, there are
+
+1583
+01:09:04,040 --> 01:09:08,319
+efforts to
+
+1584
+01:09:05,560 --> 01:09:12,239
+make open source language models be able
+
+1585
+01:09:08,319 --> 01:09:14,000
+to do these things, and, um, in order to do
+
+1586
+01:09:12,239 --> 01:09:16,759
+this you need multimodality, you need
+
+1587
+01:09:14,000 --> 01:09:19,359
+also the ability to use tools. so
+
+1588
+01:09:16,759 --> 01:09:21,400
+actually, the way that this, um, worked
+
+1589
+01:09:19,359 --> 01:09:24,520
+here is very different than the way that
+
+1590
+01:09:21,400 --> 01:09:27,920
+this worked. so this is actually using an
+
+1591
+01:09:24,520 --> 01:09:29,759
+image input into GPT-4, so what it's doing
+
+1592
+01:09:27,920 --> 01:09:33,040
+is it's encoding the image and then
+
+1593
+01:09:29,759 --> 01:09:34,719
+feeding it in as tokens into GPT-4. what
+
+1594
+01:09:33,040 --> 01:09:37,920
+this is doing here is, this is rather
+
+1595
+01:09:34,719 --> 01:09:40,120
+calling a tool, this is calling, uh, DALL-E 3
+
+1596
+01:09:37,920 --> 01:09:42,120
+as a tool, and it's providing the caption
+
+1597
+01:09:40,120 --> 01:09:46,880
+to DALL-E 3. you can even see, maybe, the
+
+1598
+01:09:42,120 --> 01:09:46,880
+caption that was provided to
+
+1599
+01:09:48,640 --> 01:09:55,560
+DALL-E 3. you previously were able to
+
+1600
+01:09:51,239 --> 01:09:57,960
+do that, um, by maybe downloading, yeah, so
+
+1601
+01:09:55,560 --> 01:10:01,600
+you can see the
+
+1602
+01:09:57,960 --> 01:10:01,600
+caption, uh, which
+
+1603
+01:10:03,560 --> 01:10:08,120
+was "a visual metaphor of deep learning
+
+1604
+01:10:06,320 --> 01:10:10,679
+as a powerful force breaking through a
+
+1605
+01:10:08,120 --> 01:10:13,400
+brick wall", um, or something like that. and
+
+1606
+01:10:10,679 --> 01:10:15,480
+so GPT-4, basically, what it did is it
+
+1607
+01:10:13,400 --> 01:10:18,000
+said it wanted to call a tool, and then
+
+1608
+01:10:15,480 --> 01:10:19,360
+it provided the caption,
+
+1609
+01:10:18,000 --> 01:10:21,280
+and then it called a completely
+
+1610
+01:10:19,360 --> 01:10:22,320
+separate tool as an API in order to
+
+1611
+01:10:21,280 --> 01:10:27,320
+generate the
+
+1612
+01:10:22,320 --> 01:10:27,320
+image. so, um, yeah, the final,
+
+1613
+01:10:28,199 --> 01:10:34,080
+well, I managed to break ChatGPT, that's
+
+1614
+01:10:30,120 --> 01:10:36,520
+no small accomplishment. um, so, but anyway,
+
+1615
+01:10:34,080 --> 01:10:40,199
+these are some of the things that, uh,
+
+1616
+01:10:36,520 --> 01:10:42,360
+that the systems can do, and because
+
+1617
+01:10:40,199 --> 01:10:47,000
+OpenAI has kind of become a standard that a
+
+1618
+01:10:42,360 --> 01:10:50,040
+lot of people want to, uh, compete with, um,
+
+1619
+01:10:47,000 --> 01:10:53,480
+also I would say Gemini and Claude
+
+1620
+01:10:50,040 --> 01:10:56,400
+are maybe the two, um, the two models that
+
+1621
+01:10:53,480 --> 01:10:59,440
+can compete with GPT-4 in terms of, uh, you
+
+1622
+01:10:56,400 --> 01:11:02,600
+know, accuracy. Gemini is a much newer
+
+1623
+01:10:59,440 --> 01:11:06,159
+model by Google that, uh, comes in two
+
+1624
+01:11:02,600 --> 01:11:08,280
+varieties, Gemini Pro and Gemini Ultra. uh,
+
+1625
+01:11:06,159 --> 01:11:11,040
+one interesting thing about Gemini Pro
+
+1626
+01:11:08,280 --> 01:11:13,560
+is that it supports, um, very long inputs,
+
+1627
+01:11:11,040 --> 01:11:15,679
+1 to 10 million tokens. it also
+
+1628
+01:11:13,560 --> 01:11:16,600
+supports image and video inputs and
+
+1629
+01:11:15,679 --> 01:11:20,239
+image
+
+1630
+01:11:16,600 --> 01:11:22,320
+outputs. um, I actually put a video into
+
+1631
+01:11:20,239 --> 01:11:24,600
+it recently, and the video recognition
+
+1632
+01:11:22,320 --> 01:11:27,159
+capabilities are pretty nice, so
+
+1633
+01:11:24,600 --> 01:11:29,280
+you can, uh, you can try that out if you
+
+1634
+01:11:27,159 --> 01:11:34,320
+want.
+
+1635
+01:11:29,280 --> 01:11:36,640
+um, and finally there's Claude, uh, Claude 3. it
+
+1636
+01:11:34,320 --> 01:11:39,280
+supports a context window of up to 200k,
+
+1637
+01:11:36,640 --> 01:11:41,040
+also allows for processing images, and
+
+1638
+01:11:39,280 --> 01:11:46,480
+overall has strong results competitive
+
+1639
+01:11:41,040 --> 01:11:49,880
+with GPT-4. so if you're looking for, um, if
+
+1640
+01:11:46,480 --> 01:11:51,480
+you're looking for models to use, uh, to
+
+1641
+01:11:49,880 --> 01:11:53,600
+try out better closed models, you can
+
+1642
+01:11:51,480 --> 01:11:55,719
+definitely use these. another thing I'm
+
+1643
+01:11:53,600 --> 01:11:58,239
+really excited about is how can we get,
+
+1644
+01:11:55,719 --> 01:11:59,560
+like, open models to, you know, demonstrate
+
+1645
+01:11:58,239 --> 01:12:01,320
+some of the interesting capabilities
+
+1646
+01:11:59,560 --> 01:12:02,840
+that we see in closed models, so, you know,
+
+1647
+01:12:01,320 --> 01:12:07,120
+everybody can benefit and everybody
+
+1648
+01:12:02,840 --> 01:12:10,040
+knows, uh, you know, uh, the recipes to make
+
+1649
+01:12:07,120 --> 01:12:12,560
+models like this. so I think that's
+
+1650
+01:12:10,040 --> 01:12:16,639
+mostly all I have for today. another, um,
+
+1651
+01:12:12,560 --> 01:12:23,440
+another thing that is kind of neat,
+
+1652
+01:12:16,639 --> 01:12:23,440
+is I just found this a little while ago,
+
+1653
+01:12:28,800 --> 01:12:32,239
+but there is this, uh,
+
+1654
+01:12:33,320 --> 01:12:39,239
+interface, uh, called "god mode", that
+
+1655
+01:12:36,880 --> 01:12:41,960
+allows you to put all of the chat apps
+
+1656
+01:12:39,239 --> 01:12:45,840
+next to each other and write the same
+
+1657
+01:12:41,960 --> 01:12:47,080
+chat query into them, and, uh, get the
+
+1658
+01:12:45,840 --> 01:12:48,719
+result from all of them, so you can
+
+1659
+01:12:47,080 --> 01:12:51,080
+actually compare all of them in kind of
+
+1660
+01:12:48,719 --> 01:12:52,840
+an interactive setting. so if you want
+
+1661
+01:12:51,080 --> 01:12:54,800
+to look at all, especially all of the
+
+1662
+01:12:52,840 --> 01:12:56,679
+closed models, open models, it's, you know,
+
+1663
+01:12:54,800 --> 01:12:58,239
+not too hard to do it yourself, but if you
+
+1664
+01:12:56,679 --> 01:12:59,840
+want to try all of the closed models
+
+1665
+01:12:58,239 --> 01:13:01,800
+together, you can do that, and, like, log
+
+1666
+01:12:59,840 --> 01:13:03,960
+into all of your accounts and then press
+
+1667
+01:13:01,800 --> 01:13:05,320
+go on a query and see how they all respond.
+
+1668
+01:13:03,960 --> 01:13:07,960
+so,
+
+1669
+01:13:05,320 --> 01:13:09,800
+um, that might be a good way to compare
+
+1670
+01:13:07,960 --> 01:13:12,000
+all of the models kind of qualitatively,
+
+1671
+01:13:09,800 --> 01:13:14,679
+as opposed to
+
+1672
+01:13:12,000 --> 01:13:17,280
+quantitatively.
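+
+if you'd rather script the same kind of side-by-side comparison, the closed-model APIs make it a few lines; a minimal sketch (the model ids are illustrative and change over time, and it assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment):
+
+from openai import OpenAI
+import anthropic
+
+prompt = "Explain beam search in two sentences."
+
+gpt = OpenAI().chat.completions.create(
+    model="gpt-4-turbo",  # illustrative model id
+    messages=[{"role": "user", "content": prompt}],
+)
+print("GPT-4:", gpt.choices[0].message.content)
+
+claude = anthropic.Anthropic().messages.create(
+    model="claude-3-opus-20240229",  # illustrative model id
+    max_tokens=256,
+    messages=[{"role": "user", "content": prompt}],
+)
+print("Claude:", claude.content[0].text)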
+1673
+01:13:14,679 --> 01:13:19,440
+cool, um, that's all I have for today. uh, I don't know, are there any
+
+1674
+01:13:17,280 --> 01:13:23,440
+questions or discussion or things like
+
+1675
+01:13:19,440 --> 01:13:23,440
+this? yeah?
+
+1676
+01:13:28,840 --> 01:13:35,679
+so, a systematic way. um, the first thing
+
+1677
+01:13:32,760 --> 01:13:37,960
+you can do is look at the benchmark
+
+1678
+01:13:35,679 --> 01:13:40,800
+results that have been published, but
+
+1679
+01:13:37,960 --> 01:13:43,320
+actually I would like to give a caveat
+
+1680
+01:13:40,800 --> 01:13:43,320
+about
+
+1681
+01:13:45,199 --> 01:13:48,440
+this, which
+
+1682
+01:13:50,000 --> 01:13:54,000
+is, um,
+
+1683
+01:14:22,960 --> 01:14:28,239
+so these are the benchmarking
+
+1684
+01:14:25,600 --> 01:14:30,840
+results from the Gemini
+
+1685
+01:14:28,239 --> 01:14:33,440
+paper. um,
+
+1686
+01:14:30,840 --> 01:14:36,719
+and they have a table here, um, and
+
+1687
+01:14:33,440 --> 01:14:38,679
+basically what they, kind of obviously to
+
+1688
+01:14:36,719 --> 01:14:41,679
+me, wanted to demonstrate is that Gemini
+
+1689
+01:14:38,679 --> 01:14:44,760
+was the best model out of all the models.
+
+1690
+01:14:41,679 --> 01:14:47,800
+um, and so they have Gemini Pro and
+
+1691
+01:14:44,760 --> 01:14:50,040
+Gemini Ultra, and they put Gemini
+
+1692
+01:14:47,800 --> 01:14:52,639
+Ultra against GPT-4 and Gemini Pro against
+
+1693
+01:14:50,040 --> 01:14:56,360
+GPT-3.5, because they're, you know,
+
+1694
+01:14:52,639 --> 01:14:58,440
+comparable models, um,
+
+1695
+01:14:56,360 --> 01:15:01,880
+and they're, yeah, because they're
+
+1696
+01:14:58,440 --> 01:15:03,040
+comparable models, basically. and on
+
+1697
+01:15:01,880 --> 01:15:05,880
+things
+
+1698
+01:15:03,040 --> 01:15:07,400
+like, um, and they demonstrate that,
+
+1699
+01:15:05,880 --> 01:15:08,199
+basically, they're better in all of
+
+1700
+01:15:07,400 --> 01:15:10,520
+these
+
+1701
+01:15:08,199 --> 01:15:14,760
+situations. however, there's a few details.
+
+1702
+01:15:10,520 --> 01:15:17,120
+the first detail is, um, that the method
+
+1703
+01:15:14,760 --> 01:15:20,199
+that they're using to prompt the model
+
+1704
+01:15:17,120 --> 01:15:22,120
+is different here. so we have, like, 94.4
+
+1705
+01:15:20,199 --> 01:15:23,560
+versus 92, but the method they're using
+
+1706
+01:15:22,120 --> 01:15:25,520
+to prompt the model is different, they're
+
+1707
+01:15:23,560 --> 01:15:29,159
+using, like, sampling
+
+1708
+01:15:25,520 --> 01:15:33,320
+32 and then basically, uh, getting the
+
+1709
+01:15:29,159 --> 01:15:36,320
+best from 32. and then another thing
+
+1710
+01:15:33,320 --> 01:15:41,360
+is, if we look at this HumanEval
+
+1711
+01:15:36,320 --> 01:15:44,120
+performance here, um, they reported their
+
+1712
+01:15:41,360 --> 01:15:47,000
+HumanEval performance, then they pulled
+
+1713
+01:15:44,120 --> 01:15:49,400
+the number from the original GPT-4 paper
+
+1714
+01:15:47,000 --> 01:15:53,159
+and compared to the number from the GPT-4
+
+1715
+01:15:49,400 --> 01:15:54,639
+paper. but all of these, um, you know, APIs
+
+1716
+01:15:53,159 --> 01:15:57,719
+are constantly changing, they're getting
+
+1717
+01:15:54,639 --> 01:15:59,480
+better and better. so, um, I was
+
+1718
+01:15:57,719 --> 01:16:01,400
+very excited when Gemini first came out,
+
+1719
+01:15:59,480 --> 01:16:03,120
+and we actually wrote a paper where we
+
+1720
+01:16:01,400 --> 01:16:05,320
+tried to look deeper into the
+
+1721
+01:16:03,120 --> 01:16:08,000
+performance, and what we actually found
+
+1722
+01:16:05,320 --> 01:16:10,199
+is, comparing Gemini Pro and GPT-3.5
+
+1723
+01:16:08,000 --> 01:16:12,719
+Turbo, which should be comparable, we
+
+1724
+01:16:10,199 --> 01:16:16,120
+found that actually GPT-3.5 Turbo did a
+
+1725
+01:16:12,719 --> 01:16:19,280
+little bit better, um, in most cases,
+
+1726
+01:16:16,120 --> 01:16:20,920
+although not all cases. and one of the
+
+1727
+01:16:19,280 --> 01:16:24,000
+things we noticed in particular is, like,
+
+1728
+01:16:20,920 --> 01:16:27,960
+on HumanEval, GPT-3.5 had gotten like much,
+
+1729
+01:16:24,000 --> 01:16:29,760
+much better over the course of, uh, like,
+
+1730
+01:16:27,960 --> 01:16:31,639
+the time since the original paper was
+
+1731
+01:16:29,760 --> 01:16:34,120
+reported, it had gone up by almost 30
+
+1732
+01:16:31,639 --> 01:16:35,760
+points. and also, in a few cases, we had,
+
+1733
+01:16:34,120 --> 01:16:37,480
+like, a little bit of trouble reproducing
+
+1734
+01:16:35,760 --> 01:16:39,280
+the Gemini Pro results, just because they
+
+1735
+01:16:37,480 --> 01:16:40,360
+had, like, safety filters and other stuff
+
+1736
+01:16:39,280 --> 01:16:42,520
+like that that we had to get around
+
+1737
+01:16:40,360 --> 01:16:45,280
+before we got the results. so it's not
+
+1738
+01:16:42,520 --> 01:16:49,560
+necessarily the case that you can
+
+1739
+01:16:45,280 --> 01:16:52,639
+completely take the, um, that you can
+
+1740
+01:16:49,560 --> 01:16:55,560
+completely take the results at face
+
+1741
+01:16:52,639 --> 01:16:57,040
+value. actually, as a first step, I would
+
+1742
+01:16:55,560 --> 01:17:00,080
+suggest just trying to chat with the
+
+1743
+01:16:57,040 --> 01:17:03,719
+model, um, which is also why I introduced
+
+1744
+01:17:00,080 --> 01:17:06,679
+the, like, quote-unquote "god mode", uh, like,
+
+1745
+01:17:03,719 --> 01:17:09,159
+browser, because, like, you can kind of
+
+1746
+01:17:06,679 --> 01:17:10,639
+tell when, like, when something's way
+
+1747
+01:17:09,159 --> 01:17:14,320
+better than another one just by the
+
+1748
+01:17:10,639 --> 01:17:17,159
+responses it gives. um, separately, if you want
+
+1749
+01:17:14,320 --> 01:17:17,159
+to do it much more
+
+1750
+01:17:20,199 --> 01:17:23,840
+systematically, there are really nice
+
+1751
+01:17:22,360 --> 01:17:25,400
+tools for evaluation. I think I might
+
+1752
+01:17:23,840 --> 01:17:26,960
+have talked about this before, but if I
+
+1753
+01:17:25,400 --> 01:17:29,280
+haven't, then you should definitely take
+
+1754
+01:17:26,960 --> 01:17:31,880
+a look at this: there's the EleutherAI
+
+1755
+01:17:29,280 --> 01:17:34,040
+evaluation harness, and the EleutherAI
+
+1756
+01:17:31,880 --> 01:17:35,679
+evaluation harness makes it really easy
+
+1757
+01:17:34,040 --> 01:17:37,600
+to evaluate, for example, Hugging Face
+
+1758
+01:17:35,679 --> 01:17:39,040
+models against many, many different tasks,
+
+1759
+01:17:37,600 --> 01:17:41,360
+so you can just pick which task you want
+
+1760
+01:17:39,040 --> 01:17:43,719
+to evaluate against, pick the model name,
+
+1761
+01:17:41,360 --> 01:17:47,400
+and go, and you can get evaluation
+
+1762
+01:17:43,719 --> 01:17:51,960
+results. um, that won't necessarily work
+
+1763
+01:17:47,400 --> 01:17:53,960
+for closed models, um, but if you look for
+
+1764
+01:17:51,960 --> 01:17:55,480
+the EleutherAI language model evaluation harness,
+
+1765
+01:17:53,960 --> 01:17:58,800
+that's maybe the easiest way to run
+
+1766
+01:17:55,480 --> 01:17:58,800
+evaluations for
+
+1767
+01:17:59,239 --> 01:18:05,239
+LMs.
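+
+as a sketch of what that looks like, recent versions of the harness expose roughly this Python entry point (check the lm-evaluation-harness repo for the exact interface of your version; the model and tasks here are illustrative):
+
+import lm_eval
+
+results = lm_eval.simple_evaluate(
+    model="hf",
+    model_args="pretrained=EleutherAI/pythia-1.4b",
+    tasks=["lambada_openai", "hellaswag"],
+    batch_size=8,
+)
+print(results["results"])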
+01:18:02,960 --> 01:18:07,480 +now uh but I'd be happy to answer a few + +1769 +01:18:05,239 --> 01:18:10,639 +questions if anybody else has any so + +1770 +01:18:07,480 --> 01:18:10,639 +thank you \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/transcript.vtt b/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..764a5dd85e61eeba822b4650de04c58eaf8f6f6c --- /dev/null +++ b/CMU Advanced NLP 2024 (15) A Tour of Modern Large Language Models/transcript.vtt @@ -0,0 +1,5311 @@ +WEBVTT + +00:00:00.280 --> 00:00:08.320 +can everyone hear Al set okay great so + +00:00:05.400 --> 00:00:09.840 +um today I'll be talking about a tour of + +00:00:08.320 --> 00:00:13.960 +modern uh + +00:00:09.840 --> 00:00:16.600 +llms and basically the idea here is that + +00:00:13.960 --> 00:00:18.600 +there is many many large language models + +00:00:16.600 --> 00:00:20.480 +available nowadays but I wanted to go + +00:00:18.600 --> 00:00:22.760 +through some of the ones that are + +00:00:20.480 --> 00:00:25.880 +particularly interesting for various + +00:00:22.760 --> 00:00:26.880 +reasons either because they disclose a + +00:00:25.880 --> 00:00:29.519 +lot of + +00:00:26.880 --> 00:00:31.119 +information uh you know about exactly + +00:00:29.519 --> 00:00:34.120 +how they were trains so we can get an + +00:00:31.119 --> 00:00:35.559 +idea about what is involved in training + +00:00:34.120 --> 00:00:39.120 +uh a kind of state-ofthe-art large + +00:00:35.559 --> 00:00:40.640 +language model or because they're kind + +00:00:39.120 --> 00:00:43.200 +of the strongest models that you can + +00:00:40.640 --> 00:00:45.160 +download and use on your own um like the + +00:00:43.200 --> 00:00:47.360 +best open weights language models that + +00:00:45.160 --> 00:00:49.559 +are available or because they're + +00:00:47.360 --> 00:00:51.879 +specialized to some particular topic or + +00:00:49.559 --> 00:00:53.480 +because they're the best closed uh + +00:00:51.879 --> 00:00:56.399 +language models but I'm going to + +00:00:53.480 --> 00:00:58.640 +particularly focus on the first two um + +00:00:56.399 --> 00:01:00.640 +just so like everybody has an idea about + +00:00:58.640 --> 00:01:03.239 +you know what what is going into all the + +00:01:00.640 --> 00:01:07.519 +models that you're using for whatever uh + +00:01:03.239 --> 00:01:07.519 +you know tasks that you're trying to + +00:01:09.119 --> 00:01:14.159 +solve so one important thing is uh what + +00:01:12.240 --> 00:01:18.080 +makes a model so we talk about you know + +00:01:14.159 --> 00:01:21.680 +like llama 2 or M roll or mix roll or + +00:01:18.080 --> 00:01:23.320 +whatever else and I think you know this + +00:01:21.680 --> 00:01:24.479 +already but it's worth reiterating again + +00:01:23.320 --> 00:01:27.320 +here because I'm going to talk about it + +00:01:24.479 --> 00:01:29.320 +a lot today but it's basically the model + +00:01:27.320 --> 00:01:31.280 +architecture so what architecture do you + +00:01:29.320 --> 00:01:33.799 +decide to use + +00:01:31.280 --> 00:01:35.840 +um what data do you decide to use and + +00:01:33.799 --> 00:01:39.759 +what training algorithm or Training + +00:01:35.840 --> 00:01:42.520 +Method do you decide to use and all of + +00:01:39.759 --> 00:01:46.040 +these are important um and there was + +00:01:42.520 --> 00:01:49.320 +actually uh a Twitter thread with Tom + +00:01:46.040 --> 
+00:01:46.040 --> 00:01:52.399
+Wolf, who's, I guess, CSO or CTO or
+
+00:01:49.320 --> 00:01:54.840
+something like that at Hugging Face, um,
+
+00:01:52.399 --> 00:01:56.840
+and basically what he was saying is, uh, a
+
+00:01:54.840 --> 00:01:59.240
+lot of people don't realize that the
+
+00:01:56.840 --> 00:02:01.039
+data is actually one of the most
+
+00:01:59.240 --> 00:02:04.320
+important parts,
+
+00:02:01.039 --> 00:02:07.680
+um, and the architectures are a lot less
+
+00:02:04.320 --> 00:02:10.920
+important nowadays. and I think that
+
+00:02:07.680 --> 00:02:14.280
+there's some truth to that, there's also,
+
+00:02:10.920 --> 00:02:15.879
+some, you know, a counterargument to that.
+
+00:02:14.280 --> 00:02:17.920
+uh, the truth to that, which you'll see
+
+00:02:15.879 --> 00:02:19.760
+today, is that almost all of the models
+
+00:02:17.920 --> 00:02:21.360
+that we're using use very similar
+
+00:02:19.760 --> 00:02:23.120
+architectures, like almost all of the
+
+00:02:21.360 --> 00:02:26.879
+models use an architecture that's very
+
+00:02:23.120 --> 00:02:28.760
+similar to llama. um, but despite the fact
+
+00:02:26.879 --> 00:02:31.280
+that they use very similar architectures,
+
+00:02:28.760 --> 00:02:33.599
+their, um, accuracy is vastly different,
+
+00:02:31.280 --> 00:02:36.080
+or their abilities are vastly
+
+00:02:33.599 --> 00:02:38.519
+different, so that must come from the
+
+00:02:36.080 --> 00:02:40.040
+data or the training decisions, right? so
+
+00:02:38.519 --> 00:02:41.640
+that's an argument for the fact that
+
+00:02:40.040 --> 00:02:44.040
+architecture decisions are a lot less
+
+00:02:41.640 --> 00:02:48.000
+important. my counterargument to that is,
+
+00:02:44.040 --> 00:02:49.840
+we spent 9 to 10 years fine-tuning and
+
+00:02:48.000 --> 00:02:51.560
+finding the Llama architecture, so now we
+
+00:02:49.840 --> 00:02:53.120
+have the Llama architecture, which is a
+
+00:02:51.560 --> 00:02:55.480
+really good architecture, it works really
+
+00:02:53.120 --> 00:02:57.640
+well when training very large models on
+
+00:02:55.480 --> 00:02:59.239
+lots of data, and so now we don't need to
+
+00:02:57.640 --> 00:03:01.360
+use another architecture, because the
+
+00:02:59.239 --> 00:03:02.920
+architecture we're using is good. but if we
+
+00:03:01.360 --> 00:03:06.200
+were trying to do the same thing with
+
+00:03:02.920 --> 00:03:07.640
+the, like, LSTM from 2014, uh, then none of
+
+00:03:06.200 --> 00:03:09.440
+the stuff we're doing today would work.
+
+00:03:07.640 --> 00:03:11.760
+so that's an argument in favor of, you
+
+00:03:09.440 --> 00:03:13.560
+know, architectures being important; also,
+
+00:03:11.760 --> 00:03:16.920
+architectures can make things faster, and
+
+00:03:13.560 --> 00:03:16.920
+that's included in those decisions
+
+00:03:17.280 --> 00:03:21.280
+too.
+
+00:03:19.040 --> 00:03:22.640
+so, um, the first thing I'd like to talk
+
+00:03:21.280 --> 00:03:25.280
+about before I get into any of the
+
+00:03:22.640 --> 00:03:28.000
+actual details is, um, open versus closed
+
+00:03:25.280 --> 00:03:30.480
+access. uh, this is not, like, modeling
+
+00:03:28.000 --> 00:03:31.760
+stuff, but I think it's important and
+
+00:03:30.480 --> 00:03:35.599
+also helps you understand the
+
+00:03:31.760 --> 00:03:39.519
+environment a little bit. so, um, there's a
+
+00:03:35.599 --> 00:03:42.200
+nice blog by pyang and others, uh,
+
+00:03:39.519 --> 00:03:45.560
+which is also in the references, and they
+
+00:03:42.200 --> 00:03:47.720
+discuss several different varieties of,
+
+00:03:45.560 --> 00:03:50.599
+like, openness of release of language
+
+00:03:47.720 --> 00:03:52.560
+models and advanced AI systems, and there
+
+00:03:50.599 --> 00:03:55.200
+are some things that we can talk about:
+
+00:03:52.560 --> 00:03:59.000
+we can talk about the weights being open,
+
+00:03:55.200 --> 00:04:01.439
+um, described, or closed, inference, uh, code
+
+00:03:59.000 --> 00:04:03.319
+being open, or inference methods being
+
+00:04:01.439 --> 00:04:04.959
+described, or it being fully closed,
+
+00:04:03.319 --> 00:04:08.120
+training being open, described, or closed,
+
+00:04:04.959 --> 00:04:13.040
+and data being open, described, or closed.
+
+00:04:08.120 --> 00:04:14.760
+and, um, in general, uh, we have, like, the
+
+00:04:13.040 --> 00:04:16.519
+open weights models that are on Hugging
+
+00:04:14.760 --> 00:04:19.040
+Face. that might just mean the weights
+
+00:04:16.519 --> 00:04:20.600
+are open. the inference code also needs
+
+00:04:19.040 --> 00:04:21.919
+to be open, because otherwise you can't
+
+00:04:20.600 --> 00:04:24.160
+do inference on them if they're on
+
+00:04:21.919 --> 00:04:25.800
+Hugging Face, but that doesn't mean that
+
+00:04:24.160 --> 00:04:28.120
+the training code is open, it also
+
+00:04:25.800 --> 00:04:32.479
+doesn't mean that the data is open. um,
+
+00:04:28.120 --> 00:04:34.280
+and so there's various degrees of
+
+00:04:32.479 --> 00:04:37.320
+openness.
+
+00:04:34.280 --> 00:04:40.919
+um, and then of course there are things
+
+00:04:37.320 --> 00:04:42.520
+like, uh, GPT-4 or GPT models, where
+
+00:04:40.919 --> 00:04:45.560
+basically all of this is closed and we
+
+00:04:42.520 --> 00:04:48.880
+don't know anything about it, or know
+
+00:04:45.560 --> 00:04:50.560
+very little about it. another thing is
+
+00:04:48.880 --> 00:04:52.600
+about licenses and
+
+00:04:50.560 --> 00:04:54.199
+permissiveness, and this is kind of
+
+00:04:52.600 --> 00:04:56.880
+important to know if you want to do a research
+
+00:04:54.199 --> 00:05:01.240
+project, because
+
+00:04:56.880 --> 00:05:04.080
+it has an impact on the things
+
+00:05:01.240 --> 00:05:05.520
+that you legally can do or can't do. in
+
+00:05:04.080 --> 00:05:08.039
+universities, I mean, we should be
+
+00:05:05.520 --> 00:05:09.479
+following the law, but maybe people
+
+00:05:08.039 --> 00:05:10.720
+think about this a little bit less; if
+
+00:05:09.479 --> 00:05:12.240
+you're in a big company, this is
+
+00:05:10.720 --> 00:05:14.919
+something that becomes really important,
+
+00:05:12.240 --> 00:05:17.199
+so it's, uh, it's important to think
+
+00:05:14.919 --> 00:05:20.039
+about. so I'm going to go through several
+
+00:05:17.199 --> 00:05:21.440
+degrees of licenses, uh, that, if you've
+
+00:05:20.039 --> 00:05:25.759
+done anything in open source, you
+
+00:05:21.440 --> 00:05:27.600
+probably know, or you probably
+
+00:05:25.759 --> 00:05:29.919
+know a lot of these. the first one is
+
+00:05:27.600 --> 00:05:31.479
+public domain or CC0,
+
+00:05:29.919 --> 00:05:33.440
+and this basically means you can do
+
+00:05:31.479 --> 00:05:37.240
+anything with it, like I could
+
+00:05:33.440 --> 00:05:39.280
+download it, and, um, this includes
+
+00:05:37.240 --> 00:05:41.680
+download it and redistribute it, not give
+
+00:05:39.280 --> 00:05:44.560
+you any credit, uh, modify it in any way I
+
+00:05:41.680 --> 00:05:47.720
+want. and this includes things like old
+
+00:05:44.560 --> 00:05:49.600
+copyrighted works and products of US
+
+00:05:47.720 --> 00:05:51.400
+government workers. so if you work for
+00:05:49.600 --> 00:05:53.240
+the US government in some capacities,
+
+00:05:51.400 --> 00:05:58.560
+anything you generate becomes public
+
+00:05:53.240 --> 00:06:01.000
+domain. um, so, old copyrighted works, um,
+
+00:05:58.560 --> 00:06:04.560
+how old do you think they need to be
+
+00:06:01.000 --> 00:06:04.560
+before they become, uh,
+
+00:06:04.720 --> 00:06:12.280
+uncopyrighted?
+
+00:06:07.000 --> 00:06:12.280
+yeah, uh, I think that's pretty close.
+
+00:06:14.319 --> 00:06:21.280
+so it's, uh, 70 years, I
+
+00:06:18.520 --> 00:06:23.680
+guess, oh, sorry, the life of the author
+
+00:06:21.280 --> 00:06:25.120
+plus an additional 70 years, so, like,
+
+00:06:23.680 --> 00:06:28.479
+after the, after the person has passed
+
+00:06:25.120 --> 00:06:30.720
+away, 70 years, I guess it says. um, does
+
+00:06:28.479 --> 00:06:34.520
+anyone know a work that just
+
+00:06:30.720 --> 00:06:37.520
+became non-copyrighted? yeah, uh, Mickey
+
+00:06:34.520 --> 00:06:43.199
+Mouse is still copyrighted.
+
+00:06:37.520 --> 00:06:45.199
+yeah, Steamboat Willie, uh, did it? okay, so
+
+00:06:43.199 --> 00:06:48.400
+that's some new news. some other new news
+
+00:06:45.199 --> 00:06:50.759
+is Winnie the Pooh, um, so Winnie the Pooh just
+
+00:06:48.400 --> 00:06:54.199
+became non-copyrighted, and actually I
+
+00:06:50.759 --> 00:06:55.840
+just heard, uh, last week that somebody
+
+00:06:54.199 --> 00:06:59.680
+made a horror movie where Winnie the
+
+00:06:55.840 --> 00:07:01.479
+Pooh was a killer, and that won, uh, a
+
+00:06:59.680 --> 00:07:04.960
+whole bunch of, like, bad movie awards in
+
+00:07:01.479 --> 00:07:06.639
+2023. so, um, that's the kind of thing
+
+00:07:04.960 --> 00:07:09.080
+that can happen to your copyrighted
+
+00:07:06.639 --> 00:07:11.479
+works if they are released CC0: somebody
+
+00:07:09.080 --> 00:07:12.960
+can do anything they want with them, uh,
+
+00:07:11.479 --> 00:07:14.400
+you know, so you need to be a little bit
+
+00:07:12.960 --> 00:07:18.080
+careful about
+
+00:07:14.400 --> 00:07:20.000
+that. um, next are MIT and BSD. these are
+
+00:07:18.080 --> 00:07:22.400
+very common software licenses, you'll see
+
+00:07:20.000 --> 00:07:25.720
+them on a lot of research projects. these
+
+00:07:22.400 --> 00:07:27.400
+have very few restrictions, um, other than
+
+00:07:25.720 --> 00:07:29.319
+maybe maintaining the copyright notice
+
+00:07:27.400 --> 00:07:31.840
+for BSD, but that's about it, you can do
+
+00:07:29.319 --> 00:07:33.840
+just about anything you want with it. um,
+
+00:07:31.840 --> 00:07:35.599
+actually, I'm not sure if people know
+
+00:07:33.840 --> 00:07:39.599
+this, but the Mac operating system is
+
+00:07:35.599 --> 00:07:42.199
+based on an old BSD, uh, operating
+
+00:07:39.599 --> 00:07:44.280
+system, where they, uh, took the, they took
+
+00:07:42.199 --> 00:07:46.080
+the code, they
+
+00:07:44.280 --> 00:07:49.560
+forked it, made it private, and now it's
+
+00:07:46.080 --> 00:07:51.919
+the proprietary Mac operating system. so,
+
+00:07:49.560 --> 00:07:53.720
+uh, that's something you can do with an
+
+00:07:51.919 --> 00:07:57.840
+MIT- or BSD-
+
+00:07:53.720 --> 00:08:00.000
+licensed project. um, there's also Apache and CC-
+
+00:07:57.840 --> 00:08:02.560
+BY. um,
+
+00:08:00.000 --> 00:08:05.039
+here you must acknowledge the owner,
+
+00:08:02.560 --> 00:08:07.840
+the, uh, the original creators, so you need
+
+00:08:05.039 --> 00:08:08.960
+to say, this person actually created, uh,
+
+00:08:07.840 --> 00:08:11.520
+this
+
+00:08:08.960 --> 00:08:14.680
+originally.
+
+00:08:11.520 --> 00:08:17.319
+um, Apache is also kind of interesting,
+
+00:08:14.680 --> 00:08:21.759
+because they will give you a license to
+
+00:08:17.319 --> 00:08:25.960
+use that code and any patents that are
+
+00:08:21.759 --> 00:08:29.599
+associated with that code, unless you sue
+
+00:08:25.960 --> 00:08:32.159
+the company who released it. so, um, just to
+
+00:08:29.599 --> 00:08:34.039
+give an example, let's say, uh, Google
+
+00:08:32.159 --> 00:08:36.279
+released their code under the Apache
+
+00:08:34.039 --> 00:08:38.919
+license, and that code implements
+
+00:08:36.279 --> 00:08:42.680
+Transformers, and Google has a patent on
+
+00:08:38.919 --> 00:08:45.760
+Transformers. so if you use, uh, kind of, a
+
+00:08:42.680 --> 00:08:48.200
+JAX or TensorFlow
+
+00:08:45.760 --> 00:08:50.120
+implementation of Transformers, uh, that
+
+00:08:48.200 --> 00:08:51.720
+was created by Google, you're okay, you're
+
+00:08:50.120 --> 00:08:54.640
+safe to use that, because they've
+
+00:08:51.720 --> 00:08:57.360
+released it under, uh, under that license.
+
+00:08:54.640 --> 00:08:59.560
+but if you sue Google, uh, for anything
+
+00:08:57.360 --> 00:09:01.760
+related to intellectual property, Google
+
+00:08:59.560 --> 00:09:04.480
+could say, uh, you can't use
+
+00:09:01.760 --> 00:09:06.040
+Transformers anymore. um, and so, like, if
+
+00:09:04.480 --> 00:09:08.279
+OpenAI ever sues Google for
+
+00:09:06.040 --> 00:09:09.680
+intellectual property infringement,
+
+00:09:08.279 --> 00:09:12.120
+Google will say, okay, you can't use
+
+00:09:09.680 --> 00:09:15.959
+Transformers or word embeddings, good
+
+00:09:12.120 --> 00:09:17.640
+luck, uh, OpenAI. so, um, there's this
+
+00:09:15.959 --> 00:09:20.760
+interesting thing where all of these, uh,
+
+00:09:17.640 --> 00:09:22.760
+tech companies now are using patented, um,
+
+00:09:20.760 --> 00:09:24.440
+patented things, a lot of it Apache-
+
+00:09:22.760 --> 00:09:26.040
+licensed software, and so none of them can
+
+00:09:24.440 --> 00:09:28.959
+sue each other for patents, so patents
+
+00:09:26.040 --> 00:09:30.560
+have become basically mostly worthless,
+
+00:09:28.959 --> 00:09:35.320
+uh, in big
+
+00:09:30.560 --> 00:09:36.360
+tech. um, moving on, um, there's also GPL
+
+00:09:35.320 --> 00:09:39.360
+and
+
+00:09:36.360 --> 00:09:42.800
+CC-BY-SA. these are licenses where, if you
+
+00:09:39.360 --> 00:09:45.680
+use them, you need to re-share under that
+
+00:09:42.800 --> 00:09:47.839
+license. um, and so, like, if you create
+
+00:09:45.680 --> 00:09:49.440
+some software, it's GPL-licensed, and you
+
+00:09:47.839 --> 00:09:52.160
+build on it and build something new, you
+
+00:09:49.440 --> 00:09:54.839
+need to release it under the GPL license.
+
+00:09:52.160 --> 00:09:58.160
+so a lot of companies will not
+
+00:09:54.839 --> 00:09:59.640
+use, um, will not use GPL software, because
+
+00:09:58.160 --> 00:10:01.920
+that would mean that if they incorporate it
+
+00:09:59.640 --> 00:10:04.959
+into their system, their whole system,
+
+00:10:01.920 --> 00:10:06.720
+like, for example, Google, uh, like all of
+
+00:10:04.959 --> 00:10:10.240
+Google, would have to be GPL-licensed and
+
+00:10:06.720 --> 00:10:11.720
+released. uh, so, um, and I'm kind of
+
+00:10:10.240 --> 00:10:14.800
+simplifying these licenses, I'm just
+
+00:10:11.720 --> 00:10:17.519
+giving you the gist. CC-BY-SA and, sorry, CC
+
+00:10:14.800 --> 00:10:20.640
+licenses are more for data, so MIT, BSD,
+
+00:10:17.519 --> 00:10:22.640
+Apache, and GPL are more for software; CC,
+00:10:20.640 --> 00:10:27.640
+Creative Commons licenses, are for data.
+
+00:10:22.640 --> 00:10:29.640
+so, um, for example, Wikipedia is CC-BY-SA,
+
+00:10:27.640 --> 00:10:33.560
+I believe,
+
+00:10:29.640 --> 00:10:33.560
+let me make sure that I'm not lying
+
+00:10:41.839 --> 00:10:48.240
+there. yeah, CC-BY-SA, and so that means that
+
+00:10:46.040 --> 00:10:52.200
+if you make any derivative work of
+
+00:10:48.240 --> 00:10:54.160
+Wikipedia, you need to share it, um, the
+
+00:10:52.200 --> 00:10:57.040
+same way that Wikipedia is, uh, so you
+
+00:10:54.160 --> 00:10:59.760
+need to give it the same
+
+00:10:57.040 --> 00:11:01.560
+license. there's also, um, Creative Commons
+
+00:10:59.760 --> 00:11:03.240
+non-commercial licenses, or software
+
+00:11:01.560 --> 00:11:05.519
+non-commercial licenses, which say you
+
+00:11:03.240 --> 00:11:07.079
+can't use them for commercial purposes.
+
+00:11:05.519 --> 00:11:09.279
+all the ones above, you can use for
+
+00:11:07.079 --> 00:11:11.519
+commercial purposes. once you start
+
+00:11:09.279 --> 00:11:13.440
+getting down here, this is often no
+
+00:11:11.519 --> 00:11:15.279
+longer called open source. so the Open
+
+00:11:13.440 --> 00:11:16.959
+Source Initiative says anything with a
+
+00:11:15.279 --> 00:11:19.839
+restriction on the way that you can use
+
+00:11:16.959 --> 00:11:22.639
+it is no longer open source, and so that
+
+00:11:19.839 --> 00:11:25.360
+means if you say you can't use this for
+
+00:11:22.639 --> 00:11:27.720
+commercial purposes, or you can't use
+
+00:11:25.360 --> 00:11:29.639
+this in military systems, for example,
+
+00:11:27.720 --> 00:11:32.320
+which some language models say
+
+00:11:29.639 --> 00:11:33.680
+nowadays, those are no longer called open
+
+00:11:32.320 --> 00:11:37.040
+source according to the Open Source
+
+00:11:33.680 --> 00:11:40.320
+Initiative. so that's a thing to know
+
+00:11:37.040 --> 00:11:42.920
+about. then, separately, uh, there are these
+
+00:11:40.320 --> 00:11:45.279
+licenses that a lot of people, like meta
+
+00:11:42.920 --> 00:11:48.160
+or Hugging Face, come up with for their,
+
+00:11:45.279 --> 00:11:50.360
+um, for their models recently. so, the
+
+00:11:48.160 --> 00:11:51.320
+Llama license. um, how many people are
+
+00:11:50.360 --> 00:11:54.200
+using
+
+00:11:51.320 --> 00:11:56.519
+llama in your projects? how many people
+
+00:11:54.200 --> 00:11:56.519
+read the
+
+00:11:57.000 --> 00:12:00.880
+license? so, um, are you sure you can use
+
+00:11:59.639 --> 00:12:04.959
+it in your
+
+00:12:00.880 --> 00:12:06.839
+project? uh, so you're probably in
+
+00:12:04.959 --> 00:12:09.000
+luck in your project if you're using it.
+
+00:12:06.839 --> 00:12:11.560
+the llama license, you can read into it to
+
+00:12:09.000 --> 00:12:13.519
+see what it actually allows, but it has,
+
+00:12:11.560 --> 00:12:16.399
+um, the original llama license has some
+
+00:12:13.519 --> 00:12:18.440
+interesting, uh, things. number one, you
+
+00:12:16.399 --> 00:12:21.079
+cannot use llama to train any language
+
+00:12:18.440 --> 00:12:23.000
+model that is not derived from llama, so
+
+00:12:21.079 --> 00:12:26.120
+you can't generate data from llama and
+
+00:12:23.000 --> 00:12:30.040
+train a model; that's not allowed according to
+
+00:12:26.120 --> 00:12:32.440
+the llama license. um, another thing is, uh, you
+
+00:12:30.040 --> 00:12:34.680
+can't use it for military purposes, so
+
+00:12:32.440 --> 00:12:36.160
+you can't use it, um, in building a
+
+00:12:34.680 --> 00:12:37.639
+missile system or something like that,
+ +00:12:36.160 --> 00:12:41.440 +hopefully none of you are doing that for + +00:12:37.639 --> 00:12:42.920 +your project um and you also need to get + +00:12:41.440 --> 00:12:45.399 +a license from meta if you have + +00:12:42.920 --> 00:12:48.000 +something more than 300 million active + +00:12:45.399 --> 00:12:53.800 +user asign your social network service + +00:12:48.000 --> 00:12:56.079 +so if you're Google or um you know X or + +00:12:53.800 --> 00:12:57.680 +Twitter or you know whatever else you + +00:12:56.079 --> 00:13:00.519 +need to get a license for meta before + +00:12:57.680 --> 00:13:02.079 +you can start using one so + +00:13:00.519 --> 00:13:03.240 +basically they created that license so + +00:13:02.079 --> 00:13:06.720 +their competitors don't take their + +00:13:03.240 --> 00:13:08.959 +language model and just use it for free + +00:13:06.720 --> 00:13:11.000 +um and then the final thing is no + +00:13:08.959 --> 00:13:13.240 +license so like let's say you have some + +00:13:11.000 --> 00:13:15.560 +code that you upload to GitHub and you + +00:13:13.240 --> 00:13:17.839 +don't put a license on your code this + +00:13:15.560 --> 00:13:20.880 +means that you have only agreed to the + +00:13:17.839 --> 00:13:23.360 +GitHub licensing terms which means that + +00:13:20.880 --> 00:13:26.199 +actually nobody can use their code they + +00:13:23.360 --> 00:13:30.079 +can view it possibly but they can't you + +00:13:26.199 --> 00:13:31.720 +download it use it they can't like um + +00:13:30.079 --> 00:13:34.160 +they can't incorporate it into their own + +00:13:31.720 --> 00:13:36.000 +system so actually if you release + +00:13:34.160 --> 00:13:39.120 +research code I would highly encourage + +00:13:36.000 --> 00:13:41.120 +you to use MIT or BSD um or one of these + +00:13:39.120 --> 00:13:43.040 +permissive licenses so other people can + +00:13:41.120 --> 00:13:45.720 +use it and follow up and your code can + +00:13:43.040 --> 00:13:46.920 +be effectful so um this is an important + +00:13:45.720 --> 00:13:49.040 +thing to know about there's obviously + +00:13:46.920 --> 00:13:52.959 +lots more to know + +00:13:49.040 --> 00:13:56.440 +about um so then my question my next + +00:13:52.959 --> 00:13:57.360 +question is uh what is most of the text + +00:13:56.440 --> 00:13:59.560 +on the + +00:13:57.360 --> 00:14:01.160 +internet the majority of the text on the + +00:13:59.560 --> 00:14:04.839 +internet falls into one of these + +00:14:01.160 --> 00:14:04.839 +categories any idea which + +00:14:05.120 --> 00:14:12.759 +one so Wikipedia is CC bya what what + +00:14:09.040 --> 00:14:12.759 +about uh Mo most of the text + +00:14:14.199 --> 00:14:18.959 +on yeah it's not maybe not no license + +00:14:16.880 --> 00:14:21.680 +but all rights reserved so basically you + +00:14:18.959 --> 00:14:23.079 +can't use it without having permission + +00:14:21.680 --> 00:14:27.639 +from the copyright + +00:14:23.079 --> 00:14:30.639 +holders and so because of that + +00:14:27.639 --> 00:14:33.800 +um the idea of fair use becomes very + +00:14:30.639 --> 00:14:35.320 +important this is a us specific thing + +00:14:33.800 --> 00:14:36.880 +and the rules in other countries are + +00:14:35.320 --> 00:14:39.199 +different they're not the same as the us + +00:14:36.880 --> 00:14:41.680 +but in the US uh we have rules about + +00:14:39.199 --> 00:14:44.600 +where you can use particular types of + +00:14:41.680 --> 00:14:46.279 +data so the US fair use Doctrine is + +00:14:44.600 --> 00:14:50.240 +basically 
that you can use copyrighted
+material in some cases so
+
+00:14:50.240 --> 00:14:56.279
+um as a gross
+
+00:14:52.920 --> 00:15:01.800
+simplification um quoting a small amount
+
+00:14:56.279 --> 00:15:04.320
+of material in like a textbook or slides
+
+00:15:01.800 --> 00:15:07.079
+or something like this this is likely
+
+00:15:04.320 --> 00:15:10.040
+okay um there are going to be very few
+
+00:15:07.079 --> 00:15:11.399
+cases where this is not going to um you
+
+00:15:10.040 --> 00:15:12.720
+know where you're going to get in
+
+00:15:11.399 --> 00:15:15.600
+trouble for
+
+00:15:12.720 --> 00:15:18.000
+this another important uh judgment
+
+00:15:15.600 --> 00:15:19.600
+criterion for whether this is fair use is
+
+00:15:18.000 --> 00:15:22.440
+that it doesn't diminish the value of
+
+00:15:19.600 --> 00:15:25.120
+the original work so if I quote
+
+00:15:22.440 --> 00:15:27.759
+something in my like let's say I quoted
+
+00:15:25.120 --> 00:15:30.839
+all of Harry Potter in a textbook and
+
+00:15:27.759 --> 00:15:32.600
+then I sold my textbook for $3 anybody
+
+00:15:30.839 --> 00:15:34.279
+could take my textbook and read all of
+
+00:15:32.600 --> 00:15:35.800
+Harry Potter for $3 and the money
+
+00:15:34.279 --> 00:15:37.480
+wouldn't go to J.K. Rowling and that would
+
+00:15:35.800 --> 00:15:41.040
+not be fair use because it's diminishing
+
+00:15:37.480 --> 00:15:42.920
+the value of the original similarly if I create a big
+
+00:15:41.040 --> 00:15:44.319
+corpus of books and I upload them to a
+
+00:15:42.920 --> 00:15:46.079
+site where anyone can browse them that
+
+00:15:44.319 --> 00:15:48.319
+would also probably not be fair use
+
+00:15:46.079 --> 00:15:49.160
+because the authors would not get paid
+
+00:15:48.319 --> 00:15:52.319
+for
+
+00:15:49.160 --> 00:15:54.480
+it another judgment criterion is whether
+
+00:15:52.319 --> 00:15:57.399
+it's for non-commercial purposes or not
+
+00:15:54.480 --> 00:15:59.639
+so like in universities we're actually
+
+00:15:57.399 --> 00:16:01.120
+held to a probably held to a more
+
+00:15:59.639 --> 00:16:03.000
+lenient standard of fair use if we're
+
+00:16:01.120 --> 00:16:06.120
+doing non-commercial research compared
+
+00:16:03.000 --> 00:16:08.519
+to a company that's doing it
+
+00:16:06.120 --> 00:16:11.480
+so um most data on the Internet is
+
+00:16:08.519 --> 00:16:13.279
+copyrighted so right now most model
+
+00:16:11.480 --> 00:16:16.240
+training not all model training but most
+
+00:16:13.279 --> 00:16:18.680
+model training is done um assuming fair
+
+00:16:16.240 --> 00:16:21.800
+use the argument being that training an AI
+
+00:16:18.680 --> 00:16:25.800
+model on copyrighted
+
+00:16:21.800 --> 00:16:29.480
+data number one cannot reproduce
+
+00:16:25.800 --> 00:16:32.240
+the material easily so instead of
+
+00:16:29.480 --> 00:16:33.600
+quoting material directly it's kind of
+
+00:16:32.240 --> 00:16:35.880
+combining the material together to
+
+00:16:33.600 --> 00:16:37.519
+create a new thing they're saying it
+
+00:16:35.880 --> 00:16:40.639
+doesn't diminish the commercial value of
+
+00:16:37.519 --> 00:16:42.360
+the original uh data um and then the
+
+00:16:40.639 --> 00:16:44.839
+non-commercial purposes is maybe a
+
+00:16:42.360 --> 00:16:47.240
+secondary concern since the first two
+
+00:16:44.839 --> 00:16:50.600
+hold um but there are lawsuits about
+
+00:16:47.240 --> 00:16:52.360
+this and so um this is a clip from The
+
+00:16:50.600 --> 00:16:55.560
+New York
Times where the New York Times
+
+00:16:52.360 --> 00:16:58.279
+is suing OpenAI and Microsoft over uh
+
+00:16:55.560 --> 00:16:59.759
+them training on New York Times articles
+
+00:16:58.279 --> 00:17:02.040
+and they did do a lot of things like
+
+00:16:59.759 --> 00:17:05.799
+they demonstrate that you can get uh GPT-4
+
+00:17:02.040 --> 00:17:08.319
+to reproduce uh like um New York Times
+
+00:17:05.799 --> 00:17:11.480
+articles and they also argue that people
+
+00:17:08.319 --> 00:17:12.880
+are using GPT-4 as a source of news
+
+00:17:11.480 --> 00:17:14.079
+instead of going to the New York Times
+
+00:17:12.880 --> 00:17:15.959
+site so they're losing money from
+
+00:17:14.079 --> 00:17:19.199
+advertising and like other things
+
+00:17:15.959 --> 00:17:21.679
+like that um another example is GitHub
+
+00:17:19.199 --> 00:17:24.000
+Copilot which was sued by people who uh
+
+00:17:21.679 --> 00:17:26.439
+uploaded software to GitHub and said
+
+00:17:24.000 --> 00:17:29.039
+that uh basically GitHub didn't have the
+
+00:17:26.439 --> 00:17:32.400
+right to use it to profit from it and
+
+00:17:29.039 --> 00:17:34.799
+diminish their uh you know their money
+
+00:17:32.400 --> 00:17:37.520
+so notably uh on this slide I'm using
+
+00:17:34.799 --> 00:17:42.039
+fair use I don't know if you've noticed
+
+00:17:37.520 --> 00:17:44.679
+like I copy-pasted an image from
+
+00:17:42.039 --> 00:17:46.360
+somebody's uh you know website and used
+
+00:17:44.679 --> 00:17:48.520
+it here that's copyrighted material but
+
+00:17:46.360 --> 00:17:49.640
+I'm using it because I'm quoting a small
+
+00:17:48.520 --> 00:17:52.440
+amount of material and I'm not
+
+00:17:49.640 --> 00:17:54.360
+diminishing the original's value so um like
+
+00:17:52.440 --> 00:17:56.320
+fair use is very ubiquitous it's very
+
+00:17:54.360 --> 00:17:58.480
+important so we can do things like this
+
+00:17:56.320 --> 00:18:00.840
+but also um it's currently under dispute
+
+00:17:58.480 --> 00:18:00.840
+with these
+
+00:18:01.280 --> 00:18:07.799
+models so then another question is why
+
+00:18:04.360 --> 00:18:12.520
+restrict model access why do we number
+
+00:18:07.799 --> 00:18:14.320
+one make models closed number two um you
+
+00:18:12.520 --> 00:18:16.159
+know maybe not even describe what we did
+
+00:18:14.320 --> 00:18:18.880
+in our models and I think there's three
+
+00:18:16.159 --> 00:18:21.360
+main reasons the first reason is
+
+00:18:18.880 --> 00:18:23.480
+commercial concerns and so they want to
+
+00:18:21.360 --> 00:18:25.760
+make money from the models so OpenAI
+
+00:18:23.480 --> 00:18:27.520
+makes money from the OpenAI API Gemini
+
+00:18:25.760 --> 00:18:29.480
+makes uh sorry Google makes money from
+
+00:18:27.520 --> 00:18:31.799
+the Gemini API
+
+00:18:29.480 --> 00:18:33.720
+um and Anthropic makes money from the
+
+00:18:31.799 --> 00:18:34.760
+Claude API these are all models that I'm
+
+00:18:33.720 --> 00:18:37.640
+going to talk
+
+00:18:34.760 --> 00:18:39.440
+about number two safety I think there
+
+00:18:37.640 --> 00:18:41.640
+are very legitimate concerns where if
+
+00:18:39.440 --> 00:18:43.840
+you release strong models people might
+
+00:18:41.640 --> 00:18:47.200
+use them for bad things so you know
+
+00:18:43.840 --> 00:18:49.120
+creating fake content online or uh doing
+
+00:18:47.200 --> 00:18:50.720
+spear phishing attacks against people and
+
+00:18:49.120 --> 00:18:52.600
+trying to you know scam them out of
+
+00:18:50.720 --> 00:18:55.600
+money or things like that so I think
+
+00:18:52.600 --> 00:18:57.240
+there are legitimate concerns about this
+
+00:18:55.600 --> 00:18:58.880
+and then the final one is legal
+
+00:18:57.240 --> 00:19:01.520
+liability so training models on
+
+00:18:58.880 --> 00:19:03.640
+copyrighted data is a legal gray area as
+
+00:19:01.520 --> 00:19:05.159
+I just mentioned so they don't want to
+
+00:19:03.640 --> 00:19:07.159
+say what data they trained on because if
+
+00:19:05.159 --> 00:19:10.240
+they say what data they trained on then
+
+00:19:07.159 --> 00:19:11.960
+they might get sued so these are the
+
+00:19:10.240 --> 00:19:14.960
+three main
+
+00:19:11.960 --> 00:19:17.960
+concerns so
+
+00:19:14.960 --> 00:19:19.480
+um anyway this is a preface and
+
+00:19:17.960 --> 00:19:23.360
+then I want to go into like the actual
+
+00:19:19.480 --> 00:19:23.360
+models but are there any questions about
+
+00:19:24.679 --> 00:19:30.280
+this so if any of you
+
+00:19:27.280 --> 00:19:31.720
+are working at a company or starting a
+
+00:19:30.280 --> 00:19:33.120
+company thinking about working at a
+
+00:19:31.720 --> 00:19:35.440
+company or starting a company this is
+
+00:19:33.120 --> 00:19:37.320
+something you should be aware of um you
+
+00:19:35.440 --> 00:19:39.720
+should also be aware of the fact that
+
+00:19:37.320 --> 00:19:42.360
+you know OpenAI has been doing sketchy
+
+00:19:39.720 --> 00:19:46.640
+things for a long time and look where
+
+00:19:42.360 --> 00:19:48.440
+they are so you know this is
+
+00:19:46.640 --> 00:19:51.400
+very much a legal gray area and
+
+00:19:48.440 --> 00:19:53.880
+people are uh moving through that
+
+00:19:51.400 --> 00:19:55.640
+gray area but anyway it's worth knowing
+
+00:19:53.880 --> 00:19:59.480
+that so next I'm going to talk about
+
+00:19:55.640 --> 00:20:00.679
+open models um so first a bird's eye view
+
+00:19:59.480 --> 00:20:02.600
+I'm going to talk about five different
+
+00:20:00.679 --> 00:20:04.080
+models and I picked them for a reason
+
+00:20:02.600 --> 00:20:06.440
+the first two are because they're open
+
+00:20:04.080 --> 00:20:08.159
+source and fully reproducible namely
+
+00:20:06.440 --> 00:20:10.360
+Pythia
+
+00:20:08.159 --> 00:20:11.919
+and OLMo and the reason why I want to talk
+
+00:20:10.360 --> 00:20:13.120
+about these is we know everything about
+
+00:20:11.919 --> 00:20:14.679
+them including what data they were
+
+00:20:13.120 --> 00:20:16.799
+trained on um what their training
+
+00:20:14.679 --> 00:20:19.080
+procedures are you can download all
+
+00:20:16.799 --> 00:20:21.000
+the stuff so you can kind of know uh
+
+00:20:19.080 --> 00:20:24.840
+exactly what goes into making a strong
+
+00:20:21.000 --> 00:20:26.520
+model um Pythia uh actually has many
+
+00:20:24.840 --> 00:20:28.159
+sizes and checkpoints which is pretty
+
+00:20:26.520 --> 00:20:30.919
+interesting OLMo is maybe the strongest
+
+00:20:28.159 --> 00:20:32.559
+reproduced model at the moment um then
+
+00:20:30.919 --> 00:20:34.120
+we have open weights models and these
+
+00:20:32.559 --> 00:20:35.520
+are models that aren't fully open they
+
+00:20:34.120 --> 00:20:38.679
+don't disclose everything they don't
+
+00:20:35.520 --> 00:20:40.760
+release their training data uh or
+
+00:20:38.679 --> 00:20:43.799
+code um but I'm going to talk about
+
+00:20:40.760 --> 00:20:46.520
+Llama 2 which is the most popular um
+
+00:20:43.799 --> 00:20:48.280
+it's also heavily safety tuned Mistral
+
+00:20:46.520 --> 00:20:50.840
+and Mixtral which is a strong and fast
+model um it's somewhat multilingual and
+
+00:20:48.280 --> 00:20:53.200
+also Qwen which is a very uh strong
+
+00:20:50.840 --> 00:20:55.200
+model it's more multilingual and
+
+00:20:53.200 --> 00:20:57.520
+specifically it's good in English and
+
+00:20:55.200 --> 00:21:00.600
+Chinese because it was trained on data
+
+00:20:57.520 --> 00:21:03.440
+like that so first going into Pythia for each of
+
+00:21:00.600 --> 00:21:04.720
+them I'm going to give an overview and
+
+00:21:03.440 --> 00:21:06.159
+then talk about some interesting points
+
+00:21:04.720 --> 00:21:08.880
+about them so Pythia was created by
+
+00:21:06.159 --> 00:21:12.320
+EleutherAI EleutherAI is one of the first
+
+00:21:08.880 --> 00:21:14.799
+um kind of open-source AI organizations
+
+00:21:12.320 --> 00:21:16.279
+they've created a huge number of really
+
+00:21:14.799 --> 00:21:18.720
+useful things including training code
+
+00:21:16.279 --> 00:21:21.480
+models training data sets and also
+
+00:21:18.720 --> 00:21:25.279
+evaluation that's used pretty widely um
+
+00:21:21.480 --> 00:21:28.080
+the goal of Pythia was basically aiding
+
+00:21:25.279 --> 00:21:29.760
+understanding of model training dynamics
+
+00:21:28.080 --> 00:21:32.159
+and scaling and so from that point of
+
+00:21:29.760 --> 00:21:36.320
+view um they released eight model sizes
+
+00:21:32.159 --> 00:21:39.120
+from 70 million parameters to 12 billion
+
+00:21:36.320 --> 00:21:41.880
+parameters for each model size they have
+
+00:21:39.120 --> 00:21:44.960
+154 checkpoints throughout the training
+
+00:21:41.880 --> 00:21:47.440
+process um so they basically trained on
+
+00:21:44.960 --> 00:21:52.880
+uh 300 billion uh tokens
+
+00:21:47.440 --> 00:21:55.960
+and uh did checkpoints you know
+
+00:21:52.880 --> 00:21:57.400
+periodically during that training
+
+00:21:55.960 --> 00:21:59.000
+process so you can do interesting things
+
+00:21:57.400 --> 00:22:02.400
+like say uh how quickly do small models
+
+00:21:59.000 --> 00:22:04.400
+learn things how quickly do large models
+
+00:22:02.400 --> 00:22:06.919
+learn things and other stuff like
+
+00:22:04.400 --> 00:22:09.480
+that in terms of the architecture as I
+
+00:22:06.919 --> 00:22:10.760
+mentioned at the very beginning the
+
+00:22:09.480 --> 00:22:12.760
+architectures are actually very similar
+
+00:22:10.760 --> 00:22:14.799
+between them so it's almost easier to
+
+00:22:12.760 --> 00:22:17.840
+point out their differences than uh
+
+00:22:14.799 --> 00:22:21.080
+their similarities um
+
+00:22:17.840 --> 00:22:22.559
+actually one thing that's not on the
+
+00:22:21.080 --> 00:22:25.400
+slide is um I mainly focused on the
+
+00:22:22.559 --> 00:22:27.159
+seven billion models because almost
+
+00:22:25.400 --> 00:22:29.080
+everybody trains a seven billion model
+
+00:22:27.159 --> 00:22:30.320
+it's just kind of like one of the
+
+00:22:29.080 --> 00:22:32.720
+standard sizes it's the smallest size of
+
+00:22:30.320 --> 00:22:34.640
+Llama it's the largest size of OLMo
+
+00:22:32.720 --> 00:22:36.559
+and one of the largest
+
+00:22:34.640 --> 00:22:40.240
+sizes of Pythia 7 billion models are
+
+00:22:36.559 --> 00:22:46.880
+generally um 4096 wide 32 uh
+
+00:22:40.240 --> 00:22:52.880
+deep uh 32 attention heads and
+
+00:22:46.880 --> 00:22:52.880
+their
+
+00:22:54.200 --> 00:23:01.159
+um and their um hidden layer size is
+
+00:22:57.400 --> 00:23:04.400
+about like 8/3 of the size of that
+
+00:23:01.159 --> 00:23:07.360
+and this is kind of a standard Llama 7B
+architecture
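+
+As a minimal sketch of that "standard 7B shape" (the exact numbers here
+mirror the public Llama 7B configuration and are included purely for
+illustration; 11008 is roughly 8/3 * 4096, rounded):
+
+from transformers import LlamaConfig
+
+# Hypothetical 7B-style configuration: "4096 wide, 32 deep, 32 heads".
+config = LlamaConfig(
+    hidden_size=4096,          # model width
+    num_hidden_layers=32,      # depth
+    num_attention_heads=32,    # attention heads
+    intermediate_size=11008,   # feed-forward width, ~8/3 of hidden size
+)
+print(config)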
+
+00:23:04.400 --> 00:23:09.240
+um as you scale up to
+larger sizes you just increase the
+
+00:23:07.360 --> 00:23:11.520
+number of layers you increase the
+
+00:23:09.240 --> 00:23:13.880
+width and other things like that so
+
+00:23:11.520 --> 00:23:16.080
+that's very standard um the other
+
+00:23:13.880 --> 00:23:19.039
+standard is everybody uses a Transformer
+
+00:23:16.080 --> 00:23:21.320
+um everybody uses pre-layer norm like I
+
+00:23:19.039 --> 00:23:24.440
+talked about before everybody uses RoPE
+
+00:23:21.320 --> 00:23:27.120
+embeddings um almost everybody uses a
+
+00:23:24.440 --> 00:23:29.520
+SwiGLU activation so this is just kind of
+
+00:23:27.120 --> 00:23:30.919
+the standard recipe that almost
+
+00:23:29.520 --> 00:23:31.880
+everybody
+
+00:23:30.919 --> 00:23:35.120
+uses um where things start to change a
+
+00:23:31.880 --> 00:23:37.000
+little bit between the architectures
+
+00:23:35.120 --> 00:23:38.559
+which arguably might not be very
+
+00:23:37.000 --> 00:23:40.559
+important is how long the context
+
+00:23:38.559 --> 00:23:44.679
+length is so um Pythia is 2K context
+
+00:23:40.559 --> 00:23:48.320
+compared to Llama 2's 4K context
+
+00:23:44.679 --> 00:23:51.360
+um actually Llama 1 is 1K context
+
+00:23:48.320 --> 00:23:55.000
+or sorry Llama 1 is 2K
+
+00:23:51.360 --> 00:24:00.000
+context and Llama 2 is 4K context um
+
+00:23:55.000 --> 00:24:02.120
+another thing is where do they put
+
+00:24:00.000 --> 00:24:03.880
+biases in the model most people don't
+
+00:24:02.120 --> 00:24:06.240
+use biases uh anywhere but sometimes
+
+00:24:03.880 --> 00:24:08.200
+they put them in various places the
+
+00:24:06.240 --> 00:24:09.840
+other thing is the variety of layer norm
+
+00:24:08.200 --> 00:24:11.919
+that people use and Pythia was using
+
+00:24:09.840 --> 00:24:13.559
+standard parametric layer norm but
+
+00:24:11.919 --> 00:24:16.240
+gradually people are stepping back from
+
+00:24:13.559 --> 00:24:18.000
+that and they're using like RMSNorm or
+
+00:24:16.240 --> 00:24:21.360
+even non-parametric layer norms so um small
+
+00:24:18.000 --> 00:24:22.880
+architecture differences but almost
+
+00:24:21.360 --> 00:24:25.559
+everybody uses something pretty
+
+00:24:22.880 --> 00:24:29.240
+similar um the data this was trained on
+
+00:24:25.559 --> 00:24:31.960
+300 billion tokens of the Pile uh which
+
+00:24:29.240 --> 00:24:34.600
+is on the next slide but one interesting
+
+00:24:31.960 --> 00:24:37.440
+thing is that they also did a deduplicated
+
+00:24:34.600 --> 00:24:39.000
+training run on
+
+00:24:37.440 --> 00:24:43.320
+270 ah sorry 207
+
+00:24:39.000 --> 00:24:47.679
+billion tokens and um the idea is that
+
+00:24:43.320 --> 00:24:50.559
+they um they wanted to test how
+
+00:24:47.679 --> 00:24:53.039
+important it is to deduplicate how much do
+
+00:24:50.559 --> 00:24:54.919
+you gain by deduplicating in terms of
+
+00:24:53.039 --> 00:24:56.279
+training
+
+00:24:54.919 --> 00:24:59.559
+efficiency and um
+
+00:24:56.279 --> 00:25:01.520
+they have different learning rates for
+
+00:24:59.559 --> 00:25:04.760
+different model sizes the 7B model is uh
+
+00:25:01.520 --> 00:25:08.640
+1.2e-4 in contrast Llama is
+
+00:25:04.760 --> 00:25:11.760
+3e-4 so this is a potentially big
+
+00:25:08.640 --> 00:25:13.120
+change because the learning rate is
+actually half the size here
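+
+Since Pythia releases all of its intermediate checkpoints as revisions on
+the Hugging Face Hub, you can load the model part-way through training; a
+small sketch (the model size and step number are arbitrary choices here):
+
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+# Each saved training step is published as a branch named "stepN".
+model = AutoModelForCausalLM.from_pretrained(
+    "EleutherAI/pythia-160m",
+    revision="step3000",
+)
+tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")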
+
+00:25:13.120 --> 00:25:18.880
+um as for the batch size they use 2
+million tokens and
+
+00:25:18.880 --> 00:25:23.600
+actually Llama 2 uses four million
+
+00:25:20.559 --> 00:25:26.520
+tokens for the batch size so um there
+are some small differences
+
+00:25:23.600 --> 00:25:29.000
+there so next I'd like to talk
+
+00:25:26.520 --> 00:25:31.480
+about the Pile um this is kind of the
+
+00:25:29.000 --> 00:25:33.760
+original open data set for training
+
+00:25:31.480 --> 00:25:36.279
+large language models um that being said
+
+00:25:33.760 --> 00:25:42.159
+it's a really nice data set made out of
+
+00:25:36.279 --> 00:25:47.039
+lots of uh different types of data and
+
+00:25:42.159 --> 00:25:49.960
+namely it's trained on academic data so
+
+00:25:47.039 --> 00:25:52.559
+that includes things like PubMed arXiv
+
+00:25:49.960 --> 00:25:55.240
+FreeLaw the US patent office other
+
+00:25:52.559 --> 00:25:57.000
+stuff like that it's also trained on
+
+00:25:55.240 --> 00:26:00.080
+internet data so this is data that's
+
+00:25:57.000 --> 00:26:02.840
+just scraped from parts of the internet
+
+00:26:00.080 --> 00:26:05.799
+but also Stack Exchange and
+
+00:26:02.840 --> 00:26:09.480
+Wikipedia um it also has some prose so
+
+00:26:05.799 --> 00:26:12.200
+these are um like book data sets it has
+
+00:26:09.480 --> 00:26:15.640
+some code data sets and it has some like
+
+00:26:12.200 --> 00:26:18.799
+subtitle dialog data sets in it so this
+
+00:26:15.640 --> 00:26:22.399
+overall is 800 gigabytes or about 300
+
+00:26:18.799 --> 00:26:22.399
+billion tokens according to
+
+00:26:23.360 --> 00:26:28.080
+the tokenizer so some of the findings from the
+
+00:26:25.760 --> 00:26:30.919
+Pythia paper in addition to just being
+
+00:26:28.080 --> 00:26:33.399
+like one of the original strong uh open
+
+00:26:30.919 --> 00:26:36.279
+language models is they have some
+
+00:26:33.399 --> 00:26:38.600
+interesting analysis into um model
+
+00:26:36.279 --> 00:26:40.960
+memorization and how quickly models
+
+00:26:38.600 --> 00:26:44.080
+learn uh based on the number of tokens
+
+00:26:40.960 --> 00:26:45.520
+that you show them and this graph is
+
+00:26:44.080 --> 00:26:47.520
+maybe a little bit hard to see from the
+
+00:26:45.520 --> 00:26:49.440
+back so I'll interpret it the left side
+
+00:26:47.520 --> 00:26:50.840
+is one of their smaller models 160
+
+00:26:49.440 --> 00:26:54.880
+million the right side is their biggest
+
+00:26:50.840 --> 00:26:57.799
+model 12 billion um the different lines
+
+00:26:54.880 --> 00:26:58.840
+here are different steps of the training
+
+00:26:57.799 --> 00:27:03.120
+process
+
+00:26:58.840 --> 00:27:09.640
+so like uh 13,000 steps uh
+
+00:27:03.120 --> 00:27:13.840
+30 sorry 39,000 steps and uh etc etc and
+
+00:27:09.640 --> 00:27:18.240
+the x-axis here is the frequency of a
+
+00:27:13.840 --> 00:27:21.679
+fact in the
+
+00:27:18.240 --> 00:27:24.640
+training data and the y-axis is question
+
+00:27:21.679 --> 00:27:29.159
+answering accuracy about that fact and
+
+00:27:24.640 --> 00:27:30.919
+so what this is basically showing is
+
+00:27:29.159 --> 00:27:35.679
+as you scale up the
+
+00:27:30.919 --> 00:27:38.520
+model um the larger models learn faster
+
+00:27:35.679 --> 00:27:41.120
+um up to a point so like right here you
+
+00:27:38.520 --> 00:27:44.519
+see the 2.8 billion model is about the
+
+00:27:41.120 --> 00:27:46.080
+same as the 12 billion model at earlier
+
+00:27:44.519 --> 00:27:48.080
+parts of the training
+
+00:27:46.080 --> 00:27:51.000
+process but as you get later in the
+
+00:27:48.080 --> 00:27:54.200
+training process the 12 billion model is
+
+00:27:51.000 --> 00:27:57.279
+like memorizing and being able to recall
+
+00:27:54.200 --> 00:27:58.840
+more facts uh so like right at the very
+
+00:27:57.279 --> 00:28:02.519
+beginning you need to scale up to about
+
+00:27:58.840 --> 00:28:05.840
+2.8 billion to learn efficiently uh but
+
+00:28:02.519 --> 00:28:07.799
+at the end this model is like better uh
+
+00:28:05.840 --> 00:28:10.399
+further on
+
+00:28:07.799 --> 00:28:12.000
+so this is really nice all
+
+00:28:10.399 --> 00:28:14.240
+of these checkpoints all this data is
+
+00:28:12.000 --> 00:28:15.840
+open they even made the data loaders so
+
+00:28:14.240 --> 00:28:17.360
+it's reproducible so you can look at the
+
+00:28:15.840 --> 00:28:19.559
+actual data that the model was trained
+
+00:28:17.360 --> 00:28:21.000
+on um at each of the checkpoints so if
+
+00:28:19.559 --> 00:28:24.320
+you want to do this sort of analysis
+
+00:28:21.000 --> 00:28:27.120
+this is a good set of um models to look
+
+00:28:24.320 --> 00:28:28.720
+at um another thing that they did is
+
+00:28:27.120 --> 00:28:31.120
+they actually did interventions on the
+
+00:28:28.720 --> 00:28:35.640
+data so they um tried to intervene on
+
+00:28:31.120 --> 00:28:37.279
+the data to modify it because uh male or
+
+00:28:35.640 --> 00:28:38.840
+masculine pronouns were much more
+
+00:28:37.279 --> 00:28:42.000
+frequent than feminine pronouns in the
+
+00:28:38.840 --> 00:28:43.919
+data so they intervened on the data um
+
+00:28:42.000 --> 00:28:45.559
+to try to balance out the distribution
+
+00:28:43.919 --> 00:28:48.000
+of masculine and feminine pronouns and
+
+00:28:45.559 --> 00:28:49.559
+demonstrated that the model became less
+
+00:28:48.000 --> 00:28:52.080
+biased towards generating masculine
+
+00:28:49.559 --> 00:28:55.480
+pronouns later so they also were able to
+
+00:28:52.080 --> 00:28:55.480
+do those sorts of intervention
+
+00:28:55.919 --> 00:29:00.039
+studies um any questions about
+
+00:29:00.519 --> 00:29:07.919
+Pythia okay um next I want to go to OLMo OLMo is
+
+00:29:04.720 --> 00:29:10.279
+a more recent model um Pythia I think
+
+00:29:07.919 --> 00:29:13.200
+came out around a year ago OLMo is very
+
+00:29:10.279 --> 00:29:15.440
+recent about a month ago and um this was
+
+00:29:13.200 --> 00:29:18.360
+created by AI2 the Allen Institute for
+
+00:29:15.440 --> 00:29:20.440
+AI one thing you'll notice is the two um
+
+00:29:18.360 --> 00:29:22.279
+completely open models that I'm talking
+
+00:29:20.440 --> 00:29:24.799
+about both came from nonprofit
+
+00:29:22.279 --> 00:29:28.640
+organizations um so EleutherAI is
+
+00:29:24.799 --> 00:29:30.039
+nonprofit uh AI2 is nonprofit so uh
+
+00:29:28.640 --> 00:29:31.519
+they're maybe a little bit less worried
+
+00:29:30.039 --> 00:29:34.919
+about people trying to sue them for lots
+
+00:29:31.519 --> 00:29:36.720
+of money for fair use violations uh so
+
+00:29:34.919 --> 00:29:38.120
+uh that's the cynical point of view
+
+00:29:36.720 --> 00:29:39.679
+the non-cynical point of view is they
+
+00:29:38.120 --> 00:29:42.279
+have no profit motive around
+
+00:29:39.679 --> 00:29:44.240
+having other people create a
better model so um they're
+
+00:29:44.240 --> 00:29:50.840
+willing to do this for open uh and good
+
+00:29:47.039 --> 00:29:54.080
+science um their goal is better science
+
+00:29:50.840 --> 00:29:55.880
+of state-of-the-art LMs and uh some of the
+
+00:29:54.080 --> 00:29:57.600
+unique features are top performance of a
+
+00:29:55.880 --> 00:29:59.840
+fully documented model and they also
+
+00:29:57.600 --> 00:30:02.960
+have instruction-tuned models
+
+00:29:59.840 --> 00:30:04.960
+etc looking at the parameters um
+
+00:30:02.960 --> 00:30:06.240
+basically similar to Llama the one big
+
+00:30:04.960 --> 00:30:08.440
+difference is they're using
+
+00:30:06.240 --> 00:30:10.440
+non-parametric layer norm instead of RMS
+
+00:30:08.440 --> 00:30:13.640
+norm so this is basically layer norm
+
+00:30:10.440 --> 00:30:15.960
+with no parameters whatsoever um
+
+00:30:13.640 --> 00:30:18.880
+they didn't super clearly justify why
+
+00:30:15.960 --> 00:30:21.760
+they decided to do this one difference
+
+00:30:18.880 --> 00:30:25.519
+from Pythia uh this was actually trained on
+
+00:30:21.760 --> 00:30:29.559
+2.46 trillion tokens uh so compare this
+
+00:30:25.519 --> 00:30:32.600
+to uh to Pythia which was trained on 300
+
+00:30:29.559 --> 00:30:34.480
+billion tokens and so they basically
+
+00:30:32.600 --> 00:30:36.120
+trained it for a lot longer they trained
+
+00:30:34.480 --> 00:30:37.960
+it on something called the Dolma corpus
+
+00:30:36.120 --> 00:30:41.480
+which they also created at
+
+00:30:37.960 --> 00:30:44.279
+AI2 um actually I think this might be
+
+00:30:41.480 --> 00:30:47.279
+wrong uh so just ignore that that was a
+
+00:30:44.279 --> 00:30:49.760
+copy-paste mistake so um they
+
+00:30:47.279 --> 00:30:52.039
+always use 3e-4 as the
+
+00:30:49.760 --> 00:30:53.679
+learning rate which is the same as uh
+
+00:30:52.039 --> 00:30:56.039
+Llama and the batch size is 4 million
+
+00:30:53.679 --> 00:30:59.960
+tokens which is also the same as
+
+00:30:56.039 --> 00:31:02.000
+Llama so the Dolma corpus that they created is
+
+00:30:59.960 --> 00:31:04.320
+um actually pretty similar to the Pile
+
+00:31:02.000 --> 00:31:07.320
+but it's a larger corpus it's three
+
+00:31:04.320 --> 00:31:09.240
+trillion tokens this is also fully open
+
+00:31:07.320 --> 00:31:11.480
+so you can download it from Hugging Face
+
+00:31:09.240 --> 00:31:15.399
+uh if you can find some disk to put
+
+00:31:11.480 --> 00:31:19.200
+three trillion tokens on um
+
+00:31:15.399 --> 00:31:21.080
+so uh another thing is that they have a
+
+00:31:19.200 --> 00:31:23.360
+data processing pipeline of language
+
+00:31:21.080 --> 00:31:26.240
+filtering quality filtering content
+
+00:31:23.360 --> 00:31:28.399
+filtering deduplication uh multi-source
+
+00:31:26.240 --> 00:31:31.440
+mixing and tokenization
+
+00:31:28.399 --> 00:31:33.279
+and so the nice thing about this is a
+
+00:31:31.440 --> 00:31:35.639
+lot of this stuff is usually proprietary
+
+00:31:33.279 --> 00:31:38.240
+for most language model creators so
+
+00:31:35.639 --> 00:31:39.600
+if you want to see all of the like data
+
+00:31:38.240 --> 00:31:41.039
+processing pipeline that goes into
+
+00:31:39.600 --> 00:31:42.799
+training a model this is a pretty good
+
+00:31:41.039 --> 00:31:45.320
+example of that
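+
+A non-parametric layer norm like OLMo's is just the normalization step with
+no learned scale or shift; a minimal PyTorch sketch:
+
+import torch
+import torch.nn.functional as F
+
+def nonparametric_layer_norm(x: torch.Tensor) -> torch.Tensor:
+    # Normalize over the hidden dimension; weight and bias default to
+    # None, so there are no learned parameters whatsoever.
+    return F.layer_norm(x, normalized_shape=x.shape[-1:])
+
+x = torch.randn(2, 16, 4096)
+print(nonparametric_layer_norm(x).shape)  # torch.Size([2, 16, 4096])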
+
+00:41:42.799 --> 00:41:48.120
+um the document types that are
+
+00:41:45.320 --> 00:41:51.080
+included are the Common Crawl and so the
+
+00:41:48.120 --> 00:41:53.919
+Common Crawl is just um data crawled
+from the Internet it's uh about 2.2
+
+00:41:51.080 --> 00:41:56.760
+trillion tokens uh they also have the
+
+00:41:53.919 --> 00:42:00.039
+Stack which is um lots of code about 400
+
+00:41:56.760 --> 00:42:03.399
+billion tokens of code um C4 which is
+
+00:42:00.039 --> 00:42:13.039
+also uh web data uh Reddit um STEM
+
+00:42:03.399 --> 00:42:16.960
+papers books and uh Wikipedia and
+
+00:42:13.039 --> 00:42:19.039
+encyclopedic data so um you can see that it
+
+00:42:16.960 --> 00:42:21.440
+has a fairly large amount of coverage
+
+00:42:19.039 --> 00:42:24.480
+although mostly in
+
+00:42:21.440 --> 00:42:26.799
+English um so some findings from OLMo
+
+00:42:24.480 --> 00:42:29.440
+that I found interesting um number one
+
+00:42:26.799 --> 00:42:31.279
+it has competitive average performance
+
+00:42:29.440 --> 00:42:34.320
+so as I mentioned I think this is the
+
+00:42:31.279 --> 00:42:38.519
+first fully open and documented language
+
+00:42:34.320 --> 00:42:40.639
+model in the 7 billion range that is
+
+00:42:38.519 --> 00:42:43.360
+competitive with all the other uh kind
+
+00:42:40.639 --> 00:42:47.080
+of like less open models in this range
+
+00:42:43.360 --> 00:42:49.200
+so uh for example uh Llama 2 is 70.5
+
+00:42:47.080 --> 00:42:51.840
+average on all of the data sets that
+
+00:42:49.200 --> 00:42:53.960
+they're evaluating on Falcon is
+
+00:42:51.840 --> 00:42:58.000
+70.3 MPT is
+
+00:42:53.960 --> 00:43:00.000
+69.8 and OLMo is 69.3 so it's not a
+
+00:42:58.000 --> 00:43:04.639
+slouch with respect to accuracy compared
+
+00:43:00.000 --> 00:43:06.399
+to Pythia which had 63 um much of the
+
+00:43:04.639 --> 00:43:09.120
+issue with Pythia could just be that they
+
+00:43:06.399 --> 00:43:12.080
+didn't train for long enough and some
+
+00:43:09.120 --> 00:43:15.039
+evidence of this is this is
+
+00:43:12.080 --> 00:43:17.000
+um where they measured performance
+
+00:43:15.039 --> 00:43:18.880
+constantly as they train for longer so
+
+00:43:17.000 --> 00:43:21.440
+the left side is training on 500 billion
+
+00:43:18.880 --> 00:43:24.080
+tokens which is already more than what
+
+00:43:21.440 --> 00:43:25.840
+Pythia trained on the right side is uh
+
+00:43:24.080 --> 00:43:30.360
+two uh
+
+00:43:25.840 --> 00:43:32.679
+2.4 or 2.5 trillion tokens and you can see
+
+00:43:30.360 --> 00:43:34.440
+interestingly that the numbers are just
+
+00:43:32.679 --> 00:43:36.760
+continuing to increase as they train for
+
+00:43:34.440 --> 00:43:39.480
+longer so it seems that training for
+
+00:43:36.760 --> 00:43:43.679
+longer and longer just kind of
+
+00:43:39.480 --> 00:43:47.000
+helps um one question is whether they're
+
+00:43:43.679 --> 00:43:48.679
+like overfitting to uh the data set like
+
+00:43:47.000 --> 00:43:52.000
+is any of the test data included in
+
+00:43:48.679 --> 00:43:53.799
+their training data here um they did do
+
+00:43:52.000 --> 00:43:57.440
+deduplication to some extent to try to
+
+00:43:53.799 --> 00:43:59.320
+remove the test data so um I think
+
+00:43:57.440 --> 00:44:00.919
+it's quite probable that these are
+
+00:43:59.320 --> 00:44:02.720
+real gains and if they train for longer
+
+00:44:00.919 --> 00:44:07.559
+they might get an even better model but
+
+00:44:02.720 --> 00:44:07.559
+um I'm not you know 100% sure about
+that
+
+00:44:07.679 --> 00:44:12.639
+cool
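+
+If you can't find disk for three trillion tokens, streaming is one way to
+poke at Dolma; a sketch assuming the "allenai/dolma" Hub repository and
+that you have accepted the dataset's license terms there:
+
+from datasets import load_dataset
+
+# Streaming iterates over the corpus without downloading it all first.
+dolma = load_dataset("allenai/dolma", split="train", streaming=True)
+print(next(iter(dolma)))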
+
+00:44:10.480 --> 00:44:14.359
+um yeah one other thing that I
+noticed which might be a
+
+00:44:12.639 --> 00:44:16.119
+little bit interesting is um all of
+
+00:44:14.359 --> 00:44:18.119
+these models that I didn't mention here
+
+00:44:16.119 --> 00:44:20.240
+all of these have a learning rate schedule
+
+00:44:18.119 --> 00:44:21.760
+and typically they have a learning rate
+
+00:44:20.240 --> 00:44:23.679
+schedule where they do this standard
+
+00:44:21.760 --> 00:44:25.760
+warmup where they increase and then they
+
+00:44:23.679 --> 00:44:29.159
+decrease but they stop decreasing at a
+
+00:44:25.760 --> 00:44:30.960
+floor and usually that floor is about
+
+00:44:29.159 --> 00:44:34.040
+one-tenth the size of the um of the original
+
+00:44:30.960 --> 00:44:36.720
+learning rate so if they start out at 3e-4
+
+00:44:34.040 --> 00:44:38.520
+they'll decrease it but
+
+00:44:36.720 --> 00:44:41.919
+only to 3e-5 and then they keep it constant so
+
+00:44:38.520 --> 00:44:43.960
+that might be another good thing to point
+
+00:44:41.919 --> 00:44:46.079
+out
+
+00:44:43.960 --> 00:44:46.079
+cool
+
+00:44:46.480 --> 00:44:51.240
+any questions about
+
+00:44:51.320 --> 00:44:58.599
+this okay um so now I'll get into Llama 2 um
+
+00:44:56.560 --> 00:45:00.200
+Llama 2 you know is a model that
+
+00:44:58.599 --> 00:45:04.400
+probably most people have heard about it
+
+00:45:00.200 --> 00:45:07.599
+was created by Meta um it's one of the
+
+00:45:04.400 --> 00:45:09.480
+uh strongest open language models now
+
+00:45:07.599 --> 00:45:10.839
+although arguably there might be
+
+00:45:09.480 --> 00:45:15.000
+stronger open language
+
+00:45:10.839 --> 00:45:18.400
+models and the goal is a strong and safe
+
+00:45:15.000 --> 00:45:21.320
+open LM and they have base and chat
+
+00:45:18.400 --> 00:45:23.400
+versions of it and some unique features
+
+00:45:21.320 --> 00:45:24.680
+are I think this is the open model with
+
+00:45:23.400 --> 00:45:30.119
+the strongest
+
+00:45:24.680 --> 00:45:30.119
+safety uh safeguards so
+
+00:45:30.200 --> 00:45:35.079
+if I were to pick one model that I
+
+00:45:33.079 --> 00:45:37.200
+wanted to use in an actual system that
+
+00:45:35.079 --> 00:45:39.599
+was directly conversing with users I
+
+00:45:37.200 --> 00:45:41.920
+would probably pick this one over
+
+00:45:39.599 --> 00:45:43.760
+something like uh Mistral even though
+
+00:45:41.920 --> 00:45:46.599
+Mistral shows superior performance some
+
+00:45:43.760 --> 00:45:48.680
+of the time um it might say things that
+
+00:45:46.599 --> 00:45:52.000
+you don't want it to be saying to like
+
+00:45:48.680 --> 00:45:55.520
+users so I think that's one of the uh
+
+00:45:52.000 --> 00:45:56.880
+the nice things about Llama 2 so I've been
+
+00:45:55.520 --> 00:45:58.280
+comparing everything else to it so
+
+00:45:56.880 --> 00:46:00.560
+that's pretty normal
+
+00:45:58.280 --> 00:46:03.160
+um one thing about the data is the data
+
+00:46:00.560 --> 00:46:04.520
+is not open they didn't say what data
+
+00:46:03.160 --> 00:46:06.960
+they trained on for reasons that I
+
+00:46:04.520 --> 00:46:08.960
+talked about before um what they did say
+
+00:46:06.960 --> 00:46:12.400
+is it was trained on public sources
+
+00:46:08.960 --> 00:46:14.240
+upsampling the most factual sources so
+
+00:46:12.400 --> 00:46:17.640
+um that's what they
+
+00:46:14.240 --> 00:46:19.240
+said the Llama 1 paper has more
+
+00:46:17.640 --> 00:46:20.760
+information and so I'll talk about what
+
+00:46:19.240 --> 00:46:22.400
+they did in the Llama 1 paper and we
+
+00:46:20.760 --> 00:46:24.920
+can maybe extrapolate that they did
+
+00:46:22.400 --> 00:46:26.560
+something similar in the Llama 2 paper
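+
+(Going back to the learning-rate schedules mentioned a moment ago, here is
+a minimal sketch of linear warmup plus cosine decay down to a floor of
+one-tenth of the peak; all the constants are illustrative assumptions:)
+
+import math
+
+def lr_at_step(step, max_steps, peak_lr=3e-4, warmup=2000, floor_ratio=0.1):
+    # Linear warmup to peak_lr, then cosine decay to floor_ratio * peak_lr.
+    if step < warmup:
+        return peak_lr * step / warmup
+    progress = (step - warmup) / max(1, max_steps - warmup)
+    floor = floor_ratio * peak_lr
+    return floor + 0.5 * (peak_lr - floor) * (1 + math.cos(math.pi * progress))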
+
+00:46:24.920 --> 00:46:28.200
+um and then the total training amount is
+
+00:46:26.560 --> 00:46:30.079
+2 trillion tokens so that's actually
+
+00:46:28.200 --> 00:46:32.680
+less
+
+00:46:30.079 --> 00:46:34.520
+than OLMo um so if we look at the Llama 1
+
+00:46:32.680 --> 00:46:36.319
+training data it looks a little bit like
+
+00:46:34.520 --> 00:46:38.839
+it looks very much like the OLMo training
+
+00:46:36.319 --> 00:46:41.200
+data it's Common Crawl C4 GitHub
+
+00:46:38.839 --> 00:46:45.160
+Wikipedia books arXiv Stack
+
+00:46:41.200 --> 00:46:46.400
+Exchange um and one thing you'll notice
+
+00:46:45.160 --> 00:46:49.200
+is that they
+
+00:46:46.400 --> 00:46:51.599
+upsampled uh Wikipedia and books and
+
+00:46:49.200 --> 00:46:53.319
+downsampled GitHub compared
+
+00:46:51.599 --> 00:46:57.000
+to the amount of data that they actually
+
+00:46:53.319 --> 00:47:00.760
+had and so they did 2.4 epochs over
+
+00:46:57.000 --> 00:47:03.040
+Wikipedia 2.2 epochs over books and only
+
+00:47:00.760 --> 00:47:05.880
+one epoch over like the standard web
+
+00:47:03.040 --> 00:47:08.240
+data and arXiv and Stack Exchange and
+
+00:47:05.880 --> 00:47:09.760
+0.6 epochs over the GitHub data that they
+
+00:47:08.240 --> 00:47:11.520
+had so
+
+00:47:09.760 --> 00:47:13.800
+obviously
+
+00:47:11.520 --> 00:47:15.520
+they thought that this Wikipedia and
+
+00:47:13.800 --> 00:47:17.040
+books data was more valuable for some
+
+00:47:15.520 --> 00:47:20.560
+reason and they really wanted the model
+
+00:47:17.040 --> 00:47:22.319
+to learn well on it so I think um
+
+00:47:20.560 --> 00:47:24.240
+when they say that they upsampled
+
+00:47:22.319 --> 00:47:27.960
+factual data I'm assuming that that's
+
+00:47:24.240 --> 00:47:27.960
+also what they did in Llama 2
+
+00:47:29.440 --> 00:47:33.640
+so the next thing um that's
+
+00:47:35.960 --> 00:47:43.160
+yeah uh [inaudible question about the epochs]
+
+00:47:40.280 --> 00:47:45.400
+oh um yeah actually that's a really
+
+00:47:43.160 --> 00:47:47.960
+good question so why are the epochs not integer
+
+00:47:45.400 --> 00:47:50.240
+values there's actually no reason at all
+
+00:47:47.960 --> 00:47:52.040
+that you should do you know an integer
+
+00:47:50.240 --> 00:47:54.760
+number of epochs you can always save out a
+
+00:47:52.040 --> 00:47:57.560
+checkpoint every you know 10,000 steps
+
+00:47:54.760 --> 00:47:59.200
+or something so I'd actually encourage
+
+00:47:57.560 --> 00:48:02.040
+people to get away from saving out
+
+00:47:59.200 --> 00:48:03.640
+checkpoints every epoch because that
+
+00:48:02.040 --> 00:48:05.319
+kind of discourages you from making your
+
+00:48:03.640 --> 00:48:07.160
+training data larger because if you make
+
+00:48:05.319 --> 00:48:09.359
+your training data larger
+
+00:48:07.160 --> 00:48:11.760
+you'll think oh training takes forever
+
+00:48:09.359 --> 00:48:13.480
+um because it takes forever to do an
+
+00:48:11.760 --> 00:48:16.599
+epoch but in reality you can just save
+
+00:48:13.480 --> 00:48:18.760
+out you know periodically and um
+
+00:48:16.599 --> 00:48:21.319
+keep the checkpoints from earlier
+
+00:48:18.760 --> 00:48:22.680
+so many language models don't train on
+
+00:48:21.319 --> 00:48:24.480
+all the data on the web because it would
+
+00:48:22.680 --> 00:48:25.800
+just be too expensive to do so despite
+
+00:48:24.480 --> 00:48:27.640
+the fact that they have all the data on
+
+00:48:25.800 --> 00:48:29.079
+the web
+
+00:48:27.640 --> 00:48:31.000
+but very good question though
+
+00:48:29.079 --> 00:48:34.560
+that's an important point
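+
+Saving by step count rather than by epoch is a one-line change in a
+training loop; a toy sketch (the model, data, and interval are stand-ins):
+
+import torch
+from torch import nn
+
+model = nn.Linear(16, 16)
+optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
+save_every = 1000  # checkpoint every N optimizer steps, not every epoch
+
+for step in range(10000):
+    x = torch.randn(8, 16)
+    loss = (model(x) - x).pow(2).mean()
+    loss.backward()
+    optimizer.step()
+    optimizer.zero_grad()
+    if step > 0 and step % save_every == 0:
+        torch.save({"step": step, "model": model.state_dict()},
+                   f"ckpt_step{step}.pt")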
+
+00:48:31.000 --> 00:48:36.280
+um okay so now I'd like to talk a
+
+00:48:34.560 --> 00:48:39.440
+little bit about the safety tuning that
+
+00:48:36.280 --> 00:48:42.359
+goes into uh the Llama models I might
+
+00:48:39.440 --> 00:48:45.640
+talk a little bit more about this um
+
+00:48:42.359 --> 00:48:48.960
+later but I think uh I'll talk
+
+00:48:45.640 --> 00:48:51.480
+about it now um basically the Llama 2
+
+00:48:48.960 --> 00:48:54.200
+developers put a lot of effort into
+
+00:48:51.480 --> 00:48:56.400
+training the model to be safe because um
+
+00:48:54.200 --> 00:48:59.599
+you know they're a big company and they
+
+00:48:56.400 --> 00:49:01.200
+don't want any PR disasters um uh
+
+00:48:59.599 --> 00:49:02.680
+and also you know they want an actual
+
+00:49:01.200 --> 00:49:04.960
+safe model that they can use in
+
+00:49:02.680 --> 00:49:08.240
+their products so I think they have the
+
+00:49:04.960 --> 00:49:10.880
+dual uh you know dual motivation
+
+00:49:08.240 --> 00:49:13.200
+there the first thing that they did was
+
+00:49:10.880 --> 00:49:15.960
+they collected lots of data for reward
+
+00:49:13.200 --> 00:49:17.520
+modeling and what they're calling reward
+
+00:49:15.960 --> 00:49:19.720
+modeling
+
+00:49:17.520 --> 00:49:23.720
+is basically preference modeling so they
+
+00:49:19.720 --> 00:49:26.359
+have you know multiple outputs where the
+
+00:49:23.720 --> 00:49:28.359
+two outputs are somehow ranked for
+
+00:49:26.359 --> 00:49:29.960
+preferences and I talked about this when
+
+00:49:28.359 --> 00:49:31.839
+I was talking about DPO in the
+
+00:49:29.960 --> 00:49:35.720
+reinforcement learning class for
+
+00:49:31.839 --> 00:49:38.480
+example um a lot of these actually exist
+
+00:49:35.720 --> 00:49:41.920
+so there's um like the Anthropic helpful
+
+00:49:38.480 --> 00:49:45.599
+and harmless data sets uh these OpenAI
+
+00:49:41.920 --> 00:49:48.200
+data sets uh from WebGPT Stack Exchange
+
+00:49:45.599 --> 00:49:50.160
+on Stack Exchange they have um helpful
+
+00:49:48.200 --> 00:49:52.240
+answers and not helpful answers so ones
+
+00:49:50.160 --> 00:49:57.720
+that you give thumbs up and thumbs down
+
+00:49:52.240 --> 00:49:59.839
+to and um the Stanford Human
+
+00:49:57.720 --> 00:50:03.040
+Preferences data set uh
+
+00:49:59.839 --> 00:50:05.800
+SHP the human preferences data set
+
+00:50:03.040 --> 00:50:09.400
+basically this is um where they tried to
+
+00:50:05.800 --> 00:50:11.599
+find Reddit posts I think Reddit posts
+
+00:50:09.400 --> 00:50:13.720
+that got more upvotes despite the fact
+
+00:50:11.599 --> 00:50:16.400
+that they were posted later than a
+
+00:50:13.720 --> 00:50:18.720
+previous one so the idea is like usually
+
+00:50:16.400 --> 00:50:21.359
+the first posts get more upvotes
+
+00:50:18.720 --> 00:50:22.880
+so if you get more upvotes for a later
+
+00:50:21.359 --> 00:50:25.240
+post that indicates that it's probably
+
+00:50:22.880 --> 00:50:27.640
+more valuable than the earlier post so
+
+00:50:25.240 --> 00:50:30.880
+kind of a clever uh clever way of creating
+
+00:50:27.640 --> 00:50:36.240
+data um I'm actually not sure what the
+
+00:50:30.880 --> 00:50:36.240
+Synthetic GPT-J data was I didn't look at that
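+
+Several of these preference datasets are directly downloadable; for
+example, the Anthropic helpful/harmless data (a sketch, assuming the
+datasets library and the "Anthropic/hh-rlhf" Hub repository):
+
+from datasets import load_dataset
+
+hh = load_dataset("Anthropic/hh-rlhf", split="train")
+pair = hh[0]
+print(pair["chosen"][:200])    # the preferred conversation
+print(pair["rejected"][:200])  # the dispreferred conversation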
+
+00:40:33.680 --> 00:40:37.640
+and then separately from that um Meta
+
+00:40:36.240 --> 00:40:39.599
+collected a very large amount of
+
+00:40:37.640 --> 00:40:42.400
+internal data that they didn't release
+
+00:40:39.599 --> 00:40:44.319
+uh for tuning Llama and they did this
+
+00:40:42.400 --> 00:40:46.760
+through various iterations so basically
+
+00:40:44.319 --> 00:40:49.839
+what they did is they created a first
+
+00:40:46.760 --> 00:40:53.240
+version of the model um they let it
+
+00:40:49.839 --> 00:40:55.599
+loose on users they also did some uh
+
+00:40:53.240 --> 00:40:56.960
+data collection with uh people who
+
+00:40:55.599 --> 00:40:59.720
+were actually trying to break the model
+
+00:40:56.960 --> 00:41:01.200
+and get it to say bad things
+
+00:40:59.720 --> 00:41:02.760
+they collected preference data from
+
+00:41:01.200 --> 00:41:04.599
+these people and then they iterated over
+
+00:41:02.760 --> 00:41:06.960
+and over again to collect more and more
+
+00:41:04.599 --> 00:41:09.720
+of this data on various uh versions of
+
+00:41:06.960 --> 00:41:11.280
+the model so as the model gets
+
+00:41:09.720 --> 00:41:14.079
+better you know it's going to be harder
+
+00:41:11.280 --> 00:41:16.240
+to collect this data but um they want to
+
+00:41:14.079 --> 00:41:17.920
+try to improve the current model that
+
+00:41:16.240 --> 00:41:20.599
+they
+
+00:41:17.920 --> 00:41:22.680
+have so the next step that they did was
+
+00:41:20.599 --> 00:41:26.079
+they trained a model to follow these
+
+00:41:22.680 --> 00:41:27.920
+preferences and so they trained a model
+
+00:41:26.079 --> 00:41:32.560
+that basically can predict human
+
+00:41:27.920 --> 00:41:35.119
+preference given two uh language
+
+00:41:32.560 --> 00:41:37.680
+model outputs and this is a hard problem
+
+00:41:35.119 --> 00:41:40.440
+right because these are language model
+
+00:41:37.680 --> 00:41:42.760
+outputs and the language model thought
+
+00:41:40.440 --> 00:41:45.480
+it was a good output regardless because
+
+00:41:42.760 --> 00:41:47.319
+otherwise it wouldn't have sampled it and so
+
+00:41:45.480 --> 00:41:49.720
+you need to distinguish between two very
+
+00:41:47.319 --> 00:41:52.240
+fluent-looking outputs where one is
+
+00:41:49.720 --> 00:41:56.880
+preferred and one is not preferred so
+
+00:41:52.240 --> 00:41:58.359
+even kind of strong models struggle um oh by
+
+00:41:56.880 --> 00:42:00.319
+the way there are some open reward
+
+00:41:58.359 --> 00:42:02.119
+models like this Open Assistant reward
+
+00:42:00.319 --> 00:42:03.839
+model is publicly available and you can
+
+00:42:02.119 --> 00:42:08.520
+just go and download it if
+
+00:42:03.839 --> 00:42:10.920
+you want it um and if you evaluate
+
+00:42:08.520 --> 00:42:14.720
+it on this Anthropic uh helpful and
+
+00:42:10.920 --> 00:42:16.160
+harmless data set um this gets about 67
+
+00:42:14.720 --> 00:42:18.760
+or 68%
+
+00:42:16.160 --> 00:42:24.680
+accuracy
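+
+That Open Assistant reward model can be scored directly; a sketch
+following the usage shown on its model card (the question and answer
+strings here are made up):
+
+from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+name = "OpenAssistant/reward-model-deberta-v3-large-v2"
+tokenizer = AutoTokenizer.from_pretrained(name)
+reward_model = AutoModelForSequenceClassification.from_pretrained(name)
+
+question = "Can you explain what a reward model does?"
+answer = "It assigns a score to a response so preferred ones rank higher."
+inputs = tokenizer(question, answer, return_tensors="pt")
+score = reward_model(**inputs).logits[0].item()  # higher = judged better
+print(score)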
+
+00:42:18.760 --> 00:42:27.200
+um but if you evaluate it on um this
+
+00:42:24.680 --> 00:42:29.480
+like Open Assistant data set or sorry if
+
+00:42:27.200 --> 00:42:33.359
+you evaluate the public models including
+
+00:42:29.480 --> 00:42:36.079
+GPT-4 on the Meta data set actually it's
+
+00:42:33.359 --> 00:42:38.720
+pretty hard for them to distinguish
+
+00:42:36.079 --> 00:42:41.319
+between the things and here they're
+
+00:42:38.720 --> 00:42:44.720
+evaluating both helpful and harmless or
+
+00:42:41.319 --> 00:42:47.400
+helpfulness and safety and the reason why is
+
+00:42:44.720 --> 00:42:49.119
+because like it's very easy to create a
+
+00:42:47.400 --> 00:42:51.119
+very safe but not helpful at all model
+
+00:42:49.119 --> 00:42:53.640
+by saying I don't know all the time
+
+00:42:51.119 --> 00:42:55.480
+and it's relatively easy to create a
+
+00:42:53.640 --> 00:42:57.880
+helpful model that's very unsafe like it
+
+00:42:55.480 --> 00:42:59.480
+will do anything you want and so they
+
+00:42:57.880 --> 00:43:01.599
+want a balance between the two and they
+
+00:42:59.480 --> 00:43:03.480
+evaluate them separately they also
+
+00:43:01.599 --> 00:43:05.280
+created two separate reward
+
+00:43:03.480 --> 00:43:07.880
+models so they created one reward model
+
+00:43:05.280 --> 00:43:10.079
+to distinguish safety and another reward
+
+00:43:07.880 --> 00:43:13.440
+model to distinguish helpfulness and
+
+00:43:10.079 --> 00:43:14.760
+they used these separately to uh to train
+
+00:43:13.440 --> 00:43:17.359
+the model and you can see that the
+
+00:43:14.760 --> 00:43:18.920
+helpfulness model does a lot better on
+
+00:43:17.359 --> 00:43:20.640
+discriminating between helpful things
+
+00:43:18.920 --> 00:43:22.319
+and the safety model does a little better
+
+00:43:20.640 --> 00:43:23.760
+on discriminating between safe and
+
+00:43:22.319 --> 00:43:25.960
+unsafe
+
+00:43:23.760 --> 00:43:28.480
+things um
+
+00:43:25.960 --> 00:43:29.920
+actually I didn't include this in the
+
+00:43:28.480 --> 00:43:33.640
+slides but they also have an interesting
+
+00:43:29.920 --> 00:43:35.400
+graph that
+
+00:43:33.640 --> 00:43:38.920
+demonstrates um how good the reward
+
+00:43:35.400 --> 00:43:41.119
+models are based on their size and it
+
+00:43:38.920 --> 00:43:42.640
+turns out that this is a place where
+
+00:43:41.119 --> 00:43:44.359
+it's really really important to use a
+
+00:43:42.640 --> 00:43:47.559
+large and powerful language model to
+
+00:43:44.359 --> 00:43:49.760
+determine your reward because they
+
+00:43:47.559 --> 00:43:51.319
+demonstrate that the 70 billion
+
+00:43:49.760 --> 00:43:52.680
+parameter model that they used is
+
+00:43:51.319 --> 00:43:55.280
+actually far better than the um than the
+
+00:43:52.680 --> 00:43:57.359
+smaller models that they used at
+
+00:43:55.280 --> 00:44:00.079
+predicting this
+
+00:43:57.359 --> 00:44:00.079
+reward
+
+00:44:01.359 --> 00:44:07.760
+so this is um a graph of their
+
+00:44:05.200 --> 00:44:10.480
+incremental training process for safety
+
+00:44:07.760 --> 00:44:12.640
+tuning and um you can see they have
+
+00:44:10.480 --> 00:44:15.920
+their first supervised fine-tuned model
+
+00:44:12.640 --> 00:44:19.440
+this is with no um like RL or anything
+
+00:44:15.920 --> 00:44:22.240
+like this then this is a second model
+
+00:44:19.440 --> 00:44:24.760
+um and uh it improves a lot with respect
+
+00:44:22.240 --> 00:44:28.119
+to helpfulness and then they do more and
+
+00:44:24.760 --> 00:44:30.400
+more RLHF uh where they start with the
+
+00:44:28.119 --> 00:44:33.200
+like supervised fine-tuned model and
+
+00:44:30.400 --> 00:44:36.079
+gradually um add more reward data
+
+00:44:33.200 --> 00:44:38.200
+train with a better reward model and get
+
+00:44:36.079 --> 00:44:39.800
+to the end where they finally have the
+
+00:44:38.200 --> 00:44:41.359
+best model and I believe this is
+the one that they actually released
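+
+For reference, reward models like the two just described are typically
+trained with a pairwise objective that pushes the preferred output's score
+above the rejected one's (the Llama 2 paper adds a margin term on top of
+this basic form); a minimal sketch:
+
+import torch
+import torch.nn.functional as F
+
+def preference_loss(reward_chosen, reward_rejected):
+    # Bradley-Terry style loss: -log sigmoid(r_chosen - r_rejected).
+    return -F.logsigmoid(reward_chosen - reward_rejected).mean()
+
+print(preference_loss(torch.tensor([1.2]), torch.tensor([0.3])))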
+
+00:44:41.359 --> 00:44:45.000
+so you can see that they really put a lot
+
+00:44:43.200 --> 00:44:46.520
+of effort into making this model you
+
+00:44:45.000 --> 00:44:49.800
+know safe and that's one of the main
+
+00:44:46.520 --> 00:44:49.800
+points of the paper that they had
+
+00:44:51.319 --> 00:44:57.920
+here um another interesting part of the
+
+00:44:55.119 --> 00:45:02.319
+Llama 2 paper is how they got it to
+
+00:44:57.920 --> 00:45:05.280
+follow chat instructions and so um I
+
+00:45:02.319 --> 00:45:06.640
+think you're all familiar from the class
+
+00:45:05.280 --> 00:45:10.040
+where I talked about
+
+00:45:06.640 --> 00:45:13.000
+prompting where basically they um
+
+00:45:10.040 --> 00:45:16.119
+prompt the language model using a system
+
+00:45:13.000 --> 00:45:20.359
+message and um a user message and an
+
+00:45:16.119 --> 00:45:23.160
+assistant message and so um the
+
+00:45:20.359 --> 00:45:25.000
+characteristic of the system message is
+
+00:45:23.160 --> 00:45:28.240
+this is something that you want to be
+
+00:45:25.000 --> 00:45:32.319
+obeyed throughout the um entire
+
+00:45:28.240 --> 00:45:34.599
+conversation right and
+
+00:45:32.319 --> 00:45:36.760
+so in order to get this obeyed
+
+00:45:34.599 --> 00:45:38.079
+throughout the entire conversation you
+
+00:45:36.760 --> 00:45:39.760
+need a model that's good at
+
+00:45:38.079 --> 00:45:40.760
+paying particular attention to
+
+00:45:39.760 --> 00:45:43.160
+the system
+
+00:45:40.760 --> 00:45:45.319
+message um in this example I'm saying
+
+00:45:43.160 --> 00:45:46.880
+write in only emojis so you know no matter
+
+00:45:45.319 --> 00:45:48.720
+how long this conversation gets you want
+
+00:45:46.880 --> 00:45:50.599
+your model to continue writing in emojis
+
+00:45:48.720 --> 00:45:53.440
+and models don't do this
+
+00:45:50.599 --> 00:45:56.559
+spontaneously so what they did here and
+
+00:45:53.440 --> 00:45:58.359
+I'm 90% 95% certain that my
+
+00:45:56.559 --> 00:45:59.800
+interpretation of the paper is correct the
+
+00:45:58.359 --> 00:46:03.319
+paper is a little bit hard to understand
+
+00:45:59.800 --> 00:46:06.720
+with respect to this but um what
+
+00:46:03.319 --> 00:46:10.480
+I think they do is they take the
+
+00:46:06.720 --> 00:46:13.200
+system message and then they have a data
+
+00:46:10.480 --> 00:46:16.160
+generation step where they
+
+00:46:13.200 --> 00:46:19.079
+basically ask an existing model to write
+
+00:46:16.160 --> 00:46:21.400
+in only emojis and then say hello and
+
+00:46:19.079 --> 00:46:23.640
+then the model generates something and
+
+00:46:21.400 --> 00:46:26.599
+then they say again write in only emojis
+
+00:46:23.640 --> 00:46:28.440
+how are you doing and then they uh they
+
+00:46:26.599 --> 00:46:29.599
+generate it again and because this is so
+
+00:46:28.440 --> 00:46:32.680
+close in the
+
+00:46:29.599 --> 00:46:35.440
+context um the assistant basically will
+
+00:46:32.680 --> 00:46:36.760
+you know continue paying
+
+00:46:35.440 --> 00:46:39.119
+attention to these
+
+00:46:36.760 --> 00:46:40.599
+directions um and then after that now
+
+00:46:39.119 --> 00:46:42.640
+you have a data set that you can train
+
+00:46:40.599 --> 00:46:44.280
+your model on you can train your model
+
+00:46:42.640 --> 00:46:46.880
+on this generated data set that looks
+
+00:46:44.280 --> 00:46:49.079
+like write in only emojis say hello uh
+
+00:46:46.880 --> 00:46:50.480
+how are you doing and stuff like this
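+
+With a chat-tuned checkpoint, the system message is just the first turn in
+the chat template; a sketch, assuming you have access to the gated Llama 2
+chat weights on the Hub:
+
+from transformers import AutoTokenizer
+
+tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
+messages = [
+    {"role": "system", "content": "Write in only emojis."},
+    {"role": "user", "content": "Hello!"},
+]
+# Renders the <<SYS>> ... <</SYS>> wrapping that Llama 2 chat expects.
+prompt = tokenizer.apply_chat_template(messages, tokenize=False,
+                                       add_generation_prompt=True)
+print(prompt)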
+
+00:46:49.079 --> 00:46:54.040
+and they try this with a whole bunch of
+
+00:46:50.480 --> 00:46:57.880
+rules it's like write um write as if
+
+00:46:54.040 --> 00:47:00.559
+you're explaining to a 5-year-old or um
+
+00:46:57.880 --> 00:47:02.720
+write in a very polite manner write in a
+
+00:47:00.559 --> 00:47:03.960
+very informal manner and stuff like that
+
+00:47:02.720 --> 00:47:06.480
+so they generate a whole bunch of this
+
+00:47:03.960 --> 00:47:08.480
+synthetic data and in doing this they
+
+00:47:06.480 --> 00:47:09.960
+basically are able to train the model to
+
+00:47:08.480 --> 00:47:11.559
+pay very close attention to the system
+
+00:47:09.960 --> 00:47:13.480
+message because it needs to do so in
+
+00:47:11.559 --> 00:47:17.319
+order to do
+
+00:47:13.480 --> 00:47:19.160
+better so um yeah these are kind of the
+
+00:47:17.319 --> 00:47:20.599
+unique characteristics of Llama 2 I'd
+
+00:47:19.160 --> 00:47:21.960
+love to tell you more about its training
+
+00:47:20.599 --> 00:47:24.520
+data and all that other stuff but they
+
+00:47:21.960 --> 00:47:26.240
+didn't tell us uh like what they did
+
+00:47:24.520 --> 00:47:28.839
+with respect to that so we'll just have
+
+00:47:26.240 --> 00:47:28.839
+to infer
+
+00:47:28.960 --> 00:47:33.559
+cool uh any questions about
+
+00:47:33.800 --> 00:47:39.160
+this okay
+
+00:47:36.640 --> 00:47:40.839
+so next I want to go into Mistral and
+
+00:47:39.160 --> 00:47:42.599
+Mixtral this is going to be a little bit
+
+00:47:40.839 --> 00:47:44.200
+short because I've kind of covered some
+
+00:47:42.599 --> 00:47:45.720
+of the stuff already and also they
+
+00:47:44.200 --> 00:47:48.240
+didn't tell you very much about the
+
+00:47:45.720 --> 00:47:52.240
+training process um basically it was
+
+00:47:48.240 --> 00:47:54.079
+created by Mistral AI the company and
+
+00:47:52.240 --> 00:47:56.839
+it's a strong and somewhat multilingual
+
+00:47:54.079 --> 00:47:59.400
+open language model um it has some
+
+00:47:56.839 --> 00:48:01.760
+unique features like speed optimizations
+
+00:47:59.400 --> 00:48:03.200
+um including grouped query attention
+
+00:48:01.760 --> 00:48:06.200
+and mixture of
+
+00:48:03.200 --> 00:48:06.200
+experts
+
+00:48:06.599 --> 00:48:12.359
+um unlike the other ones it
+
+00:48:10.599 --> 00:48:14.599
+makes some actual architectural
+
+00:48:12.359 --> 00:48:17.599
+modifications including sliding window
+
+00:48:14.599 --> 00:48:19.160
+attention and um mixture of experts and
+
+00:48:17.599 --> 00:48:21.079
+I have actually talked about both of
+
+00:48:19.160 --> 00:48:23.640
+them so I'll just very briefly go
+
+00:48:21.079 --> 00:48:26.040
+through them here um the data as far as
+
+00:48:23.640 --> 00:48:28.559
+I could tell was not disclosed uh very
+
+00:48:26.040 --> 00:48:30.480
+completely but one important thing is it
+
+00:48:28.559 --> 00:48:32.160
+includes English and European languages
+
+00:48:30.480 --> 00:48:35.520
+so at least theoretically it should be
+
+00:48:32.160 --> 00:48:38.040
+better than Llama at this um one
+
+00:48:35.520 --> 00:48:39.559
+interesting thing about Llama is
+
+00:48:38.040 --> 00:48:40.680
+if I remember correctly the actual
+
+00:48:39.559 --> 00:48:42.880
+numbers are in the paper but it's
+
+00:48:40.680 --> 00:48:47.920
+something like 85%
+
+00:48:42.880 --> 00:48:52.400
+English um 8% code and then like
+
+00:48:47.920 --> 00:48:54.559
+0.3% other languages like um counting
+
+00:48:52.400 --> 00:48:57.280
+all the other languages it's like 0.3%
+
+00:48:54.559 --> 00:48:59.680
+so it's not very multilingual at all
+
+00:48:57.280 --> 00:49:01.319
+um and they were really only aiming to
+
+00:48:59.680 --> 00:49:04.799
+create a good uh English
+
+00:49:01.319 --> 00:49:06.200
+model um also the training uh details
+
+00:49:04.799 --> 00:49:08.280
+were not disclosed here like I wasn't
+
+00:49:06.200 --> 00:49:12.400
+able to find the batch size as far as I
+
+00:49:08.280 --> 00:49:15.119
+know um so Mistral uses sliding window
+
+00:49:12.400 --> 00:49:18.200
+attention with vanilla attention basically
+
+00:49:15.119 --> 00:49:21.440
+you always attend to all of the previous
+
+00:49:18.200 --> 00:49:24.880
+things in the sequence what Mistral does
+
+00:49:21.440 --> 00:49:28.119
+is it attends to the previous n um
+
+00:49:24.880 --> 00:49:30.559
+tokens where n is equal to 4096 and
+
+00:49:28.119 --> 00:49:34.839
+because of this uh what this means is
+
+00:49:30.559 --> 00:49:37.200
+you can attend uh 4096 back and then in
+
+00:49:34.839 --> 00:49:39.280
+the next layer you can attend 4096 back
+
+00:49:37.200 --> 00:49:41.599
+then you can attend 4096 back so
+
+00:49:39.280 --> 00:49:44.400
+basically as many layers as you have
+
+00:49:41.599 --> 00:49:47.240
+times 4096 you can attend that many
+
+00:49:44.400 --> 00:49:49.000
+tokens back for a minimal training
+
+00:49:47.240 --> 00:49:50.760
+penalty because still the length of
+
+00:49:49.000 --> 00:49:55.079
+attention for any particular token is
+
+00:49:50.760 --> 00:49:57.440
+the same uh so that's one
+
+00:49:55.079 --> 00:50:00.400
+feature oh and then yeah sorry the other
+
+00:49:57.440 --> 00:50:01.920
+feature is Mixtral is using um a
+
+00:50:00.400 --> 00:50:05.920
+mixture of experts like we talked about
+
+00:50:01.920 --> 00:50:07.720
+last time so um despite these
+
+00:50:05.920 --> 00:50:09.520
+uh these are very strong models they're
+
+00:50:07.720 --> 00:50:12.960
+generally stronger than Llama at a lot
+
+00:50:09.520 --> 00:50:15.480
+of things um and Mixtral is actually a lot
+
+00:50:12.960 --> 00:50:18.200
+faster and easier to deploy than Llama
+
+00:50:15.480 --> 00:50:20.680
+70B uh it's smaller it only has 45
+
+00:50:18.200 --> 00:50:23.680
+billion parameters so it's definitely a
+
+00:50:20.680 --> 00:50:26.680
+good choice if you want to use it yeah
+
+00:50:23.680 --> 00:50:26.680
+[inaudible question]
+
+00:50:28.720 --> 00:50:33.000
+yeah so it's attending to 4096
+
+00:50:33.520 --> 00:50:39.559
+so the context size
+
+00:50:37.720 --> 00:50:43.240
+typically like let's say you have a
+
+00:50:39.559 --> 00:50:45.240
+block of 4096 tokens here typically that
+
+00:50:43.240 --> 00:50:48.079
+means that the first token attends to
+
+00:50:45.240 --> 00:50:51.200
+zero tokens the second token attends to
+
+00:50:48.079 --> 00:50:54.640
+one token and the third token attends to
+
+00:50:51.200 --> 00:50:58.920
+two tokens here this is maybe a little
+
+00:50:54.640 --> 00:51:01.680
+bit uh misleading I guess but if your
+
+00:50:58.920 --> 00:51:04.079
+context length is 4096 you actually get
+
+00:51:01.680 --> 00:51:07.760
+a block of twice that size you get a
+
+00:51:04.079 --> 00:51:10.960
+block of 8192 tokens and so the first
+
+00:51:07.760 --> 00:51:15.839
+one attends to all of the previous
+ones
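+
+A sliding-window attention mask is easy to write down explicitly; a small
+sketch (a window of 3 over 8 tokens so the pattern is visible):
+
+import torch
+
+def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
+    # True = attention allowed: causal, but each query position i only
+    # sees the last `window` positions j with i - window < j <= i.
+    i = torch.arange(seq_len).unsqueeze(1)
+    j = torch.arange(seq_len).unsqueeze(0)
+    return (j <= i) & (j > i - window)
+
+print(sliding_window_mask(8, 3).int())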
+cool um so the next one I'd like to
+
+00:52:22.000 --> 00:52:26.559
+talk about is Qwen this is one that in
+
+00:52:24.599 --> 00:52:29.040
+the US at least people maybe pay a a
+
+00:52:26.559 --> 00:52:33.000
+little bit less attention to um but it
+
+00:52:29.040 --> 00:52:35.680
+was created by Alibaba and it's a strong
+
+00:52:33.000 --> 00:52:37.559
+um multilingual model especially English
+
+00:52:35.680 --> 00:52:39.119
+and Chinese but even uh in other
+
+00:52:37.559 --> 00:52:41.000
+languages as
+
+00:52:39.119 --> 00:52:43.480
+well
+
+00:52:41.000 --> 00:52:45.160
+and uh one of its defining
+
+00:52:43.480 --> 00:52:48.240
+characteristics other than just being a
+
+00:52:45.160 --> 00:52:50.160
+strong model overall is that it has a
+
+00:52:48.240 --> 00:52:51.799
+large vocabulary for multilingual
+
+00:52:50.160 --> 00:52:56.000
+support and strong
+
+00:52:51.799 --> 00:52:58.760
+performance um it comes in several sizes
+
+00:52:56.000 --> 00:53:01.880
+um I
+
+00:52:58.760 --> 00:53:04.799
+believe uh there's a 7B version and then
+
+00:53:01.880 --> 00:53:10.119
+there's also like a large like 70B
+
+00:53:04.799 --> 00:53:13.480
+version 72B I think and it's using very
+
+00:53:10.119 --> 00:53:15.319
+standard uh architecture things the only
+
+00:53:13.480 --> 00:53:18.119
+small difference it has is it has a bias
+
+00:53:15.319 --> 00:53:19.920
+in the attention layer which doesn't
+
+00:53:18.119 --> 00:53:23.559
+uh exist in
+
+00:53:19.920 --> 00:53:25.880
+Llama um an important thing is it's
+
+00:53:23.559 --> 00:53:28.920
+actually trained on multilingual data
+
+00:53:25.880 --> 00:53:32.720
+and they use a large vocabulary um they
+
+00:53:28.920 --> 00:53:33.839
+use a vocabulary of 150k in contrast to
+
+00:53:32.720 --> 00:53:36.599
+Llama's
+
+00:53:33.839 --> 00:53:39.839
+32k and that allows it to handle
+
+00:53:36.599 --> 00:53:41.720
+multilingual uh data relatively
+
+00:53:39.839 --> 00:53:47.079
+well
+
+00:53:41.720 --> 00:53:49.359
+and um we have the three uh similar you
+
+00:53:47.079 --> 00:53:52.760
+know training regimes so overall it's
+
+00:53:49.359 --> 00:53:55.559
+not very different from uh
+
+00:53:52.760 --> 00:53:57.040
+Llama what might be different is data
+
+00:53:55.559 --> 00:53:59.319
+engineering
+
+00:53:57.040 --> 00:54:00.680
+uh and actually I I expect the data
+
+00:53:59.319 --> 00:54:02.760
+engineering part is a bit different
+
+00:54:00.680 --> 00:54:06.400
+because overall it's a bit stronger than
+
+00:54:02.760 --> 00:54:09.920
+Llama 2 um and I I think uh that has to
+
+00:54:06.400 --> 00:54:12.119
+do with data in in various areas
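+As a minimal sketch of the attention-bias difference mentioned above: in
+PyTorch terms the contrast is just the bias flag on the attention
+projections. The layer names here are illustrative, not the models' actual
+module names:
+
+    import torch.nn as nn
+
+    d_model = 4096  # illustrative size
+
+    # Llama-style QKV projection: no bias term in attention
+    qkv_llama = nn.Linear(d_model, 3 * d_model, bias=False)
+
+    # Qwen-style QKV projection: bias enabled in attention
+    qkv_qwen = nn.Linear(d_model, 3 * d_model, bias=True)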
+00:54:09.920 --> 00:54:16.920
+One interesting piece from the paper that
+
+00:54:12.119 --> 00:54:18.280
+they have is uh if we think all the way
+
+00:54:16.920 --> 00:54:21.720
+back to when we talked about word and
+
+00:54:18.280 --> 00:54:23.839
+subword models and word tokenization we
+
+00:54:21.720 --> 00:54:27.760
+remember that subword models split up
+
+00:54:23.839 --> 00:54:29.920
+the input and they split up the input uh
+
+00:54:27.760 --> 00:54:31.799
+so that frequent words get longer
+
+00:54:29.920 --> 00:54:34.520
+tokens and infrequent words get
+
+00:54:31.799 --> 00:54:36.359
+shorter tokens so one of the problems
+
+00:54:34.520 --> 00:54:40.559
+as I mentioned a long time ago when we
+
+00:54:36.359 --> 00:54:42.040
+covered this topic is this causes issues
+
+00:54:40.559 --> 00:54:43.000
+if you're doing multilingual things
+
+00:54:42.040 --> 00:54:44.880
+because if you have very little
+
+00:54:43.000 --> 00:54:47.520
+multilingual data in your training data
+
+00:54:44.880 --> 00:54:49.040
+for the subword tokenization model um it
+
+00:54:47.520 --> 00:54:51.559
+will end up splitting all of the words
+
+00:54:49.040 --> 00:54:55.680
+into basically characters or even bytes
+
+00:54:51.559 --> 00:54:59.040
+so what this shows here is this is
+
+00:54:55.680 --> 00:55:00.960
+comparing the amount of subword
+
+00:54:59.040 --> 00:55:03.040
+tokenization that happens according to
+
+00:55:00.960 --> 00:55:05.520
+each of the LLMs'
+
+00:55:03.040 --> 00:55:08.599
+tokenizers with another explicitly
+
+00:55:05.520 --> 00:55:10.799
+multilingual model XLM-R so XLM-R is kind
+
+00:55:08.599 --> 00:55:12.760
+of their baseline here with respect to
+
+00:55:10.799 --> 00:55:16.319
+how much it tokenizes each
+
+00:55:12.760 --> 00:55:19.079
+language and on the very left we have
+
+00:55:16.319 --> 00:55:22.839
+Llama and so what we can see is that
+
+00:55:19.079 --> 00:55:26.599
+Llama tokenizes Thai
+
+00:55:22.839 --> 00:55:28.640
+3.7 times as much as XLM-R does so
+
+00:55:26.599 --> 00:55:30.359
+it's basically splitting Thai up
+
+00:55:28.640 --> 00:55:32.480
+into little tiny bits which makes it
+
+00:55:30.359 --> 00:55:35.440
+very expensive and ineffective to
+
+00:55:32.480 --> 00:55:38.039
+process uh let's let's find some other
+
+00:55:35.440 --> 00:55:41.599
+languages that we care about we have
+
+00:55:38.039 --> 00:55:43.760
+Hebrew Arabic
+
+00:55:41.599 --> 00:55:47.079
+Korean uh
+
+00:55:43.760 --> 00:55:49.559
+Japanese uh Chinese so all of these you
+
+00:55:47.079 --> 00:55:52.319
+can see are split up into many
+
+00:55:49.559 --> 00:55:55.440
+many different chunks by
+
+00:55:52.319 --> 00:55:56.799
+Llama and then we we have a few other
+
+00:55:55.440 --> 00:55:58.359
+language models in the middle and then
+
+00:55:56.799 --> 00:56:01.440
+we have Qwen on the right side and what
+
+00:55:58.359 --> 00:56:04.039
+we can see is basically it's pretty
+
+00:56:01.440 --> 00:56:06.400
+comparable to XLM-R maybe a little bit
+
+00:56:04.039 --> 00:56:09.520
+more than XLM-R but pretty comparable to
+
+00:56:06.400 --> 00:56:12.839
+XLM-R on many languages and then on code
+
+00:56:09.520 --> 00:56:15.000
+it actually um splits up code much less
+
+00:56:12.839 --> 00:56:17.039
+so we can see that you know its
+
+00:56:15.000 --> 00:56:18.960
+tokenizer is heavily
+
+00:56:17.039 --> 00:56:22.640
+multilingual
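+The tokenizer comparison above is easy to reproduce in a rough way with
+Hugging Face tokenizers. A hedged sketch, assuming you have access to the
+checkpoints named below (substitute any others; some, like Llama 2, are
+gated, and Qwen requires trusting remote code):
+
+    from transformers import AutoTokenizer
+
+    text = "สวัสดีครับ"  # a short Thai greeting
+
+    for name in ["meta-llama/Llama-2-7b-hf", "Qwen/Qwen-7B",
+                 "xlm-roberta-base"]:
+        tok = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
+        # more tokens for the same text = more expensive, less effective
+        print(name, len(tok.tokenize(text)))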
+00:56:18.960 --> 00:56:24.640
+um another thing I'd like to point out is I'm focusing
+
+00:56:22.640 --> 00:56:27.000
+on this particular language model for a
+
+00:56:24.640 --> 00:56:29.799
+number of reasons
+
+00:56:27.000 --> 00:56:32.440
+um the first one is multilinguality and
+
+00:56:29.799 --> 00:56:36.599
+I I like multilinguality I hope other
+
+00:56:32.440 --> 00:56:39.039
+people like multilinguality too um but
+
+00:56:36.599 --> 00:56:43.799
+another motivation is just it has quite
+
+00:56:39.039 --> 00:56:45.680
+strong performance and it's uh topping
+
+00:56:43.799 --> 00:56:47.960
+topping the leaderboards in in several
+
+00:56:45.680 --> 00:56:52.160
+different uh
+
+00:56:47.960 --> 00:56:57.640
+places so if we look at the open LLM
+
+00:56:52.160 --> 00:56:57.640
+leaderboard um at least recently
+
+00:56:59.480 --> 00:57:07.440
+this was a fine-tuned model by Abacus
+
+00:57:04.240 --> 00:57:09.440
+AI which was uh originally based on Qwen
+
+00:57:07.440 --> 00:57:11.079
+so you can see that this is like a
+
+00:57:09.440 --> 00:57:13.920
+strong foundation model that lots
+
+00:57:11.079 --> 00:57:16.440
+of people are using for fine-tuning things so
+
+00:57:13.920 --> 00:57:18.960
+um I would definitely uh encourage you
+
+00:57:16.440 --> 00:57:20.240
+to take a look at that too of course
+
+00:57:18.960 --> 00:57:22.520
+there's many many different models that
+
+00:57:20.240 --> 00:57:24.880
+I didn't cover because if I covered all
+
+00:57:22.520 --> 00:57:26.839
+of the general purpose models then we'd
+
+00:57:24.880 --> 00:57:29.599
+be here all day but um
+
+00:57:26.839 --> 00:57:31.200
+that's uh a first start so next I want to
+
+00:57:29.599 --> 00:57:33.200
+go into other kind of special purpose
+
+00:57:31.200 --> 00:57:36.839
+models but are there any questions about
+
+00:57:33.200 --> 00:57:36.839
+um about the things I covered so
+
+00:57:38.000 --> 00:57:44.079
+far cool okay
+
+00:57:41.440 --> 00:57:47.960
+um so next I'd like to go into other
+
+00:57:44.079 --> 00:57:49.760
+models um first is code models so code
+
+00:57:47.960 --> 00:57:52.680
+models are models that were specifically
+
+00:57:49.760 --> 00:57:55.280
+trained on code actually right now every
+
+00:57:52.680 --> 00:57:56.960
+model is a code model um like nobody
+
+00:57:55.280 --> 00:57:58.799
+pre-trains a large language model and is
+
+00:57:56.960 --> 00:58:01.720
+serious about it and doesn't train on
+
+00:57:58.799 --> 00:58:04.680
+code because um generating code is a
+
+00:58:01.720 --> 00:58:06.680
+huge use case and also um some work has
+
+00:58:04.680 --> 00:58:08.880
+demonstrated that training on code
+
+00:58:06.680 --> 00:58:13.720
+seems to improve reasoning abilities of
+
+00:58:08.880 --> 00:58:16.160
+language models as well um but uh these
+
+00:58:13.720 --> 00:58:19.319
+models were very heavily trained on code
+
+00:58:16.160 --> 00:58:22.400
+so um we have StarCoder 2 this is a
+
+00:58:19.319 --> 00:58:24.079
+very recent uh entry this is a fully
+
+00:58:22.400 --> 00:58:26.720
+open model so you can see the data it
+
+00:58:24.079 --> 00:58:29.039
+was trained on um all the training
+
+00:58:26.720 --> 00:58:31.640
+details are released and other stuff
+
+00:58:29.039 --> 00:58:36.760
+like that so this is kind of in the
+
+00:58:31.640 --> 00:58:38.599
+Pythia you know fully open category but it's
+
+00:58:36.760 --> 00:58:41.240
+very uh it's actually a very strong
+
+00:58:38.599 --> 00:58:42.839
+model very good model so it's uh a good
+
+00:58:41.240 --> 00:58:46.480
+one to know
+
+00:58:42.839 --> 00:58:48.680
+about
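+If you want to try a code model like this, the usual Hugging Face recipe
+works. A minimal sketch, assuming the checkpoint ID below (check the Hub
+for the exact name and available sizes):
+
+    from transformers import AutoModelForCausalLM, AutoTokenizer
+
+    name = "bigcode/starcoder2-7b"  # assumed checkpoint ID
+    tok = AutoTokenizer.from_pretrained(name)
+    model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")
+
+    prompt = "def fibonacci(n: int) -> int:"
+    inputs = tok(prompt, return_tensors="pt").to(model.device)
+    out = model.generate(**inputs, max_new_tokens=64)
+    print(tok.decode(out[0], skip_special_tokens=True))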
+um separately there's Code Llama
+
+00:58:46.480 --> 00:58:52.520
+by Meta which is a code adaptation of
+
+00:58:48.680 --> 00:58:54.799
+Llama and uh it also gets quite a quite
+
+00:58:52.520 --> 00:58:57.720
+good performance there's also another
+
+00:58:54.799 --> 00:58:59.760
+model uh called DeepSeek Coder I would say
+
+00:58:57.720 --> 00:59:01.720
+all three of these are topping some
+
+00:58:59.760 --> 00:59:03.119
+variety of leaderboard where DeepSeek
+
+00:59:01.720 --> 00:59:04.640
+maybe is topping a few more
+
+00:59:03.119 --> 00:59:06.319
+leaderboards than the other ones are but all
+
+00:59:04.640 --> 00:59:09.960
+of them are very competitive and might
+
+00:59:06.319 --> 00:59:11.680
+be the best in class for code things um
+
+00:59:09.960 --> 00:59:13.119
+I'm not talking very much about these
+
+00:59:11.680 --> 00:59:15.119
+because we're going to have a a class on
+
+00:59:13.119 --> 00:59:18.280
+code generation and code related things
+
+00:59:15.119 --> 00:59:21.000
+later so um I'm not going to go into a
+
+00:59:18.280 --> 00:59:21.000
+lot of detail
+
+00:59:21.319 --> 00:59:27.839
+here another thing is about math models
+
+00:59:24.680 --> 00:59:31.960
+and so like one thing is large language
+
+00:59:27.839 --> 00:59:35.480
+models are not particularly good at math
+
+00:59:31.960 --> 00:59:38.839
+um so there are quite a few models that
+
+00:59:35.480 --> 00:59:40.200
+were trained specifically for math um
+
+00:59:38.839 --> 00:59:45.160
+the first one is
+
+00:59:40.200 --> 00:59:47.280
+Llemma um yes that is a pun um for like
+
+00:59:45.160 --> 00:59:49.920
+Llama plus lemma from
+
+00:59:47.280 --> 00:59:51.160
+math I I'm I'm not responsible for it
+
+00:59:49.920 --> 00:59:55.240
+but I I thought it was kind of funny
+
+00:59:51.160 --> 00:59:56.920
+anyway um so uh this was by Eleuther AI so
+
+00:59:55.240 --> 01:00:00.359
+because this was by Eleuther again this is
+
+00:59:56.920 --> 01:00:03.640
+a fully open model all the data is open
+
+01:00:00.359 --> 01:00:05.960
+um everything is known about it um also
+
+01:00:03.640 --> 01:00:08.480
+uh our our very own Sean Welleck was one of
+
+01:00:05.960 --> 01:00:10.559
+the contributors to it uh so if you want
+
+01:00:08.480 --> 01:00:13.839
+to know more about Llemma you can go bother
+
+01:00:10.559 --> 01:00:17.440
+Sean so uh that's another thing that I
+
+01:00:13.839 --> 01:00:19.240
+should mention um another thing is DeepSeek
+
+01:00:17.440 --> 01:00:20.839
+who made the DeepSeek Coder model
+
+01:00:19.240 --> 01:00:23.480
+has also created a very strong math
+
+01:00:20.839 --> 01:00:26.200
+model uh that's competitive with GPT-4 on
+
+01:00:23.480 --> 01:00:28.160
+a lot of math things uh basically the
+
+01:00:26.200 --> 01:00:30.480
+way they did this was they did this by
+
+01:00:28.160 --> 01:00:32.559
+um training a classifier to try to
+
+01:00:30.480 --> 01:00:34.640
+identify data on the web that is related
+
+01:00:32.559 --> 01:00:37.599
+to math and scraping all of that data
+
+01:00:34.640 --> 01:00:39.960
+and fine-tuning on it so um you can get
+
+01:00:37.599 --> 01:00:42.280
+gold standard data from like Proof Pile
+
+01:00:39.960 --> 01:00:44.359
+and a whole bunch of other sources and
+
+01:00:42.280 --> 01:00:46.200
+so they trained a like math-or-not-math
+
+01:00:44.359 --> 01:00:48.400
+classifier and and harvested a lot of
+
+01:00:46.200 --> 01:00:52.400
+math related
+
+01:00:48.400 --> 01:00:52.400
+data yeah
+
+01:00:59.880 --> 01:01:04.920
+it's mostly mostly data sets um I
+
+01:01:03.599 --> 01:01:07.119
+actually might be talking a little bit
+
+01:01:04.920 --> 01:01:10.039
+more about these in the reasoning class
+
+01:01:07.119 --> 01:01:11.799
+and I did a lot of uh I did a lot of
+
+01:01:10.039 --> 01:01:13.599
+prep to create these slides and actually
+
+01:01:11.799 --> 01:01:15.680
+ran out of time to do the math stuff so
+
+01:01:13.599 --> 01:01:17.200
+I might talk about it later um but I
+
+01:01:15.680 --> 01:01:18.480
+don't think they're really doing a lot
+
+01:01:17.200 --> 01:01:21.799
+of things like you could think of
+
+01:01:18.480 --> 01:01:23.440
+obvious things like doing RL or RLHF based
+
+01:01:21.799 --> 01:01:26.799
+on like whether it gets the answer right
+
+01:01:23.440 --> 01:01:28.559
+or not in the end um as far as I know
+
+01:01:26.799 --> 01:01:30.359
+that's not a big ingredient here but
+
+01:01:28.559 --> 01:01:31.920
+I'll be more sure of that when we talk
+
+01:01:30.359 --> 01:01:37.599
+about it
+
+01:01:31.920 --> 01:01:39.559
+later
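+A minimal sketch of the math-or-not-math harvesting idea just described,
+using scikit-learn on toy data; DeepSeek's actual pipeline is not public
+at this level of detail, and real systems use far larger seed sets and
+web-scale classifiers:
+
+    from sklearn.feature_extraction.text import TfidfVectorizer
+    from sklearn.linear_model import LogisticRegression
+    from sklearn.pipeline import make_pipeline
+
+    # positives from known-good math sources (e.g. Proof Pile),
+    # negatives from generic web text
+    texts = ["We prove the lemma by induction on n.",
+             "Let x be a prime number such that x > 2.",
+             "Top 10 pizza places you must try this summer.",
+             "Click here to subscribe to our newsletter."]
+    labels = [1, 1, 0, 0]
+
+    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
+                        LogisticRegression())
+    clf.fit(texts, labels)
+
+    # score new pages; keep high-probability pages as math training data
+    print(clf.predict_proba(["By the Cauchy-Schwarz inequality ..."])[:, 1])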
+um cool and a final one uh it's
+
+01:01:37.599 --> 01:01:43.200
+not a "sci" model it's a science model
+
+01:01:39.559 --> 01:01:45.920
+sorry for the typo um but uh this model
+
+01:01:43.200 --> 01:01:49.160
+Galactica um was a model for science
+
+01:01:45.920 --> 01:01:51.799
+that was trained by Meta
+
+01:01:49.160 --> 01:01:54.359
+um does anyone remember this model or
+
+01:01:51.799 --> 01:01:58.079
+was anybody around when this model came
+
+01:01:54.359 --> 01:01:59.640
+out no there was a big uh a big PR
+
+01:01:58.079 --> 01:02:01.160
+disaster for Meta when they released
+
+01:01:59.640 --> 01:02:03.480
+this model because they said this is a
+
+01:02:01.160 --> 01:02:05.520
+great model for math use it in your in
+
+01:02:03.480 --> 01:02:08.599
+writing your science paper sorry this is
+
+01:02:05.520 --> 01:02:10.480
+a great model for science try using
+
+01:02:08.599 --> 01:02:12.640
+it in your science papers and this came
+
+01:02:10.480 --> 01:02:14.839
+out about two years ago and two years
+
+01:02:12.640 --> 01:02:16.640
+ago language models hallucinated all the
+
+01:02:14.839 --> 01:02:19.279
+time and came up with false scientific
+
+01:02:16.640 --> 01:02:22.039
+facts and stuff and so basically um a
+
+01:02:19.279 --> 01:02:25.680
+lot of people kind of bashed this model
+
+01:02:22.039 --> 01:02:27.440
+uh in my mind kind of unfairly because
+
+01:02:25.680 --> 01:02:31.200
+they actually have a lot of really
+
+01:02:27.440 --> 01:02:32.960
+interesting things in this paper um one
+
+01:02:31.200 --> 01:02:34.720
+interesting thing in this paper is they
+
+01:02:32.960 --> 01:02:37.000
+tried to create a general purpose model
+
+01:02:34.720 --> 01:02:38.960
+for science that's able to understand
+
+01:02:37.000 --> 01:02:41.960
+not only text but also various
+
+01:02:38.960 --> 01:02:47.720
+modalities of scientific data and so
+
+01:02:41.960 --> 01:02:51.000
+that includes text it includes LaTeX um
+
+01:02:47.720 --> 01:02:53.799
+you know equations it includes code but
+
+01:02:51.000 --> 01:02:58.559
+it also included things like molecular
+
+01:02:53.799 --> 01:03:01.799
+structures and uh like proteins and DNA
+
+01:02:58.559 --> 01:03:04.160
+and stuff like this so they tried to
+
+01:03:01.799 --> 01:03:06.160
+like model biology and other things like
+
+01:03:04.160 --> 01:03:08.079
+this as well so I I think it's really
+
+01:03:06.160 --> 01:03:10.640
+kind of too bad that this model got a a
+
+01:03:08.079 --> 01:03:12.400
+bad rap because I I really
+like the you
+
+01:03:10.640 --> 01:03:14.839
+know the work that went into it and I
+
+01:03:12.400 --> 01:03:16.359
+hope we'll see more of this um because
+
+01:03:14.839 --> 01:03:17.640
+language models for science is a really
+
+01:03:16.359 --> 01:03:19.880
+big topic that a lot of people are
+
+01:03:17.640 --> 01:03:19.880
+thinking
+
+01:03:20.760 --> 01:03:24.240
+about
+
+01:03:22.400 --> 01:03:26.440
+cool
+
+01:03:24.240 --> 01:03:28.000
+um one thing I didn't talk about is
+
+01:03:26.440 --> 01:03:29.880
+multimodal models but I hope to talk
+
+01:03:28.000 --> 01:03:32.440
+about multimodal models in a a future
+
+01:03:29.880 --> 01:03:33.359
+class so um I'll I'll talk more about
+
+01:03:32.440 --> 01:03:38.680
+that
+
+01:03:33.359 --> 01:03:41.640
+soon um the next thing is closed models um
+
+01:03:38.680 --> 01:03:44.480
+so closed models we don't know a whole lot
+
+01:03:41.640 --> 01:03:46.880
+about them uh most of what we know about
+
+01:03:44.480 --> 01:03:49.480
+them their training data and other
+
+01:03:46.880 --> 01:03:52.359
+things like that is uh is
+
+01:03:49.480 --> 01:03:54.720
+conjecture so the
+
+01:03:52.359 --> 01:03:57.839
+standard the standard format for
+
+01:03:54.720 --> 01:03:59.599
+releasing a closed model or not
+
+01:03:57.839 --> 01:04:02.160
+releasing but you know publicizing a
+
+01:03:59.599 --> 01:04:04.279
+closed model is people will write a blog
+
+01:04:02.160 --> 01:04:05.960
+post and they'll write a paper and
+
+01:04:04.279 --> 01:04:07.720
+generally what the paper does is it only
+
+01:04:05.960 --> 01:04:09.559
+talks about evaluation it only talks
+
+01:04:07.720 --> 01:04:12.039
+about like how good the model is on
+
+01:04:09.559 --> 01:04:13.799
+various things how safe it is how they
+
+01:04:12.039 --> 01:04:16.279
+put a lot of effort into red teaming the
+
+01:04:13.799 --> 01:04:17.680
+model uh so that it doesn't do bad
+
+01:04:16.279 --> 01:04:18.839
+things and stuff like that and it tells
+
+01:04:17.680 --> 01:04:21.119
+you nothing about how they actually
+
+01:04:18.839 --> 01:04:23.279
+built the model so mostly like what I
+
+01:04:21.119 --> 01:04:26.279
+can talk about are capabilities as
+
+01:04:23.279 --> 01:04:28.520
+opposed to um
+
+01:04:26.279 --> 01:04:32.440
+I can talk about capabilities as opposed
+
+01:04:28.520 --> 01:04:35.319
+to like what actually went into the
+
+01:04:32.440 --> 01:04:38.920
+model so um there's
+
+01:04:35.319 --> 01:04:40.880
+GPT-4 um GPT-4 I think everybody knows it's
+
+01:04:38.920 --> 01:04:43.640
+kind of the de facto standard strong
+
+01:04:40.880 --> 01:04:45.680
+language model it used to be the only
+
+01:04:43.640 --> 01:04:47.680
+strong language model like it used to be
+
+01:04:45.680 --> 01:04:50.079
+on its own the strongest language model
+
+01:04:47.680 --> 01:04:53.160
+and there were no real competitors to
+
+01:04:50.079 --> 01:04:55.000
+GPT-4 from that point of view I think
+
+01:04:53.160 --> 01:04:56.680
+still if I wanted a strong language
+
+01:04:55.000 --> 01:04:58.960
+model for just something that I'm I'm
+
+01:04:56.680 --> 01:05:00.880
+going to do randomly I still rely on I
+
+01:04:58.960 --> 01:05:03.680
+still trust GPT-4 more than anything else
+
+01:05:00.880 --> 01:05:05.240
+to give me a really good answer um but
+
+01:05:03.680 --> 01:05:08.480
+there are now other competitors I'd like
+
+01:05:05.240 --> 01:05:11.960
+to talk about so GPT-4 anyway um you know
+
+01:05:08.480 --> 01:05:14.240
+it powers the Pro
+version of ChatGPT it
+
+01:05:11.960 --> 01:05:18.039
+was tuned to be good as a chat-based
+
+01:05:14.240 --> 01:05:20.440
+assistant um it accepts image inputs uh
+
+01:05:18.039 --> 01:05:22.279
+and it supports calling external tools
+
+01:05:20.440 --> 01:05:23.599
+through function calling uh through a
+
+01:05:22.279 --> 01:05:27.119
+function calling
+
+01:05:23.599 --> 01:05:28.720
+interface um
+
+01:05:27.119 --> 01:05:30.599
+I I think people are are generally
+
+01:05:28.720 --> 01:05:34.000
+familiar with this but just in case
+
+01:05:30.599 --> 01:05:36.240
+you're not um I'd like to show a few
+
+01:05:34.000 --> 01:05:38.039
+things that I like to
+
+01:05:36.240 --> 01:05:39.640
+do
+
+01:05:38.039 --> 01:05:42.760
+so let
+
+01:05:39.640 --> 01:05:42.760
+[Music]
+
+01:05:46.920 --> 01:05:52.480
+me so I'll just randomly grab one of my
+
+01:05:50.440 --> 01:05:57.640
+papers from
+
+01:05:52.480 --> 01:05:57.640
+arXiv um my my most recent paper
+
+01:06:03.400 --> 01:06:07.559
+and I can copy paste
+
+01:06:13.200 --> 01:06:22.240
+this and write uh turn this into JSON
+
+01:06:19.240 --> 01:06:22.240
+format
+
+01:06:27.960 --> 01:06:31.640
+and I drop it in
+
+01:06:29.880 --> 01:06:35.480
+here
+
+01:06:31.640 --> 01:06:38.279
+and so this is an exhibit of its like
+
+01:06:35.480 --> 01:06:42.240
+multimodal abilities because I can throw
+
+01:06:38.279 --> 01:06:44.359
+in a uh in a
+
+01:06:42.240 --> 01:06:48.400
+table and it basically turns it into
+
+01:06:44.359 --> 01:06:50.599
+JSON format so um I I actually turned
+
+01:06:48.400 --> 01:06:52.119
+a fair amount of data that I
+
+01:06:50.599 --> 01:06:53.960
+created in creating these slides into
+
+01:06:52.119 --> 01:06:56.039
+JSON format so I can save it later for
+
+01:06:53.960 --> 01:06:59.079
+whatever I want it for and I did it
+
+01:06:56.039 --> 01:07:01.720
+through uh this so this is an example of
+
+01:06:59.079 --> 01:07:06.599
+the multimodal abilities it can also tell
+
+01:07:01.720 --> 01:07:06.599
+you about images and stuff like that
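+The paste-a-table-and-get-JSON workflow can also be scripted against the
+API. A hedged sketch using the OpenAI Python client; the model name is an
+assumption (use whichever vision-capable model you have access to) and the
+image URL is a placeholder:
+
+    from openai import OpenAI
+
+    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
+
+    resp = client.chat.completions.create(
+        model="gpt-4o",  # assumed model name
+        messages=[{
+            "role": "user",
+            "content": [
+                {"type": "text",
+                 "text": "Turn the table in this image into JSON."},
+                {"type": "image_url",
+                 "image_url": {"url": "https://example.com/table.png"}},
+            ],
+        }],
+    )
+    print(resp.choices[0].message.content)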
+01:07:07.000 --> 01:07:14.319
+um so also um there was a famous article
+
+01:07:11.760 --> 01:07:16.760
+written by Gary Marcus that said deep
+
+01:07:14.319 --> 01:07:19.760
+learning is hitting a wall um it
+
+01:07:16.760 --> 01:07:22.880
+basically was written two years ago and
+
+01:07:19.760 --> 01:07:25.160
+uh Gary Marcus was saying deep learning
+
+01:07:22.880 --> 01:07:26.200
+uh you know is not the way for
+
+01:07:25.160 --> 01:07:27.760
+the future that we're going to need
+
+01:07:26.200 --> 01:07:31.319
+things other than deep learning in order
+
+01:07:27.760 --> 01:07:34.559
+to uh you know be able to uh make
+
+01:07:31.319 --> 01:07:36.400
+progress and whether you believe
+
+01:07:34.559 --> 01:07:40.520
+that is true or not I I will leave you to
+
+01:07:36.400 --> 01:07:46.520
+your own opinion um but uh I could also
+
+01:07:40.520 --> 01:07:51.359
+say uh create a picture of deep learning
+
+01:07:46.520 --> 01:07:55.400
+breaking through a brick wall and it can
+
+01:07:51.359 --> 01:07:55.400
+generate images for you
+
+01:08:02.599 --> 01:08:07.440
+of course if you ever do a live demo even
+
+01:08:05.319 --> 01:08:10.319
+if it's a live demo of an OpenAI product
+
+01:08:07.440 --> 01:08:13.559
+that a million people use it will break
+
+01:08:10.319 --> 01:08:16.719
+when you try to do it so um so this is
+
+01:08:13.559 --> 01:08:17.799
+another uh thing that it can do so there
+
+01:08:16.719 --> 01:08:19.560
+we have a picture of deep learning
+
+01:08:17.799 --> 01:08:22.640
+breaking through a brick wall and it can
+
+01:08:19.560 --> 01:08:26.159
+you know generate images and stuff so
+
+01:08:22.640 --> 01:08:28.560
+these are like the kinds of things that
+
+01:08:26.159 --> 01:08:30.960
+I now
+
+01:08:28.560 --> 01:08:32.880
+expect so it's not just like reasoning
+
+01:08:30.960 --> 01:08:35.839
+ability and other stuff like that it's
+
+01:08:32.880 --> 01:08:39.199
+also multi multimodality being able to
+
+01:08:35.839 --> 01:08:43.679
+generate code um another thing that's
+
+01:08:39.199 --> 01:08:46.719
+kind of nice um is make a
+
+01:08:43.679 --> 01:08:49.440
+histogram of these
+
+01:08:46.719 --> 01:08:54.640
+numbers one
+
+01:08:49.440 --> 01:08:54.640
+two one two four
+
+01:08:57.600 --> 01:09:04.040
+so it can do code generation and and
+
+01:08:59.719 --> 01:09:05.560
+display the results for you um there are
+
+01:09:04.040 --> 01:09:08.319
+efforts to
+
+01:09:05.560 --> 01:09:12.239
+make open source language models be able
+
+01:09:08.319 --> 01:09:14.000
+to do these things and um in order to do
+
+01:09:12.239 --> 01:09:16.759
+this you need multimodality you need
+
+01:09:14.000 --> 01:09:19.359
+also the ability to use tools so
+
+01:09:16.759 --> 01:09:21.400
+actually the way that this um worked
+
+01:09:19.359 --> 01:09:24.520
+here is very different than the way that
+
+01:09:21.400 --> 01:09:27.920
+this worked so this is actually using an
+
+01:09:24.520 --> 01:09:29.759
+image input into GPT-4 so what it's doing
+
+01:09:27.920 --> 01:09:33.040
+is it's encoding the image and then
+
+01:09:29.759 --> 01:09:34.719
+feeding it in as tokens into GPT-4 what
+
+01:09:33.040 --> 01:09:37.920
+this is doing here is this is rather
+
+01:09:34.719 --> 01:09:40.120
+calling a tool this is calling uh DALL-E 3
+
+01:09:37.920 --> 01:09:42.120
+as a tool and it's providing the caption
+
+01:09:40.120 --> 01:09:46.880
+to DALL-E 3 you can even see maybe the
+
+01:09:42.120 --> 01:09:46.880
+caption that was provided to
+
+01:09:48.640 --> 01:09:55.560
+DALL-E 3 you you previously were able to
+
+01:09:51.239 --> 01:09:57.960
+do that um by maybe downloading yeah so
+
+01:09:55.560 --> 01:10:01.600
+you can see the the
+
+01:09:57.960 --> 01:10:01.600
+caption uh which
+
+01:10:03.560 --> 01:10:08.120
+was a visual metaphor of deep learning
+
+01:10:06.320 --> 01:10:10.679
+as a powerful force breaking through a
+
+01:10:08.120 --> 01:10:13.400
+brick wall um or something like that and
+
+01:10:10.679 --> 01:10:15.480
+so GPT-4 basically what it did is it it
+
+01:10:13.400 --> 01:10:18.000
+said it wanted to call a tool and then
+
+01:10:15.480 --> 01:10:19.360
+it provided the caption uh the caption
+
+01:10:18.000 --> 01:10:21.280
+and then it called a completely
+
+01:10:19.360 --> 01:10:22.320
+separate tool as an API in order to
+
+01:10:21.280 --> 01:10:27.320
+generate the
+
+01:10:22.320 --> 01:10:27.320
+image
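+A hedged sketch of that tool-calling flow from the API side; the
+generate_image function here is hypothetical (OpenAI's internal DALL-E
+tool is not exposed like this), but the schema shows the mechanism:
+
+    from openai import OpenAI
+
+    client = OpenAI()
+
+    tools = [{
+        "type": "function",
+        "function": {
+            "name": "generate_image",  # hypothetical stand-in for DALL-E 3
+            "description": "Generate an image from a text caption.",
+            "parameters": {
+                "type": "object",
+                "properties": {"caption": {"type": "string"}},
+                "required": ["caption"],
+            },
+        },
+    }]
+
+    resp = client.chat.completions.create(
+        model="gpt-4-turbo",  # assumed model name
+        messages=[{"role": "user", "content":
+                   "Create a picture of deep learning breaking through "
+                   "a brick wall."}],
+        tools=tools,
+    )
+    # if the model chose to call the tool, its caption is in the arguments
+    print(resp.choices[0].message.tool_calls)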
+so um yeah the final
+
+01:10:28.199 --> 01:10:34.080
+well I managed to break ChatGPT that's
+
+01:10:30.120 --> 01:10:36.520
+no small accomplishment um so but anyway
+
+01:10:34.080 --> 01:10:40.199
+these are some of the things that uh
+
+01:10:36.520 --> 01:10:42.360
+that the systems can do and because
+
+01:10:40.199 --> 01:10:47.000
+OpenAI has kind of become a standard that a
+
+01:10:42.360 --> 01:10:50.040
+lot of people want to uh compete with um
+
+01:10:47.000 --> 01:10:53.480
+also I would say Gemini and Claude
+
+01:10:50.040 --> 01:10:56.400
+are maybe the two um the two models that
+
+01:10:53.480 --> 01:10:59.440
+can compete with GPT-4 in terms of uh you
+
+01:10:56.400 --> 01:11:02.600
+know accuracy Gemini is a much newer
+
+01:10:59.440 --> 01:11:06.159
+model by Google that uh comes in two
+
+01:11:02.600 --> 01:11:08.280
+varieties Gemini Pro and Gemini Ultra uh
+
+01:11:06.159 --> 01:11:11.040
+one interesting thing about Gemini Pro
+
+01:11:08.280 --> 01:11:13.560
+is that it supports um very long inputs
+
+01:11:11.040 --> 01:11:15.679
+one to ten million tokens it also
+
+01:11:13.560 --> 01:11:16.600
+supports image and video inputs and
+
+01:11:15.679 --> 01:11:20.239
+image
+
+01:11:16.600 --> 01:11:22.320
+outputs um I actually put a a video into
+
+01:11:20.239 --> 01:11:24.600
+it recently and the video recognition
+
+01:11:22.320 --> 01:11:27.159
+capabilities are pretty pretty nice so
+
+01:11:24.600 --> 01:11:29.280
+you can uh you can try that out if you
+
+01:11:27.159 --> 01:11:34.320
+want
+
+01:11:29.280 --> 01:11:36.640
+um and finally there's Claude Claude 3 it
+
+01:11:34.320 --> 01:11:39.280
+supports a context window of up to 200k
+
+01:11:36.640 --> 01:11:41.040
+also allows for processing images and
+
+01:11:39.280 --> 01:11:46.480
+overall has strong results competitive
+
+01:11:41.040 --> 01:11:49.880
+with GPT-4 so if you're looking for um if
+
+01:11:46.480 --> 01:11:51.480
+you're looking for models to use uh to
+
+01:11:49.880 --> 01:11:53.600
+try out better closed models you can
+
+01:11:51.480 --> 01:11:55.719
+definitely use these another thing I'm
+
+01:11:53.600 --> 01:11:58.239
+really excited about is how can we get
+
+01:11:55.719 --> 01:11:59.560
+like open models to you know demonstrate
+
+01:11:58.239 --> 01:12:01.320
+some of the interesting capabilities
+
+01:11:59.560 --> 01:12:02.840
+that we see in closed models so you know
+
+01:12:01.320 --> 01:12:07.120
+everybody can benefit and everybody
+
+01:12:02.840 --> 01:12:10.040
+knows uh you know uh the recipes to make
+
+01:12:07.120 --> 01:12:12.560
+models like this so I think that's
+
+01:12:10.040 --> 01:12:16.639
+mostly all I have for today another um
+
+01:12:12.560 --> 01:12:23.440
+another thing that is kind of neat
+
+01:12:16.639 --> 01:12:23.440
+is I just found this a little while ago
+
+01:12:28.800 --> 01:12:32.239
+but there is this uh
+
+01:12:33.320 --> 01:12:39.239
+interface uh called God Mode that
+
+01:12:36.880 --> 01:12:41.960
+allows you to put all of the chat apps
+
+01:12:39.239 --> 01:12:45.840
+next to each other and write the same
+
+01:12:41.960 --> 01:12:47.080
+chat query into them and uh and get the
+
+01:12:45.840 --> 01:12:48.719
+result from all of them so you can
+
+01:12:47.080 --> 01:12:51.080
+actually compare all of them in kind of
+
+01:12:48.719 --> 01:12:52.840
+an interactive setting so if you want
+
+01:12:51.080 --> 01:12:54.800
+to look at all especially all of the
+
+01:12:52.840 --> 01:12:56.679
+closed models open models it's you know
+
+01:12:54.800 --> 01:12:58.239
+not too hard to do it yourself but if you
+
+01:12:56.679 --> 01:12:59.840
+want to try all of the closed models
+
+01:12:58.239 --> 01:13:01.800
+together you can do that and like log
+
+01:12:59.840 --> 01:13:03.960
+into all of your accounts and then press
+
+01:13:01.800 --> 01:13:05.320
+go on a query and see how they all respond
+
+01:13:03.960 --> 01:13:07.960
+so
+
+01:13:05.320 --> 01:13:09.800
+um that might be a good way to compare
+
+01:13:07.960 --> 01:13:12.000
+all of the models
+kind of qualitatively
+
+01:13:09.800 --> 01:13:14.679
+as opposed to
+
+01:13:12.000 --> 01:13:17.280
+quantitatively cool um that's all I have
+
+01:13:14.679 --> 01:13:19.440
+for today uh I don't know are there any
+
+01:13:17.280 --> 01:13:23.440
+questions or discussion or things like
+
+01:13:19.440 --> 01:13:23.440
+this yeah
+
+01:13:28.840 --> 01:13:35.679
+so a systematic way um the first thing
+
+01:13:32.760 --> 01:13:37.960
+you can do is look at the benchmark
+
+01:13:35.679 --> 01:13:40.800
+results that have been published but
+
+01:13:37.960 --> 01:13:43.320
+actually I would like to give a caveat
+
+01:13:40.800 --> 01:13:43.320
+about
+
+01:13:45.199 --> 01:13:48.440
+this which
+
+01:13:50.000 --> 01:13:54.000
+is um
+
+01:14:22.960 --> 01:14:28.239
+so these are the best benchmarking
+
+01:14:25.600 --> 01:14:30.840
+results from the Gemini
+
+01:14:28.239 --> 01:14:33.440
+paper um
+
+01:14:30.840 --> 01:14:36.719
+and they have a table here um and
+
+01:14:33.440 --> 01:14:38.679
+basically what they kind of obviously to
+
+01:14:36.719 --> 01:14:41.679
+me wanted to demonstrate is that Gemini
+
+01:14:38.679 --> 01:14:44.760
+was the best model out of all the models
+
+01:14:41.679 --> 01:14:47.800
+um and so they have Gemini Pro and
+
+01:14:44.760 --> 01:14:50.040
+Gemini Ultra and they put Gemini
+
+01:14:47.800 --> 01:14:52.639
+Ultra against GPT-4 and Gemini Pro against
+
+01:14:50.040 --> 01:14:56.360
+GPT-3.5 because they're you know
+
+01:14:52.639 --> 01:14:58.440
+comparable models um
+
+01:14:56.360 --> 01:15:01.880
+and they're yeah because they're
+
+01:14:58.440 --> 01:15:03.040
+comparable models basically and on
+
+01:15:01.880 --> 01:15:05.880
+things
+
+01:15:03.040 --> 01:15:07.400
+like um and they demonstrate that
+
+01:15:05.880 --> 01:15:08.199
+basically they're better in all all of
+
+01:15:07.400 --> 01:15:10.520
+these
+
+01:15:08.199 --> 01:15:14.760
+situations however there's a few details
+
+01:15:10.520 --> 01:15:17.120
+the first detail is um that the method
+
+01:15:14.760 --> 01:15:20.199
+that they're using to prompt the model
+
+01:15:17.120 --> 01:15:22.120
+is different here so we have like 94.4
+
+01:15:20.199 --> 01:15:23.560
+versus 92 but the method they're using
+
+01:15:22.120 --> 01:15:25.520
+to prompt the model is different they're
+
+01:15:23.560 --> 01:15:29.159
+using like best of
+
+01:15:25.520 --> 01:15:33.320
+32 and then basically uh getting the
+
+01:15:29.159 --> 01:15:36.320
+best from 32 and then another thing
+
+01:15:33.320 --> 01:15:41.360
+is if we look at this HumanEval
+
+01:15:36.320 --> 01:15:44.120
+performance here um they reported their
+
+01:15:41.360 --> 01:15:47.000
+HumanEval performance then they pulled
+
+01:15:44.120 --> 01:15:49.400
+the number from the original GPT-4 paper
+
+01:15:47.000 --> 01:15:53.159
+and compared to the number from the GPT-4
+
+01:15:49.400 --> 01:15:54.639
+paper but all of these um you know APIs
+
+01:15:53.159 --> 01:15:57.719
+are constantly changing they're getting
+
+01:15:54.639 --> 01:15:59.480
+better and better so um I I was
+
+01:15:57.719 --> 01:16:01.400
+very excited when Gemini first came out
+
+01:15:59.480 --> 01:16:03.120
+and we actually wrote a paper where we
+
+01:16:01.400 --> 01:16:05.320
+tried to look deeper into the
+
+01:16:03.120 --> 01:16:08.000
+performance and what we actually found
+
+01:16:05.320 --> 01:16:10.199
+is comparing Gemini Pro and GPT-3.5
+
+01:16:08.000 --> 01:16:12.719
+Turbo which should be comparable we
+01:16:10.199 --> 01:16:16.120
+found that actually GPT-3.5 Turbo did a
+
+01:16:12.719 --> 01:16:19.280
+little bit better um in in most cases
+
+01:16:16.120 --> 01:16:20.920
+although not all cases and one of the
+
+01:16:19.280 --> 01:16:24.000
+things we noticed in particular is like
+
+01:16:20.920 --> 01:16:27.960
+on HumanEval GPT-3.5 had gotten like much
+
+01:16:24.000 --> 01:16:29.760
+much better over the course of uh like
+
+01:16:27.960 --> 01:16:31.639
+the time since the original number was
+
+01:16:29.760 --> 01:16:34.120
+reported it had gone up by almost 30
+
+01:16:31.639 --> 01:16:35.760
+points and also in a few cases we had
+
+01:16:34.120 --> 01:16:37.480
+like a little bit of trouble reproducing
+
+01:16:35.760 --> 01:16:39.280
+the Gemini Pro results just because they
+
+01:16:37.480 --> 01:16:40.360
+had like safety filters and other stuff
+
+01:16:39.280 --> 01:16:42.520
+like that that we had to get around
+
+01:16:40.360 --> 01:16:45.280
+before we got the results so it's not
+
+01:16:42.520 --> 01:16:49.560
+necessarily the case that you can
+
+01:16:45.280 --> 01:16:52.639
+completely take the um that you can
+
+01:16:49.560 --> 01:16:55.560
+completely take the results at face
+
+01:16:52.639 --> 01:16:57.040
+value actually as a first step I would
+
+01:16:55.560 --> 01:17:00.080
+suggest just trying to chat with the
+
+01:16:57.040 --> 01:17:03.719
+model um which is also why I introduced
+
+01:17:00.080 --> 01:17:06.679
+the like quote unquote God Mode uh like
+
+01:17:03.719 --> 01:17:09.159
+browser because like you can kind of
+
+01:17:06.679 --> 01:17:10.639
+tell when it like when something's way
+
+01:17:09.159 --> 01:17:14.320
+better than another one just by the
+
+01:17:10.639 --> 01:17:17.159
+responses it gives um separately if you want
+
+01:17:14.320 --> 01:17:17.159
+to do it much more
+
+01:17:20.199 --> 01:17:23.840
+systematically there are really nice
+
+01:17:22.360 --> 01:17:25.400
+tools for evaluation I think I might
+
+01:17:23.840 --> 01:17:26.960
+have talked about this before but if I
+
+01:17:25.400 --> 01:17:29.280
+haven't then you should definitely take
+
+01:17:26.960 --> 01:17:31.880
+a look at this there's the Eleuther
+
+01:17:29.280 --> 01:17:34.040
+evaluation harness and the Eleuther
+
+01:17:31.880 --> 01:17:35.679
+evaluation harness makes it really easy
+
+01:17:34.040 --> 01:17:37.600
+to evaluate for example Hugging Face
+
+01:17:35.679 --> 01:17:39.040
+models against many many different tasks
+
+01:17:37.600 --> 01:17:41.360
+so you can just pick which task you want
+
+01:17:39.040 --> 01:17:43.719
+to evaluate against pick the model name
+
+01:17:41.360 --> 01:17:47.400
+and and go and you can get evaluation
+
+01:17:43.719 --> 01:17:51.960
+results um that won't necessarily work
+
+01:17:47.400 --> 01:17:53.960
+for closed models um but if you look for
+
+01:17:51.960 --> 01:17:55.480
+Eleuther language model evaluation harness
+
+01:17:53.960 --> 01:17:58.800
+that's maybe the easiest way to run
+
+01:17:55.480 --> 01:17:58.800
+evaluations for LLMs
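+For reference, a typical invocation looks roughly like the following; the
+flags follow the lm-evaluation-harness README at the time of writing, so
+check your installed version, since the interface has changed over time:
+
+    lm_eval --model hf \
+        --model_args pretrained=EleutherAI/pythia-1.4b \
+        --tasks hellaswag,arc_easy \
+        --device cuda:0 --batch_size 8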
+01:17:59.239 --> 01:18:05.239
+cool okay um so we're we're at time
+
+01:18:02.960 --> 01:18:07.480
+now uh but I'd be happy to answer a few
+
+01:18:05.239 --> 01:18:10.639
+questions if anybody else has any so
+
+01:18:07.480 --> 01:18:10.639
+thank you
diff --git a/CMU Advanced NLP 2024 (17) Code Generation/CMU Advanced NLP 2024 (17) Code Generation.mp4 b/CMU Advanced NLP 2024 (17) Code Generation/CMU Advanced NLP 2024 (17) Code Generation.mp4
new file mode 100644
index 0000000000000000000000000000000000000000..d1c0e96a049488af6cc80b5d1713f4db35d17ac7
--- /dev/null
+++ b/CMU Advanced NLP 2024 (17) Code Generation/CMU Advanced NLP 2024 (17) Code Generation.mp4
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7fcb735ceea4c24db426084df97f450a16142c64c4736ab6e403bb13741c8350
+size 63648833
diff --git a/CMU Advanced NLP 2024 (17) Code Generation/metadata.json b/CMU Advanced NLP 2024 (17) Code Generation/metadata.json
new file mode 100644
index 0000000000000000000000000000000000000000..c3dcedf5452ae7ec9629fece5f5a040b6369a2a3
--- /dev/null
+++ b/CMU Advanced NLP 2024 (17) Code Generation/metadata.json
@@ -0,0 +1,4 @@
+{
+ "url": "https://www.youtube.com/watch?v=bN2ZZieBXsE",
+ "title": "CMU Advanced NLP 2024 (17) Code Generation"
+}
\ No newline at end of file
diff --git a/CMU Advanced NLP 2024 (17) Code Generation/transcript.srt b/CMU Advanced NLP 2024 (17) Code Generation/transcript.srt
new file mode 100644
index 0000000000000000000000000000000000000000..5f7cdc0d2b537a8c2ad868f8564856f822e0b12c
--- /dev/null
+++ b/CMU Advanced NLP 2024 (17) Code Generation/transcript.srt
@@ -0,0 +1,6263 @@
+1
+00:00:00,480 --> 00:00:06,279
+so uh I guess we can get started uh
+
+2
+00:00:04,080 --> 00:00:09,880
+today I'm going to be talking about code
+
+3
+00:00:06,279 --> 00:00:11,719
+generation and uh so this is a a
+
+4
+00:00:09,880 --> 00:00:13,599
+research topic that I've uh worked on
+
+5
+00:00:11,719 --> 00:00:15,280
+for a long time now I I like it a lot it's
+
+6
+00:00:13,599 --> 00:00:17,520
+become very useful nowadays which is
+
+7
+00:00:15,280 --> 00:00:20,960
+very exciting um so I'd like to talk
+
+8
+00:00:17,520 --> 00:00:23,119
+about kind of some of the basics and
+
+9
+00:00:20,960 --> 00:00:28,000
+frontiers uh that we're working on right
+
+10
+00:00:23,119 --> 00:00:28,000
+now in this general uh area
+
+11
+00:00:31,719 --> 00:00:36,760
+um
+
+12
+00:00:33,360 --> 00:00:38,160
+so before I get into code generation
+
+13
+00:00:36,760 --> 00:00:40,719
+specifically one thing I'd like to point
+
+14
+00:00:38,160 --> 00:00:43,399
+out is for the next four or so classes
+
+15
+00:00:40,719 --> 00:00:45,680
+I'm going to be talking about tasks and
+
+16
+00:00:43,399 --> 00:00:48,680
+up until now I've been focusing on a lot
+
+17
+00:00:45,680 --> 00:00:52,840
+of like general things that weren't as
+
+18
+00:00:48,680 --> 00:00:55,199
+much about any specific tasks um
+
+19
+00:00:52,840 --> 00:00:57,000
+and I know that not everybody's going to
+
+20
+00:00:55,199 --> 00:00:59,399
+be interested in the four tasks that I'm
+
+21
+00:00:57,000 --> 00:01:00,960
+talking about in the next you know four
+
+22
+00:00:59,399 --> 00:01:02,480
+lectures
+
+23
+00:01:00,960 --> 00:01:04,920
+um
+
+24
+00:01:02,480 --> 00:01:06,640
+but I'm going to be covering various
+
+25
+00:01:04,920 --> 00:01:08,680
+things about different tasks and
+
+26
+00:01:06,640 --> 00:01:10,640
+hopefully you can map the same questions
+
+27
+00:01:08,680 --> 00:01:12,040
+onto whatever task you are interested in
+
+28
+00:01:10,640 --> 00:01:14,360
+if you're not interested in any of the
+
+29
+00:01:12,040 --> 00:01:15,880
+ones I talk about here so basically what
+
+30
+00:01:14,360 --> 00:01:18,119
+I want to talk about is the task
+
+31
+00:01:15,880 --> 00:01:21,040
+objective like why do we do that task
+
+32
+00:01:18,119 --> 00:01:23,479
+why is it important um what data sets
+
+33
+00:01:21,040 --> 00:01:26,560
+can we use to train or test our
+models
+
+34
+00:01:23,479 --> 00:01:28,799
+on these tasks evaluation metrics and
+
+35
+00:01:26,560 --> 00:01:31,200
+how do we evaluate uh both manually and
+
+36
+00:01:28,799 --> 00:01:32,079
+automatically with respect to how good
+
+37
+00:01:31,200 --> 00:01:34,960
+we're
+
+38
+00:01:32,079 --> 00:01:37,880
+doing and finally models and methods so
+
+39
+00:01:34,960 --> 00:01:40,720
+you know how do we solve the
+
+40
+00:01:37,880 --> 00:01:42,479
+problem and so for code generation first
+
+41
+00:01:40,720 --> 00:01:44,439
+I'd like to talk about the overview and
+
+42
+00:01:42,479 --> 00:01:47,040
+objectives of code generation so
+
+43
+00:01:44,439 --> 00:01:48,840
+basically code generation is the task of
+
+44
+00:01:47,040 --> 00:01:52,439
+generating executable code as an
+
+45
+00:01:48,840 --> 00:01:54,479
+interface to uh a program or to
+
+46
+00:01:52,439 --> 00:01:58,320
+computers and there's a lot of different
+
+47
+00:01:54,479 --> 00:02:01,000
+ways we can do this um why do we want to
+
+48
+00:01:58,320 --> 00:02:03,159
+do this so
+
+49
+00:02:01,000 --> 00:02:05,000
+the first thing is that software
+
+50
+00:02:03,159 --> 00:02:06,759
+engineering is really important and
+
+51
+00:02:05,000 --> 00:02:09,640
+being able to generate code accelerates
+
+52
+00:02:06,759 --> 00:02:11,560
+software engineering uh now code
+
+53
+00:02:09,640 --> 00:02:13,640
+generation is practical and I hope that
+
+54
+00:02:11,560 --> 00:02:15,599
+everybody in the class is using some
+
+55
+00:02:13,640 --> 00:02:17,840
+sort of you know code generation to
+
+56
+00:02:15,599 --> 00:02:20,200
+accelerate your own workflow if you're
+
+57
+00:02:17,840 --> 00:02:22,599
+not I highly encourage you to to try it
+
+58
+00:02:20,200 --> 00:02:26,200
+because it's very
+
+59
+00:02:22,599 --> 00:02:31,040
+useful second it also does things like
+
+60
+00:02:26,200 --> 00:02:34,239
+enabling models to access tools um
+
+61
+00:02:31,040 --> 00:02:37,440
+and even if you're not specifically
+
+62
+00:02:34,239 --> 00:02:39,440
+working on a software related task this
+
+63
+00:02:37,440 --> 00:02:41,000
+can be helpful but I want to talk about
+
+64
+00:02:39,440 --> 00:02:42,480
+this in a later class when we talk about
+
+65
+00:02:41,000 --> 00:02:46,640
+LLM agents so I'm not going to be
+
+66
+00:02:42,480 --> 00:02:48,319
+talking about um that as much this time
+
+67
+00:02:46,640 --> 00:02:50,159
+uh one other thing that I I forgot to
+
+68
+00:02:48,319 --> 00:02:52,920
+mention here which I'm also going to
+
+69
+00:02:50,159 --> 00:02:55,000
+talk about in the later class is even if
+
+70
+00:02:52,920 --> 00:02:58,120
+you're not using code at all training on
+
+71
+00:02:55,000 --> 00:03:00,319
+code has been shown to cause some
+
+72
+00:02:58,120 --> 00:03:01,920
+benefits to language models uh
+
+73
+00:03:00,319 --> 00:03:03,799
+specifically with respect to learning
+
+74
+00:03:01,920 --> 00:03:06,480
+like difficult multitask reasoning uh
+
+75
+00:03:03,799 --> 00:03:07,599
+sorry multi-step reasoning tasks and so
+
+76
+00:03:06,480 --> 00:03:09,480
+that's another reason why you might want
+
+77
+00:03:07,599 --> 00:03:10,840
+to worry about code so I'm going to
+
+78
+00:03:09,480 --> 00:03:12,840
+mainly talk about the first one this
+
+79
+00:03:10,840 --> 00:03:14,560
+time and leave the other two uh for
+
+80
+00:03:12,840 --> 00:03:17,720
+future
+
+81
+00:03:14,560 --> 00:03:21,760
+lectures so specifically for this task
+82
+00:03:17,720 --> 00:03:25,200
+our input um is some sort of
+
+83
+00:03:21,760 --> 00:03:27,360
+specification of what we want to do um
+
+84
+00:03:25,200 --> 00:03:30,319
+and our output is going to be
+
+85
+00:03:27,360 --> 00:03:33,000
+code so
+
+86
+00:03:30,319 --> 00:03:35,920
+when you write a
+
+87
+00:03:33,000 --> 00:03:37,239
+program how do you describe the thing
+
+88
+00:03:35,920 --> 00:03:40,239
+that you want to implement in the
+
+89
+00:03:37,239 --> 00:03:42,000
+program before you implement it like uh
+
+90
+00:03:40,239 --> 00:03:44,720
+yeah what are some of the specifications
+
+91
+00:03:42,000 --> 00:03:44,720
+that people can give
+
+92
+00:03:45,280 --> 00:03:50,720
+you what the input and output of the
+
+93
+00:03:47,680 --> 00:03:52,360
+functions are uh yes uh sorry what what
+
+94
+00:03:50,720 --> 00:03:54,400
+types the inputs and outputs of the
+
+95
+00:03:52,360 --> 00:03:56,239
+function are so those would be like type
+
+96
+00:03:54,400 --> 00:03:57,760
+hints in Python for example yeah that
+
+97
+00:03:56,239 --> 00:03:59,439
+that's a good one it's actually not on
+
+98
+00:03:57,760 --> 00:04:02,079
+my list of things here but it's it's a
+
+99
+00:03:59,439 --> 00:04:06,040
+good point yeah any any other things
+
+100
+00:04:02,079 --> 00:04:08,680
+yeah complexity requirements complexity
+
+101
+00:04:06,040 --> 00:04:11,040
+requirements constraints that is also
+
+102
+00:04:08,680 --> 00:04:14,840
+not on my list of things here uh that's
+
+103
+00:04:11,040 --> 00:04:17,040
+uh that's a good one too um and any uh
+
+104
+00:04:14,840 --> 00:04:20,280
+slightly more straightforward
+
+105
+00:04:17,040 --> 00:04:24,040
+things pseudo code yeah um in pseudo
+
+106
+00:04:20,280 --> 00:04:26,720
+code uh what what is pseudo code written
+
+107
+00:04:24,040 --> 00:04:28,440
+in natural natural language yeah so
+
+108
+00:04:26,720 --> 00:04:31,199
+natural language inputs are are one
+
+109
+00:04:28,440 --> 00:04:34,520
+thing so I will tell you I want I want a
+
+110
+00:04:31,199 --> 00:04:39,160
+program that uh I want you to write a
+
+111
+00:04:34,520 --> 00:04:41,479
+web interface that allows me to um order
+
+112
+00:04:39,160 --> 00:04:43,560
+pizza or something like that that that
+
+113
+00:04:41,479 --> 00:04:46,560
+would be one way to do it any other
+
+114
+00:04:43,560 --> 00:04:46,560
+ideas
+
+115
+00:04:51,199 --> 00:04:55,840
+yeah this is what I have and this is
+
+116
+00:04:53,360 --> 00:04:57,240
+what I want yeah so um that's especially
+
+117
+00:04:55,840 --> 00:04:59,880
+the case if you're like modifying a
+
+118
+00:04:57,240 --> 00:05:01,400
+program um or something like that so
+
+119
+00:04:59,880 --> 00:05:06,280
+actually that's the next one on my list there
+
+120
+00:05:01,400 --> 00:05:06,280
+so good good point um any other
+
+121
+00:05:09,759 --> 00:05:15,720
+ideas yeah or or a multimodal input you
+
+122
+00:05:12,880 --> 00:05:20,120
+know I might say I want a pizza ordering
+
+123
+00:05:15,720 --> 00:05:22,039
+I want a pizza ordering app and up here
+
+124
+00:05:20,120 --> 00:05:24,000
+it should have your like username so you
+
+125
+00:05:22,039 --> 00:05:25,840
+can click through the settings and like
+
+126
+00:05:24,000 --> 00:05:27,080
+over here you should have the menu and
+
+127
+00:05:25,840 --> 00:05:28,680
+over here you should have your check out
+
+128
+00:05:27,080 --> 00:05:30,400
+cart or something like that you know
+
+129
+00:05:28,680 --> 00:05:32,440
+it's something you do for a programmer
+130
+00:05:30,400 --> 00:05:34,680
+as well until recently we couldn't
+
+131
+00:05:32,440 --> 00:05:37,680
+really use that with like actual models
+
+132
+00:05:34,680 --> 00:05:40,560
+but um yeah yeah well that was my fourth
+
+133
+00:05:37,680 --> 00:05:42,639
+one but um and then the other one uh
+
+134
+00:05:40,560 --> 00:05:44,960
+inputs and outputs this could come in
+
+135
+00:05:42,639 --> 00:05:46,560
+the form of like unit tests or something
+
+136
+00:05:44,960 --> 00:05:49,199
+like that where it's like yeah this is
+
+137
+00:05:46,560 --> 00:05:51,160
+the input this is the expected output so
+
+138
+00:05:49,199 --> 00:05:53,240
+these are all things we use both as
+
+139
+00:05:51,160 --> 00:05:55,639
+human programmers and in code generation
+
+140
+00:05:53,240 --> 00:05:58,120
+models I really like the two other
+
+141
+00:05:55,639 --> 00:06:00,440
+points though um
+
+142
+00:05:58,120 --> 00:06:03,759
+because type hints
+
+143
+00:06:00,440 --> 00:06:05,479
+are actually something that you like
+
+144
+00:06:03,759 --> 00:06:06,599
+writing writing with type hints is
+
+145
+00:06:05,479 --> 00:06:09,240
+actually something that you can do with
+
+146
+00:06:06,599 --> 00:06:14,120
+code generation models and um
+
+147
+00:06:09,240 --> 00:06:16,680
+constraints such as like it should it
+
+148
+00:06:14,120 --> 00:06:20,199
+should meet certain speed requirements
+
+149
+00:06:16,680 --> 00:06:21,520
+or it should um you know use certain
+
+150
+00:06:20,199 --> 00:06:22,960
+libraries or something like that are
+
+151
+00:06:21,520 --> 00:06:24,840
+also constraints that you could add I
+
+152
+00:06:22,960 --> 00:06:26,120
+didn't put that on this slide here that
+
+153
+00:06:24,840 --> 00:06:28,319
+might come in the natural language
+
+154
+00:06:26,120 --> 00:06:30,639
+description but it could be something
+
+155
+00:06:28,319 --> 00:06:32,759
+separate and then you know the output is
+
+156
+00:06:30,639 --> 00:06:36,759
+whatever code you want
+
+157
+00:06:32,759 --> 00:06:38,240
+to
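+Pulling those specification types together, here is what one
+function-level spec might look like in Python, combining a natural
+language description, type hints, and input-output examples written as
+doctests; a code generation model could be asked to fill in the body from
+the signature and docstring alone:
+
+    def median(values: list[float]) -> float:
+        """Return the median of a non-empty list of numbers.
+
+        >>> median([1.0, 3.0, 2.0])
+        2.0
+        >>> median([1.0, 2.0, 3.0, 4.0])
+        2.5
+        """
+        s = sorted(values)
+        n = len(s)
+        mid = n // 2
+        return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2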
+so um how many people are using like
+
+158
+00:06:36,759 --> 00:06:41,000
+GitHub
+
+159
+00:06:38,240 --> 00:06:46,160
+Copilot like what
+
+160
+00:06:41,000 --> 00:06:47,759
+percentage maybe about half okay um how
+
+161
+00:06:46,160 --> 00:06:49,840
+many people are using another like
+
+162
+00:06:47,759 --> 00:06:56,080
+assisted coding tool other than GitHub
+
+163
+00:06:49,840 --> 00:06:57,400
+Copilot yeah GPT-4 GPT-4 could be an
+
+164
+00:06:56,080 --> 00:06:58,680
+assisted coding tool I'm talking more
+
+165
+00:06:57,400 --> 00:07:02,400
+like something that's actually in your
+
+166
+00:06:58,680 --> 00:07:04,759
+IDE something yeah anybody
+
+167
+00:07:02,400 --> 00:07:07,680
+else does anyone use
+
+168
+00:07:04,759 --> 00:07:13,639
+Cursor no
+
+169
+00:07:07,680 --> 00:07:18,039
+um yeah Cursor yeah okay so
+
+170
+00:07:13,639 --> 00:07:20,919
+yeah Colab uh AI in Colab yeah so
+
+171
+00:07:18,039 --> 00:07:24,080
+um so I think there are a lot of these
+
+172
+00:07:20,919 --> 00:07:26,879
+uh going around I I use Copilot myself
+
+173
+00:07:24,080 --> 00:07:28,639
+I have not used Cursor I do use GPT-4 um
+
+174
+00:07:26,879 --> 00:07:30,599
+and I'll I'll show you an example of how
+
+175
+00:07:28,639 --> 00:07:32,919
+I use them differently
+
+176
+00:07:30,599 --> 00:07:34,360
+um if you haven't used Copilot hopefully
+
+177
+00:07:32,919 --> 00:07:39,599
+this will
+
+178
+00:07:34,360 --> 00:07:42,599
+work um I just made a a
+simple
+
+179
+00:07:39,599 --> 00:07:42,599
+video
+
+180
+00:07:43,280 --> 00:07:49,520
+oops okay that's not working but anyway
+
+181
+00:07:46,159 --> 00:07:51,000
+you um you type your uh you know you
+
+182
+00:07:49,520 --> 00:07:54,319
+type and it basically completes your
+
+183
+00:07:51,000 --> 00:07:56,639
+code so this is this is an example here
+
+184
+00:07:54,319 --> 00:07:58,599
+and I didn't write any of this code
+
+185
+00:07:56,639 --> 00:08:02,360
+actually I just wrote the comments and
+
+186
+00:07:58,599 --> 00:08:04,000
+then it filled in the actual code and
+
+187
+00:08:02,360 --> 00:08:05,639
+also I didn't exactly check if it's
+
+188
+00:08:04,000 --> 00:08:08,080
+correct or not
+
+189
+00:08:05,639 --> 00:08:11,120
+so if there's any mistake it's
+
+190
+00:08:08,080 --> 00:08:15,159
+Copilot's fault not my fault but um it
+
+191
+00:08:11,120 --> 00:08:15,159
+looked correct to me so
+
+192
+00:08:15,759 --> 00:08:21,120
+um and oh by the way you get to use it
+
+193
+00:08:18,120 --> 00:08:22,800
+for free with your CMU account so if you
+
+194
+00:08:21,120 --> 00:08:24,120
+uh if you do want to use it but don't
+
+195
+00:08:22,800 --> 00:08:25,919
+want to pay for it you're in luck
+
+196
+00:08:24,120 --> 00:08:31,639
+because you can use
+
+197
+00:08:25,919 --> 00:08:36,320
+it
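+The comment-driven completion just described looks roughly like this: the
+comment is the prompt, and the body is the kind of thing a completion
+model might plausibly produce (illustrative, not Copilot's actual output):
+
+    # read a CSV file and return the average of the "score" column
+    import csv
+
+    def average_score(path: str) -> float:
+        with open(path, newline="") as f:
+            rows = list(csv.DictReader(f))
+        return sum(float(r["score"]) for r in rows) / len(rows)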
+um another example uh is GPT-4 or uh
+
+198
+00:08:31,639 --> 00:08:38,519
+more recently Claude 3 um and basically
+
+199
+00:08:36,320 --> 00:08:40,680
+this can do a different variety of
+
+200
+00:08:38,519 --> 00:08:43,719
+things so we talked about screenshots
+
+201
+00:08:40,680 --> 00:08:45,720
+and basically I asked Claude to create a
+
+202
+00:08:43,719 --> 00:08:48,399
+React app that replicates the Claude
+
+203
+00:08:45,720 --> 00:08:50,240
+interface by giving it a screenshot and
+
+204
+00:08:48,399 --> 00:08:52,560
+asking it create a React app that looks
+
+205
+00:08:50,240 --> 00:08:55,200
+like the screenshot and then it gave me
+
+206
+00:08:52,560 --> 00:09:00,800
+a whole bunch of text and in the end it
+
+207
+00:08:55,200 --> 00:09:03,320
+started um making this uh container here
+
+208
+00:09:00,800 --> 00:09:08,040
+um
+
+209
+00:09:03,320 --> 00:09:11,040
+and this uh it basically is skipping
+
+210
+00:09:08,040 --> 00:09:12,800
+some of the styling stuff uh because
+
+211
+00:09:11,040 --> 00:09:14,480
+large language models I I think they're
+
+212
+00:09:12,800 --> 00:09:16,560
+basically trained so that they don't
+
+213
+00:09:14,480 --> 00:09:19,959
+give really really long responses
+
+214
+00:09:16,560 --> 00:09:21,320
+because like if you uh asked for
+
+215
+00:09:19,959 --> 00:09:23,640
+something that would take a really
+
+216
+00:09:21,320 --> 00:09:25,519
+really long time and then the model just
+
+217
+00:09:23,640 --> 00:09:26,880
+complied and gave that to you for a
+
+218
+00:09:25,519 --> 00:09:29,000
+really really long time it would cost
+
+219
+00:09:26,880 --> 00:09:30,680
+them a lot of money so I feel like they
+
+220
+00:09:29,000 --> 00:09:32,440
+they basically try to train the models to only
+
+221
+00:09:30,680 --> 00:09:37,160
+output like a thousand tokens at a time
+
+222
+00:09:32,440 --> 00:09:38,959
+or something like that so um it it won't
+
+223
+00:09:37,160 --> 00:09:40,839
+actually go out and program the whole
+
+224
+00:09:38,959 --> 00:09:43,120
+project for you but with a little
+
+225
+00:09:40,839 --> 00:09:44,680
+cajoling if you say okay now implement
+
+226
+00:09:43,120 --> 00:09:48,519
+this part now implement this part now
+
+227
+00:09:44,680 --> 00:09:49,959
+implement this part um you uh you can
+
+228
+00:09:48,519 --> 00:09:53,040
+end up with some pretty interesting
+
+229
+00:09:49,959 --> 00:09:55,680
+stuff and let me
+
+230
+00:09:53,040 --> 00:09:57,120
+uh let me see if I can I can show you an
+
+231
+00:09:55,680 --> 00:10:01,320
+example
+
+232
+00:09:57,120 --> 00:10:01,320
+so I I know a little bit of
+
+233
+00:10:01,440 --> 00:10:07,040
+React um the front end framework but I
+
+234
+00:10:04,240 --> 00:10:09,839
+don't know a whole lot but recently
+
+235
+00:10:07,040 --> 00:10:14,279
+we've been um working on an open-source
+
+236
+00:10:09,839 --> 00:10:18,959
+assisted coding app and I most of this
+
+237
+00:10:14,279 --> 00:10:21,519
+was just written by Claude um it's uh I I
+
+238
+00:10:18,959 --> 00:10:23,079
+said I want an app that on the left side
+
+239
+00:10:21,519 --> 00:10:26,160
+it has a chat window and then on the
+
+240
+00:10:23,079 --> 00:10:28,240
+right side it has three uh three panes
+
+241
+00:10:26,160 --> 00:10:30,120
+one is a terminal one is a planner and
+
+242
+00:10:28,240 --> 00:10:32,200
+one is a code editor
+
+243
+00:10:30,120 --> 00:10:33,880
+and um so it gave me something it was
+
+244
+00:10:32,200 --> 00:10:37,399
+kind of ugly so I said okay make the
+
+245
+00:10:33,880 --> 00:10:40,639
+background black um change the CSS file
+
+246
+00:10:37,399 --> 00:10:43,639
+so that um you have like a user icon and
+
+247
+00:10:40,639 --> 00:10:46,040
+a robot icon and stuff like that and
+
+248
+00:10:43,639 --> 00:10:49,240
+after this I I wrote very little of this
+
+249
+00:10:46,040 --> 00:10:51,079
+code I wrote like 1% of this code or
+
+250
+00:10:49,240 --> 00:10:54,480
+something like that and it's able to to
+
+251
+00:10:51,079 --> 00:10:57,880
+do these sorts of things for you um so
+
+252
+00:10:54,480 --> 00:11:01,000
+if you don't like writing front ends
+
+253
+00:10:57,880 --> 00:11:03,880
+good luck uh or good good news is that you
+
+254
+00:11:01,000 --> 00:11:05,560
+uh can come up with a passable front end
+
+255
+00:11:03,880 --> 00:11:07,519
+without uh without actually having to
+
+256
+00:11:05,560 --> 00:11:08,720
+write it nonetheless you know good front
+
+257
+00:11:07,519 --> 00:11:10,200
+end engineers will come up with
+
+258
+00:11:08,720 --> 00:11:13,639
+something much more beautiful than that
+
+259
+00:11:10,200 --> 00:11:15,880
+so um so basically why do I why did I
+
+260
+00:11:13,639 --> 00:11:19,959
+want to say this I think um GitHub
+
+261
+00:11:15,880 --> 00:11:20,839
+Copilot and Claude or GPT-4 serve very
+
+262
+00:11:19,959 --> 00:11:25,200
+different
+
+263
+00:11:20,839 --> 00:11:27,360
+purposes um GitHub Copilot is code
+
+264
+00:11:25,200 --> 00:11:30,160
+completion and it mostly works for
+
+265
+00:11:27,360 --> 00:11:32,440
+shorter things so it's like your next
+
+266
+00:11:30,160 --> 00:11:34,760
+thought in your code in code that you
+
+267
+00:11:32,440 --> 00:11:37,560
+know pretty well something like Claude or
+
+268
+00:11:34,760 --> 00:11:40,639
+GPT-4 is much better for really long
+
+269
+00:11:37,560 --> 00:11:44,680
+things um where you want to build like a
+
+270
+00:11:40,639 --> 00:11:47,040
+full class or something like that and I
+
+271
+00:11:44,680 --> 00:11:48,480
+also have found that if you're coding in
+
+272
+00:11:47,040 --> 00:11:50,079
+a language that you're very familiar
+
+273
+00:11:48,480 --> 00:11:52,959
+with Copilot might be more
+274
+00:11:50,079 --> 00:11:52,959
+because you want fine-grained control and

+275
+00:11:51,560 --> 00:11:55,040
+you want it to fill out things to make

+276
+00:11:52,959 --> 00:11:56,519
+it faster whereas if you're coding in a

+277
+00:11:55,040 --> 00:11:58,040
+language that you're not very familiar

+278
+00:11:56,519 --> 00:11:59,680
+with something like Claude is good

+279
+00:11:58,040 --> 00:12:01,839
+because you can write a whole you know

+280
+00:11:59,680 --> 00:12:04,800
+program with it so these are the

+281
+00:12:01,839 --> 00:12:07,680
+differences another thing is GitHub

+282
+00:12:04,800 --> 00:12:09,240
+Copilot needs to be frighteningly fast

+283
+00:12:07,680 --> 00:12:10,839
+because it needs to move at the speed

+284
+00:12:09,240 --> 00:12:12,880
+that like programmers are thinking and

+285
+00:12:10,839 --> 00:12:14,920
+programming whereas something like

+286
+00:12:12,880 --> 00:12:16,800
+Claude it doesn't you know using it in

+287
+00:12:14,920 --> 00:12:18,880
+the way that I use Claude here doesn't

+288
+00:12:16,800 --> 00:12:22,600
+really matter because I can say uh

+289
+00:12:18,880 --> 00:12:24,079
+program me a you know a web app and

+290
+00:12:22,600 --> 00:12:25,360
+then I can go and have dinner and come

+291
+00:12:24,079 --> 00:12:28,199
+back and have a web app and I'd be

+292
+00:12:25,360 --> 00:12:31,720
+perfectly happy with that right so um

+293
+00:12:28,199 --> 00:12:37,199
+the latency requirements are also

+294
+00:12:31,720 --> 00:12:37,199
+different cool um any questions here

+295
+00:12:37,399 --> 00:12:42,600
+yeah how are they at debugging code

+296
+00:12:43,000 --> 00:12:47,959
+well so

+297
+00:12:45,839 --> 00:12:50,760
+Copilot I haven't actually tried it

+298
+00:12:47,959 --> 00:12:52,480
+that much um if I wanted to debug code

+299
+00:12:50,760 --> 00:12:54,880
+I'd probably use something like Claude or

+300
+00:12:52,480 --> 00:12:56,360
+GPT-4 just because actually I'll

+301
+00:12:54,880 --> 00:12:58,320
+mention this in a second but Copilot's

+302
+00:12:56,360 --> 00:13:00,360
+a much smaller model uh because it needs

+303
+00:12:58,320 --> 00:13:01,839
+to be very fast or what they're using in

+304
+00:13:00,360 --> 00:13:04,040
+Copilot is a smaller model because it

+305
+00:13:01,839 --> 00:13:05,519
+needs to be very fast so I would

+306
+00:13:04,040 --> 00:13:08,360
+probably use a bigger model for anything

+307
+00:13:05,519 --> 00:13:10,120
+that required like good understanding I

+308
+00:13:08,360 --> 00:13:11,480
+think it's passable at debugging code

+309
+00:13:10,120 --> 00:13:13,079
+but it won't find the really difficult

+310
+00:13:11,480 --> 00:13:15,639
+things and it probably won't find things

+311
+00:13:13,079 --> 00:13:18,279
+that require spanning across multiple

+312
+00:13:15,639 --> 00:13:21,240
+files but I'm not 100% sure about that

+313
+00:13:18,279 --> 00:13:25,519
+like I think it's worth

+314
+00:13:21,240 --> 00:13:25,519
+testing um any other

+315
+00:13:25,880 --> 00:13:30,120
+questions okay so if I haven't convinced

+316
+00:13:28,360 --> 00:13:32,360
+you that as software developers you

+317
+00:13:30,120 --> 00:13:34,880
+should be using this hopefully this next

+318
+00:13:32,360 --> 00:13:37,480
+uh this next slide will so this was a

+319
+00:13:34,880 --> 00:13:41,199
+study that was run by GitHub uh shortly

+320
+00:13:37,480 --> 00:13:43,160
+after um after Copilot came out and so

+321
+00:13:41,199 --> 00:13:45,440
+why do we do code generation why are

+322
+00:13:43,160 --> 00:13:47,240
+people very excited about it so the

+323
+00:13:45,440 --> 00:13:50,240
+first is making software is

+324
+00:13:47,240 --> 00:13:53,480
+important um and I recently calculated

+325
+00:13:50,240 --> 00:13:55,920
+from some labor statistics the

+326
+00:13:53,480 --> 00:13:59,440
+total amount that software developers

+327
+00:13:55,920 --> 00:14:01,880
+make um in a year is $175 billion so

+328
+00:13:59,440 --> 00:14:05,000
+that's providing at least that much you

+329
+00:14:01,880 --> 00:14:06,800
+know value so it's a very high-value uh

+330
+00:14:05,000 --> 00:14:09,079
+profession so if we could make it faster

+331
+00:14:06,800 --> 00:14:11,480
+you know it would have even more

+332
+00:14:09,079 --> 00:14:12,920
+value another thing is code generation

+333
+00:14:11,480 --> 00:14:15,680
+leads to large improvements in

+334
+00:14:12,920 --> 00:14:17,160
+productivity so uh GitHub ran this

+335
+00:14:15,680 --> 00:14:18,680
+study where they randomly assigned

+336
+00:14:17,160 --> 00:14:21,519
+developers to groups who would either

+337
+00:14:18,680 --> 00:14:24,440
+use Copilot or not use Copilot and

+338
+00:14:21,519 --> 00:14:26,480
+they assigned them the same task and

+339
+00:14:24,440 --> 00:14:30,759
+basically the people who used Copilot

+340
+00:14:26,480 --> 00:14:34,199
+their rate of um completion went up by

+341
+00:14:30,759 --> 00:14:36,320
+8% and they finished um in about 40% of

+342
+00:14:34,199 --> 00:14:39,279
+the time of the people who didn't use it

+343
+00:14:36,320 --> 00:14:43,639
+and so I think this

+344
+00:14:39,279 --> 00:14:45,920
+is or uh yeah they say 55% less time so

+345
+00:14:43,639 --> 00:14:47,759
+this is very impressive but it's also

+346
+00:14:45,920 --> 00:14:50,199
+not at all surprising if you're using an

+347
+00:14:47,759 --> 00:14:52,880
+assisted coding assistant like this it

+348
+00:14:50,199 --> 00:14:54,360
+just makes you code faster also if you

+349
+00:14:52,880 --> 00:14:56,040
+don't like writing docstrings it's

+350
+00:14:54,360 --> 00:14:57,519
+really good at writing docstrings so

+351
+00:14:56,040 --> 00:14:59,680
+you can write documentation for your

+352
+00:14:57,519 --> 00:15:00,759
+code without worrying about it so

+353
+00:14:59,680 --> 00:15:04,399
+okay

+354
+00:15:00,759 --> 00:15:07,000
+cool um

+355
+00:15:04,399 --> 00:15:09,720
+so there are differences between code

+356
+00:15:07,000 --> 00:15:14,000
+and natural language uh and I've listed

+357
+00:15:09,720 --> 00:15:15,560
+a few of them here and the differences

+358
+00:15:14,000 --> 00:15:18,120
+between code and natural language also

+359
+00:15:15,560 --> 00:15:20,160
+affect how we build models for this task

+360
+00:15:18,120 --> 00:15:23,160
+so the first one is that code has strict

+361
+00:15:20,160 --> 00:15:26,000
+grammar uh if you make a small mistake

+362
+00:15:23,160 --> 00:15:27,920
+in your code grammar usually it will

+363
+00:15:26,000 --> 00:15:29,839
+just break and your program won't work

+364
+00:15:27,920 --> 00:15:31,319
+so you need to be very careful as

+365
+00:15:29,839 --> 00:15:32,560
+opposed to natural language grammar

+366
+00:15:31,319 --> 00:15:33,600
+where you can make small mistakes and it

+367
+00:15:32,560 --> 00:15:36,120
+doesn't make a

+368
+00:15:33,600 --> 00:15:40,120
+difference another thing is in code you

+369
+00:15:36,120 --> 00:15:42,720
+know the semantic flow of the code and

+370
+00:15:40,120 --> 00:15:44,160
+so we know that certain variables

+371
+00:15:42,720 --> 00:15:45,560
+correspond to each other we know that

+372
+00:15:44,160 --> 00:15:48,639
+they're flowing through the program in a

+373
+00:15:45,560 --> 00:15:50,880
+certain way another thing is code is

+374
+00:15:48,639 --> 00:15:54,120
+executable so we can actually execute it

+375
+00:15:50,880 --> 00:15:56,199
+and observe the result unlike in natural

+376
+00:15:54,120 --> 00:16:00,000
+language and another important thing is

+377
+00:15:56,199 --> 00:16:03,399
+code is created incrementally so code is

+378
+00:16:00,000 --> 00:16:05,680
+not you know unlike text text is also

+379
+00:16:03,399 --> 00:16:07,399
+created incrementally but it's not

+380
+00:16:05,680 --> 00:16:08,720
+usually you write it once you might

+381
+00:16:07,399 --> 00:16:11,199
+revise it a little bit and then you're

+382
+00:16:08,720 --> 00:16:14,040
+done and you don't need to touch it

+383
+00:16:11,199 --> 00:16:15,399
+again but um in code you touch it over

+384
+00:16:14,040 --> 00:16:17,800
+and over and over again as you develop a

+385
+00:16:15,399 --> 00:16:17,800
+software

+386
+00:16:18,040 --> 00:16:23,040
+project so if we look at code generation

+387
+00:16:21,079 --> 00:16:27,079
+um I would like to talk a little bit

+388
+00:16:23,040 --> 00:16:29,079
+about uh subtasks and data sets next so

+389
+00:16:27,079 --> 00:16:30,480
+the most famous data set for code

+390
+00:16:29,079 --> 00:16:34,279
+generation nowadays is something called

+391
+00:16:30,480 --> 00:16:38,680
+HumanEval um this is a very nice data

+392
+00:16:34,279 --> 00:16:42,480
+set um for a number of reasons uh I

+393
+00:16:38,680 --> 00:16:44,240
+think it is used too much um nonetheless

+394
+00:16:42,480 --> 00:16:46,759
+and I think there are better data sets

+395
+00:16:44,240 --> 00:16:51,240
+that we maybe should be using more but

+396
+00:16:46,759 --> 00:16:54,000
+basically HumanEval is um it has

+397
+00:16:51,240 --> 00:16:55,920
+examples of usage of the Python standard

+398
+00:16:54,000 --> 00:16:59,360
+library where some are easier some are

+399
+00:16:55,920 --> 00:17:02,880
+harder and just to give some examples

+400
+00:16:59,360 --> 00:17:06,760
+uh we're saying given a nonempty list of

+401
+00:17:02,880 --> 00:17:10,480
+integers return the sum of all the odd

+402
+00:17:06,760 --> 00:17:12,959
+elements that are in even positions so

+403
+00:17:10,480 --> 00:17:16,079
+it's kind of like a LeetCode

+404
+00:17:12,959 --> 00:17:19,199
+style you know problem but maybe one of

+405
+00:17:16,079 --> 00:17:22,400
+the easier ones and then in order to

+406
+00:17:19,199 --> 00:17:25,240
+solve that you find all of the

+407
+00:17:22,400 --> 00:17:28,480
+elements in even positions and then you

+408
+00:17:25,240 --> 00:17:29,679
+only sum them if uh the value itself

+409
+00:17:28,480 --> 00:17:32,799
+is

+410
+00:17:29,679 --> 00:17:34,200
+odd so like you can do that in a one-liner

+411
+00:17:32,799 --> 00:17:36,600
+but you need to think about it a little

+412
+00:17:34,200 --> 00:17:38,919
+bit um and then you have

+413
+00:17:36,600 --> 00:17:43,120
+more

+414
+00:17:38,919 --> 00:17:43,810
+um 'returns encoded' uh sorry 'takes an

+415
+00:17:43,120 --> 00:17:46,910
+input

+416
+00:17:43,810 --> 00:17:46,910
+[Music]

+417
+00:17:47,160 --> 00:17:50,919
+string' yeah actually sorry this is from

+418
+00:17:49,320 --> 00:17:53,600
+the paper I didn't read it before I copy-

+419
+00:17:50,919 --> 00:17:57,080
+pasted it in here but um yeah that's a

+420
+00:17:53,600 --> 00:17:58,880
+decoding one and one thing about

+421
+00:17:57,080 --> 00:18:02,240
+this uh that's important to know is it

+422
+00:17:58,880 --> 00:18:04,200
+only has 164 examples so it's actually a

+423
+00:18:02,240 --> 00:18:07,600
+relatively small number of

+424
+00:18:04,200 --> 00:18:09,440
+examples um it's also just the Python

+425
+00:18:07,600 --> 00:18:11,200
+standard library so it's not testing

+426
+00:18:09,440 --> 00:18:14,960
+usage of any other

+427
+00:18:11,200 --> 00:18:17,520
+libraries um so these two things

+428
+00:18:14,960 --> 00:18:19,720
+together make it not the most realistic

+429
+00:18:17,520 --> 00:18:21,880
+you know examination of your programming

+430
+00:18:19,720 --> 00:18:23,640
+skills just like LeetCode is not the

+431
+00:18:21,880 --> 00:18:25,640
+most realistic examination of your

+432
+00:18:23,640 --> 00:18:28,240
+programming skills but you know I don't

+433
+00:18:25,640 --> 00:18:31,720
+know companies use it anyway so maybe

+434
+00:18:28,240 --> 00:18:35,159
+HumanEval is reasonable but um so then

+435
+00:18:31,720 --> 00:18:37,120
+we go um into the inputs and outputs uh

+436
+00:18:35,159 --> 00:18:40,679
+the inputs and outputs usually include a

+437
+00:18:37,120 --> 00:18:43,440
+docstring um some input and output

+438
+00:18:40,679 --> 00:18:47,640
+examples and then they have tests to

+439
+00:18:43,440 --> 00:18:47,640
+verify the accuracy of your outputs

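+To make that format concrete, here is a HumanEval-style problem written in
+the spirit of the benchmark (the function name and tests here are
+illustrative, not copied from the data set): a docstring prompt, the
+one-liner reference solution mentioned above, and a hidden check function.
+
+def add_odd_at_even(lst):
+    """Given a non-empty list of integers, return the sum of all of
+    the odd elements that are in even positions.
+
+    Examples:
+        add_odd_at_even([5, 8, 7, 1]) ==> 12
+    """
+    # the "one-liner" solution described above
+    return sum(x for i, x in enumerate(lst) if i % 2 == 0 and x % 2 == 1)
+
+def check(candidate):
+    # unit tests used to verify the accuracy of model outputs
+    assert candidate([5, 8, 7, 1]) == 12
+    assert candidate([3, 3, 3, 3, 3]) == 9
+    assert candidate([30, 13, 24, 321]) == 0
+
+check(add_odd_at_even)
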
+440
+00:18:47,880 --> 00:18:52,840
+so the metric that's used to

+441
+00:18:50,559 --> 00:18:58,919
+evaluate these systems is something

+442
+00:18:52,840 --> 00:19:01,400
+called pass@k and the basic idea is um

+443
+00:18:58,919 --> 00:19:03,400
+if we generate k examples will at least one

+444
+00:19:01,400 --> 00:19:06,960
+of them pass the unit

+445
+00:19:03,400 --> 00:19:10,720
+tests and the idea here is

+446
+00:19:06,960 --> 00:19:13,480
+that if we have models we might want to

+447
+00:19:10,720 --> 00:19:14,960
+generate like well there's a

+448
+00:19:13,480 --> 00:19:17,480
+couple reasons why we would care about

+449
+00:19:14,960 --> 00:19:19,880
+this pass@1 is kind of obvious

+450
+00:19:17,480 --> 00:19:23,200
+because we generate one and then we

+451
+00:19:19,880 --> 00:19:26,480
+measure how um you know how likely it is

+452
+00:19:23,200 --> 00:19:29,280
+to pass unit tests but pass@5 why

+453
+00:19:26,480 --> 00:19:30,760
+would we care about pass@5 well

+454
+00:19:29,280 --> 00:19:32,159
+number one maybe you could show five

+455
+00:19:30,760 --> 00:19:34,240
+programs to a person and they could

+456
+00:19:32,159 --> 00:19:37,039
+choose the one that they like the best

+457
+00:19:34,240 --> 00:19:39,919
+or maybe you could write

+458
+00:19:37,039 --> 00:19:41,720
+unit tests in advance and then generate

+459
+00:19:39,919 --> 00:19:43,880
+five programs check which ones pass the

+460
+00:19:41,720 --> 00:19:45,480
+unit tests and then use only the ones

+461
+00:19:43,880 --> 00:19:48,360
+that pass the unit tests or something

+462
+00:19:45,480 --> 00:19:51,000
+like that so there's also some interest

+463
+00:19:48,360 --> 00:19:53,320
+in uh whether you could generate you

+464
+00:19:51,000 --> 00:19:54,600
+know multiple examples and then pick a

+465
+00:19:53,320 --> 00:19:56,919
+good

+466
+00:19:54,600 --> 00:19:59,080
+one there's a little bit of nuance in

+467
+00:19:56,919 --> 00:20:02,120
+how this is actually calculated so

+468
+00:19:59,080 --> 00:20:04,240
+basically um if you generate only k like

+469
+00:20:02,120 --> 00:20:05,960
+if you sample only one example

+470
+00:20:04,240 --> 00:20:07,400
+there's a lot of variance in whether you

+471
+00:20:05,960 --> 00:20:10,159
+get it right or not so what they

+472
+00:20:07,400 --> 00:20:13,440
+actually do is they generate like 10

+473
+00:20:10,159 --> 00:20:15,600
+outputs or 200 outputs and then they

+474
+00:20:13,440 --> 00:20:18,159
+calculate the expected

+475
+00:20:15,600 --> 00:20:20,320
+number of cases where

+476
+00:20:18,159 --> 00:20:23,280
+that would pass by just doing a little

+477
+00:20:20,320 --> 00:20:25,440
+bit of uh like math calculating the

+478
+00:20:23,280 --> 00:20:28,679
+number of combinations where one passes

+479
+00:20:25,440 --> 00:20:30,720
+or one doesn't and here n is the total

+480
+00:20:28,679 --> 00:20:34,240
+number you generate c is the number of

+481
+00:20:30,720 --> 00:20:36,520
+correct answers and k is uh your

+482
+00:20:34,240 --> 00:20:36,520
+pass@k

+483
+00:20:37,159 --> 00:20:43,360
+value

+484
+00:20:38,919 --> 00:20:46,280
+cool um so any questions about

+485
+00:20:43,360 --> 00:20:47,880
+these you'll see a bunch of uh

+486
+00:20:46,280 --> 00:20:50,520
+people evaluating on this HumanEval

+487
+00:20:47,880 --> 00:20:52,760
+with pass@k including all of the you

+488
+00:20:50,520 --> 00:20:57,520
+know new LLMs that come out it's a very

+489
+00:20:52,760 --> 00:20:57,520
+standard measure yeah

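+The combinatorial estimator being described works out to
+1 - C(n-c, k) / C(n, k); a minimal sketch of that calculation:
+
+from math import comb
+
+def pass_at_k(n: int, c: int, k: int) -> float:
+    # n: total samples generated, c: samples that pass the unit tests,
+    # k: the k in pass@k. Returns the probability that at least one of
+    # k samples drawn without replacement from the n is correct.
+    if n - c < k:
+        return 1.0  # every size-k draw must contain a correct sample
+    return 1.0 - comb(n - c, k) / comb(n, k)
+
+# e.g. with n=200 samples and c=40 correct: pass@1 = 0.2, pass@5 ~= 0.68
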
+ +511 +00:21:51,640 --> 00:21:57,200 +St and actually if you remember last + +512 +00:21:54,279 --> 00:21:59,640 +time uh I talked about the pile which + +513 +00:21:57,200 --> 00:22:01,039 +was or not last time but uh when I + +514 +00:21:59,640 --> 00:22:03,159 +talked about the tour of large language + +515 +00:22:01,039 --> 00:22:06,360 +models I talked about the pile and the + +516 +00:22:03,159 --> 00:22:09,799 +pile is almost half toe for + +517 +00:22:06,360 --> 00:22:12,000 +example cool any other + +518 +00:22:09,799 --> 00:22:17,240 +questions + +519 +00:22:12,000 --> 00:22:19,320 +okay so another uh a first Improvement + +520 +00:22:17,240 --> 00:22:22,080 +or at least change that we can make to + +521 +00:22:19,320 --> 00:22:23,880 +human ofel is uh going to broader + +522 +00:22:22,080 --> 00:22:26,720 +domains and covering a broader variety + +523 +00:22:23,880 --> 00:22:28,559 +of libraries and this is a data set that + +524 +00:22:26,720 --> 00:22:30,880 +we created actually a long time ago but + +525 +00:22:28,559 --> 00:22:33,799 +but we recently added execution based + +526 +00:22:30,880 --> 00:22:36,159 +evaluation to it it's called konola and + +527 +00:22:33,799 --> 00:22:36,919 +the execution based uh evaluation one is + +528 +00:22:36,159 --> 00:22:40,360 +called + +529 +00:22:36,919 --> 00:22:43,039 +odex and basically what we did here is + +530 +00:22:40,360 --> 00:22:45,720 +we scraped data from stack Overflow + +531 +00:22:43,039 --> 00:22:48,039 +including uh inputs and output uh + +532 +00:22:45,720 --> 00:22:50,559 +Solutions and then based on this scraped + +533 +00:22:48,039 --> 00:22:54,240 +data we uh did some manual curation to + +534 +00:22:50,559 --> 00:22:57,640 +turn these into like actual questions um + +535 +00:22:54,240 --> 00:22:59,640 +and answers about how you could write uh + +536 +00:22:57,640 --> 00:23:01,799 +solve programming + +537 +00:22:59,640 --> 00:23:04,080 +problems and + +538 +00:23:01,799 --> 00:23:05,600 +um because this is scraped from stack + +539 +00:23:04,080 --> 00:23:09,159 +Overflow there's no restriction that + +540 +00:23:05,600 --> 00:23:10,520 +this is from the python standard Library + +541 +00:23:09,159 --> 00:23:13,200 +which also means that it can cover a + +542 +00:23:10,520 --> 00:23:14,919 +very wide variety of libraries and it's + +543 +00:23:13,200 --> 00:23:16,760 +approximately according to the + +544 +00:23:14,919 --> 00:23:20,320 +popularity of the libraries because we + +545 +00:23:16,760 --> 00:23:24,159 +took popular posts so um that's a a good + +546 +00:23:20,320 --> 00:23:25,400 +thing uh you know it it is a reasonable + +547 +00:23:24,159 --> 00:23:26,559 +way to come up with a realistic + +548 +00:23:25,400 --> 00:23:29,520 +distribution of libraries that you + +549 +00:23:26,559 --> 00:23:31,799 +should be looking at um odex adds + +550 +00:23:29,520 --> 00:23:34,159 +execution based evaluation previously + +551 +00:23:31,799 --> 00:23:36,679 +what we had was we only had the snippet + +552 +00:23:34,159 --> 00:23:40,600 +that was able to solve the problem as + +553 +00:23:36,679 --> 00:23:42,360 +opposed to um as opposed to being able + +554 +00:23:40,600 --> 00:23:46,880 +to execute unit + +555 +00:23:42,360 --> 00:23:49,440 +tests and just to show how this has a + +556 +00:23:46,880 --> 00:23:52,000 +broader variety of libraries on the top + +557 +00:23:49,440 --> 00:23:53,919 +we have the distribution of odex + +558 +00:23:52,000 --> 00:23:57,320 +libraries and we can see about half of + +559 
+559
+00:23:53,919 --> 00:23:59,600
+them use libraries and this includes a

+560
+00:23:57,320 --> 00:24:01,279
+variety of things including pandas

+561
+00:23:59,600 --> 00:24:04,799
+numpy

+562
+00:24:01,279 --> 00:24:06,400
+um regex os collections you know all of

+563
+00:24:04,799 --> 00:24:09,279
+these should be libraries that look

+564
+00:24:06,400 --> 00:24:14,559
+familiar to you um in contrast if we

+565
+00:24:09,279 --> 00:24:17,200
+look at HumanEval HumanEval is right

+566
+00:24:14,559 --> 00:24:18,840
+here so you can see almost all of the

+567
+00:24:17,200 --> 00:24:20,600
+questions require no libraries and all

+568
+00:24:18,840 --> 00:24:22,120
+of the other ones require libraries that

+569
+00:24:20,600 --> 00:24:24,360
+were included in the Python standard

+570
+00:24:22,120 --> 00:24:27,640
+library so

+571
+00:24:24,360 --> 00:24:29,120
+um in reality this is probably more what

+572
+00:24:27,640 --> 00:24:30,120
+your programming queries are going to

+573
+00:24:29,120 --> 00:24:31,240
+look like they're not going to look like

+574
+00:24:30,120 --> 00:24:33,600
+LeetCode they're going to look like

+575
+00:24:31,240 --> 00:24:33,600
+using

+576
+00:24:35,360 --> 00:24:42,080
+APIs so um originally when we did CoNaLa

+577
+00:24:40,039 --> 00:24:44,200
+we didn't use execution-based evaluation

+578
+00:24:42,080 --> 00:24:47,480
+because creating unit tests uh for lots

+579
+00:24:44,200 --> 00:24:51,360
+of Stack Overflow posts is hard

+580
+00:24:47,480 --> 00:24:53,640
+um specifically there's two issues the

+581
+00:24:51,360 --> 00:24:55,000
+first one is that it requires that code

+582
+00:24:53,640 --> 00:24:58,880
+be easily

+583
+00:24:55,000 --> 00:25:02,320
+executable um now think about

+584
+00:24:58,880 --> 00:25:04,559
+how you would do that for matplotlib

+585
+00:25:02,320 --> 00:25:06,200
+for example how would you create a unit

+586
+00:25:04,559 --> 00:25:08,080
+test to test whether matplotlib

+587
+00:25:06,200 --> 00:25:10,760
+successfully created a bar chart for

+588
+00:25:08,080 --> 00:25:12,440
+something it's kind of tough right you

+589
+00:25:10,760 --> 00:25:13,840
+like you would have to get the image and

+590
+00:25:12,440 --> 00:25:16,919
+you'd have to confirm that the image was

+591
+00:25:13,840 --> 00:25:21,200
+a bar chart and uh other things like

+592
+00:25:16,919 --> 00:25:22,720
+that um even worse what if it was uh

+593
+00:25:21,200 --> 00:25:25,600
+kind of like a server framework like

+594
+00:25:22,720 --> 00:25:27,440
+Django how would you confirm that a Django

+595
+00:25:25,600 --> 00:25:30,559
+you know server is working appropriately

+596
+00:25:27,440 --> 00:25:32,600
+and that's kind of tricky so um actually

+597
+00:25:30,559 --> 00:25:34,480
+coming up with realistic unit tests for

+598
+00:25:32,600 --> 00:25:36,919
+real programs can be

+599
+00:25:34,480 --> 00:25:38,840
+difficult um another problem with

+600
+00:25:36,919 --> 00:25:41,640
+execution-based evaluation is it ignores

+601
+00:25:38,840 --> 00:25:45,320
+stylistic considerations so I could

+602
+00:25:41,640 --> 00:25:48,279
+write very spaghetti

+603
+00:25:45,320 --> 00:25:50,200
+code and as long as it executed properly

+604
+00:25:48,279 --> 00:25:52,559
+it would still be judged as correct and

+605
+00:25:50,200 --> 00:25:54,399
+sometimes that's actually an issue so

+606
+00:25:52,559 --> 00:25:56,360
+usually it's not a problem because

+607
+00:25:54,399 --> 00:25:58,600
+language models write reasonably good

+608
+00:25:56,360 --> 00:26:00,600
+code but sometimes you want to match the

+609
+00:25:58,600 --> 00:26:05,039
+style or other things like that

+610
+00:26:00,600 --> 00:26:06,559
+so some alternatives are BLEU score

+611
+00:26:05,039 --> 00:26:09,000
+which we've talked about before it's

+612
+00:26:06,559 --> 00:26:12,679
+basically calculating the n-gram

+613
+00:26:09,000 --> 00:26:16,919
+overlap between a gold-standard human uh

+614
+00:26:12,679 --> 00:26:20,440
+implementation and the system

+615
+00:26:16,919 --> 00:26:24,000
+output and there's also specifically

+616
+00:26:20,440 --> 00:26:26,480
+adapted methods for evaluating code and

+617
+00:26:24,000 --> 00:26:29,080
+so there's a method called CodeBLEU and

+618
+00:26:26,480 --> 00:26:31,360
+basically the way CodeBLEU works is it

+619
+00:26:29,080 --> 00:26:35,240
+also considers the syntax and semantic

+620
+00:26:31,360 --> 00:26:37,080
+flow of the code so it measures overlap

+621
+00:26:35,240 --> 00:26:40,120
+between

+622
+00:26:37,080 --> 00:26:42,120
+strings in the original code but it also

+623
+00:26:40,120 --> 00:26:48,640
+considers overlap between the syntax

+624
+00:26:42,120 --> 00:26:53,000
+trees of the code and uh whether the

+625
+00:26:48,640 --> 00:26:56,320
+um these like semantic information flow

+626
+00:26:53,000 --> 00:26:57,919
+graphs look similar so uh all of

+627
+00:26:56,320 --> 00:26:59,440
+these things work together to calculate

+628
+00:26:57,919 --> 00:27:02,720
+the CodeBLEU

+629
+00:26:59,440 --> 00:27:04,480
+score one thing I should mention is how

+630
+00:27:02,720 --> 00:27:06,840
+do we get these syntax trees in the

+631
+00:27:04,480 --> 00:27:09,039
+first place um for example if we're

+632
+00:27:06,840 --> 00:27:12,919
+talking about Python there's a Python

+633
+00:27:09,039 --> 00:27:14,760
+library uh for abstract syntax trees

+634
+00:27:12,919 --> 00:27:16,559
+it's just part of the standard library

+635
+00:27:14,760 --> 00:27:18,320
+and it's necessary to run the Python

+636
+00:27:16,559 --> 00:27:20,559
+interpreter so you can just get these

+637
+00:27:18,320 --> 00:27:24,320
+trees directly from the Python ast

+638
+00:27:20,559 --> 00:27:25,880
+library uh not hard to do uh for this I

+639
+00:27:24,320 --> 00:27:27,840
+forget what they did in the CodeBLEU

+640
+00:27:25,880 --> 00:27:30,679
+paper but there are uh analyzers that

+641
+00:27:27,840 --> 00:27:32,120
+allow you to analyze the control flow so

+642
+00:27:30,679 --> 00:27:34,159
+this is taking advantage of the fact

+643
+00:27:32,120 --> 00:27:37,440
+that code is you know predictable it has

+644
+00:27:34,159 --> 00:27:41,480
+predictable syntax and you can

+645
+00:27:37,440 --> 00:27:43,960
+parse it um one disadvantage of BLEU and

+646
+00:27:41,480 --> 00:27:45,799
+CodeBLEU of course is that you know you

+647
+00:27:43,960 --> 00:27:47,679
+can write two very different-looking

+648
+00:27:45,799 --> 00:27:49,559
+programs that actually are both correct

+649
+00:27:47,679 --> 00:27:51,799
+and BLEU will underestimate the goodness

+650
+00:27:49,559 --> 00:27:54,440
+of those programs so maybe using both of

+651
+00:27:51,799 --> 00:27:57,159
+them together is uh

+652
+00:27:54,440 --> 00:28:00,120
+appropriate uh if you can write unit

+653
+00:27:57,159 --> 00:28:00,120
+tests please do

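+For example, getting a syntax tree out of the standard library is one call;
+this small sketch shows the raw material that subtree matching of the
+CodeBLEU flavor works over:
+
+import ast
+
+# Two snippets with different identifiers but identical structure.
+tree_a = ast.parse("total = total + 1")
+tree_b = ast.parse("count = count + 1")
+
+# ast.dump() renders the tree; comparing trees while ignoring identifier
+# names is roughly the kind of syntactic overlap described above.
+print(ast.dump(tree_a))
+print(ast.dump(tree_b))
+# Both print Module(body=[Assign(targets=[Name(...)], value=BinOp(...))], ...)
+# (output abbreviated; exact fields vary by Python version)
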
+654
+00:28:00,559 --> 00:28:04,279
+um another one which I'll just cover

+655
+00:28:02,600 --> 00:28:05,399
+very briefly we talked about BERTScore

+656
+00:28:04,279 --> 00:28:08,159
+before when I was talking about

+657
+00:28:05,399 --> 00:28:11,120
+evaluation of uh you know generated text

+658
+00:28:08,159 --> 00:28:13,480
+and there's also CodeBERTScore which um

+659
+00:28:11,120 --> 00:28:15,799
+we uh we created here at

+660
+00:28:13,480 --> 00:28:20,080
+CMU and it's basically an embedding-

+661
+00:28:15,799 --> 00:28:21,760
+based metric uh to compare code and so

+662
+00:28:20,080 --> 00:28:23,399
+BERTScore if you remember basically

+663
+00:28:21,760 --> 00:28:25,679
+what it did is it calculated the cosine

+664
+00:28:23,399 --> 00:28:27,840
+similarity between each of the tokens uh

+665
+00:28:25,679 --> 00:28:30,159
+between a generated text and a reference

+666
+00:28:27,840 --> 00:28:34,279
+text we do exactly the same thing for

+667
+00:28:30,159 --> 00:28:36,080
+code um so we calculate the cosine

+668
+00:28:34,279 --> 00:28:39,200
+similarity between tokens for a

+669
+00:28:36,080 --> 00:28:42,960
+reference code and generated

+670
+00:28:39,200 --> 00:28:45,000
+code and we released a model called

+671
+00:28:42,960 --> 00:28:46,559
+CodeBERT which was basically BERT but

+672
+00:28:45,000 --> 00:28:49,440
+trained further on lots and lots of

+673
+00:28:46,559 --> 00:28:51,840
+code uh that allowed us to do that and

+674
+00:28:49,440 --> 00:28:55,480
+um basically we were able to demonstrate

+675
+00:28:51,840 --> 00:28:59,200
+that this gave better correlation both

+676
+00:28:55,480 --> 00:29:01,480
+with final execution accuracy and with

+677
+00:28:59,200 --> 00:29:05,200
+human judgments of whether the code

+678
+00:29:01,480 --> 00:29:08,000
+was correct and so um some people uh

+679
+00:29:05,200 --> 00:29:09,559
+created a data set of human correctness

+680
+00:29:08,000 --> 00:29:12,559
+judgments and we were able to correlate a

+681
+00:29:09,559 --> 00:29:14,240
+little better with that as well um why

+682
+00:29:12,559 --> 00:29:15,640
+do we care about correlation with

+683
+00:29:14,240 --> 00:29:17,399
+execution

+684
+00:29:15,640 --> 00:29:20,200
+accuracy

+685
+00:29:17,399 --> 00:29:22,320
+um this is important in the cases when

+686
+00:29:20,200 --> 00:29:23,559
+we can't create unit tests or when

+687
+00:29:22,320 --> 00:29:26,120
+creating unit tests would be too

+688
+00:29:23,559 --> 00:29:27,519
+expensive so this gives us a better

+689
+00:29:26,120 --> 00:29:30,640
+approximation for what we would get if

+690
+00:29:27,519 --> 00:29:30,640
+we ran tests

+691
+00:29:39,840 --> 00:29:45,000
+yeah so we did not

+692
+00:29:42,600 --> 00:29:46,799
+consider code structure here uh would

+693
+00:29:45,000 --> 00:29:48,480
+different variable names affect it yes

+694
+00:29:46,799 --> 00:29:50,159
+different variable names would affect it

+695
+00:29:48,480 --> 00:29:51,799
+but not as much as the other metrics

+696
+00:29:50,159 --> 00:29:53,960
+which is why it's better why it has

+697
+00:29:51,799 --> 00:29:56,720
+better

+698
+00:29:53,960 --> 00:30:00,000
+correlations and like for example

+699
+00:29:56,720 --> 00:30:03,679
+CodeBERT I imagine probably gives very

+700
+00:30:00,000 --> 00:30:05,120
+similar representations to i and j just

+701
+00:30:03,679 --> 00:30:07,960
+because they're both used in iterators

+702
+00:30:05,120 --> 00:30:09,039
+all the time whereas uh a normal BERT

+703
+00:30:07,960 --> 00:30:10,960
+model would give very different

+704
+00:30:09,039 --> 00:30:12,760
+representations to i and j right because

+705
+00:30:10,960 --> 00:30:14,960
+'I' is like a personal pronoun and j is

+706
+00:30:12,760 --> 00:30:17,200
+not so um that's the reason why

+707
+00:30:14,960 --> 00:30:20,399
+continued training would

+708
+00:30:17,200 --> 00:30:24,799
+help cool any other questions

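+The token-matching computation is the same greedy max-cosine matching as
+BERTScore; a minimal numpy sketch, assuming you already have per-token
+embeddings from a code-trained encoder:
+
+import numpy as np
+
+def bertscore_f1(ref_emb: np.ndarray, gen_emb: np.ndarray) -> float:
+    # ref_emb: (ref_tokens, dim), gen_emb: (gen_tokens, dim); rows are
+    # assumed L2-normalized, so the dot product is cosine similarity.
+    sim = ref_emb @ gen_emb.T
+    recall = sim.max(axis=1).mean()     # best match for each reference token
+    precision = sim.max(axis=0).mean()  # best match for each generated token
+    return 2 * precision * recall / (precision + recall)
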
+709
+00:30:20,399 --> 00:30:26,640
+okay so another um another place

+710
+00:30:24,799 --> 00:30:29,480
+where code generation can be useful uh

+711
+00:30:26,640 --> 00:30:33,440
+we had the example of Colab uh is in

+712
+00:30:29,480 --> 00:30:36,200
+Colab notebooks or in uh data

+713
+00:30:33,440 --> 00:30:38,519
+science notebooks this paper was by uh

+714
+00:30:36,200 --> 00:30:41,440
+Google so this might actually even be

+715
+00:30:38,519 --> 00:30:43,960
+used in the Colab thing because Colab

+716
+00:30:41,440 --> 00:30:45,640
+is a Google thing um but data

+717
+00:30:43,960 --> 00:30:47,320
+science notebooks allow for incremental

+718
+00:30:45,640 --> 00:30:50,519
+implementation I'm sure a lot of people

+719
+00:30:47,320 --> 00:30:53,559
+here or almost everybody here uses them

+720
+00:30:50,519 --> 00:30:55,279
+um and another interesting thing is they

+721
+00:30:53,559 --> 00:30:57,519
+allow for evaluation of code generation

+722
+00:30:55,279 --> 00:30:58,960
+in context uh or incremental code

+723
+00:30:57,519 --> 00:31:00,639
+generation

+724
+00:30:58,960 --> 00:31:02,720
+and so you start out with like a

+725
+00:31:00,639 --> 00:31:04,880
+notebook and then you have a natural

+726
+00:31:02,720 --> 00:31:06,600
+language command and you generate the

+727
+00:31:04,880 --> 00:31:09,240
+output then another command you generate the

+728
+00:31:06,600 --> 00:31:10,799
+output etc etc so this is an actual

+729
+00:31:09,240 --> 00:31:14,519
+example from the data

+730
+00:31:10,799 --> 00:31:17,519
+set um so this paper is very nice it

+731
+00:31:14,519 --> 00:31:20,320
+has a lot of uh you know it's a nice

+732
+00:31:17,519 --> 00:31:21,720
+data set one other thing that was really

+733
+00:31:20,320 --> 00:31:24,200
+interesting from this paper is it

+734
+00:31:21,720 --> 00:31:27,919
+demonstrated the problem of data leakage

+735
+00:31:24,200 --> 00:31:29,679
+in evaluating models and this is a

+736
+00:31:27,919 --> 00:31:32,440
+relatively large problem I don't know if

+737
+00:31:29,679 --> 00:31:33,799
+we have a silver-bullet solution for

+738
+00:31:32,440 --> 00:31:36,120
+this but it's an important thing to be

+739
+00:31:33,799 --> 00:31:38,120
+aware of uh not just for code generation

+740
+00:31:36,120 --> 00:31:39,639
+but these are examples from code

+741
+00:31:38,120 --> 00:31:43,519
+generation

+742
+00:31:39,639 --> 00:31:45,679
+so here um in the ARCADE data set they

+743
+00:31:43,519 --> 00:31:48,519
+basically evaluated both existing

+744
+00:31:45,679 --> 00:31:51,720
+notebooks and new notebooks so

+745
+00:31:48,519 --> 00:31:53,279
+um existing notebooks that they got

+746
+00:31:51,720 --> 00:31:55,960
+from the web and

+747
+00:31:53,279 --> 00:31:59,000
+notebooks that they actually created

+748
+00:31:55,960 --> 00:32:00,399
+themselves and there's a very stark

+749
+00:31:59,000 --> 00:32:02,600
+difference between the notebooks that

+750
+00:32:00,399 --> 00:32:04,440
+were created on the web and the

+751
+00:32:02,600 --> 00:32:07,399
+notebooks that they created themselves

+752
+00:32:04,440 --> 00:32:10,159
+so like most of the code generation

+753
+00:32:07,399 --> 00:32:11,679
+models except for PaLM uh which was the

+754
+00:32:10,159 --> 00:32:14,760
+best model when they created this data

+755
+00:32:11,679 --> 00:32:17,360
+set did really well

+756
+00:32:14,760 --> 00:32:21,120
+on the existing data and quite poorly on

+757
+00:32:17,360 --> 00:32:25,279
+the new data um which is probably an

+758
+00:32:21,120 --> 00:32:28,159
+indication of

+759
+00:32:25,279 --> 00:32:29,720
+the fact that you know this has to

+760
+00:32:28,159 --> 00:32:32,240
+some extent leaked into the training

+761
+00:32:29,720 --> 00:32:35,320
+data of the language models there was

+762
+00:32:32,240 --> 00:32:37,760
+also a very recent

+763
+00:32:35,320 --> 00:32:40,240
+um paper actually I think this might be

+764
+00:32:37,760 --> 00:32:43,159
+2024 there was a very recent paper that

+765
+00:32:40,240 --> 00:32:45,880
+did a similar thing uh where they

+766
+00:32:43,159 --> 00:32:48,440
+evaluated on HumanEval and then their

+767
+00:32:45,880 --> 00:32:52,000
+LiveCodeBench in LiveCodeBench

+768
+00:32:48,440 --> 00:32:55,639
+basically what they did is they tried to

+769
+00:32:52,000 --> 00:32:58,519
+pick problems from LeetCode and other

+770
+00:32:55,639 --> 00:33:00,519
+websites that were more recent versus

+771
+00:32:58,519 --> 00:33:01,960
+less recent and they have some really

+772
+00:33:00,519 --> 00:33:04,880
+nice graphs in their paper where they

+773
+00:33:01,960 --> 00:33:06,519
+demonstrate that the less recent ones

+774
+00:33:04,880 --> 00:33:08,159
+before the training cutoff have like a

+775
+00:33:06,519 --> 00:33:10,080
+high accuracy and then suddenly it drops

+776
+00:33:08,159 --> 00:33:12,639
+right at the training cutoff of the

+777
+00:33:10,080 --> 00:33:13,480
+models so this is something to be

+778
+00:33:12,639 --> 00:33:17,360
+aware

+779
+00:33:13,480 --> 00:33:20,519
+of and what this figure is showing here

+780
+00:33:17,360 --> 00:33:24,039
+is this figure is showing on the x-axis

+781
+00:33:20,519 --> 00:33:26,840
+pass@1 on the LiveCodeBench easy

+782
+00:33:24,039 --> 00:33:28,679
+set and then pass@1 on HumanEval so we

+783
+00:33:26,840 --> 00:33:31,480
+see this nice

+784
+00:33:28,679 --> 00:33:34,039
+correlation between

+785
+00:33:31,480 --> 00:33:35,919
+essentially like passing on LiveCodeBench

+786
+00:33:34,039 --> 00:33:37,399
+easy and passing on HumanEval

+787
+00:33:35,919 --> 00:33:40,000
+then we have this group of models that

+788
+00:33:37,399 --> 00:33:42,159
+are kind of like up here and these are

+789
+00:33:40,000 --> 00:33:43,960
+ones where basically it's likely that

+790
+00:33:42,159 --> 00:33:46,480
+HumanEval leaked into the training data

+791
+00:33:43,960 --> 00:33:48,840
+because they're getting better scores on

+792
+00:33:46,480 --> 00:33:50,919
+HumanEval than you would expect

+793
+00:33:48,840 --> 00:33:53,360
+uh you know just looking at

+794
+00:33:50,919 --> 00:33:55,360
+their uh you know performance on another

+795
+00:33:53,360 --> 00:33:57,320
+data set there's also a nice like

+796
+00:33:55,360 --> 00:34:00,000
+analogous one for math reasoning

+797
+00:33:57,320 --> 00:34:01,519
+problems um like this so this is

+798
+00:34:00,000 --> 00:34:03,039
+definitely something to be aware of if

+799
+00:34:01,519 --> 00:34:04,559
+you're looking only at like very

+800
+00:34:03,039 --> 00:34:06,200
+standard benchmarks that people are

+801
+00:34:04,559 --> 00:34:11,159
+training

+802
+00:34:06,200 --> 00:34:11,159
+on cool um any questions about

+803
+00:34:12,119 --> 00:34:19,240
+this okay um another data set uh that I

+804
+00:34:17,720 --> 00:34:20,599
+really like the concept of and

+805
+00:34:19,240 --> 00:34:22,919
+recently it's gotten a little bit of

+806
+00:34:20,599 --> 00:34:25,399
+buzz because it was used in an

+807
+00:34:22,919 --> 00:34:28,399
+evaluation of a new coding assistant

+808
+00:34:25,399 --> 00:34:30,480
+called Devin but this is um

+809
+00:34:28,399 --> 00:34:32,240
+something called SWE-bench and issues

+810
+00:34:30,480 --> 00:34:34,639
+from GitHub and code

+811
+00:34:32,240 --> 00:34:37,119
+bases are the input and you want to

+812
+00:34:34,639 --> 00:34:39,480
+generate a pull request to basically uh

+813
+00:34:37,119 --> 00:34:42,919
+solve these issues and so your input is

+814
+00:34:39,480 --> 00:34:45,800
+like data leak in GBDT due to warm start

+815
+00:34:42,919 --> 00:34:48,800
+this one's about the non-standard version then you have

+816
+00:34:45,800 --> 00:34:51,159
+the code base um it generates a PR for

+817
+00:34:48,800 --> 00:34:53,079
+you and then it's run through the unit

+818
+00:34:51,159 --> 00:34:55,919
+tests to see if it passes all the unit

+819
+00:34:53,079 --> 00:34:57,160
+tests post-PR so it's very similar to

+820
+00:34:55,919 --> 00:34:59,240
+you know what you would be doing in a

+821
+00:34:57,160 --> 00:35:01,280
+well-maintained software project you open an

+822
+00:34:59,240 --> 00:35:05,240
+issue and then you open a pull request

+823
+00:35:01,280 --> 00:35:07,800
+to fix the issue um this requires things

+824
+00:35:05,240 --> 00:35:10,240
+like long-context understanding um being

+825
+00:35:07,800 --> 00:35:13,200
+able to do very precise implementations

+826
+00:35:10,240 --> 00:35:14,720
+based on large software projects and

+827
+00:35:13,200 --> 00:35:17,920
+right now the state of the art on this

+828
+00:35:14,720 --> 00:35:20,680
+is at about 14% so it's definitely not a

+829
+00:35:17,920 --> 00:35:23,119
+solved problem at all um in the original

+830
+00:35:20,680 --> 00:35:27,920
+paper uh the state-of-the-art method

+831
+00:35:23,119 --> 00:35:29,400
+was like 6% or something like that so um

+832
+00:35:27,920 --> 00:35:32,079
+I imagine that we're not going to get up

+833
+00:35:29,400 --> 00:35:33,880
+to 90% anytime soon because it's

+834
+00:35:32,079 --> 00:35:35,720
+probably solving the easier ones and the

+835
+00:35:33,880 --> 00:35:37,280
+harder ones are you know far beyond the

+836
+00:35:35,720 --> 00:35:39,920
+ability of any language model we have at

+837
+00:35:37,280 --> 00:35:42,320
+the moment um but I really like this

+838
+00:35:39,920 --> 00:35:43,960
+benchmark one caveat if you really like

+839
+00:35:42,320 --> 00:35:45,520
+this benchmark is that it's kind of

+840
+00:35:43,960 --> 00:35:47,760
+heavy to run so you need to be a little

+841
+00:35:45,520 --> 00:35:51,000
+bit careful uh because you need to pull

+842
+00:35:47,760 --> 00:35:54,280
+in like full repositories to run

+843
+00:35:51,000 --> 00:35:56,319
+on so yeah be a little

+844
+00:35:54,280 --> 00:35:57,920
+bit careful sorry there's so many

+845
+00:35:56,319 --> 00:35:59,640
+interesting data sets recently in this

+846
+00:35:57,920 --> 00:36:01,079
+area that I spent a lot of time on

+847
+00:35:59,640 --> 00:36:04,240
+data sets so I'll try to go a little bit

+848
+00:36:01,079 --> 00:36:06,200
+more quickly but um a final one is

+849
+00:36:04,240 --> 00:36:09,359
+Design2Code and this is also a very

+850
+00:36:06,200 --> 00:36:11,520
+recent data set um basically the idea is

+851
+00:36:09,359 --> 00:36:16,359
+code generation from websites so your

+852
+00:36:11,520 --> 00:36:18,119
+input is a website and your output is uh

+853
+00:36:16,359 --> 00:36:22,520
+like JavaScript code that implements

+854
+00:36:18,119 --> 00:36:24,960
+that website or CSS or HTML code

+855
+00:36:22,520 --> 00:36:26,880
+that implements the website so I

+856
+00:36:24,960 --> 00:36:30,119
+really like this because you know it's a

+857
+00:36:26,880 --> 00:36:32,280
+good test bed for multimodal models and

+858
+00:36:30,119 --> 00:36:34,040
+there aren't a whole lot of strong open-

+859
+00:36:32,280 --> 00:36:36,160
+source multimodal models that can solve

+860
+00:36:34,040 --> 00:36:36,960
+this at the moment so I think it's kind

+861
+00:36:36,160 --> 00:36:39,720
+of

+862
+00:36:36,960 --> 00:36:41,480
+cool um they also proposed a Design2Code

+863
+00:36:39,720 --> 00:36:43,480
+model that does the best on this

+864
+00:36:41,480 --> 00:36:47,119
+data set out of uh you know any of the

+865
+00:36:43,480 --> 00:36:47,119
+open-source models but it's still far

+866
+00:36:47,400 --> 00:36:53,040
+from solved and then the question becomes how

+867
+00:36:50,680 --> 00:36:56,079
+do they um evaluate this in the first

+868
+00:36:53,040 --> 00:36:59,440
+place and basically the idea is that

+869
+00:36:56,079 --> 00:37:01,400
+they do high-level visual similarity and so

+870
+00:36:59,440 --> 00:37:03,920
+they calculate visual embeddings of the

+871
+00:37:01,400 --> 00:37:06,119
+generated sites and then they also do

+872
+00:37:03,920 --> 00:37:08,240
+low-level element similarity so they try to

+873
+00:37:06,119 --> 00:37:10,440
+identify all of the elements in the

+874
+00:37:08,240 --> 00:37:12,119
+generated web page and make sure that uh

+875
+00:37:10,440 --> 00:37:15,720
+they recall all of the reference

+876
+00:37:12,119 --> 00:37:18,760
+elements so um I think this is nice one

+877
+00:37:15,720 --> 00:37:21,000
+thing if you notice um if you use even

+878
+00:37:18,760 --> 00:37:25,960
+state-of-the-art like closed models like

+879
+00:37:21,000 --> 00:37:28,040
+Claude 3 or um GPT-4 is they're really bad

+880
+00:37:25,960 --> 00:37:29,440
+at this recall they can generate

+881
+00:37:28,040 --> 00:37:31,800
+something that looks like maybe a little

+882
+00:37:29,440 --> 00:37:33,839
+bit similar but it will be missing like

+883
+00:37:31,800 --> 00:37:35,720
+the elements the design will be off you

+884
+00:37:33,839 --> 00:37:37,720
+know other stuff like that so I think

+885
+00:37:35,720 --> 00:37:41,079
+even in the closed like strong models

+886
+00:37:37,720 --> 00:37:41,079
+this is not a solved

+887
+00:37:41,319 --> 00:37:47,079
+problem cool

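+A rough sketch of that two-part evaluation; embed_image and elements_match
+are stand-ins for whatever visual encoder and element matcher you use, so
+this illustrates the idea rather than reproducing the paper's code:
+
+import numpy as np
+
+def design2code_scores(ref_shot, gen_shot, ref_elems, gen_elems,
+                       embed_image, elements_match):
+    # High-level visual similarity: cosine between screenshot embeddings.
+    v_ref, v_gen = embed_image(ref_shot), embed_image(gen_shot)
+    visual = float(v_ref @ v_gen /
+                   (np.linalg.norm(v_ref) * np.linalg.norm(v_gen)))
+    # Low-level element similarity: fraction of reference elements (text
+    # blocks, images, ...) recalled somewhere in the generated page.
+    matched = sum(any(elements_match(r, g) for g in gen_elems)
+                  for r in ref_elems)
+    return visual, matched / len(ref_elems)
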
+888
+00:37:45,000 --> 00:37:49,880
+uh yeah

+889
+00:37:47,079 --> 00:37:51,880
+um so why is that a hard problem

+890
+00:37:49,880 --> 00:37:54,200
+for the models I don't actually have a

+891
+00:37:51,880 --> 00:37:57,200
+really confident answer to that but I

+892
+00:37:54,200 --> 00:37:57,200
+think

+893
+00:38:00,240 --> 00:38:05,200
+so one thing I can tell you is that they

+894
+00:38:02,839 --> 00:38:08,839
+are able to

+895
+00:38:05,200 --> 00:38:12,000
+improve um so they're able to generate

+896
+00:38:08,839 --> 00:38:14,720
+something and then I say no that's bad

+897
+00:38:12,000 --> 00:38:16,160
+please like make it better and it's

+898
+00:38:14,720 --> 00:38:17,800
+generally better the second time

+899
+00:38:16,160 --> 00:38:19,920
+especially if you give specific things

+900
+00:38:17,800 --> 00:38:22,319
+like oh uh but the background on the

+901
+00:38:19,920 --> 00:38:25,160
+generated site is white but actually it

+902
+00:38:22,319 --> 00:38:27,599
+should be black and if you think about

+903
+00:38:25,160 --> 00:38:31,480
+like even a skilled human programmer do

+904
+00:38:27,599 --> 00:38:35,119
+you think you could write like website

+905
+00:38:31,480 --> 00:38:37,680
+code and then view it once and then it

+906
+00:38:35,119 --> 00:38:40,319
+would be correct I think you probably

+907
+00:38:37,680 --> 00:38:42,160
+couldn't right and so like we're asking

+908
+00:38:40,319 --> 00:38:44,040
+models to do essentially the same thing

+909
+00:38:42,160 --> 00:38:46,920
+except they're like even worse than us

+910
+00:38:44,040 --> 00:38:48,560
+at you know keeping track of all the

+911
+00:38:46,920 --> 00:38:50,720
+visual elements and stuff so I think

+912
+00:38:48,560 --> 00:38:52,480
+it's more like this problem probably

+913
+00:38:50,720 --> 00:38:54,720
+just needs iterative refinement

+914
+00:38:52,480 --> 00:38:58,839
+otherwise it's like asking too much of a

+915
+00:38:54,720 --> 00:39:02,640
+model maybe I don't know

+916
+00:38:58,839 --> 00:39:04,520
+cool okay so um let's go into methods

+917
+00:39:02,640 --> 00:39:06,920
+and code generation has some unique

+918
+00:39:04,520 --> 00:39:09,400
+things um the basic method that you can

+919
+00:39:06,920 --> 00:39:11,240
+always use is a code-generating LM and

+920
+00:39:09,400 --> 00:39:13,040
+so you feed in previous code or you feed

+921
+00:39:11,240 --> 00:39:16,040
+in whatever context you have into the LM

+922
+00:39:13,040 --> 00:39:18,079
+and you generate um from it and

+923
+00:39:16,040 --> 00:39:20,079
+virtually all serious LMs are trained on

+924
+00:39:18,079 --> 00:39:23,079
+code nowadays like I just mentioned

+925
+00:39:20,079 --> 00:39:23,079
+before

+926
+00:39:23,119 --> 00:39:29,920
+um one important thing here is uh

+927
+00:39:28,560 --> 00:39:31,240
+when you're generating if you're

+928
+00:39:29,920 --> 00:39:33,040
+generating for something like code

+929
+00:39:31,240 --> 00:39:34,480
+generation I definitely suggest that you

+930
+00:39:33,040 --> 00:39:36,119
+modify your temperature settings

+931
+00:39:34,480 --> 00:39:38,359
+appropriately and set it to a low

+932
+00:39:36,119 --> 00:39:42,160
+temperature um otherwise you'll get kind

+933
+00:39:38,359 --> 00:39:45,079
+of crazy uh code but if you set it to a

+934
+00:39:42,160 --> 00:39:45,079
+low temperature you can get better code

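+What a low temperature does mechanically, as a generic sampling sketch
+rather than any particular model's API:
+
+import numpy as np
+
+def sample_token(logits: np.ndarray, temperature: float = 0.2) -> int:
+    # Dividing logits by a small temperature sharpens the softmax, so
+    # sampling stays close to greedy decoding, which is usually what you
+    # want for code, where one odd token can break the whole program.
+    z = logits / temperature
+    z -= z.max()  # subtract max for numerical stability
+    probs = np.exp(z) / np.exp(z).sum()
+    return int(np.random.choice(len(probs), p=probs))
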
+935
+00:39:46,440 --> 00:39:52,160
+anyway um one really core

+936
+00:39:49,640 --> 00:39:54,240
+capability of code LMs especially ones

+937
+00:39:52,160 --> 00:39:55,599
+that you use in your IDE like uh

+938
+00:39:54,240 --> 00:39:58,160
+Copilot is

+939
+00:39:55,599 --> 00:40:00,000
+infilling and um

+940
+00:39:58,160 --> 00:40:03,680
+the paper that proposed this is

+941
+00:40:00,000 --> 00:40:05,920
+actually by Daniel Fried at LTI here and

+942
+00:40:03,680 --> 00:40:09,160
+um

+943
+00:40:05,920 --> 00:40:11,240
+basically what you want to do often

+944
+00:40:09,160 --> 00:40:13,000
+is you have previous code you have next

+945
+00:40:11,240 --> 00:40:14,680
+code and you want to just fill in like a

+946
+00:40:13,000 --> 00:40:17,960
+line that's missing like you want to add

+947
+00:40:14,680 --> 00:40:19,040
+an extra you know if statement or

+948
+00:40:17,960 --> 00:40:22,720
+some sort of

+949
+00:40:19,040 --> 00:40:24,880
+modification and so the way that at

+950
+00:40:22,720 --> 00:40:27,000
+least this paper proposed it and the way

+951
+00:40:24,880 --> 00:40:29,800
+that I think most LMs are actually doing

+952
+00:40:27,000 --> 00:40:30,640
+this is they take a standard left-to-

+953
+00:40:29,800 --> 00:40:33,200
+right

+954
+00:40:30,640 --> 00:40:36,040
+LM and what they want to do is they want

+955
+00:40:33,200 --> 00:40:39,040
+to infill this code chunk and so what

+956
+00:40:36,040 --> 00:40:40,440
+they do is they put a mask in the place

+957
+00:40:39,040 --> 00:40:42,119
+where they want to fill the chunk which

+958
+00:40:40,440 --> 00:40:46,280
+would also be where your cursor is in

+959
+00:40:42,119 --> 00:40:49,960
+your IDE right uh at that point and then

+960
+00:40:46,280 --> 00:40:52,680
+they have mask zero there and then at the

+961
+00:40:49,960 --> 00:40:57,400
+end they put mask zero again and then

+962
+00:40:52,680 --> 00:40:59,000
+they output the like you know all of the

+963
+00:40:57,400 --> 00:41:01,040
+code that you want to generate there and

+964
+00:40:59,000 --> 00:41:02,839
+so you can just kind of arbitrarily

+965
+00:41:01,040 --> 00:41:05,480
+generate these chunks by you

+966
+00:41:02,839 --> 00:41:07,000
+know masking out chunks uh putting in

+967
+00:41:05,480 --> 00:41:08,960
+the mask token and then moving it to the

+968
+00:41:07,000 --> 00:41:10,440
+end of the sequence and then you can

+969
+00:41:08,960 --> 00:41:13,160
+just use a standard left-to-right auto-

+970
+00:41:10,440 --> 00:41:15,359
+regressive language model to solve this

+971
+00:41:13,160 --> 00:41:17,040
+problem so this is really important if

+972
+00:41:15,359 --> 00:41:18,520
+you want to build like a Copilot-style

+973
+00:41:17,040 --> 00:41:20,160
+thing and all of the code language

+974
+00:41:18,520 --> 00:41:23,680
+models that I talk about at the end of

+975
+00:41:20,160 --> 00:41:23,680
+this class uh use this technique

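+The rearrangement being described, as a string-level sketch; real models
+use special sentinel tokens from their own vocabularies, and <MASK:0> here
+just illustrates the scheme:
+
+def make_infill_prompt(prefix: str, suffix: str) -> str:
+    # The span to fill is replaced by a sentinel and moved to the end, so
+    # a plain left-to-right LM can generate it last:
+    #   prefix <MASK:0> suffix <MASK:0> [infilled code is decoded here]
+    # At inference time the cursor position splits the file in two.
+    return f"{prefix}<MASK:0>{suffix}<MASK:0>"
+
+prompt = make_infill_prompt(
+    prefix="def mean(xs):\n    ",
+    suffix="\n    return total / len(xs)\n",
+)
+# The LM then decodes the missing middle, e.g. "total = sum(xs)".
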
+976
+00:41:24,800 --> 00:41:30,440
+um another thing is there's

+977
+00:41:28,160 --> 00:41:33,760
+lots of available information uh for

+978
+00:41:30,440 --> 00:41:36,040
+learning coding things um or for solving

+979
+00:41:33,760 --> 00:41:38,880
+coding tasks this includes you know the

+980
+00:41:36,040 --> 00:41:40,440
+current code context of course um also

+981
+00:41:38,880 --> 00:41:41,920
+the description of the issue that you

+982
+00:41:40,440 --> 00:41:45,160
+want to be fixing like if you're solving

+983
+00:41:41,920 --> 00:41:49,240
+a pull request um repo context from

+984
+00:41:45,160 --> 00:41:51,880
+other files um what tabs you have open

+985
+00:41:49,240 --> 00:41:55,920
+uh so that that's also an important

+986
+00:41:51,880 --> 00:41:58,599
+thing and when GitHub Copilot came out

+987
+00:41:55,920 --> 00:42:01,960
+they didn't really tell you the details

+988
+00:41:58,599 --> 00:42:04,480
+of how they were doing this but um

+989
+00:42:01,960 --> 00:42:09,079
+GitHub Copilot is written in JavaScript

+990
+00:42:04,480 --> 00:42:11,839
+and uh there was a PhD student I think

+991
+00:42:09,079 --> 00:42:14,000
+from maybe Georgia Tech or something uh

+992
+00:42:11,839 --> 00:42:16,839
+or Master's student who basically went

+993
+00:42:14,000 --> 00:42:19,160
+in and took the JavaScript and de-

+994
+00:42:16,839 --> 00:42:21,839
+minified it and like reverse-engineered

+995
+00:42:19,160 --> 00:42:23,640
+what was actually happening um and uh

+996
+00:42:21,839 --> 00:42:26,680
+wrote a blog about it and this blog

+997
+00:42:23,640 --> 00:42:28,800
+is great uh so basically what uh

+998
+00:42:26,680 --> 00:42:32,200
+Copilot was doing which also kind of

+999
+00:42:28,800 --> 00:42:33,839
+gives you a gold-standard um way of uh

+1000
+00:42:32,200 --> 00:42:36,920
+looking

+1001
+00:42:33,839 --> 00:42:39,440
+at uh you know what kind of information

+1002
+00:42:36,920 --> 00:42:43,440
+is necessary to create a good model is

+1003
+00:42:39,440 --> 00:42:45,240
+first they extract um information for

+1004
+00:42:43,440 --> 00:42:47,400
+the prompt given the current document

+1005
+00:42:45,240 --> 00:42:49,240
+and the cursor position so they take the

+1006
+00:42:47,400 --> 00:42:51,720
+current document where is the cursor and

+1007
+00:42:49,240 --> 00:42:54,640
+what is before this and what is after

+1008
+00:42:51,720 --> 00:42:56,960
+this um they identify the relative path

+1009
+00:42:54,640 --> 00:42:59,960
+of the file and what language it's in so

+1010
+00:42:56,960 --> 00:43:01,760
+they identify Python files or

+1011
+00:42:59,960 --> 00:43:04,240
+JavaScript files or

+1012
+00:43:01,760 --> 00:43:07,440
+whatever they find the most recently

+1013
+00:43:04,240 --> 00:43:09,800
+accessed 20 files in the same language

+1014
+00:43:07,440 --> 00:43:12,599
+so like if you've opened 20 tabs they

+1015
+00:43:09,800 --> 00:43:15,559
+keep track of which tab you had

+1016
+00:43:12,599 --> 00:43:18,280
+open um and then the actual prompt that

+1017
+00:43:15,559 --> 00:43:22,119
+they send over includes text that is

+1018
+00:43:18,280 --> 00:43:23,640
+before text that's after um similar

+1019
+00:43:22,119 --> 00:43:26,520
+files out of the 20 files that you've

+1020
+00:43:23,640 --> 00:43:29,480
+opened recently um also information from

+1021
+00:43:26,520 --> 00:43:31,760
+imported files and metadata about the

+1022
+00:43:29,480 --> 00:43:33,079
+language and the path so all of this is

+1023
+00:43:31,760 --> 00:43:37,079
+sent to the

+1024
+00:43:33,079 --> 00:43:38,720
+model um and so this is just basically

+1025
+00:43:37,079 --> 00:43:40,160
+really good prompt engineering

+1026
+00:43:38,720 --> 00:43:41,760
+right they're figuring out a good way to

+1027
+00:43:40,160 --> 00:43:44,200
+get all of the information that would be

+1028
+00:43:41,760 --> 00:43:45,680
+useful uh for getting this model to work

+1029
+00:43:44,200 --> 00:43:49,559
+into the

+1030
+00:43:45,680 --> 00:43:52,839
+prompt um so there's much much more

+1031
+00:43:49,559 --> 00:43:57,400
+information in this blog it's a really
+nice blog if you uh if you want to see

+1033
+00:43:52,839 --> 00:43:57,400
+about it but um that's the basic idea

+1034
+00:43:57,640 --> 00:44:00,240
+any

+1035
+00:44:01,240 --> 00:44:07,160
+questions okay

+1036
+00:44:03,520 --> 00:44:11,240
+cool yeah is this just what gets sent

+1037
+00:44:07,160 --> 00:44:13,520
+over to the Copilot server or does

+1038
+00:44:11,240 --> 00:44:15,240
+Copilot this is what gets sent over to

+1039
+00:44:13,520 --> 00:44:17,920
+the Copilot server but the way they're

+1040
+00:44:15,240 --> 00:44:20,960
+sending it makes me guess that like all

+1041
+00:44:17,920 --> 00:44:22,839
+of this is read so like they also are

+1042
+00:44:20,960 --> 00:44:24,559
+considering I didn't mention it here but

+1043
+00:44:22,839 --> 00:44:26,000
+they're considering the token limit and

+1044
+00:44:24,559 --> 00:44:27,599
+other stuff like that so that kind of

+1045
+00:44:26,000 --> 00:44:30,760
+makes me feel like this is

+1046
+00:44:27,599 --> 00:44:30,760
+actually the

+1047
+00:44:32,240 --> 00:44:38,440
+prompt uh cool

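+A hypothetical reconstruction of that kind of prompt assembly; the names
+and details here are made up for illustration, and the blog documents the
+real ones:
+
+from dataclasses import dataclass
+
+@dataclass
+class OpenFile:
+    path: str
+    text: str
+
+def build_prompt(path: str, language: str, text: str, cursor: int,
+                 recent_files: list[OpenFile]) -> tuple[str, str]:
+    # Returns (prefix, suffix) for an infilling model: path and language
+    # metadata, then snippets from recently opened same-language tabs as
+    # comments, then the text before the cursor; text after is the suffix.
+    before, after = text[:cursor], text[cursor:]
+    context = "".join(
+        "# Snippet from " + f.path + ":\n# " +
+        f.text[:200].replace("\n", "\n# ") + "\n"
+        for f in recent_files[:20]
+    )
+    header = "# Path: " + path + "\n# Language: " + language + "\n"
+    return header + context + before, after
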
+1081
+00:45:59,559 --> 00:46:05,359
+And so, um, in this paper, uh,

+1082
+00:46:03,480 --> 00:46:08,359
+which, this is one of our papers too, we

+1083
+00:46:05,359 --> 00:46:10,079
+called it DocPrompting, um, basically the

+1084
+00:46:08,359 --> 00:46:13,720
+idea is that

+1085
+00:46:10,079 --> 00:46:17,440
+you have your natural language input, and

+1086
+00:46:13,720 --> 00:46:20,119
+then you look up, uh, similar things,

+1087
+00:46:17,440 --> 00:46:23,240
+similar documentation. So you find, like,

+1088
+00:46:20,119 --> 00:46:25,319
+"Pygments is a generic syntax highlighter,"

+1089
+00:46:23,240 --> 00:46:28,160
+uh, so you can, uh, find syntax

+1090
+00:46:25,319 --> 00:46:31,160
+highlighting. Um, you can also look up the

+1091
+00:46:28,160 --> 00:46:32,640
+lexer, you can look up the HTML formatter,

+1092
+00:46:31,160 --> 00:46:35,119
+and then, all of the things that have

+1093
+00:46:32,640 --> 00:46:37,000
+similar documentation, then you can, uh,

+1094
+00:46:35,119 --> 00:46:39,480
+append that to the prompt and then have

+1095
+00:46:37,000 --> 00:46:41,680
+it generate the output. And we demonstrate

+1096
+00:46:39,480 --> 00:46:43,200
+that this is good both in general, but

+1097
+00:46:41,680 --> 00:46:44,800
+also it's particularly good when you're

+1098
+00:46:43,200 --> 00:46:46,240
+dealing with new libraries that haven't

+1099
+00:46:44,800 --> 00:46:48,280
+been seen before, or libraries that have

+1100
+00:46:46,240 --> 00:46:50,119
+been updated. So this is another thing

+1101
+00:46:48,280 --> 00:46:53,000
+that you can

+1102
+00:46:50,119 --> 00:46:55,720
+do.

+1103
+00:46:53,000 --> 00:46:57,520
+Cool. Um, another thing that you can do

+1104
+00:46:55,720 --> 00:47:00,040
+with code, that you can't do easily with

+1105
+00:46:57,520 --> 00:47:04,040
+natural language, is execution

+1106
+00:47:00,040 --> 00:47:06,119
+feedback. And so this is a, a paper where

+1107
+00:47:04,040 --> 00:47:09,359
+basically they do something that's

+1108
+00:47:06,119 --> 00:47:10,319
+rather simple, but they generate multiple

+1109
+00:47:09,359 --> 00:47:13,359
+types of

+1110
+00:47:10,319 --> 00:47:14,559
+code, or multiple instances of code. So

+1111
+00:47:13,359 --> 00:47:16,880
+they basically sample different

+1112
+00:47:14,559 --> 00:47:19,960
+varieties of code. And I was talking

+1113
+00:47:16,880 --> 00:47:22,720
+about, like, pass@k, right, uh, before.

+1114
+00:47:19,960 --> 00:47:25,000
+Pass@k is good if you have some way to

+1115
+00:47:22,720 --> 00:47:26,520
+confirm which output is correct, like you

+1116
+00:47:25,000 --> 00:47:28,040
+already have unit tests and you can run

+1117
+00:47:26,520 --> 00:47:29,440
+the unit tests and identify which one

+1118
+00:47:28,040 --> 00:47:31,839
+passes the unit tests, or you can have a

+1119
+00:47:29,440 --> 00:47:34,160
+human check it. But in the case when you

+1120
+00:47:31,839 --> 00:47:35,640
+can't do that, what can you do? And

+1121
+00:47:34,160 --> 00:47:38,079
+basically what you can do is, you can

+1122
+00:47:35,640 --> 00:47:40,800
+execute all of the code snippets that

+1123
+00:47:38,079 --> 00:47:43,839
+the model generated and check if the

+1124
+00:47:40,800 --> 00:47:48,520
+outputs overlap with each other. And if

+1125
+00:47:43,839 --> 00:47:50,680
+you have, um, you know, 30 programs that

+1126
+00:47:48,520 --> 00:47:53,680
+all generate very similar outputs, then

+1127
+00:47:50,680 --> 00:47:55,079
+that program is probably correct.
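+A minimal sketch of that execute-and-compare idea (assuming each candidate
+program defines a `solve` function; sampling the candidates from the code LM
+is left as a hypothetical earlier step):
+
+    # Execute every sampled program on shared inputs and keep the one whose
+    # outputs agree with the most other samples.
+    import collections
+
+    def run(program_src, test_input):
+        env = {}
+        try:
+            exec(program_src, env)               # assumed to define solve()
+            return repr(env["solve"](test_input))
+        except Exception:
+            return "<error>"
+
+    def pick_by_agreement(programs, test_inputs):
+        sigs = {p: tuple(run(p, x) for x in test_inputs) for p in programs}
+        counts = collections.Counter(sigs.values())
+        return max(programs, key=lambda p: counts[sigs[p]])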
+1128
+00:47:53,680 --> 00:47:56,520
+And then you can

+1129
+00:47:55,079 --> 00:48:00,000
+just pick one of them according to some

+1130
+00:47:56,520 --> 00:48:02,160
+criteria. And specifically, in this case,

+1131
+00:48:00,000 --> 00:48:03,960
+they picked the program that has the

+1132
+00:48:02,160 --> 00:48:05,599
+lowest Bayes risk, like when we talked

+1133
+00:48:03,960 --> 00:48:09,040
+about minimum Bayes risk in the decoding

+1134
+00:48:05,599 --> 00:48:10,839
+lecture. So, um, they, they basically execute a

+1135
+00:48:09,040 --> 00:48:12,800
+lot and then calculate the Bayes risk of

+1136
+00:48:10,839 --> 00:48:17,000
+that,

+1137
+00:48:12,800 --> 00:48:17,000
+of that. Cool. Um,

+1138
+00:48:17,680 --> 00:48:24,440
+yeah, yeah, and so, like, self-consistency

+1139
+00:48:21,599 --> 00:48:26,079
+is a variety of Bayes risk. Um, and they're

+1140
+00:48:24,440 --> 00:48:27,640
+using Bayes risk here because outputs

+1141
+00:48:26,079 --> 00:48:30,720
+might not be exactly the same, but being

+1142
+00:48:27,640 --> 00:48:30,720
+closer is probably better.

+1143
+00:48:34,160 --> 00:48:39,040
+"...than

+1144
+00:48:36,760 --> 00:48:40,559
+a comparison of the code?" Yeah, that's

+1145
+00:48:39,040 --> 00:48:42,880
+a good question. Especially if you use

+1146
+00:48:40,559 --> 00:48:44,319
+something good like, uh, CodeBERTScore to

+1147
+00:48:42,880 --> 00:48:46,280
+do that comparison, you might not even

+1148
+00:48:44,319 --> 00:48:50,280
+need to do that. That said,

+1149
+00:48:46,280 --> 00:48:50,280
+I don't think they did that in

+1150
+00:48:50,559 --> 00:48:57,240
+this one. Cool. Um, another interesting thing,

+1151
+00:48:54,920 --> 00:48:59,760
+um, is, there's

+1152
+00:48:57,240 --> 00:49:04,119
+several lines of work on fixing based on

+1153
+00:48:59,760 --> 00:49:06,720
+error messages. So the basic idea is, you

+1154
+00:49:04,119 --> 00:49:08,160
+generate code, you try to run it, you get

+1155
+00:49:06,720 --> 00:49:13,280
+an error message from it, and then you

+1156
+00:49:08,160 --> 00:49:16,200
+feed that back to the LLM, um, in order to,

+1157
+00:49:13,280 --> 00:49:17,520
+you know, correct the error. And, like, LLMs,

+1158
+00:49:16,200 --> 00:49:19,119
+if you give them an error and you give

+1159
+00:49:17,520 --> 00:49:20,839
+them buggy code, they do have some

+1160
+00:49:19,119 --> 00:49:24,599
+capacity to do that, especially as you

+1161
+00:49:20,839 --> 00:49:28,839
+get to the larger LLMs. So, uh, this is kind of a,

+1162
+00:49:24,599 --> 00:49:31,200
+a nice, uh, paradigm. This paper, InterCode,

+1163
+00:49:28,839 --> 00:49:33,880
+actually generalizes this a bit, and it's

+1164
+00:49:31,200 --> 00:49:38,359
+more recent; that's why I cited it here.

+1165
+00:49:33,880 --> 00:49:40,000
+And, uh, so this also, um, like, says, you can

+1166
+00:49:38,359 --> 00:49:42,640
+do single-turn code generation, you can

+1167
+00:49:40,000 --> 00:49:44,960
+also say, "oh, could you please try again,"

+1168
+00:49:42,640 --> 00:49:46,400
+um, you can also, uh, do planning and

+1169
+00:49:44,960 --> 00:49:48,160
+solving and other stuff like that. So

+1170
+00:49:46,400 --> 00:49:49,960
+this is a good kind of, like, environment,

+1171
+00:49:48,160 --> 00:49:52,079
+if you're interested in making these

+1172
+00:49:49,960 --> 00:49:56,720
+more, like, interactive coding assistants,

+1173
+00:49:52,079 --> 00:49:56,720
+for example, so you could take a look.

+1174
+00:49:58,359 --> 00:50:03,359
+Cool.
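+Here is a minimal sketch of that generate-run-repair loop (the `llm` chat
+function is a hypothetical stand-in for whatever model API you use):
+
+    # Run the generated script; on failure, feed the error back and retry.
+    import subprocess, sys, tempfile
+
+    def run_python(src):
+        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
+            f.write(src)
+            path = f.name
+        return subprocess.run([sys.executable, path], capture_output=True, text=True)
+
+    def generate_with_repair(task, max_tries=3):
+        code = llm(f"Write a Python script that {task}")
+        for _ in range(max_tries):
+            result = run_python(code)
+            if result.returncode == 0:
+                return code
+            code = llm(f"This code:\n{code}\nfailed with:\n{result.stderr}\nPlease fix it.")
+        return code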
+

+1175
+00:50:00,119 --> 00:50:07,119
+Um, another important topic is code

+1176
+00:50:03,359 --> 00:50:08,880
+synthesis from input-output examples. So,

+1177
+00:50:07,119 --> 00:50:12,319
+actually, when you said "code generation"

+1178
+00:50:08,880 --> 00:50:14,760
+or "code synthesis," like, five years ago or

+1179
+00:50:12,319 --> 00:50:17,440
+10 years ago, a lot of people would think

+1180
+00:50:14,760 --> 00:50:19,440
+about this. Uh, so this is, actually, this

+1181
+00:50:17,440 --> 00:50:22,440
+has been around a lot longer than code

+1182
+00:50:19,440 --> 00:50:24,160
+synthesis, um, than serious inquiries into

+1183
+00:50:22,440 --> 00:50:27,680
+code synthesis from natural

+1184
+00:50:24,160 --> 00:50:30,680
+language. Um,

+1185
+00:50:27,680 --> 00:50:33,839
+so basically, the way this works is, it

+1186
+00:50:30,680 --> 00:50:35,319
+can have no natural language whatsoever,

+1187
+00:50:33,839 --> 00:50:39,119
+um, but you still can try to guess the

+1188
+00:50:35,319 --> 00:50:42,000
+program from, uh, input-output examples. When

+1189
+00:50:39,119 --> 00:50:44,319
+would you want to do this? So one example

+1190
+00:50:42,000 --> 00:50:45,839
+of this is something called FlashFill,

+1191
+00:50:44,319 --> 00:50:48,599
+which has been around for a very long

+1192
+00:50:45,839 --> 00:50:51,839
+time in Microsoft Excel, and basically

+1193
+00:50:48,599 --> 00:50:55,400
+the way it works is, you have one column,

+1194
+00:50:51,839 --> 00:50:58,640
+and, um, the column might be,

+1195
+00:50:55,400 --> 00:50:58,640
+like, uh,

+1196
+00:50:59,559 --> 00:51:02,880
+"Graham

+1197
+00:51:03,040 --> 00:51:12,799
+Neubig" and, uh,

+1198
+00:51:06,559 --> 00:51:12,799
+a couple others; I'll just pick three, because they're also

+1199
+00:51:14,040 --> 00:51:19,599
+up here. And so we have this column, and then

+1200
+00:51:17,160 --> 00:51:19,599
+we have, like,

+1201
+00:51:20,400 --> 00:51:26,760
+"gneubig." Um, and from, like, one or a couple

+1202
+00:51:25,160 --> 00:51:28,400
+examples, basically what it does is, it

+1203
+00:51:26,760 --> 00:51:30,319
+tries to induce a program that can

+1204
+00:51:28,400 --> 00:51:33,319
+generate all the other examples properly.

+1205
+00:51:30,319 --> 00:51:35,599
+So in this particular case, that would be,

+1206
+00:51:33,319 --> 00:51:38,440
+um, you know, like,

+1207
+00:51:35,599 --> 00:51:40,480
+split, take the first character from the

+1208
+00:51:38,440 --> 00:51:43,280
+first one and all of the last one, and

+1209
+00:51:40,480 --> 00:51:45,280
+then concatenate them, or something

+1210
+00:51:43,280 --> 00:51:48,280
+like that, right?

+1211
+00:51:45,280 --> 00:51:50,079
+Um, and so this is useful in some cases,

+1212
+00:51:48,280 --> 00:51:51,599
+like, you know, in Excel, when you have

+1213
+00:51:50,079 --> 00:51:53,359
+this long sheet and you want to fill in

+1214
+00:51:51,599 --> 00:51:56,160
+the rest of it. And this has actually

+1215
+00:51:53,359 --> 00:51:57,720
+been deployed, uh, you know, in Excel, and is

+1216
+00:51:56,160 --> 00:52:00,960
+widely

+1217
+00:51:57,720 --> 00:52:02,559
+used. Um, if you're interested in this

+1218
+00:52:00,960 --> 00:52:06,040
+topic, there's a fair amount of work in

+1219
+00:52:02,559 --> 00:52:08,839
+it. Um, there's a little bit less work

+1220
+00:52:06,040 --> 00:52:10,240
+now, because most people are focusing on,

+1221
+00:52:08,839 --> 00:52:12,400
+uh, learning programs from natural

+1222
+00:52:10,240 --> 00:52:14,839
+language and other stuff like this.
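+To make the induce-a-program idea concrete, here is a toy sketch of
+programming by example over a tiny hand-written DSL of string transforms
+(the candidate set and the name example are illustrative, not FlashFill's
+actual search procedure):
+
+    # Enumerate candidate programs; keep one consistent with all I/O examples.
+    CANDIDATES = [
+        ("first-initial + last", lambda s: (s.split()[0][0] + s.split()[-1]).lower()),
+        ("lowercase",            lambda s: s.lower()),
+        ("last name only",       lambda s: s.split()[-1]),
+    ]
+
+    def synthesize(examples):  # examples: [(input, output), ...]
+        for name, prog in CANDIDATES:
+            if all(prog(x) == y for x, y in examples):
+                return name, prog
+        return None, None
+
+    name, prog = synthesize([("Graham Neubig", "gneubig")])
+    print(name, prog("Ada Lovelace"))  # first-initial + last  alovelace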
+1223
+00:52:12,400 --> 00:52:16,480
+But, uh, this slightly older paper, called

+1224
+00:52:14,839 --> 00:52:19,359
+Interpret, explains a bunch of the

+1225
+00:52:16,480 --> 00:52:22,880
+different methods that people used, and,

+1226
+00:52:19,359 --> 00:52:25,920
+um, how, uh, like, how they compare and

+1227
+00:52:22,880 --> 00:52:28,119
+stuff. And also, um, Joshua Tenenbaum's

+1228
+00:52:25,920 --> 00:52:29,880
+group from MIT has done a lot on program

+1229
+00:52:28,119 --> 00:52:31,319
+synthesis from input-output examples, so

+1230
+00:52:29,880 --> 00:52:32,359
+you could also take a look at that; that

+1231
+00:52:31,319 --> 00:52:35,079
+sounds

+1232
+00:52:32,359 --> 00:52:38,240
+interesting. Um, one thing about this is,

+1233
+00:52:35,079 --> 00:52:40,280
+these generally are mostly done on

+1234
+00:52:38,240 --> 00:52:43,319
+domain-specific languages. So they're

+1235
+00:52:40,280 --> 00:52:46,839
+mostly done, like, only for regexes, or

+1236
+00:52:43,319 --> 00:52:48,480
+they're done only for, you know, SQL or

+1237
+00:52:46,839 --> 00:52:50,079
+something like that, not for the more

+1238
+00:52:48,480 --> 00:52:51,960
+general-purpose languages, just because

+1239
+00:52:50,079 --> 00:52:54,079
+the problem without any natural language

+1240
+00:52:51,960 --> 00:52:56,520
+specification is harder, and so you need

+1241
+00:52:54,079 --> 00:52:57,520
+to, like, make the search space smaller, or

+1242
+00:52:56,520 --> 00:53:01,559
+otherwise you need to make the

+1243
+00:52:57,520 --> 00:53:04,440
+search tractable. So, um, that's

+1244
+00:53:01,559 --> 00:53:04,440
+another thing to know

+1245
+00:53:04,799 --> 00:53:09,440
+about. Cool. Um, any questions about

+1246
+00:53:09,480 --> 00:53:14,440
+these? Nice, okay. So finally, in the, the

+1247
+00:53:12,559 --> 00:53:15,599
+last few minutes, I'd like to talk about,

+1248
+00:53:14,440 --> 00:53:18,480
+um, code

+1249
+00:53:15,599 --> 00:53:22,880
+LMs, and I'm going to go through about

+1250
+00:53:18,480 --> 00:53:24,599
+four of them. The first one is Codex. And

+1251
+00:53:22,880 --> 00:53:26,200
+so, yeah, actually, what I should mention

+1252
+00:53:24,599 --> 00:53:28,079
+is, all of the LMs that I talked about up

+1253
+00:53:26,200 --> 00:53:30,640
+until this point are code LMs, because

+1254
+00:53:28,079 --> 00:53:31,680
+every LM trains on code. So I'm mainly

+1255
+00:53:30,640 --> 00:53:36,119
+going to be talking about ones

+1256
+00:53:31,680 --> 00:53:39,200
+specifically for code this time. Um, so

+1257
+00:53:36,119 --> 00:53:42,480
+Codex is the first, and kind of, like, the

+1258
+00:53:39,200 --> 00:53:45,880
+first really big-impact code LM. Um, it was

+1259
+00:53:42,480 --> 00:53:47,720
+created by OpenAI. Um, originally, and I don't

+1260
+00:53:45,880 --> 00:53:49,079
+know about the deployed model now,

+1261
+00:53:47,720 --> 00:53:51,599
+because, you know, they don't release the

+1262
+00:53:49,079 --> 00:53:53,799
+details of it, but originally this was

+1263
+00:53:51,599 --> 00:53:57,920
+trained by continued training from

+1264
+00:53:53,799 --> 00:53:59,799
+GPT-3. So they had a text LM, and then they

+1265
+00:53:57,920 --> 00:54:03,079
+just continued training it on lots and

+1266
+00:53:59,799 --> 00:54:05,680
+lots of code from GitHub. Um, so yeah, the

+1267
+00:54:03,079 --> 00:54:08,799
+data was lots of data from GitHub.
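+Schematically, "continued training" of a text LM on code looks like ordinary
+causal-LM training on a code corpus. A minimal sketch with Hugging Face, where
+GPT-2 and a local github_code.txt dump are stand-ins; this is not OpenAI's
+actual recipe:
+
+    from transformers import (AutoModelForCausalLM, AutoTokenizer,
+                              DataCollatorForLanguageModeling, Trainer,
+                              TrainingArguments)
+    from datasets import load_dataset
+
+    tok = AutoTokenizer.from_pretrained("gpt2")   # small stand-in for GPT-3
+    tok.pad_token = tok.eos_token
+    model = AutoModelForCausalLM.from_pretrained("gpt2")
+
+    code = load_dataset("text", data_files={"train": "github_code.txt"})["train"]
+    code = code.map(lambda b: tok(b["text"], truncation=True, max_length=512),
+                    batched=True, remove_columns=["text"])
+
+    Trainer(model=model,
+            args=TrainingArguments(output_dir="codex-style"),
+            train_dataset=code,
+            data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
+    ).train()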
1268
+00:54:05,680 --> 00:54:11,280
+If you did anything on GitHub at any point

+1269
+00:54:08,799 --> 00:54:14,119
+in your life, uh, you might be, uh,

+1270
+00:54:11,280 --> 00:54:17,720
+contributing to Codex. So, thank you, on

+1271
+00:54:14,119 --> 00:54:22,440
+behalf of OpenAI, an $80 billion

+1272
+00:54:17,720 --> 00:54:24,599
+company. And, uh, importantly, it powers, I

+1273
+00:54:22,440 --> 00:54:27,599
+believe it still powers, GitHub

+1274
+00:54:24,599 --> 00:54:31,160
+Copilot. One interesting thing is, they

+1275
+00:54:27,599 --> 00:54:33,119
+had a large version of Codex, um, and then

+1276
+00:54:31,160 --> 00:54:35,799
+they had a smaller version of Codex

+1277
+00:54:33,119 --> 00:54:38,359
+called code-cushman. And the thing

+1278
+00:54:35,799 --> 00:54:40,040
+actually powering GitHub Copilot is not

+1279
+00:54:38,359 --> 00:54:42,839
+the, the largest version; it's not

+1280
+00:54:40,040 --> 00:54:46,359
+code-davinci, it's code-cushman, which is, uh,

+1281
+00:54:42,839 --> 00:54:48,680
+smaller and much faster. And the reason

+1282
+00:54:46,359 --> 00:54:50,640
+why is probably twofold. Number one, um,

+1283
+00:54:48,680 --> 00:54:54,160
+you need really fast responses when

+1284
+00:54:50,640 --> 00:54:55,760
+you're, you know, working on code, and

+1285
+00:54:54,160 --> 00:54:57,440
+there's actually, in Copilot, there's

+1286
+00:54:55,760 --> 00:55:00,280
+some caching and other stuff like that to

+1287
+00:54:57,440 --> 00:55:01,960
+make your responses very fast as well. Um,

+1288
+00:55:00,280 --> 00:55:03,400
+the second reason is, probably it'd just

+1289
+00:55:01,960 --> 00:55:05,040
+be too expensive for them to run davinci

+1290
+00:55:03,400 --> 00:55:06,760
+over all the code bases, for how

+1291
+00:55:05,040 --> 00:55:10,400
+much they're charging you for Copilot.

+1292
+00:55:06,760 --> 00:55:12,119
+So, like, every single time you, like,

+1293
+00:55:10,400 --> 00:55:14,280
+change something in one of your files, if

+1294
+00:55:12,119 --> 00:55:17,079
+you're using Copilot, it's rerunning an

+1295
+00:55:14,280 --> 00:55:19,359
+LLM, and that would become very expensive

+1296
+00:55:17,079 --> 00:55:20,599
+if you look, look at the token count. So I

+1297
+00:55:19,359 --> 00:55:21,839
+think they're using a smaller model

+1298
+00:55:20,599 --> 00:55:22,920
+because of that, but nonetheless it's

+1299
+00:55:21,839 --> 00:55:27,039
+very

+1300
+00:55:22,920 --> 00:55:28,640
+good. Um, cool.

+1301
+00:55:27,039 --> 00:55:30,680
+So now I want to get into some more

+1302
+00:55:28,640 --> 00:55:33,880
+modern models. Uh, the first one I want to

+1303
+00:55:30,680 --> 00:55:35,520
+get into is, uh, StarCoder 2, and the

+1304
+00:55:33,880 --> 00:55:38,359
+reason why I want to talk about this

+1305
+00:55:35,520 --> 00:55:40,160
+first is because, uh, not necessarily that

+1306
+00:55:38,359 --> 00:55:41,880
+it's, like, absolutely the best one,

+1307
+00:55:40,160 --> 00:55:43,400
+although it's very good, but it's one of

+1308
+00:55:41,880 --> 00:55:45,319
+the models that actually tells us

+1309
+00:55:43,400 --> 00:55:47,240
+everything about their training data and

+1310
+00:55:45,319 --> 00:55:50,400
+training process and stuff. So we know, uh,

+1311
+00:55:47,240 --> 00:55:53,039
+everything about them. So the creator of

+1312
+00:55:50,400 --> 00:55:54,440
+this was, um, the BigCode project,

+1313
+00:55:53,039 --> 00:55:56,880
+which was led by Hugging Face and

+1314
+00:55:54,440 --> 00:55:58,680
+ServiceNow, um,

+1315
+00:55:56,880 --> 00:56:02,079
+and includes lots and lots of people
+1316
+00:55:58,680 --> 00:56:04,960
+from various universities and things. Um,

+1317
+00:56:02,079 --> 00:56:09,319
+the architecture is mostly Llama-style;

+1318
+00:56:04,960 --> 00:56:11,960
+it has 3B, 7B, and 15B variants. Um, one

+1319
+00:56:09,319 --> 00:56:15,480
+interesting thing about all code LMs is

+1320
+00:56:11,960 --> 00:56:17,680
+that they all do long context; they all

+1321
+00:56:15,480 --> 00:56:20,359
+do longer context, and they all

+1322
+00:56:17,680 --> 00:56:23,200
+reconfigure RoPE for longer context

+1323
+00:56:20,359 --> 00:56:25,280
+specifically. So, you know, RoPE has a

+1324
+00:56:23,200 --> 00:56:28,599
+theta parameter that allows you to set

+1325
+00:56:25,280 --> 00:56:31,720
+how long the, um, like, sine waves and

+1326
+00:56:28,599 --> 00:56:33,720
+stuff like that are, and they all always,

+1327
+00:56:31,720 --> 00:56:36,079
+um, change the parameters so that the

+1328
+00:56:33,720 --> 00:56:38,599
+context is longer. So that's another good

+1329
+00:56:36,079 --> 00:56:38,599
+thing to know

+1330
+00:56:38,640 --> 00:56:44,559
+about. The, the training data section of

+1331
+00:56:42,000 --> 00:56:48,799
+this paper is really fascinating; I,

+1332
+00:56:44,559 --> 00:56:51,240
+like, it, it's a really good way to look

+1333
+00:56:48,799 --> 00:56:54,160
+at, you know, how much data engineering

+1334
+00:56:51,240 --> 00:56:55,960
+goes into making a good model. Um, and,

+1335
+00:56:54,160 --> 00:56:57,960
+just very shortly, they give a lot more

+1336
+00:56:55,960 --> 00:57:00,640
+detail in the paper, but it's trained on

+1337
+00:56:57,960 --> 00:57:04,839
+code, uh, including The Stack, which is

+1338
+00:57:00,640 --> 00:57:06,920
+just a huge, uh, amount, like, repository of

+1339
+00:57:04,839 --> 00:57:08,359
+code, that I'll talk about in a second.

+1340
+00:57:06,920 --> 00:57:10,559
+Separately from that, it was trained on

+1341
+00:57:08,359 --> 00:57:13,079
+GitHub issues, it was trained on pull

+1342
+00:57:10,559 --> 00:57:16,000
+requests, Jupyter notebooks, Kaggle

+1343
+00:57:13,079 --> 00:57:18,319
+notebooks, documentation, and also

+1344
+00:57:16,000 --> 00:57:23,440
+intermediate representations from, uh,

+1345
+00:57:18,319 --> 00:57:26,440
+LLVM. So LLVM is a, uh, you know, like,

+1346
+00:57:23,440 --> 00:57:28,920
+intermediate, uh, compiler-style thing

+1347
+00:57:26,440 --> 00:57:30,839
+that is used for compiling code. And it

+1348
+00:57:28,920 --> 00:57:34,400
+was also trained on a few code-relevant

+1349
+00:57:30,839 --> 00:57:38,440
+natural language data sets.

+1350
+00:57:34,400 --> 00:57:39,960
+Um, so for pre-processing, they do

+1351
+00:57:38,440 --> 00:57:42,640
+something pretty interesting, which is,

+1352
+00:57:39,960 --> 00:57:44,240
+they add metadata tags, such as the repo

+1353
+00:57:42,640 --> 00:57:48,119
+name and the file name and other stuff

+1354
+00:57:44,240 --> 00:57:49,799
+like this, uh, 50% of the time. And they do

+1355
+00:57:48,119 --> 00:57:51,599
+this 50% of the time because they want

+1356
+00:57:49,799 --> 00:57:54,400
+the model to work with them but also be

+1357
+00:57:51,599 --> 00:57:57,079
+robust without them. Um, and so you can

+1358
+00:57:54,400 --> 00:57:59,839
+either add them or not add them at test

+1359
+00:57:57,079 --> 00:58:03,079
+time. Uh, they also do infilling; every

+1360
+00:57:59,839 --> 00:58:05,960
+serious code LM does infilling-based

+1361
+00:58:03,079 --> 00:58:07,480
+training.
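+A minimal sketch of how such fill-in-the-middle (FIM) training examples are
+typically constructed: split a document at two random points and rearrange it
+with sentinel tokens, so the model learns to produce the middle given the
+prefix and suffix. The sentinel names below follow the StarCoder convention;
+other models use different tokens:
+
+    import random
+
+    def make_fim_example(doc, fim_rate=0.5):
+        if random.random() > fim_rate:
+            return doc                      # plain left-to-right example
+        i, j = sorted(random.sample(range(len(doc) + 1), 2))
+        prefix, middle, suffix = doc[:i], doc[i:j], doc[j:]
+        return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>{middle}"
+
+    print(make_fim_example("def add(a, b):\n    return a + b\n"))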
+

+1362
+00:58:05,960 --> 00:58:08,960
+Um, one interesting thing about this, from the training perspective, is,

+1363
+00:58:07,480 --> 00:58:12,000
+they actually trained it for four to

+1364
+00:58:08,960 --> 00:58:14,359
+five epochs, um, which is much more than

+1365
+00:58:12,000 --> 00:58:17,160
+we normally do. So normally we only train

+1366
+00:58:14,359 --> 00:58:18,359
+for, like, one epoch over, you know, all of

+1367
+00:58:17,160 --> 00:58:20,079
+the data we have, but here they were

+1368
+00:58:18,359 --> 00:58:21,319
+training for longer, and that's just

+1369
+00:58:20,079 --> 00:58:23,359
+because the amount of data they can get

+1370
+00:58:21,319 --> 00:58:24,400
+for code is less than the amount of data

+1371
+00:58:23,359 --> 00:58:27,200
+they can get for all of natural

+1372
+00:58:24,400 --> 00:58:30,039
+language.

+1373
+00:58:27,200 --> 00:58:33,200
+So the data set that they created is, uh,

+1374
+00:58:30,039 --> 00:58:36,119
+The Stack v2, and this is a code

+1375
+00:58:33,200 --> 00:58:37,839
+pre-training data set. Um, one interesting

+1376
+00:58:36,119 --> 00:58:40,039
+thing that they thought about was, uh,

+1377
+00:58:37,839 --> 00:58:42,960
+license considerations. So I talked about,

+1378
+00:58:40,039 --> 00:58:44,480
+um, how copyright is a problem when

+1379
+00:58:42,960 --> 00:58:46,640
+training large language models two

+1380
+00:58:44,480 --> 00:58:48,880
+classes ago, and so here they

+1381
+00:58:46,640 --> 00:58:50,119
+specifically tried to find things with

+1382
+00:58:48,880 --> 00:58:52,520
+permissive

+1383
+00:58:50,119 --> 00:58:53,880
+licenses. And so what they did is, they

+1384
+00:58:52,520 --> 00:58:57,000
+basically looked at the license on

+1385
+00:58:53,880 --> 00:58:59,520
+GitHub, um, and if the GitHub license was

+1386
+00:58:57,000 --> 00:59:01,440
+permissive, they marked it as permissive;

+1387
+00:58:59,520 --> 00:59:02,880
+um, then they tried to detect licenses,

+1388
+00:59:01,440 --> 00:59:05,720
+and then, um, if all of them were

+1389
+00:59:02,880 --> 00:59:08,000
+permissive, they marked it as

+1390
+00:59:05,720 --> 00:59:10,480
+permissive. This is a huge table that

+1391
+00:59:08,000 --> 00:59:14,160
+they have in the paper, of all of the

+1392
+00:59:10,480 --> 00:59:15,480
+data that they have, and, um, you know, I'm

+1393
+00:59:14,160 --> 00:59:16,920
+not going to go through all of this,

+1394
+00:59:15,480 --> 00:59:18,920
+obviously, but what you can see is, some

+1395
+00:59:16,920 --> 00:59:22,480
+of the biggest data sets are, like,

+1396
+00:59:18,920 --> 00:59:26,280
+Java, um,

+1397
+00:59:22,480 --> 00:59:28,640
+PHP, Markdown,

+1398
+00:59:26,280 --> 00:59:30,039
+and, uh, Python, and other stuff like that.

+1399
+00:59:28,640 --> 00:59:32,240
+So you can see the major programming

+1400
+00:59:30,039 --> 00:59:35,559
+languages have lots of data, but there's

+1401
+00:59:32,240 --> 00:59:38,400
+also a long tail. So if you like your, uh,

+1402
+00:59:35,559 --> 00:59:40,440
+you know, more esoteric but cool

+1403
+00:59:38,400 --> 00:59:43,960
+programming languages, like Rust, yes, it

+1404
+00:59:40,440 --> 00:59:46,160
+has Rust too. So, um, we can do all, all of

+1405
+00:59:43,960 --> 00:59:46,160
+those

+1406
+00:59:46,480 --> 00:59:53,079
+things. So the next model that I'd like

+1407
+00:59:49,799 --> 00:59:55,200
+to talk about is Code Llama, and Code Llama

+1408
+00:59:53,079 --> 00:59:57,920
+is another competitive model. It came out

+1409
+00:59:55,200 --> 00:59:59,480
+a
little bit before StarCoder, StarCoder

+1410
+00:59:57,920 --> 01:00:02,680
+2, and DeepSeek Coder, which I'm

+1411
+00:59:59,480 --> 01:00:04,079
+going to talk about. Um, this was created

+1412
+01:00:02,680 --> 01:00:08,319
+by

+1413
+01:00:04,079 --> 01:00:11,160
+Meta. And, um, the architecture is the same

+1414
+01:00:08,319 --> 01:00:14,280
+as Llama 2, uh, basically, and they did

+1415
+01:00:11,160 --> 01:00:16,400
+continued training from Llama 2, um, but

+1416
+01:00:14,280 --> 01:00:18,000
+they trained it on longer input contexts,

+1417
+01:00:16,400 --> 01:00:21,720
+and they also extended the length of

+1418
+01:00:18,000 --> 01:00:23,559
+RoPE. So, uh, those are, you know, standard

+1419
+01:00:21,720 --> 01:00:26,680
+things for code language

+1420
+01:00:23,559 --> 01:00:28,680
+models. It was trained on deduplicated code and

+1421
+01:00:26,680 --> 01:00:30,400
+also synthetically created instruction

+1422
+01:00:28,680 --> 01:00:33,280
+data, so they created, like, instruction

+1423
+01:00:30,400 --> 01:00:37,920
+tuning data specifically for

+1424
+01:00:33,280 --> 01:00:39,480
+code. Um, and the training was incremental,

+1425
+01:00:37,920 --> 01:00:42,559
+with various data sets, and what I mean

+1426
+01:00:39,480 --> 01:00:45,599
+by this is, they trained on 500 billion,

+1427
+01:00:42,559 --> 01:00:47,599
+uh, I believe, tokens of code, and then

+1428
+01:00:45,599 --> 01:00:50,400
+they did long-context fine-tuning on 20

+1429
+01:00:47,599 --> 01:00:52,599
+billion tokens, and then they also did

+1430
+01:00:50,400 --> 01:00:55,400
+instruction tuning. They also have a

+1431
+01:00:52,599 --> 01:00:57,079
+Python-specific one, and the reason why

+1432
+01:00:55,400 --> 01:00:59,640
+they have a Python-specific one is not

+1433
+01:00:57,079 --> 01:01:02,319
+because Python is more important,

+1434
+01:00:59,640 --> 01:01:03,839
+uh, uh, necessarily, but because a lot of

+1435
+01:01:02,319 --> 01:01:05,559
+the benchmarks are in Python, because

+1436
+01:01:03,839 --> 01:01:06,920
+machine learning people, like, who are

+1437
+01:01:05,559 --> 01:01:09,240
+creating benchmarks, they also like

+1438
+01:01:06,920 --> 01:01:11,200
+Python, so Python is more common in the

+1439
+01:01:09,240 --> 01:01:14,240
+benchmarks. So they basically wanted to

+1440
+01:01:11,200 --> 01:01:15,720
+do well on the benchmarks, I think, uh, and,

+1441
+01:01:14,240 --> 01:01:17,920
+and created a data set that does well in

+1442
+01:01:15,720 --> 01:01:19,240
+the benchmarks. But, um, if you are

+1443
+01:01:17,920 --> 01:01:23,160
+writing Python, you can use the Code

+1444
+01:01:19,240 --> 01:01:25,280
+Llama Python one; it's better at Python. So,

+1445
+01:01:23,160 --> 01:01:28,000
+um, and then the final one I'd like to

+1446
+01:01:25,280 --> 01:01:29,839
+talk about is, uh, DeepSeek Coder. Uh,

+1447
+01:01:28,000 --> 01:01:32,079
+this is notable because it's a very

+1448
+01:01:29,839 --> 01:01:34,599
+strong model. It, it's maybe the strongest

+1449
+01:01:32,079 --> 01:01:38,799
+model, on average, over all the code

+1450
+01:01:34,599 --> 01:01:41,599
+models. Um, the data is not

+1451
+01:01:38,799 --> 01:01:44,640
+super clear, but they did 87% source code,

+1452
+01:01:41,599 --> 01:01:46,359
+10% English, um, from Markdown and Stack

+1453
+01:01:44,640 --> 01:01:51,160
+Exchange, and 3% Chinese, because it's

+1454
+01:01:46,359 --> 01:01:53,559
+from a Chinese company, DeepSeek.
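+A quick illustration of the RoPE reconfiguration mentioned for Code Llama and
+the other code LMs: raising the RoPE base ("theta") stretches the rotary
+sine-wave wavelengths, so much more distant positions still receive
+distinguishable rotations (the numbers here are illustrative only):
+
+    import numpy as np
+
+    def rope_wavelengths(dim=128, base=10_000.0):
+        freqs = base ** (-np.arange(0, dim, 2) / dim)  # standard RoPE frequencies
+        return 2 * np.pi / freqs                       # wavelength per dimension pair
+
+    print(rope_wavelengths(base=10_000.0).max())       # default base
+    print(rope_wavelengths(base=1_000_000.0).max())    # larger base, longer context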
+1455
+01:01:51,160 --> 01:01:54,960
+Um, and they did standard preprocessing. Uh, but one

+1456
+01:01:53,559 --> 01:01:57,319
+interesting thing they did is, they

+1457
+01:01:54,960 --> 01:01:59,200
+included library dependencies. So they

+1458
+01:01:57,319 --> 01:02:01,799
+basically crawled the dependency graph

+1459
+01:01:59,200 --> 01:02:03,640
+of libraries, pulled out files from the

+1460
+01:02:01,799 --> 01:02:06,000
+libraries that were referenced, and then

+1461
+01:02:03,640 --> 01:02:07,440
+used them in training. And so that's

+1462
+01:02:06,000 --> 01:02:09,319
+particularly useful if you want the

+1463
+01:02:07,440 --> 01:02:12,920
+model to be able to reference external

+1464
+01:02:09,319 --> 01:02:14,039
+libraries well. Um, so that's kind of an

+1465
+01:02:12,920 --> 01:02:17,279
+interesting

+1466
+01:02:14,039 --> 01:02:19,599
+thing. Um, the architecture is pretty

+1467
+01:02:17,279 --> 01:02:22,960
+standard; it's Llama-like, with 1.3

+1468
+01:02:19,599 --> 01:02:24,599
+billion, 6.7 billion, and 33B variants, and

+1469
+01:02:22,960 --> 01:02:27,279
+it has a reconfigured RoPE like the

+1470
+01:02:24,599 --> 01:02:30,520
+others, and they trained on two trillion

+1471
+01:02:27,279 --> 01:02:34,200
+tokens. Um, so then a question becomes,

+1472
+01:02:30,520 --> 01:02:36,680
+which one to use? Um, and I created a

+1473
+01:02:34,200 --> 01:02:39,160
+summary here. Um, all of them have

+1474
+01:02:36,680 --> 01:02:40,760
+somewhat similar performance. Uh, this is,

+1475
+01:02:39,160 --> 01:02:42,760
+they're compared in the StarCoder 2

+1476
+01:02:40,760 --> 01:02:45,640
+paper, so you can go in and look at

+1477
+01:02:42,760 --> 01:02:48,160
+details in the StarCoder 2 paper. Um,

+1478
+01:02:45,640 --> 01:02:51,119
+DeepSeek Coder seems to be strong on

+1479
+01:02:48,160 --> 01:02:52,799
+standard programming tasks, um, whereas

+1480
+01:02:51,119 --> 01:02:54,799
+StarCoder seems to be strong on data

+1481
+01:02:52,799 --> 01:02:56,680
+science notebooks; so, like, on average,

+1482
+01:02:54,799 --> 01:02:59,160
+it's better at that kind of notebook.

+1483
+01:02:56,680 --> 01:03:02,079
+But all of them are good models. Um, all

+1484
+01:02:59,160 --> 01:03:05,440
+of them are not quite as good as, uh, like,

+1485
+01:03:02,079 --> 01:03:08,920
+GPT-4 or Claude on, like, the very, uh, you

+1486
+01:03:05,440 --> 01:03:10,799
+know, more complex tasks, but, uh, they're

+1487
+01:03:08,920 --> 01:03:12,359
+available, and you can fine-tune them and

+1488
+01:03:10,799 --> 01:03:16,880
+do other things like that as

+1489
+01:03:12,359 --> 01:03:21,599
+well. One caveat about the DeepSeek

+1490
+01:03:16,880 --> 01:03:24,640
+thing is, actually, if I go back to this

+1491
+01:03:21,599 --> 01:03:27,559
+slide, um, a lot of the models up here are

+1492
+01:03:24,640 --> 01:03:29,640
+DeepSeek. Um, so you do need to be a

+1493
+01:03:27,559 --> 01:03:31,400
+little bit careful about, like,

+1494
+01:03:29,640 --> 01:03:34,400
+interpreting their HumanEval results,

+1495
+01:03:31,400 --> 01:03:36,319
+because it's possible that the model, uh,

+1496
+01:03:34,400 --> 01:03:38,799
+was trained on data very similar to

+1497
+01:03:36,319 --> 01:03:40,279
+HumanEval or something like that. So do

+1498
+01:03:38,799 --> 01:03:42,880
+take that with a grain of salt. But even

+1499
+01:03:40,279 --> 01:03:44,520
+on other data sets, where presumably the

+1500
+01:03:42,880 --> 01:03:46,760
+model has not seen those data sets, it

+1501
+01:03:44,520 --> 01:03:49,920
+still does very well.
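+A sketch of that dependency-graph idea at the repository level: parse each
+file's imports, build a graph, and topologically order files so that
+dependencies appear before the code that uses them (the naive regex parsing
+here is just for illustration, not DeepSeek's actual pipeline):
+
+    import re
+    from graphlib import TopologicalSorter  # Python 3.9+
+
+    def local_imports(src, module_names):
+        found = re.findall(r"^\s*(?:from|import)\s+([\w\.]+)", src, flags=re.M)
+        return {m.split(".")[0] for m in found} & module_names
+
+    def order_repo(files):  # files: {"utils": src, "model": src, ...}
+        names = set(files)
+        graph = {name: local_imports(src, names) for name, src in files.items()}
+        return list(TopologicalSorter(graph).static_order())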
So, um, you know,

+1502
+01:03:46,760 --> 01:03:51,480
+as you can see, it's

+1503
+01:03:49,920 --> 01:03:54,640
+still one of the most competitive code

+1504
+01:03:51,480 --> 01:03:57,680
+models, even on this new LCB data set.

+1505
+01:03:54,640 --> 01:04:01,359
+So, uh, that's one thing to note.

+1506
+01:03:57,680 --> 01:04:03,000
+Cool. Um, that's all I have for today. I,

+1507
+01:04:01,359 --> 01:04:04,359
+you know, I love to talk about this topic,

+1508
+01:04:03,000 --> 01:04:06,480
+I've done a lot of research on it, so I'm

+1509
+01:04:04,359 --> 01:04:11,200
+happy to discuss any questions if people

+1510
+01:04:06,480 --> 01:04:14,720
+have them, either in front of everyone or

+1511
+01:04:11,200 --> 01:04:14,720
+after. Any, any

+1512
+01:04:16,480 --> 01:04:24,160
+questions? "Uh, yeah, just wondering, is there a

+1513
+01:04:20,359 --> 01:04:27,720
+way to, like, enforce the output during decoding, using things

+1514
+01:04:24,160 --> 01:04:27,720
+other than the model's probabilities?"

+1515
+01:04:30,599 --> 01:04:36,599
+Yeah, great question. Is there a way to

+1516
+01:04:33,640 --> 01:04:38,200
+enforce, uh, restrictions at decoding time,

+1517
+01:04:36,599 --> 01:04:39,760
+other than using the model's, uh,

+1518
+01:04:38,200 --> 01:04:42,240
+probabilities, because this is code and

+1519
+01:04:39,760 --> 01:04:42,240
+we know the

+1520
+01:04:42,440 --> 01:04:51,079
+syntax? Yes and no. Um, there

+1521
+01:04:46,319 --> 01:04:53,200
+are... for code, it's not always immediately

+1522
+01:04:51,079 --> 01:04:54,400
+obvious. Like, I mean, one, one thing you

+1523
+01:04:53,200 --> 01:04:55,960
+could do is just generate a bunch of

+1524
+01:04:54,400 --> 01:04:58,520
+results and throw out all the syntactically

+1525
+01:04:55,960 --> 01:04:59,480
+incorrect ones; that's easy, right? Um, but if

+1526
+01:04:58,520 --> 01:05:02,520
+you don't want to do that, and you want

+1527
+01:04:59,480 --> 01:05:04,839
+to do it at decoding time, it's dependent

+1528
+01:05:02,520 --> 01:05:07,480
+on you being able to have an incremental

+1529
+01:05:04,839 --> 01:05:09,079
+syntax parser that allows you to, like,

+1530
+01:05:07,480 --> 01:05:12,400
+throw out bad

+1531
+01:05:09,079 --> 01:05:14,160
+hypotheses, like, incrementally. And that's

+1532
+01:05:12,400 --> 01:05:16,240
+possible, that's very easy, for some

+1533
+01:05:14,160 --> 01:05:17,200
+languages, and not possible, not as easy,

+1534
+01:05:16,240 --> 01:05:20,559
+for other

+1535
+01:05:17,200 --> 01:05:23,720
+languages. Um, one really big thing right

+1536
+01:05:20,559 --> 01:05:26,599
+now is JSON. So, like, a lot of the time,

+1537
+01:05:23,720 --> 01:05:28,319
+people want to output JSON, uh, and, you

+1538
+01:05:26,599 --> 01:05:31,559
+know, then parse the JSON and use it in

+1539
+01:05:28,319 --> 01:05:36,640
+some downstream task, and there actually

+1540
+01:05:31,559 --> 01:05:36,640
+are libraries. Um, just to give a

+1541
+01:05:38,559 --> 01:05:45,839
+few, um, here's one. This library, called

+1542
+01:05:42,640 --> 01:05:48,799
+Outlines, um, is one that basically allows

+1543
+01:05:45,839 --> 01:05:50,440
+you to incorporate syntactic constraints

+1544
+01:05:48,799 --> 01:05:53,240
+through, like, weighted finite-state

+1545
+01:05:50,440 --> 01:05:55,160
+automata and other stuff like this, um, to

+1546
+01:05:53,240 --> 01:05:57,680
+allow you to throw away anything that

+1547
+01:05:55,160 --> 01:06:02,039
+doesn't adhere to your grammar.
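+For a feel of what that looks like, here is a minimal sketch of constrained
+generation with Outlines (the API below is from an older Outlines release and
+is written from memory, so treat it as approximate and check the library's
+current documentation before relying on it):
+
+    import outlines
+
+    model = outlines.models.transformers("gpt2")  # any HF causal LM
+    # Constrain the output to a tiny JSON object with one integer field:
+    generator = outlines.generate.regex(model, r'\{"answer": \d+\}')
+    print(generator("Respond in JSON. What is 2 + 2?"))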
+1548
+01:05:57,680 --> 01:06:02,039
+Another popular one, which

+1549
+01:06:02,720 --> 01:06:06,880
+is nice but a little bit more

+1550
+01:06:07,160 --> 01:06:12,760
+complicated, is,

+1551
+01:06:09,799 --> 01:06:15,160
+um, this one, uh,

+1552
+01:06:12,760 --> 01:06:17,200
+Guidance. So if you want to look at, like,

+1553
+01:06:15,160 --> 01:06:19,720
+constrained generation of outputs, I

+1554
+01:06:17,200 --> 01:06:21,640
+would definitely recommend, uh, looking at

+1555
+01:06:19,720 --> 01:06:22,839
+one of these two, either Outlines or, or

+1556
+01:06:21,640 --> 01:06:24,440
+Guidance, and they both give you

+1557
+01:06:22,839 --> 01:06:26,520
+different ways to add constraints to

+1558
+01:06:24,440 --> 01:06:29,079
+the output. Um, we did actually talk about

+1559
+01:06:26,520 --> 01:06:31,200
+Outlines a little bit during the, like, uh,

+1560
+01:06:29,079 --> 01:06:34,599
+generation class, but, um, we didn't go

+1561
+01:06:31,200 --> 01:06:35,760
+into a lot of details. So, uh, yeah, but I, I

+1562
+01:06:34,599 --> 01:06:39,559
+would recommend

+1563
+01:06:35,760 --> 01:06:39,559
+this. Cool. Any other

+1564
+01:06:39,599 --> 01:06:43,920
+questions? Okay, if not, uh, I guess we can

+1565
+01:06:42,079 --> 01:06:47,880
+finish up, and I'm happy to talk; we have

+1566
+01:06:43,920 --> 01:06:47,880
+a little bit of extra time.
\ No newline at end of file
diff --git a/CMU Advanced NLP 2024 (17) Code Generation/transcript.vtt b/CMU Advanced NLP 2024 (17) Code Generation/transcript.vtt
new file mode 100644
index 0000000000000000000000000000000000000000..7d3d7faf3c5b3943a5c62dee737be670677916da
--- /dev/null
+++ b/CMU Advanced NLP 2024 (17) Code Generation/transcript.vtt
@@ -0,0 +1,4699 @@
+WEBVTT

+00:00:00.480 --> 00:00:06.279
+So, uh, I guess we can get started. Uh,

+00:00:04.080 --> 00:00:09.880
+today I'm going to be talking about code

+00:00:06.279 --> 00:00:11.719
+generation, and, uh, so this is a, a

+00:00:09.880 --> 00:00:13.599
+research topic that I've, uh, worked on

+00:00:11.719 --> 00:00:15.280
+for a long time now, and I, I like a lot. It's

+00:00:13.599 --> 00:00:17.520
+become very useful nowadays, which is

+00:00:15.280 --> 00:00:20.960
+very exciting. Um, so I'd like to talk

+00:00:17.520 --> 00:00:23.119
+about kind of some of the basics and

+00:00:20.960 --> 00:00:28.000
+frontiers, uh, that we're working on right

+00:00:23.119 --> 00:00:28.000
+now in this general, uh, area.

+00:00:31.719 --> 00:00:36.760
+Um,

+00:00:33.360 --> 00:00:38.160
+so before I get into code generation

+00:00:36.760 --> 00:00:40.719
+specifically, one thing I'd like to point

+00:00:38.160 --> 00:00:43.399
+out is, for the next four or so classes,

+00:00:40.719 --> 00:00:45.680
+I'm going to be talking about tasks, and

+00:00:43.399 --> 00:00:48.680
+up until now I've been focusing on a lot

+00:00:45.680 --> 00:00:52.840
+of, like, general things that weren't as

+00:00:48.680 --> 00:00:55.199
+much about any specific tasks. Um,

+00:00:52.840 --> 00:00:57.000
+and I know that not everybody's going to

+00:00:55.199 --> 00:00:59.399
+be interested in the four tasks that I'm

+00:00:57.000 --> 00:01:00.960
+talking about in the next, you know, four

+00:00:59.399 --> 00:01:02.480
+lectures.

+00:01:00.960 --> 00:01:04.920
+Um,

+00:01:02.480 --> 00:01:06.640
+but I'm going to be covering various

+00:01:04.920 --> 00:01:08.680
+things about different tasks, and

+00:01:06.640 --> 00:01:10.640
+hopefully you can map the same questions

+00:01:08.680 --> 00:01:12.040
+onto whatever task you are interested in,
+if
you're not interested in any of the

+00:01:12.040 --> 00:01:15.880
+ones I talk about here. So basically, what

+00:01:14.360 --> 00:01:18.119
+I want to talk about is the task

+00:01:15.880 --> 00:01:21.040
+objective, like, why do we do that task,

+00:01:18.119 --> 00:01:23.479
+why is it important; um, what data sets

+00:01:21.040 --> 00:01:26.560
+can we use to train or test our models

+00:01:23.479 --> 00:01:28.799
+on these tasks; evaluation metrics, and

+00:01:26.560 --> 00:01:31.200
+how do we evaluate, uh, both manually and

+00:01:28.799 --> 00:01:32.079
+automatically, with respect to how good

+00:01:31.200 --> 00:01:34.960
+we're

+00:01:32.079 --> 00:01:37.880
+doing; and finally, models and methods, so,

+00:01:34.960 --> 00:01:40.720
+you know, how do we solve the

+00:01:37.880 --> 00:01:42.479
+problem? And so for code generation, first

+00:01:40.720 --> 00:01:44.439
+I'd like to talk about the overview and

+00:01:42.479 --> 00:01:47.040
+objectives of code generation. So,

+00:01:44.439 --> 00:01:48.840
+basically, code generation is the task of

+00:01:47.040 --> 00:01:52.439
+generating executable code as an

+00:01:48.840 --> 00:01:54.479
+interface to, uh, a program or to

+00:01:52.439 --> 00:01:58.320
+computers, and there's a lot of different

+00:01:54.479 --> 00:02:01.000
+ways we can do this. Um, why do we want to

+00:01:58.320 --> 00:02:03.159
+do this? So,

+00:02:01.000 --> 00:02:05.000
+the first thing is that software

+00:02:03.159 --> 00:02:06.759
+engineering is really important, and

+00:02:05.000 --> 00:02:09.640
+being able to generate code accelerates

+00:02:06.759 --> 00:02:11.560
+software engineering. Uh, now code

+00:02:09.640 --> 00:02:13.640
+generation is practical, and I hope that

+00:02:11.560 --> 00:02:15.599
+everybody in the class is using some

+00:02:13.640 --> 00:02:17.840
+sort of, you know, code generation to

+00:02:15.599 --> 00:02:20.200
+accelerate your own workflow; if you're

+00:02:17.840 --> 00:02:22.599
+not, I highly encourage you to, to try it,

+00:02:20.200 --> 00:02:26.200
+because it's very

+00:02:22.599 --> 00:02:31.040
+useful. Second, it also does things like

+00:02:26.200 --> 00:02:34.239
+enabling models to access tools, um,

+00:02:31.040 --> 00:02:37.440
+and even if you're not specifically

+00:02:34.239 --> 00:02:39.440
+working on a software-related task, this

+00:02:37.440 --> 00:02:41.000
+can be helpful. But I want to talk about

+00:02:39.440 --> 00:02:42.480
+this in a later class, when we talk about

+00:02:41.000 --> 00:02:46.640
+LLM agents, so I'm not going to be

+00:02:42.480 --> 00:02:48.319
+talking about, um, that as much this time.

+00:02:46.640 --> 00:02:50.159
+Uh, one other thing that I, I forgot to

+00:02:48.319 --> 00:02:52.920
+mention here, which I'm also going to

+00:02:50.159 --> 00:02:55.000
+talk about in the later class, is, even if

+00:02:52.920 --> 00:02:58.120
+you're not using code at all, training on

+00:02:55.000 --> 00:03:00.319
+code has been shown to cause some

+00:02:58.120 --> 00:03:01.920
+benefits to language models, uh,

+00:03:00.319 --> 00:03:03.799
+specifically with respect to learning,

+00:03:01.920 --> 00:03:06.480
+like, difficult multi-task reasoning, uh,

+00:03:03.799 --> 00:03:07.599
+sorry, multi-step reasoning tasks. And so

+00:03:06.480 --> 00:03:09.480
+that's another reason why you might want

+00:03:07.599 --> 00:03:10.840
+to care about code. So I'm going to

+00:03:09.480 --> 00:03:12.840
+mainly talk about the first one this

+00:03:10.840
--> 00:03:14.560
+time, and leave the other two, uh, for

+00:03:12.840 --> 00:03:17.720
+future

+00:03:14.560 --> 00:03:21.760
+lectures. So, specifically, for this task,

+00:03:17.720 --> 00:03:25.200
+our input, um, is some sort of

+00:03:21.760 --> 00:03:27.360
+specification of what we want to do, um,

+00:03:25.200 --> 00:03:30.319
+and our output is going to be

+00:03:27.360 --> 00:03:33.000
+code. So,

+00:03:30.319 --> 00:03:35.920
+when you write a

+00:03:33.000 --> 00:03:37.239
+program, how do you describe the thing

+00:03:35.920 --> 00:03:40.239
+that you want to implement in the

+00:03:37.239 --> 00:03:42.000
+program, before you implement it? Like, uh,

+00:03:40.239 --> 00:03:44.720
+yeah, what are some of the specifications

+00:03:42.000 --> 00:03:44.720
+that people can give

+00:03:45.280 --> 00:03:50.720
+you? "What the input and output of the

+00:03:47.680 --> 00:03:52.360
+functions are." Uh, yes, uh, sorry, what, what

+00:03:50.720 --> 00:03:54.400
+types the inputs and outputs of the

+00:03:52.360 --> 00:03:56.239
+function are. So those would be, like, type

+00:03:54.400 --> 00:03:57.760
+hints in Python, for example. Yeah, that,

+00:03:56.239 --> 00:03:59.439
+that's a good one. It's actually not on

+00:03:57.760 --> 00:04:02.079
+my list of things here, but it's, it's a

+00:03:59.439 --> 00:04:06.040
+good point. Yeah, any, any other things?

+00:04:02.079 --> 00:04:08.680
+"Complexity requirements." Complexity

+00:04:06.040 --> 00:04:11.040
+requirements, constraints; that is also

+00:04:08.680 --> 00:04:14.840
+not on my list of things here, uh, that's,

+00:04:11.040 --> 00:04:17.040
+uh, that's a good one too. Um, and any, uh,

+00:04:14.840 --> 00:04:20.280
+slightly more straightforward

+00:04:17.040 --> 00:04:24.040
+things? "Pseudo code." Yeah, um, and pseudo

+00:04:20.280 --> 00:04:26.720
+code, uh, what, what is pseudo code written

+00:04:24.040 --> 00:04:28.440
+in? "Natural, natural language." Yeah, so

+00:04:26.720 --> 00:04:31.199
+natural language inputs are, are one

+00:04:28.440 --> 00:04:34.520
+thing. So I will tell you, I want, I want a

+00:04:31.199 --> 00:04:39.160
+program that, uh, I want you to write a

+00:04:34.520 --> 00:04:41.479
+web interface that allows me to, um, order

+00:04:39.160 --> 00:04:43.560
+pizza, or something like that; that, that

+00:04:41.479 --> 00:04:46.560
+would be one way to do it. Any other

+00:04:43.560 --> 00:04:46.560
+ideas?

+00:04:51.199 --> 00:04:55.840
+Yeah, "this is what I have and this is

+00:04:53.360 --> 00:04:57.240
+what I want." Yeah, so, um, that's especially

+00:04:55.840 --> 00:04:59.880
+the case if you're, like, modifying a

+00:04:57.240 --> 00:05:01.400
+program, um, or something like that. So,

+00:04:59.880 --> 00:05:06.280
+actually, the next one on my list there,

+00:05:01.400 --> 00:05:06.280
+so good, good point. Um, any other

+00:05:09.759 --> 00:05:15.720
+ideas? Yeah, or, or multimodal: a person, you

+00:05:12.880 --> 00:05:20.120
+know, might say, I want a pizza-ordering,

+00:05:15.720 --> 00:05:22.039
+I want a pizza-ordering app, and up here

+00:05:20.120 --> 00:05:24.000
+it should have your, like, username, so you

+00:05:22.039 --> 00:05:25.840
+can click through the settings, and, like,

+00:05:24.000 --> 00:05:27.080
+over here you should have the menu, and

+00:05:25.840 --> 00:05:28.680
+over here you should have your checkout

+00:05:27.080 --> 00:05:30.400
+cart, or something like that. You know,

+00:05:28.680 --> 00:05:32.440
+it's something you do for a programmer

+00:05:30.400 --> 00:05:34.680
+as well. Until recently we couldn't

+00:05:32.440 --> 00:05:37.680
+really use that with, like, actual models,

+00:05:34.680 --> 00:05:40.560
+but, um, yeah. Yeah, well, that was my fourth

+00:05:37.680 --> 00:05:42.639
+one, but, um, and then the other one, uh,

+00:05:40.560 --> 00:05:44.960
+inputs and outputs. This could come in

+00:05:42.639 --> 00:05:46.560
+the form of, like, unit tests or something

+00:05:44.960 --> 00:05:49.199
+like that, where it's like, yeah, this is

+00:05:46.560 --> 00:05:51.160
+the input, this is the expected output. So

+00:05:49.199 --> 00:05:53.240
+these are all things we use, both as

+00:05:51.160 --> 00:05:55.639
+human programmers and in code generation

+00:05:53.240 --> 00:05:58.120
+models. I really like the two other

+00:05:55.639 --> 00:06:00.440
+points, though, um,

+00:05:58.120 --> 00:06:03.759
+because type hints,

+00:06:00.440 --> 00:06:05.479
+like, writing

+00:06:03.759 --> 00:06:06.599
+with type hints, is

+00:06:05.479 --> 00:06:09.240
+actually something that you can do with

+00:06:06.599 --> 00:06:14.120
+code generation models, and, um,

+00:06:09.240 --> 00:06:16.680
+constraints, such as, like, it should, it

+00:06:14.120 --> 00:06:20.199
+should meet certain speed requirements,

+00:06:16.680 --> 00:06:21.520
+or it should, um, you know, use certain

+00:06:20.199 --> 00:06:22.960
+libraries, or something like that, are

+00:06:21.520 --> 00:06:24.840
+also constraints that you could add. I

+00:06:22.960 --> 00:06:26.120
+didn't put that on this slide here; that

+00:06:24.840 --> 00:06:28.319
+might come in the natural language

+00:06:26.120 --> 00:06:30.639
+description, but it could be something

+00:06:28.319 --> 00:06:32.759
+separate. And then, you know, the output is

+00:06:30.639 --> 00:06:36.759
+whatever code you want

+00:06:32.759 --> 00:06:38.240
+to. So, um, how many people are using, like,

+00:06:36.759 --> 00:06:41.000
+GitHub

+00:06:38.240 --> 00:06:46.160
+Copilot? Like, what

+00:06:41.000 --> 00:06:47.759
+percentage? Maybe about half, okay. Um, how

+00:06:46.160 --> 00:06:49.840
+many people are using another, like,

+00:06:47.759 --> 00:06:56.080
+assisted coding tool other than GitHub

+00:06:49.840 --> 00:06:57.400
+Copilot? Yeah? "GPT-4." GPT-4 is, uh, could be an

+00:06:56.080 --> 00:06:58.680
+assisted coding tool; I'm talking more,

+00:06:57.400 --> 00:07:02.400
+like, something that's actually in your

+00:06:58.680 --> 00:07:04.759
+IDE, something... yeah, anybody

+00:07:02.400 --> 00:07:07.680
+else? Does anyone use

+00:07:04.759 --> 00:07:13.639
+Cursor? No?

+00:07:07.680 --> 00:07:18.039
+Um, yeah, Cursor, yeah, okay. So,

+00:07:13.639 --> 00:07:20.919
+yeah, Colab, uh, AI in Colab, yeah. So,

+00:07:18.039 --> 00:07:24.080
+um, so I think there are a lot of these,

+00:07:20.919 --> 00:07:26.879
+uh, going around. I, I use Copilot myself,

+00:07:24.080 --> 00:07:28.639
+I have not used Cursor, I do use GPT-4, um,

+00:07:26.879 --> 00:07:30.599
+and I'll, I'll show you an example of how

+00:07:28.639 --> 00:07:32.919
+I use them differently.

+00:07:30.599 --> 00:07:34.360
+Um, if you haven't used Copilot, hopefully

+00:07:32.919 --> 00:07:39.599
+this will

+00:07:34.360 --> 00:07:42.599
+work. Um, I just made a, a simple

+00:07:39.599 --> 00:07:42.599
+video.

+00:07:43.280 --> 00:07:49.520
+Oops, okay, that's not working, but anyway,

+00:07:46.159 --> 00:07:51.000
+you, um, you type, and it basically
+completes your code.
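+Pulling the specification types above together, here is the kind of prompt a
+completion model like Copilot effectively sees: a type-hinted signature, a
+natural language docstring, and input/output examples as doctests (the
+function itself is just an illustrative stand-in, with the body being what
+the model would be asked to fill in):
+
+    def dedup_keep_order(items: list[str]) -> list[str]:
+        """Remove duplicates from items, keeping first-occurrence order.
+
+        >>> dedup_keep_order(["a", "b", "a", "c"])
+        ['a', 'b', 'c']
+        """
+        seen: set[str] = set()
+        return [x for x in items if not (x in seen or seen.add(x))]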
+00:07:51.000 --> 00:07:56.639
+So this is, this is an example here,

+00:07:54.319 --> 00:07:58.599
+and I didn't write any of this code,

+00:07:56.639 --> 00:08:02.360
+actually; I just wrote the comments, and

+00:07:58.599 --> 00:08:04.000
+then it filled in the, the actual code. And,

+00:08:02.360 --> 00:08:05.639
+also, I didn't exactly check if it's

+00:08:04.000 --> 00:08:08.080
+correct or not,

+00:08:05.639 --> 00:08:11.120
+so if there's any mistake, it's

+00:08:08.080 --> 00:08:15.159
+Copilot's fault, not my fault, but, um, it

+00:08:11.120 --> 00:08:15.159
+looked correct to me. So,

+00:08:15.759 --> 00:08:21.120
+um, and, oh, by the way, you get to use it

+00:08:18.120 --> 00:08:22.800
+for free with your CMU account. So if you,

+00:08:21.120 --> 00:08:24.120
+uh, if you want to use it but don't

+00:08:22.800 --> 00:08:25.919
+want to pay for it, you're in luck,

+00:08:24.120 --> 00:08:31.639
+because you can use

+00:08:25.919 --> 00:08:36.320
+it. Um, another example, uh, is GPT-4, or, uh,

+00:08:31.639 --> 00:08:38.519
+more recently, Claude 3. Um, and basically

+00:08:36.320 --> 00:08:40.680
+this can do a different variety of

+00:08:38.519 --> 00:08:43.719
+things. So we talked about screenshots,

+00:08:40.680 --> 00:08:45.720
+and basically I asked Claude to create a

+00:08:43.719 --> 00:08:48.399
+React app that replicates the Claude

+00:08:45.720 --> 00:08:50.240
+interface, by giving it a screenshot and

+00:08:48.399 --> 00:08:52.560
+asking it, create a React app that looks

+00:08:50.240 --> 00:08:55.200
+like the screenshot. And then it gave me

+00:08:52.560 --> 00:09:00.800
+a whole bunch of text, and in the end it

+00:08:55.200 --> 00:09:03.320
+started, um, making this, uh, container here,

+00:09:00.800 --> 00:09:08.040
+um,

+00:09:03.320 --> 00:09:11.040
+and this, uh, it basically is skipping

+00:09:08.040 --> 00:09:12.800
+some of the styling stuff, uh, because

+00:09:11.040 --> 00:09:14.480
+large language models, I, I think they're

+00:09:12.800 --> 00:09:16.560
+basically trained so that they don't

+00:09:14.480 --> 00:09:19.959
+give really, really long responses,

+00:09:16.560 --> 00:09:21.320
+because, like, if you, uh, asked for

+00:09:19.959 --> 00:09:23.640
+something that would take a really,

+00:09:21.320 --> 00:09:25.519
+really long time, and then the model just

+00:09:23.640 --> 00:09:26.880
+complied and gave that to you for a

+00:09:25.519 --> 00:09:29.000
+really, really long time, it would cost

+00:09:26.880 --> 00:09:30.680
+them a lot of money. So I feel like they,

+00:09:29.000 --> 00:09:32.440
+they basically try to train the models to only

+00:09:30.680 --> 00:09:37.160
+output, like, a thousand tokens at a time,

+00:09:32.440 --> 00:09:38.959
+or something like that. So, um, it, it won't

+00:09:37.160 --> 00:09:40.839
+actually go out and program the whole

+00:09:38.959 --> 00:09:43.120
+project for you, but with a little

+00:09:40.839 --> 00:09:44.680
+cajoling, if you say, okay, now implement

+00:09:43.120 --> 00:09:48.519
+this part, now implement this part, now

+00:09:44.680 --> 00:09:49.959
+implement this part, um, you, uh, you can

+00:09:48.519 --> 00:09:53.040
+end up with some pretty interesting

+00:09:49.959 --> 00:09:55.680
+stuff. And let me,

+00:09:53.040 --> 00:09:57.120
+uh, let me see if I can, I can show you an

+00:09:55.680 --> 00:10:01.320
+example.

+00:09:57.120 --> 00:10:01.320
+So I, I know a little bit of

+00:10:01.440 --> 00:10:07.040
+React, um, the front-end framework, but I

+00:10:04.240 --> 00:10:09.839
+don't know
a whole lot. But recently

+00:10:07.040 --> 00:10:14.279
+we've been, um, working on an open-source

+00:10:09.839 --> 00:10:18.959
+assisted coding app, and most of this

+00:10:14.279 --> 00:10:21.519
+was just written by Claude. Um, it's, uh, I, I

+00:10:18.959 --> 00:10:23.079
+said, I want an app that, on the left side,

+00:10:21.519 --> 00:10:26.160
+it has a chat window, and then, on the

+00:10:23.079 --> 00:10:28.240
+right side, it has three, uh, three panes:

+00:10:26.160 --> 00:10:30.120
+one is a terminal, one is a planner, and

+00:10:28.240 --> 00:10:32.200
+one is a code editor.

+00:10:30.120 --> 00:10:33.880
+And, um, so it gave me something, it was

+00:10:32.200 --> 00:10:37.399
+kind of ugly, so I said, okay, make the

+00:10:33.880 --> 00:10:40.639
+background black, um, change the CSS file

+00:10:37.399 --> 00:10:43.639
+so that, um, you have, like, a user icon and

+00:10:40.639 --> 00:10:46.040
+a robot icon, and stuff like that. And

+00:10:43.639 --> 00:10:49.240
+after this, I, I wrote very little of this

+00:10:46.040 --> 00:10:51.079
+code; I wrote, like, 1% of this code, or

+00:10:49.240 --> 00:10:54.480
+something like that, and it's able to, to

+00:10:51.079 --> 00:10:57.880
+do these sorts of things for you. Um, so,

+00:10:54.480 --> 00:11:01.000
+if you don't like writing front ends,

+00:10:57.880 --> 00:11:03.880
+good luck, uh, or, good, good news: you,

+00:11:01.000 --> 00:11:05.560
+uh, can come up with a passable front end

+00:11:03.880 --> 00:11:07.519
+without, uh, without actually having to

+00:11:05.560 --> 00:11:08.720
+write it. Nonetheless, you know, good front-end

+00:11:07.519 --> 00:11:10.200
+engineers will come up with

+00:11:08.720 --> 00:11:13.639
+something much more beautiful than that.

+00:11:10.200 --> 00:11:15.880
+So, um, so basically, why do I, why did I

+00:11:13.639 --> 00:11:19.959
+want to say this? I think, um, GitHub

+00:11:15.880 --> 00:11:20.839
+Copilot and Claude or GPT-4 serve very

+00:11:19.959 --> 00:11:25.200
+different

+00:11:20.839 --> 00:11:27.360
+purposes. Um, GitHub Copilot is code

+00:11:25.200 --> 00:11:30.160
+completion, and it mostly works for

+00:11:27.360 --> 00:11:32.440
+shorter things, so it's like your next

+00:11:30.160 --> 00:11:34.760
+thought in your code, in code that you

+00:11:32.440 --> 00:11:37.560
+know pretty well. Something like Claude or

+00:11:34.760 --> 00:11:40.639
+GPT-4 is much better for really long

+00:11:37.560 --> 00:11:44.680
+things, um, where you want to build, like, a

+00:11:40.639 --> 00:11:47.040
+full class or something like that. And I

+00:11:44.680 --> 00:11:48.480
+also have found that, if you're coding in

+00:11:47.040 --> 00:11:50.079
+a language that you're very familiar

+00:11:48.480 --> 00:11:51.560
+with, Copilot might be more useful,

+00:11:50.079 --> 00:11:52.959
+because you want fine-grained control and

+00:11:51.560 --> 00:11:55.040
+you want it to fill out things to make

+00:11:52.959 --> 00:11:56.519
+it faster, whereas if you're coding in a

+00:11:55.040 --> 00:11:58.040
+language that you're not very familiar

+00:11:56.519 --> 00:11:59.680
+with, something like Claude is good,

+00:11:58.040 --> 00:12:01.839
+because you can write a whole, you know,

+00:11:59.680 --> 00:12:04.800
+program at once. So these are the

+00:12:01.839 --> 00:12:07.680
+differences. Another thing is, GitHub

+00:12:04.800 --> 00:12:09.240
+Copilot needs to be frighteningly fast,

+00:12:07.680 --> 00:12:10.839
+because it needs to move at the speed

+00:12:09.240 --> 00:12:12.880
+that, like, programmers are thinking in,

+00:12:10.839 --> 00:12:14.920
+programming next, whereas something like

+00:12:12.880 --> 00:12:16.800
+Claude, it doesn't, you know, using it in

+00:12:14.920 --> 00:12:18.880
+the way that I use Claude here doesn't

+00:12:16.800 --> 00:12:22.600
+really matter, because I can say, uh,

+00:12:18.880 --> 00:12:24.079
+program me a, you know, a web app, and

+00:12:22.600 --> 00:12:25.360
+then I can go and have dinner and come

+00:12:24.079 --> 00:12:28.199
+back and have a web app, and I'd be

+00:12:25.360 --> 00:12:31.720
+perfectly happy with that, right? So, um,

+00:12:28.199 --> 00:12:37.199
+the latency requirements are also

+00:12:31.720 --> 00:12:37.199
+different. Cool. Um, any, any questions here?

+00:12:37.399 --> 00:12:42.600
+Yeah? "How are they at debugging code?"

+00:12:43.000 --> 00:12:47.959
+Uh, well, so,

+00:12:45.839 --> 00:12:50.760
+Copilot, I haven't actually tried it

+00:12:47.959 --> 00:12:52.480
+that much. Um, if I wanted to debug code,

+00:12:50.760 --> 00:12:54.880
+I'd probably use something like Claude or

+00:12:52.480 --> 00:12:56.360
+GPT-4, just because, actually, I'll, I'll

+00:12:54.880 --> 00:12:58.320
+mention this in a second, but Copilot's

+00:12:56.360 --> 00:13:00.360
+a much smaller model, uh, because it needs

+00:12:58.320 --> 00:13:01.839
+to be very fast, or what they're using in

+00:13:00.360 --> 00:13:04.040
+Copilot is a smaller model, because it

+00:13:01.839 --> 00:13:05.519
+needs to be very fast. So I would

+00:13:04.040 --> 00:13:08.360
+probably use a bigger model for anything

+00:13:05.519 --> 00:13:10.120
+that required, like, good understanding. I

+00:13:08.360 --> 00:13:11.480
+think it's passable at debugging code,

+00:13:10.120 --> 00:13:13.079
+but it won't find the really difficult

+00:13:11.480 --> 00:13:15.639
+things, and it probably won't find things

+00:13:13.079 --> 00:13:18.279
+that require spanning across, uh, multiple

+00:13:15.639 --> 00:13:21.240
+files. But I, I'm not 100% sure about that;

+00:13:18.279 --> 00:13:25.519
+like, I think it's worth

+00:13:21.240 --> 00:13:25.519
+testing. Um, any other

+00:13:25.880 --> 00:13:30.120
+questions? Okay. So, if I haven't convinced

+00:13:28.360 --> 00:13:32.360
+you that, as software developers, you

+00:13:30.120 --> 00:13:34.880
+should be using this, hopefully this next,

+00:13:32.360 --> 00:13:37.480
+uh, this next slide will. So this was a

+00:13:34.880 --> 00:13:41.199
+study that was run by GitHub, uh, shortly

+00:13:37.480 --> 00:13:43.160
+after, um, after Copilot came out. And so,

+00:13:41.199 --> 00:13:45.440
+why do we do code generation, why are

+00:13:43.160 --> 00:13:47.240
+people very excited about it? So the

+00:13:45.440 --> 00:13:50.240
+first is, uh, making software is

+00:13:47.240 --> 00:13:53.480
+important. Um, and I recently calculated,

+00:13:50.240 --> 00:13:55.920
+uh, from some labor statistics, that the

+00:13:53.480 --> 00:13:59.440
+total amount that software developers

+00:13:55.920 --> 00:14:01.880
+make, um, in a year is $175 billion. So

+00:13:59.440 --> 00:14:05.000
+that's providing at least that much, you

+00:14:01.880 --> 00:14:06.800
+know, value. So it's a very high-value, uh,

+00:14:05.000 --> 00:14:09.079
+profession, so if we could make it faster,

+00:14:06.800 --> 00:14:11.480
+you know, it would have even more

+00:14:09.079 --> 00:14:12.920
+value. Another thing is, code generation

+00:14:11.480 --> 00:14:15.680
+leads to large improvements in

+00:14:12.920 --> 00:14:17.160
+productivity so uh GitHub ran this
+
+00:14:15.680 --> 00:14:18.680
+study where they randomly assigned
+
+00:14:17.160 --> 00:14:21.519
+developers to groups who would either
+
+00:14:18.680 --> 00:14:24.440
+use Copilot or not use Copilot and
+
+00:14:21.519 --> 00:14:26.480
+they assigned them the same task and
+
+00:14:24.440 --> 00:14:30.759
+basically the people who used Copilot
+
+00:14:26.480 --> 00:14:34.199
+their rate of um completion went up by
+
+00:14:30.759 --> 00:14:36.320
+8% and they finished um in about 40% of
+
+00:14:34.199 --> 00:14:39.279
+the time of the people who didn't use it
+
+00:14:36.320 --> 00:14:43.639
+and so I think this
+
+00:14:39.279 --> 00:14:45.920
+is or uh yeah they say 55% less time so
+
+00:14:43.639 --> 00:14:47.759
+this is very impressive but it's also
+
+00:14:45.920 --> 00:14:50.199
+not at all surprising if you're using an
+
+00:14:47.759 --> 00:14:52.880
+assisted coding assistant it
+
+00:14:50.199 --> 00:14:54.360
+just makes you code faster also if you
+
+00:14:52.880 --> 00:14:56.040
+don't like writing doc strings it's
+
+00:14:54.360 --> 00:14:57.519
+really good at writing doc strings so
+
+00:14:56.040 --> 00:14:59.680
+you can write documentation for your
+
+00:14:57.519 --> 00:15:00.759
+code without worrying about it so
+
+00:14:59.680 --> 00:15:04.399
+okay
+
+00:15:00.759 --> 00:15:07.000
+cool um
+
+00:15:04.399 --> 00:15:09.720
+so there are differences between code
+
+00:15:07.000 --> 00:15:14.000
+and natural language uh and I've listed
+
+00:15:09.720 --> 00:15:15.560
+a few of them here and the differences
+
+00:15:14.000 --> 00:15:18.120
+between code and natural language also
+
+00:15:15.560 --> 00:15:20.160
+affect how we build models for this task
+
+00:15:18.120 --> 00:15:23.160
+so the first one is that code has strict
+
+00:15:20.160 --> 00:15:26.000
+grammar uh if you make a small mistake
+
+00:15:23.160 --> 00:15:27.920
+in your code grammar usually it will
+
+00:15:26.000 --> 00:15:29.839
+just break and your program won't work
+
+00:15:27.920 --> 00:15:31.319
+so you need to be very careful as
+
+00:15:29.839 --> 00:15:32.560
+opposed to natural language grammar
+
+00:15:31.319 --> 00:15:33.600
+where you can make small mistakes and it
+
+00:15:32.560 --> 00:15:36.120
+doesn't make a
+
+00:15:33.600 --> 00:15:40.120
+difference another thing is in code you
+
+00:15:36.120 --> 00:15:42.720
+know the semantic flow of the code and
+
+00:15:40.120 --> 00:15:44.160
+so we know that certain variables
+
+00:15:42.720 --> 00:15:45.560
+correspond to each other we know that
+
+00:15:44.160 --> 00:15:48.639
+they're flowing through the program in a
+
+00:15:45.560 --> 00:15:50.880
+certain way another thing is code is
+
+00:15:48.639 --> 00:15:54.120
+executable so we can actually execute it
+
+00:15:50.880 --> 00:15:56.199
+and observe the result unlike in natural
+
+00:15:54.120 --> 00:16:00.000
+language and another important thing is
+
+00:15:56.199 --> 00:16:03.399
+code is created incrementally so code is
+
+00:16:00.000 --> 00:16:05.680
+not you know unlike text text is also
+
+00:16:03.399 --> 00:16:07.399
+created incrementally but it's not
+
+00:16:05.680 --> 00:16:08.720
+usually you write it once you might
+
+00:16:07.399 --> 00:16:11.199
+revise it a little bit and then you're
+
+00:16:08.720 --> 00:16:14.040
+done and you you don't need to touch it
+
+00:16:11.199 --> 00:16:15.399
+again but um in code you touch it over
+
+00:16:14.040 --> 00:16:17.800
+and over and over again as you develop a
+
+00:16:15.399 --> 00:16:17.800
+software project
+
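+[Note: a tiny illustration of the "strict grammar" point above: one missing
+colon is enough to make Python reject a program outright, whereas natural
+language tolerates small grammatical slips. Minimal sketch, not from the slides.]
+
+import ast
+
+good = "def add(a, b):\n    return a + b\n"
+bad = "def add(a, b)\n    return a + b\n"  # same code, missing ':'
+
+for src in (good, bad):
+    try:
+        ast.parse(src)  # parse without executing
+        print("parses fine")
+    except SyntaxError as err:
+        print("rejected:", err.msg)
+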
+00:16:18.040 --> 00:16:23.040
+so if we look at code generation
+
+00:16:21.079 --> 00:16:27.079
+um I would like to talk a little bit
+
+00:16:23.040 --> 00:16:29.079
+about uh subtasks and data sets next so
+
+00:16:27.079 --> 00:16:30.480
+the most famous data set for code
+
+00:16:29.079 --> 00:16:34.279
+generation nowadays is something called
+
+00:16:30.480 --> 00:16:38.680
+HumanEval um this is a very nice data
+
+00:16:34.279 --> 00:16:42.480
+set um for a number of reasons uh I
+
+00:16:38.680 --> 00:16:44.240
+think it is used too much um nonetheless
+
+00:16:42.480 --> 00:16:46.759
+and I I think there are better data sets
+
+00:16:44.240 --> 00:16:51.240
+that we maybe should be using more but
+
+00:16:46.759 --> 00:16:54.000
+basically HumanEval is um it has
+
+00:16:51.240 --> 00:16:55.920
+examples of usage of the Python standard
+
+00:16:54.000 --> 00:16:59.360
+library where some are easier some are
+
+00:16:55.920 --> 00:17:02.880
+harder and just to give some examples
+
+00:16:59.360 --> 00:17:06.760
+uh we're saying given a non-empty list of
+
+00:17:02.880 --> 00:17:10.480
+integers return the sum of all the odd
+
+00:17:06.760 --> 00:17:12.959
+elements that are in even positions so
+
+00:17:10.480 --> 00:17:16.079
+it's kind of like a LeetCode
+
+00:17:12.959 --> 00:17:19.199
+style you know problem but maybe one of
+
+00:17:16.079 --> 00:17:22.400
+the easier ones and then in order to
+
+00:17:19.199 --> 00:17:25.240
+solve that you find all of the
+
+00:17:22.400 --> 00:17:28.480
+elements in even positions and then you
+
+00:17:25.240 --> 00:17:29.679
+only return them if uh the value itself
+
+00:17:28.480 --> 00:17:32.799
+is odd
+
+00:17:29.679 --> 00:17:34.200
+um so like you can do that in a one-liner
+
+00:17:32.799 --> 00:17:36.600
+but you need to think about it a little
+
+00:17:34.200 --> 00:17:38.919
+bit um and then you have
+
+00:17:36.600 --> 00:17:43.120
+more
+
+00:17:38.919 --> 00:17:43.810
+um returns encoded uh sorry takes an
+
+00:17:43.120 --> 00:17:46.910
+input
+
+00:17:47.160 --> 00:17:50.919
+string yeah actually sorry this is from
+
+00:17:49.320 --> 00:17:53.600
+the paper I didn't read it before I copy
+
+00:17:50.919 --> 00:17:57.080
+pasted it in here but um yeah that's a
+
+00:17:53.600 --> 00:17:58.880
+decoding one and one one thing about
+
+00:17:57.080 --> 00:18:02.240
+this uh that's important to know is it
+
+00:17:58.880 --> 00:18:04.200
+only has 164 examples so it's actually a
+
+00:18:02.240 --> 00:18:07.600
+relatively small number of
+
+00:18:04.200 --> 00:18:09.440
+examples um it's also just the Python
+
+00:18:07.600 --> 00:18:11.200
+standard library so it's not testing
+
+00:18:09.440 --> 00:18:14.960
+usage of any other
+
+00:18:11.200 --> 00:18:17.520
+libraries um so these two things
+
+00:18:14.960 --> 00:18:19.720
+together make it not the most realistic
+
+00:18:17.520 --> 00:18:21.880
+you know examination of your programming
+
+00:18:19.720 --> 00:18:23.640
+skills just like LeetCode is not the
+
+00:18:21.880 --> 00:18:25.640
+most realistic examination of your
+
+00:18:23.640 --> 00:18:28.240
+programming skills but you know I don't
+
+00:18:25.640 --> 00:18:31.720
+know companies use it anyway so maybe
+
+00:18:28.240 --> 00:18:35.159
+HumanEval is reasonable but um so then
+
+00:18:31.720 --> 00:18:37.120
+we go um into the inputs and outputs uh
+
+00:18:35.159 --> 00:18:40.679
+the inputs and outputs usually include a
+
+00:18:37.120 --> 00:18:43.440
+doc string um some input and output
+
+00:18:40.679 --> 00:18:47.640
+examples and then they have tests to
+
+00:18:43.440 --> 00:18:47.640
+verify the accuracy of your outputs
+
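+[Note: an illustrative reconstruction of the HumanEval-style problem just
+described: docstring prompt, a candidate solution, and the hidden unit tests
+that judge it. The test values are examples, not the verbatim benchmark entry.]
+
+def solution(lst):
+    """Given a non-empty list of integers, return the sum of all of the
+    odd elements that are in even positions."""
+    return sum(x for i, x in enumerate(lst) if i % 2 == 0 and x % 2 == 1)
+
+# Execution-based evaluation: the model's completion passes only if
+# assertions like these all succeed.
+assert solution([5, 8, 7, 1]) == 12   # 5 (position 0) + 7 (position 2)
+assert solution([3, 3, 3, 3, 3]) == 9
+assert solution([30, 13, 24, 321]) == 0
+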
+00:18:47.880 --> 00:18:52.840
+so the metric that's used to
+
+00:18:50.559 --> 00:18:58.919
+evaluate these systems is something
+
+00:18:52.840 --> 00:19:01.400
+called pass@k and the basic idea is um
+
+00:18:58.919 --> 00:19:03.400
+if we generate k examples will at least one
+
+00:19:01.400 --> 00:19:06.960
+of them pass the unit
+
+00:19:03.400 --> 00:19:10.720
+tests and the idea here is
+
+00:19:06.960 --> 00:19:13.480
+that if we have models we might want to
+
+00:19:10.720 --> 00:19:14.960
+generate like well there there's a
+
+00:19:13.480 --> 00:19:17.480
+couple reasons why we would care about
+
+00:19:14.960 --> 00:19:19.880
+this pass@1 is kind of obvious
+
+00:19:17.480 --> 00:19:23.200
+because we generate one and then we
+
+00:19:19.880 --> 00:19:26.480
+measure how um you know how likely it is
+
+00:19:23.200 --> 00:19:29.280
+to pass unit tests but pass@5 why
+
+00:19:26.480 --> 00:19:30.760
+would we care about pass@5 well
+
+00:19:29.280 --> 00:19:32.159
+number one maybe you could show five
+
+00:19:30.760 --> 00:19:34.240
+programs to a person and they could
+
+00:19:32.159 --> 00:19:37.039
+choose the one that they like the best
+
+00:19:34.240 --> 00:19:39.919
+or maybe you could write
+
+00:19:37.039 --> 00:19:41.720
+unit tests in advance and then generate
+
+00:19:39.919 --> 00:19:43.880
+five programs check which ones pass the
+
+00:19:41.720 --> 00:19:45.480
+unit tests and then use only the ones
+
+00:19:43.880 --> 00:19:48.360
+that pass the unit tests or something
+
+00:19:45.480 --> 00:19:51.000
+like that so there's also some interest
+
+00:19:48.360 --> 00:19:53.320
+in uh whether you could generate you
+
+00:19:51.000 --> 00:19:54.600
+know multiple examples and then pick a
+
+00:19:53.320 --> 00:19:56.919
+good
+
+00:19:54.600 --> 00:19:59.080
+one there's a little bit of nuance in
+
+00:19:56.919 --> 00:20:02.120
+how this is actually calculated so
+
+00:19:59.080 --> 00:20:04.240
+basically um if you if you sample only
+
+00:20:02.120 --> 00:20:05.960
+one example
+
+00:20:04.240 --> 00:20:07.400
+there's a lot of variance in whether you
+
+00:20:05.960 --> 00:20:10.159
+get it right or not so what they
+
+00:20:07.400 --> 00:20:13.440
+actually do is they generate like 10
+
+00:20:10.159 --> 00:20:15.600
+outputs or 200 outputs and then they
+
+00:20:13.440 --> 00:20:18.159
+calculate the expected number of
+
+00:20:15.600 --> 00:20:20.320
+cases where
+
+00:20:18.159 --> 00:20:23.280
+a sample of k would pass by just doing a little
+
+00:20:20.320 --> 00:20:25.440
+bit of uh like math calculating the
+
+00:20:23.280 --> 00:20:28.679
+number of combinations where one passes
+
+00:20:25.440 --> 00:20:30.720
+or one doesn't and here n is the total
+
+00:20:28.679 --> 00:20:34.240
+number you generate c is the number of
+
+00:20:30.720 --> 00:20:36.520
+correct answers and k is uh your
+
+00:20:34.240 --> 00:20:36.520
+pass@k
+
+00:20:37.159 --> 00:20:43.360
+value
+
+00:20:38.919 --> 00:20:46.280
+cool um so any any questions about
+
+00:20:43.360 --> 00:20:47.880
+these you'll you'll see a bunch of uh
+
+00:20:46.280 --> 00:20:50.520
+people evaluating on this HumanEval
+
+00:20:47.880 --> 00:20:52.760
+with pass@k including all of the
+
+00:20:50.520 --> 00:20:57.520
+new LLMs that come out it's a very
+
+00:20:52.760 --> 00:20:57.520
+standard benchmark yeah
+
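+[Note: the pass@k estimator being described above, written out: generate n
+samples, count c correct ones, and compute the expected chance that a random
+size-k subset contains at least one correct sample, pass@k = 1 - C(n-c, k) / C(n, k).]
+
+import numpy as np
+
+def pass_at_k(n: int, c: int, k: int) -> float:
+    # 1 - C(n-c, k) / C(n, k), as a numerically stable running product
+    if n - c < k:
+        return 1.0
+    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))
+
+print(pass_at_k(n=200, c=10, k=1))  # 0.05, i.e. exactly c/n
+print(pass_at_k(n=200, c=10, k=5))  # noticeably higher than pass@1
+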
+00:21:01.760 --> 00:21:06.039
+yeah that that's a good um question I
+
+00:21:04.919 --> 00:21:07.840
+think I'm going to cover that a little
+
+00:21:06.039 --> 00:21:11.039
+bit later but I might as well say it now
+
+00:21:07.840 --> 00:21:13.640
+so LLMs
+
+00:21:11.039 --> 00:21:15.080
+are LLMs are good at code because they
+
+00:21:13.640 --> 00:21:16.880
+intentionally include a lot of code
+
+00:21:15.080 --> 00:21:19.520
+training data in LLM training and the
+
+00:21:16.880 --> 00:21:22.679
+reason for that is twofold um the first
+
+00:21:19.520 --> 00:21:25.320
+one is that code generation is a huge
+
+00:21:22.679 --> 00:21:26.960
+application of LLMs right now and like
+
+00:21:25.320 --> 00:21:28.679
+if you had an LLM that couldn't do code
+
+00:21:26.960 --> 00:21:32.320
+generation it'd be kind of embarrassing
+
+00:21:28.679 --> 00:21:33.960
+so um everybody includes this number two
+
+00:21:32.320 --> 00:21:36.600
+uh code has been shown to improve kind
+
+00:21:33.960 --> 00:21:38.080
+of the reasoning abilities of LLMs and
+
+00:21:36.600 --> 00:21:41.640
+because of that people include code for
+
+00:21:38.080 --> 00:21:43.440
+that purpose so yeah um it's not that
+
+00:21:41.640 --> 00:21:45.600
+LLMs are inherently good at code or
+
+00:21:43.440 --> 00:21:48.840
+anything it's that they have lots of
+
+00:21:45.600 --> 00:21:51.640
+lots of code training data and I'll I'll explain
+
+00:21:48.840 --> 00:21:54.279
+exactly how they construct this
+
+00:21:51.640 --> 00:21:57.200
+data set and actually if you remember last
+
+00:21:54.279 --> 00:21:59.640
+time uh I talked about the Pile which
+
+00:21:57.200 --> 00:22:01.039
+was or not last time but uh when I
+
+00:21:59.640 --> 00:22:03.159
+talked about the tour of large language
+
+00:22:01.039 --> 00:22:06.360
+models I talked about the Pile and the
+
+00:22:03.159 --> 00:22:09.799
+Pile is almost half code for
+
+00:22:06.360 --> 00:22:12.000
+example cool any other
+
+00:22:09.799 --> 00:22:17.240
+questions
+
+00:22:12.000 --> 00:22:19.320
+okay so another uh a first improvement
+
+00:22:17.240 --> 00:22:22.080
+or at least change that we can make to
+
+00:22:19.320 --> 00:22:23.880
+HumanEval is uh going to broader
+
+00:22:22.080 --> 00:22:26.720
+domains and covering a broader variety
+
+00:22:23.880 --> 00:22:28.559
+of libraries and this is a data set that
+
+00:22:26.720 --> 00:22:30.880
+we created actually a long time ago but
+
+00:22:28.559 --> 00:22:33.799
+we recently added execution-based
+
+00:22:30.880 --> 00:22:36.159
+evaluation to it it's called CoNaLa and
+
+00:22:33.799 --> 00:22:36.919
+the execution-based uh evaluation one is
+
+00:22:36.159 --> 00:22:40.360
+called
+
+00:22:36.919 --> 00:22:43.039
+ODEX and basically what we did here is
+
+00:22:40.360 --> 00:22:45.720
+we scraped data from Stack Overflow
+
+00:22:43.039 --> 00:22:48.039
+including uh inputs and output uh
+
+00:22:45.720 --> 00:22:50.559
+solutions and then based on this scraped
+
+00:22:48.039 --> 00:22:54.240
+data we uh did some manual curation to
+
+00:22:50.559 --> 00:22:57.640
+turn these into like actual questions um
+
+00:22:54.240 --> 00:22:59.640
+and answers about how you could uh
+
+00:22:57.640 --> 00:23:01.799
+solve programming
+
+00:22:59.640 --> 00:23:04.080
+problems and
+
+00:23:01.799 --> 00:23:05.600
+um because this is scraped from Stack
+
+00:23:04.080 --> 00:23:09.159
+Overflow there's no restriction that
+
+00:23:05.600 --> 00:23:10.520
+this is from the Python standard library
+
+00:23:09.159 --> 00:23:13.200
+which also means that it can cover a
+
+00:23:10.520 --> 00:23:14.919
+very wide variety of libraries and it's
+
+00:23:13.200 --> 00:23:16.760
+approximately distributed according to the
+
+00:23:14.919 --> 00:23:20.320
+popularity of the libraries because we
+
+00:23:16.760 --> 00:23:24.159
+took popular posts so um that's a a good
+
+00:23:20.320 --> 00:23:25.400
+thing uh you know it it is a reasonable
+
+00:23:24.159 --> 00:23:26.559
+way to come up with a realistic
+
+00:23:25.400 --> 00:23:29.520
+distribution of libraries that you
+
+00:23:26.559 --> 00:23:31.799
+should be looking at um ODEX adds
+
+00:23:29.520 --> 00:23:34.159
+execution-based evaluation previously
+
+00:23:31.799 --> 00:23:36.679
+what we had was we only had the snippet
+
+00:23:34.159 --> 00:23:40.600
+that was able to solve the problem as
+
+00:23:36.679 --> 00:23:42.360
+opposed to um as opposed to being able
+
+00:23:40.600 --> 00:23:46.880
+to execute unit
+
+00:23:42.360 --> 00:23:49.440
+tests and just to show how this has a
+
+00:23:46.880 --> 00:23:52.000
+broader variety of libraries on the top
+
+00:23:49.440 --> 00:23:53.919
+we have the distribution of ODEX
+
+00:23:52.000 --> 00:23:57.320
+libraries and we can see about half of
+
+00:23:53.919 --> 00:23:59.600
+them use libraries and this includes a
+
+00:23:57.320 --> 00:24:01.279
+variety of things including pandas
+
+00:23:59.600 --> 00:24:04.799
+numpy
+
+00:24:01.279 --> 00:24:06.400
+um regex os collections you know all of
+
+00:24:04.799 --> 00:24:09.279
+these should be libraries that look
+
+00:24:06.400 --> 00:24:14.559
+familiar to you um in contrast if we
+
+00:24:09.279 --> 00:24:17.200
+look at HumanEval HumanEval is right
+
+00:24:14.559 --> 00:24:18.840
+here so you can see almost all of the
+
+00:24:17.200 --> 00:24:20.600
+questions require no libraries and all
+
+00:24:18.840 --> 00:24:22.120
+of the other ones require libraries that
+
+00:24:20.600 --> 00:24:24.360
+are included in the Python standard
+
+00:24:22.120 --> 00:24:27.640
+library so
+
+00:24:24.360 --> 00:24:29.120
+um in reality this is probably more what
+
+00:24:27.640 --> 00:24:30.120
+your programming queries are going to
+
+00:24:29.120 --> 00:24:31.240
+look like they're not going to look like
+
+00:24:30.120 --> 00:24:33.600
+LeetCode they're going to look like
+
+00:24:31.240 --> 00:24:33.600
+using
+
+00:24:35.360 --> 00:24:42.080
+APIs so um originally when we did CoNaLa
+
+00:24:40.039 --> 00:24:44.200
+we didn't use execution-based evaluation
+
+00:24:42.080 --> 00:24:47.480
+because creating unit tests uh for lots
+
+00:24:44.200 --> 00:24:51.360
+of Stack Overflow posts is hard
+
+00:24:47.480 --> 00:24:53.640
+um specifically there's two issues the
+
+00:24:51.360 --> 00:24:55.000
+first one is that it requires that code
+
+00:24:53.640 --> 00:24:58.880
+be easily
+
+00:24:55.000 --> 00:25:02.320
+executable um now think about
+
+00:24:58.880 --> 00:25:04.559
+how you would do that for matplotlib
+
+00:25:02.320 --> 00:25:06.200
+for example how would you create a unit
+
+00:25:04.559 --> 00:25:08.080
+test to test whether matplotlib
+
+00:25:06.200 --> 00:25:10.760
+successfully created a bar chart for
+
+00:25:08.080 --> 00:25:12.440
+something it's kind of tough right you
+
+00:25:10.760 --> 00:25:13.840
+like you would have to get the image and
+
+00:25:12.440 --> 00:25:16.919
+you'd have to confirm that the image was
+
+00:25:13.840 --> 00:25:21.200
+a bar chart and uh other things like that
+
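+[Note: a sketch of one way you might approximate a unit test for the matplotlib
+case above, by inspecting the Axes object rather than the rendered pixels. This
+is an assumption about how such a test could be written, not from the lecture.]
+
+import matplotlib
+matplotlib.use("Agg")  # headless backend so the test runs without a display
+import matplotlib.pyplot as plt
+
+def make_chart(values):  # hypothetical code under test
+    fig, ax = plt.subplots()
+    ax.bar(range(len(values)), values)
+    return ax
+
+ax = make_chart([3, 1, 4])
+heights = [p.get_height() for p in ax.patches]  # one Rectangle per bar
+assert heights == [3, 1, 4]  # checks the data, not that it "looks right"
+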
+00:25:16.919 --> 00:25:22.720
+um even worse what if it was uh
+
+00:25:21.200 --> 00:25:25.600
+kind of like a server framework like
+
+00:25:22.720 --> 00:25:27.440
+Django how would you confirm that a Django
+
+00:25:25.600 --> 00:25:30.559
+you know server is working appropriately
+
+00:25:27.440 --> 00:25:32.600
+and that's kind of tricky so um actually
+
+00:25:30.559 --> 00:25:34.480
+coming up with realistic unit tests for
+
+00:25:32.600 --> 00:25:36.919
+real programs can be
+
+00:25:34.480 --> 00:25:38.840
+difficult um another problem with
+
+00:25:36.919 --> 00:25:41.640
+execution-based evaluation is it ignores
+
+00:25:38.840 --> 00:25:45.320
+stylistic considerations so I could
+
+00:25:41.640 --> 00:25:48.279
+write very spaghetti-like very spaghetti
+
+00:25:45.320 --> 00:25:50.200
+code and as long as it executed properly
+
+00:25:48.279 --> 00:25:52.559
+it would still be judged as correct and
+
+00:25:50.200 --> 00:25:54.399
+sometimes that's actually an issue so
+
+00:25:52.559 --> 00:25:56.360
+usually it's not a problem because
+
+00:25:54.399 --> 00:25:58.600
+language models write reasonably good
+
+00:25:56.360 --> 00:26:00.600
+code but sometimes you want to match the style
+
+00:25:58.600 --> 00:26:05.039
+of an existing code base or other things like that
+
+00:26:00.600 --> 00:26:06.559
+so some alternatives are BLEU score
+
+00:26:05.039 --> 00:26:09.000
+which we've talked about before it's
+
+00:26:06.559 --> 00:26:12.679
+basically calculating the n-gram
+
+00:26:09.000 --> 00:26:16.919
+overlap between a gold-standard human uh
+
+00:26:12.679 --> 00:26:20.440
+implementation and uh the system
+
+00:26:16.919 --> 00:26:24.000
+output and there's also specifically
+
+00:26:20.440 --> 00:26:26.480
+adapted methods for evaluating code and
+
+00:26:24.000 --> 00:26:29.080
+so there's a method called CodeBLEU and
+
+00:26:26.480 --> 00:26:31.360
+basically the way CodeBLEU works is it
+
+00:26:29.080 --> 00:26:35.240
+also considers the syntax and semantic
+
+00:26:31.360 --> 00:26:37.080
+flow of the code so it measures overlap
+
+00:26:35.240 --> 00:26:40.120
+between
+
+00:26:37.080 --> 00:26:42.120
+strings in the original code but it also
+
+00:26:40.120 --> 00:26:48.640
+considers overlap between the syntax
+
+00:26:42.120 --> 00:26:53.000
+trees of the code and uh whether the
+
+00:26:48.640 --> 00:26:56.320
+um these like semantic information flow
+
+00:26:53.000 --> 00:26:57.919
+graphs look similar so uh all all of
+
+00:26:56.320 --> 00:26:59.440
+these things work together to calculate
+
+00:26:57.919 --> 00:27:02.720
+the CodeBLEU
+
+00:26:59.440 --> 00:27:04.480
+score one thing I I should mention is how
+
+00:27:02.720 --> 00:27:06.840
+do we get these syntax trees in the
+
+00:27:04.480 --> 00:27:09.039
+first place um for example if we're
+
+00:27:06.840 --> 00:27:12.919
+talking about Python there's a Python
+
+00:27:09.039 --> 00:27:14.760
+library uh for abstract syntax trees
+
+00:27:12.919 --> 00:27:16.559
+it's just part of the standard library
+
+00:27:14.760 --> 00:27:18.320
+and it's necessary to run the Python
+
+00:27:16.559 --> 00:27:20.559
+interpreter so you can just get these
+
+00:27:18.320 --> 00:27:24.320
+trees directly from the Python ast
+
+00:27:20.559 --> 00:27:25.880
+library uh not hard to do uh for this I
+
+00:27:24.320 --> 00:27:27.840
+forget what they did in the CodeBLEU
+
+00:27:25.880 --> 00:27:30.679
+thing but there are uh analyzers that
+
+00:27:27.840 --> 00:27:32.120
+allow you to analyze the control flow so
+
+00:27:30.679 --> 00:27:34.159
+this is taking advantage of the fact
+
+00:27:32.120 --> 00:27:37.440
+that code is you know predictable it has
+
+00:27:34.159 --> 00:27:41.480
+predictable syntax and you can you
+
+00:27:37.440 --> 00:27:43.960
+can parse it
+
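+[Note: getting a syntax tree from the standard library's ast module, as just
+described. Counting node types is a toy stand-in for the kind of structural
+information a CodeBLEU-style metric can match on, beyond surface n-grams.]
+
+import ast
+from collections import Counter
+
+src = "def f(xs):\n    return sum(x * x for x in xs)\n"
+tree = ast.parse(src)
+
+# Count node types; two programs can then be compared on tree structure
+# rather than on raw tokens.
+print(Counter(type(node).__name__ for node in ast.walk(tree)))
+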
+um one disadvantage of BLEU and
+
+00:27:41.480 --> 00:27:45.799
+CodeBLEU of course is that you know you
+
+00:27:43.960 --> 00:27:47.679
+can write two very different looking
+
+00:27:45.799 --> 00:27:49.559
+programs that actually are both correct
+
+00:27:47.679 --> 00:27:51.799
+and BLEU will underestimate the goodness
+
+00:27:49.559 --> 00:27:54.440
+of those programs so maybe using both of
+
+00:27:51.799 --> 00:27:57.159
+them together is uh is
+
+00:27:54.440 --> 00:28:00.120
+appropriate uh if if you can write unit
+
+00:27:57.159 --> 00:28:00.120
+tests please do
+
+00:28:00.559 --> 00:28:04.279
+um another one which I'll just cover
+
+00:28:02.600 --> 00:28:05.399
+very briefly we talked about BERTScore
+
+00:28:04.279 --> 00:28:08.159
+before when I was talking about
+
+00:28:05.399 --> 00:28:11.120
+evaluation of uh you know generated text
+
+00:28:08.159 --> 00:28:13.480
+and there's also CodeBERTScore which um
+
+00:28:11.120 --> 00:28:15.799
+we uh we created here at
+
+00:28:13.480 --> 00:28:20.080
+CMU and it's basically an embedding
+
+00:28:15.799 --> 00:28:21.760
+based metric uh to compare code and so
+
+00:28:20.080 --> 00:28:23.399
+BERTScore if you remember basically
+
+00:28:21.760 --> 00:28:25.679
+what it did is it calculated the cosine
+
+00:28:23.399 --> 00:28:27.840
+similarity between each of the tokens uh
+
+00:28:25.679 --> 00:28:30.159
+between a generated text and a reference
+
+00:28:27.840 --> 00:28:34.279
+text we do exactly the same thing for
+
+00:28:30.159 --> 00:28:36.080
+code um so we calculate the cosine
+
+00:28:34.279 --> 00:28:39.200
+similarity between tokens for a
+
+00:28:36.080 --> 00:28:42.960
+reference code and generated
+
+00:28:39.200 --> 00:28:45.000
+code and we released a model called
+
+00:28:42.960 --> 00:28:46.559
+CodeBERT which was basically BERT but
+
+00:28:45.000 --> 00:28:49.440
+continued trained on lots and lots of
+
+00:28:46.559 --> 00:28:51.840
+code uh that allowed us to do that and
+
+00:28:49.440 --> 00:28:55.480
+um basically we were able to demonstrate
+
+00:28:51.840 --> 00:28:59.200
+that this gave better correlation both
+
+00:28:55.480 --> 00:29:01.480
+with final execution accuracy and with
+
+00:28:59.200 --> 00:29:05.200
+human judgments of whether the the code
+
+00:29:01.480 --> 00:29:08.000
+was correct and so um some people uh
+
+00:29:05.200 --> 00:29:09.559
+created a data set of human correctness
+
+00:29:08.000 --> 00:29:12.559
+judgments and we were able to correlate a
+
+00:29:09.559 --> 00:29:14.240
+little better with that as well um why
+
+00:29:12.559 --> 00:29:15.640
+do we care about correlation with
+
+00:29:14.240 --> 00:29:17.399
+execution
+
+00:29:15.640 --> 00:29:20.200
+accuracy
+
+00:29:17.399 --> 00:29:22.320
+um this is important in the cases when
+
+00:29:20.200 --> 00:29:23.559
+we can't create unit tests or when
+
+00:29:22.320 --> 00:29:26.120
+creating unit tests would be too
+
+00:29:23.559 --> 00:29:27.519
+expensive so this gives us a better
+
+00:29:26.120 --> 00:29:30.640
+approximation for what we would get if
+
+00:29:27.519 --> 00:29:30.640
+we ran tests
+
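+[Note: a toy version of the BERTScore-style computation just described:
+greedy-match token embeddings by cosine similarity, then combine precision and
+recall into an F1. Real CodeBERTScore uses embeddings from a code-pretrained
+model; the random vectors here only demonstrate the arithmetic.]
+
+import numpy as np
+
+def f1_score(ref_emb, hyp_emb):
+    ref = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
+    hyp = hyp_emb / np.linalg.norm(hyp_emb, axis=1, keepdims=True)
+    sim = hyp @ ref.T                   # cosine sims: hyp tokens x ref tokens
+    precision = sim.max(axis=1).mean()  # best reference match per hyp token
+    recall = sim.max(axis=0).mean()     # best hypothesis match per ref token
+    return 2 * precision * recall / (precision + recall)
+
+rng = np.random.default_rng(0)
+print(f1_score(rng.normal(size=(6, 32)), rng.normal(size=(5, 32))))
+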
+00:29:39.840 --> 00:29:45.000
+yeah so we did not we did not
+
+00:29:42.600 --> 00:29:46.799
+consider code structure here uh would
+
+00:29:45.000 --> 00:29:48.480
+different variable names affect it yes
+
+00:29:46.799 --> 00:29:50.159
+different variable names would affect it
+
+00:29:48.480 --> 00:29:51.799
+but not as much as the other metrics
+
+00:29:50.159 --> 00:29:53.960
+which is why it's better why it has
+
+00:29:51.799 --> 00:29:56.720
+better
+
+00:29:53.960 --> 00:30:00.000
+correlations and like for example
+
+00:29:56.720 --> 00:30:03.679
+CodeBERT I imagine probably gives very
+
+00:30:00.000 --> 00:30:05.120
+similar representations to i and j just
+
+00:30:03.679 --> 00:30:07.960
+because they're both used in iterators
+
+00:30:05.120 --> 00:30:09.039
+all the time whereas uh a normal BERT
+
+00:30:07.960 --> 00:30:10.960
+model would give very different
+
+00:30:09.039 --> 00:30:12.760
+representations to i and j right because
+
+00:30:10.960 --> 00:30:14.960
+I is like a personal pronoun and j is
+
+00:30:12.760 --> 00:30:17.200
+not so um that's the reason why
+
+00:30:14.960 --> 00:30:20.399
+continued training would
+
+00:30:17.200 --> 00:30:24.799
+help cool any other
+
+00:30:20.399 --> 00:30:26.640
+things okay so another um another place
+
+00:30:24.799 --> 00:30:29.480
+where code generation can be useful uh
+
+00:30:26.640 --> 00:30:33.440
+we had the example of Colab uh is in
+
+00:30:29.480 --> 00:30:36.200
+Colab notebooks and this or in uh data
+
+00:30:33.440 --> 00:30:38.519
+science notebooks this paper was by uh
+
+00:30:36.200 --> 00:30:41.440
+Google so this might actually even be
+
+00:30:38.519 --> 00:30:43.960
+used in the Colab thing because Colab
+
+00:30:41.440 --> 00:30:45.640
+is a Google thing um but data data
+
+00:30:43.960 --> 00:30:47.320
+science notebooks allow for incremental
+
+00:30:45.640 --> 00:30:50.519
+implementation I'm sure a lot of people
+
+00:30:47.320 --> 00:30:53.559
+here or almost everybody here uses them
+
+00:30:50.519 --> 00:30:55.279
+um and another interesting thing is they
+
+00:30:53.559 --> 00:30:57.519
+allow for evaluation of code generation
+
+00:30:55.279 --> 00:30:58.960
+in context uh or incremental code
+
+00:30:57.519 --> 00:31:00.639
+generation
+
+00:30:58.960 --> 00:31:02.720
+and so you start out with like a
+
+00:31:00.639 --> 00:31:04.880
+notebook and then you have a natural
+
+00:31:02.720 --> 00:31:06.600
+language command and then you generate the output
+
+00:31:04.880 --> 00:31:09.240
+a natural language command you generate the
+
+00:31:06.600 --> 00:31:10.799
+output etc etc so this is an actual
+
+00:31:09.240 --> 00:31:14.519
+example from the data
+
+00:31:10.799 --> 00:31:17.519
+set um so this paper is very nice it it
+
+00:31:14.519 --> 00:31:20.320
+has a lot of uh you know it's a nice
+
+00:31:17.519 --> 00:31:21.720
+data set one other thing that was really
+
+00:31:20.320 --> 00:31:24.200
+interesting from this paper is it
+
+00:31:21.720 --> 00:31:27.919
+demonstrated the problem of data leakage
+
+00:31:24.200 --> 00:31:29.679
+in evaluating models and this is a
+
+00:31:27.919 --> 00:31:32.440
+relatively large problem I don't know if
+
+00:31:29.679 --> 00:31:33.799
+we have a silver bullet solution for
+
+00:31:32.440 --> 00:31:36.120
+this but it's an important thing to be
+
+00:31:33.799 --> 00:31:38.120
+aware of uh not just for code generation
+
+00:31:36.120 --> 00:31:39.639
+but these are examples from code
+
+00:31:38.120 --> 00:31:43.519
+generation
+
+00:31:39.639 --> 00:31:45.679
+so here um in the ARCADE data set they
+
+00:31:43.519 --> 00:31:48.519
+basically
both evaluated existing
+
+00:31:45.679 --> 00:31:51.720
+notebooks and they evaluated notebooks
+
+00:31:48.519 --> 00:31:53.279
+that um existing notebooks that they got
+
+00:31:51.720 --> 00:31:55.960
+from the web and they evaluated
+
+00:31:53.279 --> 00:31:59.000
+notebooks that they actually created
+
+00:31:55.960 --> 00:32:00.399
+themselves and there's a very very stark
+
+00:31:59.000 --> 00:32:02.600
+difference between the notebooks that
+
+00:32:00.399 --> 00:32:04.440
+were created on the web and the
+
+00:32:02.600 --> 00:32:07.399
+notebooks that they created themselves
+
+00:32:04.440 --> 00:32:10.159
+so like most of the code generation
+
+00:32:07.399 --> 00:32:11.679
+models except for PaLM uh which was the
+
+00:32:10.159 --> 00:32:14.760
+best model when they created this data
+
+00:32:11.679 --> 00:32:17.360
+set did really well
+
+00:32:14.760 --> 00:32:21.120
+on the existing data and quite poorly on
+
+00:32:17.360 --> 00:32:25.279
+the new data um which is probably an
+
+00:32:21.120 --> 00:32:28.159
+indication of um probably an indication
+
+00:32:25.279 --> 00:32:29.720
+of the fact that you know this is to
+
+00:32:28.159 --> 00:32:32.240
+some extent leaked into the training
+
+00:32:29.720 --> 00:32:35.320
+data of the language models there was
+
+00:32:32.240 --> 00:32:37.760
+also a very recent
+
+00:32:35.320 --> 00:32:40.240
+um paper actually I think this might be
+
+00:32:37.760 --> 00:32:43.159
+2024 there was a very recent paper that
+
+00:32:40.240 --> 00:32:45.880
+did a similar thing uh where they
+
+00:32:43.159 --> 00:32:48.440
+evaluated on HumanEval and then on their
+
+00:32:45.880 --> 00:32:52.000
+LiveCodeBench in LiveCodeBench
+
+00:32:48.440 --> 00:32:55.639
+basically what they did is they tried to
+
+00:32:52.000 --> 00:32:58.519
+pick problems from LeetCode and other
+
+00:32:55.639 --> 00:33:00.519
+websites that were more recent versus
+
+00:32:58.519 --> 00:33:01.960
+less recent and they have some really
+
+00:33:00.519 --> 00:33:04.880
+nice graphs in their paper where they
+
+00:33:01.960 --> 00:33:06.519
+demonstrate that the less recent ones
+
+00:33:04.880 --> 00:33:08.159
+before the training cutoff have like a
+
+00:33:06.519 --> 00:33:10.080
+high accuracy and then suddenly it drops
+
+00:33:08.159 --> 00:33:12.639
+right at the training cutoff of the
+
+00:33:10.080 --> 00:33:13.480
+models so this is something to to be
+
+00:33:12.639 --> 00:33:17.360
+aware
+
+00:33:13.480 --> 00:33:20.519
+of and what this figure is showing here
+
+00:33:17.360 --> 00:33:24.039
+is this figure is showing on the x-axis
+
+00:33:20.519 --> 00:33:26.840
+pass@1 on the LiveCodeBench easy
+
+00:33:24.039 --> 00:33:28.679
+set and then on the y-axis pass@1 on HumanEval so we
+
+00:33:26.840 --> 00:33:31.480
+see this nice
+
+00:33:28.679 --> 00:33:34.039
+correlation between
+
+00:33:31.480 --> 00:33:35.919
+essentially like passing on LiveCode-
+
+00:33:34.039 --> 00:33:37.399
+Bench easy and passing on HumanEval
+
+00:33:35.919 --> 00:33:40.000
+then we have this group of models that
+
+00:33:37.399 --> 00:33:42.159
+are kind of like up here and these are
+
+00:33:40.000 --> 00:33:43.960
+ones where basically it's likely that
+
+00:33:42.159 --> 00:33:46.480
+HumanEval leaked into the training data
+
+00:33:43.960 --> 00:33:48.840
+because they're getting better scores on
+
+00:33:46.480 --> 00:33:50.919
+HumanEval than you would expect that
+
+00:33:48.840 --> 00:33:53.360
+they get uh you know just looking at
+00:33:50.919 --> 00:33:55.360
+their uh you know performance on another
+
+00:33:53.360 --> 00:33:57.320
+data set there's also a nice like
+
+00:33:55.360 --> 00:34:00.000
+analogous one for math reasoning
+
+00:33:57.320 --> 00:34:01.519
+problems um like this so this is
+
+00:34:00.000 --> 00:34:03.039
+definitely something to be aware of if
+
+00:34:01.519 --> 00:34:04.559
+you're looking only at like very
+
+00:34:03.039 --> 00:34:06.200
+standard benchmarks that people are
+
+00:34:04.559 --> 00:34:11.159
+training on
+
+00:34:06.200 --> 00:34:11.159
+cool um any questions about
+
+00:34:12.119 --> 00:34:19.240
+this okay um another data set uh that I
+
+00:34:17.720 --> 00:34:20.599
+I really like the concept of and
+
+00:34:19.240 --> 00:34:22.919
+recently it's gotten a little bit of
+
+00:34:20.599 --> 00:34:25.399
+buzz because it was used in a um an
+
+00:34:22.919 --> 00:34:28.399
+evaluation of a new coding assistant
+
+00:34:25.399 --> 00:34:30.480
+called Devin but this is um
+
+00:34:28.399 --> 00:34:32.240
+something called SWE-bench and it's issues
+
+00:34:30.480 --> 00:34:34.639
+from GitHub and code
+
+00:34:32.240 --> 00:34:37.119
+bases uh as the input and you want to
+
+00:34:34.639 --> 00:34:39.480
+generate a pull request to basically uh
+
+00:34:37.119 --> 00:34:42.919
+solve these issues and so your input is
+
+00:34:39.480 --> 00:34:45.800
+like data leak in GBDT due to warm start
+
+00:34:42.919 --> 00:34:48.800
+this is about the non-histogram-based version then you have
+
+00:34:45.800 --> 00:34:51.159
+the code base um it generates a PR for
+
+00:34:48.800 --> 00:34:53.079
+you and then it's run through the unit
+
+00:34:51.159 --> 00:34:55.919
+tests to see if it passes all the unit
+
+00:34:53.079 --> 00:34:57.160
+tests post-PR so it's very similar to
+
+00:34:55.919 --> 00:34:59.240
+you know what you would be doing in a
+
+00:34:57.160 --> 00:35:01.280
+well-maintained software project you open an
+
+00:34:59.240 --> 00:35:05.240
+issue and then you open a pull request
+
+00:35:01.280 --> 00:35:07.800
+to fix an issue um this requires things
+
+00:35:05.240 --> 00:35:10.240
+like long-context understanding um being
+
+00:35:07.800 --> 00:35:13.200
+able to do very precise implementations
+
+00:35:10.240 --> 00:35:14.720
+based on large software projects and
+
+00:35:13.200 --> 00:35:17.920
+right now the state-of-the-art on this
+
+00:35:14.720 --> 00:35:20.680
+is at about 14% so it's definitely not a
+
+00:35:17.920 --> 00:35:23.119
+solved problem at all um in the original
+
+00:35:20.680 --> 00:35:27.920
+paper uh the the state-of-the-art method
+
+00:35:23.119 --> 00:35:29.400
+was like 6% or something like that so um
+
+00:35:27.920 --> 00:35:32.079
+I imagine that we're not going to get up
+
+00:35:29.400 --> 00:35:33.880
+to 90% anytime soon because it's
+
+00:35:32.079 --> 00:35:35.720
+probably solving the easier ones and the
+
+00:35:33.880 --> 00:35:37.280
+harder ones are you know far beyond the
+
+00:35:35.720 --> 00:35:39.920
+ability of any language model we have at
+
+00:35:37.280 --> 00:35:42.320
+the moment um but I I really like this
+
+00:35:39.920 --> 00:35:43.960
+benchmark one caveat if you really like
+
+00:35:42.320 --> 00:35:45.520
+this benchmark is that it's kind of
+
+00:35:43.960 --> 00:35:47.760
+heavy to run so you need to be a little
+
+00:35:45.520 --> 00:35:51.000
+bit careful uh because you need to pull
+
+00:35:47.760 --> 00:35:54.280
+in like full repositories to um to run
+
+00:35:51.000 --> 00:35:56.319
+on so yeah be a little
+00:35:54.280 --> 00:35:57.920
+bit careful sorry there's so many like
+
+00:35:56.319 --> 00:35:59.640
+interesting data sets recently in this
+
+00:35:57.920 --> 00:36:01.079
+area that I I spent a lot of time on
+
+00:35:59.640 --> 00:36:04.240
+data sets so I'll try to go a little bit
+
+00:36:01.079 --> 00:36:06.200
+more quickly but um uh a final one is
+
+00:36:04.240 --> 00:36:09.359
+Design2Code and this is also a very
+
+00:36:06.200 --> 00:36:11.520
+recent data set um basically the idea is
+
+00:36:09.359 --> 00:36:16.359
+code generation from websites so your
+
+00:36:11.520 --> 00:36:18.119
+input is a website and your output is uh
+
+00:36:16.359 --> 00:36:22.520
+like JavaScript code that implements
+
+00:36:18.119 --> 00:36:24.960
+that website or CSS or HTML code
+
+00:36:22.520 --> 00:36:26.880
+that implements the website so I I
+
+00:36:24.960 --> 00:36:30.119
+really like this because you know it's a
+
+00:36:26.880 --> 00:36:32.280
+good test bed for multimodal models and
+
+00:36:30.119 --> 00:36:34.040
+there aren't a whole lot of strong open
+
+00:36:32.280 --> 00:36:36.160
+source multimodal models that can solve
+
+00:36:34.040 --> 00:36:36.960
+this at the moment so I think it's kind
+
+00:36:36.160 --> 00:36:39.720
+of
+
+00:36:36.960 --> 00:36:41.480
+cool um they also proposed a Design2Code
+
+00:36:39.720 --> 00:36:43.480
+model that does the best on this
+
+00:36:41.480 --> 00:36:47.119
+data set out of uh you know any of the
+
+00:36:43.480 --> 00:36:47.119
+open-source models but it's still far
+
+00:36:47.400 --> 00:36:53.040
+from perfect and then the question becomes how
+
+00:36:50.680 --> 00:36:56.079
+do they um evaluate this in the first
+
+00:36:53.040 --> 00:36:59.440
+place and basically the idea is that
+
+00:36:56.079 --> 00:37:01.400
+they do high-level visual similarity and so
+
+00:36:59.440 --> 00:37:03.920
+they calculate visual embeddings of the
+
+00:37:01.400 --> 00:37:06.119
+generated sites and then they also do
+
+00:37:03.920 --> 00:37:08.240
+low-level element similarity so they try to
+
+00:37:06.119 --> 00:37:10.440
+identify all of the elements in the
+
+00:37:08.240 --> 00:37:12.119
+generated web page and make sure that uh
+
+00:37:10.440 --> 00:37:15.720
+they recall all of the generated
+
+00:37:12.119 --> 00:37:18.760
+elements so um I think this is nice one
+
+00:37:15.720 --> 00:37:21.000
+thing if you notice um if you use even
+
+00:37:18.760 --> 00:37:25.960
+state-of-the-art like closed models like
+
+00:37:21.000 --> 00:37:28.040
+Claude 3 or um GPT-4 is they're really bad
+
+00:37:25.960 --> 00:37:29.440
+at this recall they can generate
+
+00:37:28.040 --> 00:37:31.800
+something that looks like maybe a little
+
+00:37:29.440 --> 00:37:33.839
+bit similar but it will be missing like
+
+00:37:31.800 --> 00:37:35.720
+the elements the design will be off you
+
+00:37:33.839 --> 00:37:37.720
+know other stuff like that so I think
+
+00:37:35.720 --> 00:37:41.079
+even in the closed like strong models
+
+00:37:37.720 --> 00:37:41.079
+this is not a solved
+
+00:37:41.319 --> 00:37:47.079
+problem cool uh
+
+00:37:45.000 --> 00:37:49.880
+yeah
+
+00:37:47.079 --> 00:37:51.880
+[question] um so why is that a hard problem
+
+00:37:49.880 --> 00:37:54.200
+for the models I don't actually have a
+
+00:37:51.880 --> 00:37:57.200
+really confident answer to that but I
+
+00:37:54.200 --> 00:37:57.200
+think
+
+00:38:00.240 --> 00:38:05.200
+so one thing I can tell you is that they
+
+00:38:02.839 --> 00:38:08.839
+are able to
+
+00:38:05.200 --> 00:38:12.000
+improve um so they're able to generate
+
+00:38:08.839 --> 00:38:14.720
+something and then I say no that's bad
+
+00:38:12.000 --> 00:38:16.160
+please like make it better and it's
+
+00:38:14.720 --> 00:38:17.800
+generally better the second time
+
+00:38:16.160 --> 00:38:19.920
+especially if you give specific things
+
+00:38:17.800 --> 00:38:22.319
+like oh uh but the background on the
+
+00:38:19.920 --> 00:38:25.160
+generated site is white but actually it
+
+00:38:22.319 --> 00:38:27.599
+should be black and if you think about
+
+00:38:25.160 --> 00:38:31.480
+like even a skilled human programmer do
+
+00:38:27.599 --> 00:38:35.119
+you think you could write like website
+
+00:38:31.480 --> 00:38:37.680
+code and then view it once and then it
+
+00:38:35.119 --> 00:38:40.319
+would be correct I think you probably
+
+00:38:37.680 --> 00:38:42.160
+couldn't right and so like we're asking
+
+00:38:40.319 --> 00:38:44.040
+models to do essentially the same thing
+
+00:38:42.160 --> 00:38:46.920
+except they're like even worse than us
+
+00:38:44.040 --> 00:38:48.560
+and you know keeping track of all the
+
+00:38:46.920 --> 00:38:50.720
+visual elements and stuff so I think
+
+00:38:48.560 --> 00:38:52.480
+it's more like this problem probably
+
+00:38:50.720 --> 00:38:54.720
+just needs iterative refinement
+
+00:38:52.480 --> 00:38:58.839
+otherwise it's like asking too much of a
+
+00:38:54.720 --> 00:39:02.640
+model maybe I don't know
+
+00:38:58.839 --> 00:39:04.520
+cool okay so um let's go into methods
+
+00:39:02.640 --> 00:39:06.920
+and code generation has some unique
+
+00:39:04.520 --> 00:39:09.400
+things um the basic method that you can
+
+00:39:06.920 --> 00:39:11.240
+always use is a code-generating LM and
+
+00:39:09.400 --> 00:39:13.040
+so you feed in previous code or you feed
+
+00:39:11.240 --> 00:39:16.040
+in whatever context you have into the LM
+
+00:39:13.040 --> 00:39:18.079
+and you generate um uh from it and
+
+00:39:16.040 --> 00:39:20.079
+virtually all serious LMs are trained on
+
+00:39:18.079 --> 00:39:23.079
+code nowadays like I I just mentioned
+
+00:39:20.079 --> 00:39:23.079
+before
+
+00:39:23.119 --> 00:39:29.920
+um one one important thing here is uh
+
+00:39:28.560 --> 00:39:31.240
+when you're generating if you're
+
+00:39:29.920 --> 00:39:33.040
+generating for something like code
+
+00:39:31.240 --> 00:39:34.480
+generation I definitely suggest that you
+
+00:39:33.040 --> 00:39:36.119
+modify your temperature settings
+
+00:39:34.480 --> 00:39:38.359
+appropriately and set it to a low
+
+00:39:36.119 --> 00:39:42.160
+temperature um otherwise you'll get kind
+
+00:39:38.359 --> 00:39:45.079
+of crazy uh code but if you set it to a
+
+00:39:42.160 --> 00:39:45.079
+low temperature you can get
+
+00:39:46.440 --> 00:39:52.160
+better results anyway um one really core
+
+00:39:49.640 --> 00:39:54.240
+capability of code LMs especially ones
+
+00:39:52.160 --> 00:39:55.599
+that you use in your IDE like uh
+
+00:39:54.240 --> 00:39:58.160
+Copilot is
+
+00:39:55.599 --> 00:40:00.000
+infilling and um
+
+00:39:58.160 --> 00:40:03.680
+the the paper that proposed this is
+
+00:40:00.000 --> 00:40:05.920
+actually by Daniel Fried at LTI here and
+
+00:40:03.680 --> 00:40:09.160
+um
+
+00:40:05.920 --> 00:40:11.240
+basically what you want to do often
+
+00:40:09.160 --> 00:40:13.000
+is you have previous code you have next
+
+00:40:11.240 --> 00:40:14.680
+code and you want to just fill in like a
+
+00:40:13.000 --> 00:40:17.960
+line that's missing like you want to add
+
+00:40:14.680 --> 00:40:19.040
+an extra you know if statement or or
+
+00:40:17.960 --> 00:40:22.720
+some sort of
+
+00:40:19.040 --> 00:40:24.880
+modification and so the way that at
+
+00:40:22.720 --> 00:40:27.000
+least this paper proposed it and the way
+
+00:40:24.880 --> 00:40:29.800
+that I think most LMs are actually doing
+
+00:40:27.000 --> 00:40:30.640
+this is they take a standard left to
+
+00:40:29.800 --> 00:40:33.200
+right
+
+00:40:30.640 --> 00:40:36.040
+LM and what they want to do is they want
+
+00:40:33.200 --> 00:40:39.040
+to infill this code chunk and so what
+
+00:40:36.040 --> 00:40:40.440
+they do is they put a mask in the place
+
+00:40:39.040 --> 00:40:42.119
+where they want to fill the chunk which
+
+00:40:40.440 --> 00:40:46.280
+would also be where your cursor is in
+
+00:40:42.119 --> 00:40:49.960
+your IDE right uh at that point and then
+
+00:40:46.280 --> 00:40:52.680
+they have mask zero and then at the
+
+00:40:49.960 --> 00:40:57.400
+end they put mask zero again and then
+
+00:40:52.680 --> 00:40:59.000
+they output the like you know all of the
+
+00:40:57.400 --> 00:41:01.040
+code that you want to generate there and
+
+00:40:59.000 --> 00:41:02.839
+so you can just kind of arbitrarily
+
+00:41:01.040 --> 00:41:05.480
+generate these chunks by you
+
+00:41:02.839 --> 00:41:07.000
+know masking out chunks uh putting in
+
+00:41:05.480 --> 00:41:08.960
+the mask token and then moving it to the
+
+00:41:07.000 --> 00:41:10.440
+end of the sequence and then you can
+
+00:41:08.960 --> 00:41:13.160
+just use a standard left-to-right auto-
+
+00:41:10.440 --> 00:41:15.359
+regressive language model to solve this
+
+00:41:13.160 --> 00:41:17.040
+problem so this is really important if
+
+00:41:15.359 --> 00:41:18.520
+you want to build like a Copilot-style
+
+00:41:17.040 --> 00:41:20.160
+thing and all of the code language
+
+00:41:18.520 --> 00:41:23.680
+models that I talk about at the end of
+
+00:41:20.160 --> 00:41:23.680
+this class uh use this
+
+00:41:24.800 --> 00:41:30.440
+technique um another thing is there's
+
+00:41:28.160 --> 00:41:33.760
+lots of available information uh for
+
+00:41:30.440 --> 00:41:36.040
+learning coding things um or for solving
+
+00:41:33.760 --> 00:41:38.880
+coding tasks this includes you know the
+
+00:41:36.040 --> 00:41:40.440
+current code context of course um also
+
+00:41:38.880 --> 00:41:41.920
+the description of the issue that you
+
+00:41:40.440 --> 00:41:45.160
+want to be fixing like if you're solving
+
+00:41:41.920 --> 00:41:49.240
+a pull request um repo context from
+
+00:41:45.160 --> 00:41:51.880
+other files um what tabs you have open
+
+00:41:49.240 --> 00:41:55.920
+uh so that that's also an important
+
+00:41:51.880 --> 00:41:58.599
+thing and when GitHub Copilot came out
+
+00:41:55.920 --> 00:42:01.960
+they didn't really tell you the details
+
+00:41:58.599 --> 00:42:04.480
+of how they were doing this but um
+
+00:42:01.960 --> 00:42:09.079
+GitHub Copilot is written in JavaScript
+
+00:42:04.480 --> 00:42:11.839
+and uh there was a PhD student I think
+
+00:42:09.079 --> 00:42:14.000
+from maybe Georgia Tech or something uh
+
+00:42:11.839 --> 00:42:16.839
+or master's student who basically went
+
+00:42:14.000 --> 00:42:19.160
+in and took the JavaScript and like de-
+
+00:42:16.839 --> 00:42:21.839
+minified it and like reverse-engineered
+
+00:42:19.160 --> 00:42:23.640
+what was actually happening um and uh
+
+00:42:21.839 --> 00:42:26.680
+wrote a blog about it and this blog is
+
+00:42:23.640 --> 00:42:28.800
+great
+
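+[Note: going back to the infilling scheme described a moment ago, here is a
+minimal sketch of the causal-masking transformation in the spirit of Fried et
+al.'s InCoder paper; the sentinel token names are illustrative assumptions.]
+
+MASK, EOM = "<MASK:0>", "<EOM>"
+
+def to_infill_example(prefix, span, suffix):
+    # Move the masked span to the end so a plain left-to-right LM can learn
+    # to generate it conditioned on both the code before and after it.
+    return f"{prefix}{MASK}{suffix}{MASK}{span}{EOM}"
+
+prefix = "def abs_val(x):\n"
+span = "    if x < 0:\n        return -x\n"  # the chunk under your cursor
+suffix = "    return x\n"
+print(to_infill_example(prefix, span, suffix))
+# At inference time you feed prefix + MASK + suffix + MASK and decode the span.
+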
+uh so basically what uh
+
+00:42:26.680 --> 00:42:32.200
+Copilot was doing which also kind of
+
+00:42:28.800 --> 00:42:33.839
+gives you a gold-standard um way of uh
+
+00:42:32.200 --> 00:42:36.920
+looking
+
+00:42:33.839 --> 00:42:39.440
+at uh you know what kind of information
+
+00:42:36.920 --> 00:42:43.440
+is necessary to create a good model is
+
+00:42:39.440 --> 00:42:45.240
+first they extract um information for
+
+00:42:43.440 --> 00:42:47.400
+the prompt given the current document
+
+00:42:45.240 --> 00:42:49.240
+and the cursor position so they take the
+
+00:42:47.400 --> 00:42:51.720
+current document where is the cursor and
+
+00:42:49.240 --> 00:42:54.640
+what is before this and what is after
+
+00:42:51.720 --> 00:42:56.960
+this um they identify the relative path
+
+00:42:54.640 --> 00:42:59.960
+of the file and what language it's in so
+
+00:42:56.960 --> 00:43:01.760
+they they identify Python files or
+
+00:42:59.960 --> 00:43:04.240
+JavaScript files or
+
+00:43:01.760 --> 00:43:07.440
+whatever they find the most recently
+
+00:43:04.240 --> 00:43:09.800
+accessed 20 files in the same language
+
+00:43:07.440 --> 00:43:12.599
+so like if you've opened 20 tabs they
+
+00:43:09.800 --> 00:43:15.559
+keep track of which tabs you had
+
+00:43:12.599 --> 00:43:18.280
+open um and then the actual prompt that
+
+00:43:15.559 --> 00:43:22.119
+they send over includes text that is
+
+00:43:18.280 --> 00:43:23.640
+before text that's after um similar
+
+00:43:22.119 --> 00:43:26.520
+files out of the 20 files that you've
+
+00:43:23.640 --> 00:43:29.480
+opened recently um also information from
+
+00:43:26.520 --> 00:43:31.760
+imported files and metadata about the
+
+00:43:29.480 --> 00:43:33.079
+language and the path so all of this is
+
+00:43:31.760 --> 00:43:37.079
+sent to the
+
+00:43:33.079 --> 00:43:38.720
+model um and so this is just basically
+
+00:43:37.079 --> 00:43:40.160
+it's really good prompt engineering
+
+00:43:38.720 --> 00:43:41.760
+right they're figuring out a good way to
+
+00:43:40.160 --> 00:43:44.200
+get all of the information that would be
+
+00:43:41.760 --> 00:43:45.680
+useful uh for getting this model to work
+
+00:43:44.200 --> 00:43:49.559
+into the
+
+00:43:45.680 --> 00:43:50.920
+prompt um so there's much much more
+
+00:43:49.559 --> 00:43:52.839
+information in this blog it's a really
+
+00:43:50.920 --> 00:43:57.400
+nice blog if you uh if you want to see
+
+00:43:52.839 --> 00:43:57.400
+about it but um that's the basic idea
+
+00:43:57.640 --> 00:44:00.240
+any any
+
+00:44:01.240 --> 00:44:07.160
+questions okay
+
+00:44:03.520 --> 00:44:11.240
+cool yeah is this just what gets sent
+
+00:44:07.160 --> 00:44:13.520
+over to the Copilot server or does
+
+00:44:11.240 --> 00:44:15.240
+Copilot this is what gets sent over to
+
+00:44:13.520 --> 00:44:17.920
+the Copilot server but the way they're
+
+00:44:15.240 --> 00:44:20.960
+sending it makes me guess that like all
+
+00:44:17.920 --> 00:44:22.839
+of this is read so like they also are
+
+00:44:20.960 --> 00:44:24.559
+considering I didn't mention it here but
+
+00:44:22.839 --> 00:44:26.000
+they're considering the token limit and
+
+00:44:24.559 --> 00:44:27.599
+other stuff like that so that kind of
+
+00:44:26.000 --> 00:44:30.760
+makes me feel like this is
+
+00:44:27.599 --> 00:44:30.760
+actually the
+
+00:44:32.240 --> 00:44:38.440
+prompt uh cool
+
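+[Note: a rough sketch of the kind of prompt assembly the reverse-engineered
+blog describes; every name and the character-based budget here are
+simplifications and assumptions, not Copilot's actual code.]
+
+def build_prompt(doc_text, cursor, path, language, similar_snippets,
+                 budget=6000):
+    before, after = doc_text[:cursor], doc_text[cursor:]
+    header = f"# Path: {path}\n# Language: {language}\n"
+    # similar_snippets: extracts from the ~20 most recently opened
+    # same-language files, ranked by similarity to the current context.
+    context = "\n".join(similar_snippets)
+    prompt = (header + context + "\n" + before)[-budget:]
+    return prompt, after  # the suffix is sent too, for infilling
+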
+00:44:35.359 --> 00:44:41.040
+so another uh thing that you can do is
+
+00:44:38.440 --> 00:44:42.520
+retrieval-based code generation and
+
+00:44:41.040 --> 00:44:45.640
+retrieval-based code
+
+00:44:42.520 --> 00:44:47.599
+generation uh basically what it does is
+
+00:44:45.640 --> 00:44:50.920
+it's like RAG for code
+
+00:44:47.599 --> 00:44:53.240
+generation um and this has been around
+
+00:44:50.920 --> 00:44:55.640
+for a while including our work that I
+
+00:44:53.240 --> 00:44:57.680
+cited here and a few more in in
+
+00:44:55.640 --> 00:44:59.960
+2018 um
+
+00:44:57.680 --> 00:45:03.000
+and so one way you can do this is you
+
+00:44:59.960 --> 00:45:07.160
+can retrieve similar code from online
+
+00:45:03.000 --> 00:45:09.720
+and then use it to basically prompt a
+
+00:45:07.160 --> 00:45:11.920
+retrieval-augmented language model uh
+
+00:45:09.720 --> 00:45:14.480
+this is good if you have a model that's
+
+00:45:11.920 --> 00:45:16.920
+not super good at code in the first
+
+00:45:14.480 --> 00:45:19.920
+place or you know it's making mistakes
+
+00:45:16.920 --> 00:45:21.680
+it's also good if you have a large code
+
+00:45:19.920 --> 00:45:23.040
+base that's internal and you
+
+00:45:21.680 --> 00:45:24.200
+know the language model was not trained
+
+00:45:23.040 --> 00:45:26.359
+on it but you still want to use that
+
+00:45:24.200 --> 00:45:27.559
+code base for code generation so it's
+
+00:45:26.359 --> 00:45:29.599
+really good if you're working at like a
+
+00:45:27.559 --> 00:45:32.160
+big company for example that has a very
+
+00:45:29.599 --> 00:45:33.319
+consistent coding style but hasn't trained
+
+00:45:32.160 --> 00:45:37.160
+its own
+
+00:45:33.319 --> 00:45:39.720
+LM um also particularly in code there's
+
+00:45:37.160 --> 00:45:43.559
+also documentation uh which can be
+
+00:45:39.720 --> 00:45:46.920
+retrieved and so we have new libraries
+
+00:45:43.559 --> 00:45:51.359
+all the time right and one frustrating
+
+00:45:46.920 --> 00:45:53.119
+thing when using like uh ChatGPT or Claude
+
+00:45:51.359 --> 00:45:57.400
+or something like that when you're
+
+00:45:53.119 --> 00:45:59.559
+writing programs is that it can use old
+
+00:45:57.400 --> 00:46:03.480
+versions of libraries that are no longer
+
+00:45:59.559 --> 00:46:05.359
+compatible and so um in this paper uh
+
+00:46:03.480 --> 00:46:08.359
+which this is one of our papers too we
+
+00:46:05.359 --> 00:46:10.079
+called it DocPrompting um basically the
+
+00:46:08.359 --> 00:46:13.720
+idea is that
+
+00:46:10.079 --> 00:46:17.440
+you have your natural language input and
+
+00:46:13.720 --> 00:46:20.119
+then you look up uh similar things
+
+00:46:17.440 --> 00:46:23.240
+similar documentation so you find like
+
+00:46:20.119 --> 00:46:25.319
+Pygments is a generic syntax highlighter
+
+00:46:23.240 --> 00:46:28.160
+uh so you can uh find syntax
+
+00:46:25.319 --> 00:46:31.160
+highlighting um you can also look up the
+
+00:46:28.160 --> 00:46:32.640
+lexer you can look up the HTML formatter
+
+00:46:31.160 --> 00:46:35.119
+and then all of the things that have
+
+00:46:32.640 --> 00:46:37.000
+similar documentation then you can uh
+
+00:46:35.119 --> 00:46:39.480
+append that to the prompt and then have
+
+00:46:37.000 --> 00:46:41.680
+the model generate output and we demonstrate
+
+00:46:39.480 --> 00:46:43.200
+that this is good both in general but
+
+00:46:41.680 --> 00:46:44.800
+also it's particularly good when you're
+
+00:46:43.200 --> 00:46:46.240
+dealing with new libraries that haven't
+
+00:46:44.800 --> 00:46:48.280
+been seen before or libraries that have
+
+00:46:46.240 --> 00:46:50.119
+been updated so this is another thing
+
+00:46:48.280 --> 00:46:53.000
+that you can
+
+00:46:50.119 --> 00:46:55.720
+do
+
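+[Note: a toy sketch of the DocPrompting idea just described: retrieve
+documentation entries similar to the request and prepend them to the
+generation prompt. The bag-of-words retriever and the doc snippets are
+invented for illustration, not the paper's actual retriever.]
+
+docs = {
+    "pygments.highlight": "highlight(code, lexer, formatter) highlights code",
+    "pygments.lexers": "lexers split source text into tokens by language",
+    "pygments.formatters.HtmlFormatter": "formats a token stream as HTML",
+}
+
+def retrieve(query, k=2):
+    q = set(query.lower().split())
+    def overlap(text):
+        return len(q & set(text.lower().split()))
+    return sorted(docs, key=lambda name: -overlap(docs[name]))[:k]
+
+query = "highlight source code as HTML"
+prompt = "\n".join(docs[n] for n in retrieve(query)) + "\n# Task: " + query
+print(prompt)  # retrieved documentation first, then the request
+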
+00:46:53.000 --> 00:46:57.520
+cool um another thing that you can do
+
+00:46:55.720 --> 00:47:00.040
+with code that you can't do easily with
+
+00:46:57.520 --> 00:47:04.040
+natural language is execution
+
+00:47:00.040 --> 00:47:06.119
+feedback and so this is a a paper where
+
+00:47:04.040 --> 00:47:09.359
+basically they do something that's
+
+00:47:06.119 --> 00:47:10.319
+rather simple but they generate multiple
+
+00:47:09.359 --> 00:47:13.359
+types of
+
+00:47:10.319 --> 00:47:14.559
+code or multiple instances of code so
+
+00:47:13.359 --> 00:47:16.880
+they basically sample different
+
+00:47:14.559 --> 00:47:19.960
+varieties of code and I was talking
+
+00:47:16.880 --> 00:47:22.720
+about like pass@k right uh before
+
+00:47:19.960 --> 00:47:25.000
+pass@k is good if you have some way to
+
+00:47:22.720 --> 00:47:26.520
+confirm which output is correct like you
+
+00:47:25.000 --> 00:47:28.040
+already have unit tests and you can run
+
+00:47:26.520 --> 00:47:29.440
+the unit tests and identify which one
+
+00:47:28.040 --> 00:47:31.839
+passes the unit tests or you can have a
+
+00:47:29.440 --> 00:47:34.160
+human check it but in the case when you
+
+00:47:31.839 --> 00:47:35.640
+can't do that what can you do and
+
+00:47:34.160 --> 00:47:38.079
+basically what you can do is you can
+
+00:47:35.640 --> 00:47:40.800
+execute all of the code snippets that
+
+00:47:38.079 --> 00:47:43.839
+the model generated and check if the
+
+00:47:40.800 --> 00:47:48.520
+outputs overlap with each other and if
+
+00:47:43.839 --> 00:47:50.680
+you have um you know 30 programs that
+
+00:47:48.520 --> 00:47:53.680
+all generate very similar outputs then
+
+00:47:50.680 --> 00:47:55.079
+those outputs you know then that program
+
+00:47:53.680 --> 00:47:56.520
+is probably correct and then you can
+
+00:47:55.079 --> 00:48:00.000
+just pick one of them according to some
+
+00:47:56.520 --> 00:48:02.160
+criteria specifically in this case
+
+00:48:00.000 --> 00:48:03.960
+they picked the program that has the
+
+00:48:02.160 --> 00:48:05.599
+lowest Bayes risk like when we talked
+
+00:48:03.960 --> 00:48:09.040
+about minimum Bayes risk in the decoding
+
+00:48:05.599 --> 00:48:10.839
+class so um they they basically execute a
+
+00:48:09.040 --> 00:48:12.800
+lot and then calculate the Bayes risk of
+
+00:48:10.839 --> 00:48:17.000
+that
+
+00:48:12.800 --> 00:48:17.000
+cool um
+
+00:48:17.680 --> 00:48:24.440
+yeah yeah and so like self-consistency
+
+00:48:21.599 --> 00:48:26.079
+is a variety of Bayes risk um and they're
+
+00:48:24.440 --> 00:48:27.640
+using Bayes risk here because outputs
+
+00:48:26.079 --> 00:48:30.720
+might not be exactly the same but being
+
+00:48:27.640 --> 00:48:30.720
+closer is probably better
+
+00:48:34.160 --> 00:48:39.040
+than
+
+00:48:36.760 --> 00:48:40.559
+a comparison of the code yeah that's
+
+00:48:39.040 --> 00:48:42.880
+a good question especially if you use
+
+00:48:40.559 --> 00:48:44.319
+something good like uh CodeBERTScore to
+
+00:48:42.880 --> 00:48:46.280
+do that comparison you might not even
+
+00:48:44.319 --> 00:48:50.280
+need to that's
+
+00:48:46.280 --> 00:48:50.280
+that I don't think they did that in
+
+00:48:50.559 --> 00:48:57.240
+this cool
+
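+[Note: a simple stand-in for the execution-based selection just described: run
+every sampled program on the same input and keep one whose output agrees with
+the most others. Exact-match agreement here approximates the minimum-Bayes-risk
+idea; the sampled programs below are toy examples.]
+
+from collections import Counter
+
+def select_by_agreement(programs, test_input):
+    outputs = {}
+    for prog in programs:
+        try:
+            env = {}
+            exec(prog, env)                      # each sample defines f()
+            outputs[prog] = repr(env["f"](test_input))
+        except Exception:
+            continue                             # crashing samples get no vote
+    if not outputs:
+        return None
+    majority = Counter(outputs.values()).most_common(1)[0][0]
+    return next(p for p, out in outputs.items() if out == majority)
+
+samples = ["def f(x): return x * 2",
+           "def f(x): return x + x",
+           "def f(x): return x ** 2"]
+print(select_by_agreement(samples, 3))  # an x*2 variant wins, two votes to one
+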
+um another interesting thing
+
+00:48:54.920 --> 00:48:59.760
+um is there's
+
+00:48:57.240 --> 00:49:04.119
+several lines of work on fixing based on
+
+00:48:59.760 --> 00:49:06.720
+error messages so the basic idea is you
+
+00:49:04.119 --> 00:49:08.160
+generate code you try to run it you get
+
+00:49:06.720 --> 00:49:13.280
+an error message from it and then you
+
+00:49:08.160 --> 00:49:16.200
+feed that back to the LLM um in order to
+
+00:49:13.280 --> 00:49:17.520
+you know correct the error and like LLMs
+
+00:49:16.200 --> 00:49:19.119
+if you give them an error and you give
+
+00:49:17.520 --> 00:49:20.839
+them buggy code they do have some
+
+00:49:19.119 --> 00:49:24.599
+capacity to do that especially as you
+
+00:49:20.839 --> 00:49:28.839
+get to the bigger LLMs so uh this is kind of a
+
+00:49:24.599 --> 00:49:31.200
+a nice uh paradigm this paper InterCode
+
+00:49:28.839 --> 00:49:33.880
+actually generalizes this a bit and it's
+
+00:49:31.200 --> 00:49:38.359
+more recent that's why I cited it here
+
+00:49:33.880 --> 00:49:40.000
+and uh so this also um like says you can
+
+00:49:38.359 --> 00:49:42.640
+do single-turn code generation you can
+
+00:49:40.000 --> 00:49:44.960
+also say oh could you please try again
+
+00:49:42.640 --> 00:49:46.400
+um you can also uh do planning and
+
+00:49:44.960 --> 00:49:48.160
+solving and other stuff like that so
+
+00:49:46.400 --> 00:49:49.960
+this is a good kind of like environment
+
+00:49:48.160 --> 00:49:52.079
+if you're interested in making these
+
+00:49:49.960 --> 00:49:56.720
+more like interactive coding assistants
+
+00:49:52.079 --> 00:49:56.720
+for example so you could take a look
+
+00:49:58.359 --> 00:50:03.359
+cool
+
+00:50:00.119 --> 00:50:07.119
+um another important topic is code
+
+00:50:03.359 --> 00:50:08.880
+synthesis from input-output examples so
+
+00:50:07.119 --> 00:50:12.319
+actually when you said code generation
+
+00:50:08.880 --> 00:50:14.760
+or code synthesis like five years ago or
+
+00:50:12.319 --> 00:50:17.440
+10 years ago a lot of people would think
+
+00:50:14.760 --> 00:50:19.440
+about this uh so this is actually this
+
+00:50:17.440 --> 00:50:22.440
+has been around a lot longer than code
+
+00:50:19.440 --> 00:50:24.160
+synthesis um than serious inquiries into
+
+00:50:22.440 --> 00:50:27.680
+code synthesis from natural
+
+00:50:24.160 --> 00:50:30.680
+language um
+
+00:50:27.680 --> 00:50:33.839
+so basically the way this works is it
+
+00:50:30.680 --> 00:50:35.319
+can have no natural language whatsoever
+
+00:50:33.839 --> 00:50:39.119
+um but you still can try to guess the
+
+00:50:35.319 --> 00:50:42.000
+program from uh input-output examples when
+
+00:50:39.119 --> 00:50:44.319
+would you want to do this so one example
+
+00:50:42.000 --> 00:50:45.839
+of this is something called FlashFill
+
+00:50:44.319 --> 00:50:48.599
+which has been around for a very long
+
+00:50:45.839 --> 00:50:51.839
+time in Microsoft Excel and basically
+
+00:50:48.599 --> 00:50:55.400
+the way it works is you have one column
+
+00:50:51.839 --> 00:50:58.640
+and um the column might be
+
+00:50:55.400 --> 00:50:58.640
+like uh
+
+00:50:59.559 --> 00:51:02.880
+R new
+
+00:51:03.040 --> 00:51:12.799
+big and uh
+
+00:51:06.559 --> 00:51:12.799
+else just pick on three because he also
+
+00:51:14.040 --> 00:51:19.599
+up and so we have this column and then
+
+00:51:17.160 --> 00:51:19.599
+we have like
+
+00:51:20.400 --> 00:51:26.760
+gig um and from like one or a couple
+
+00:51:25.160 --> 00:51:28.400
+examples basically
+
+00:51:26,760 --> 00:51:30,319
+basically what it does is it tries to
+
+00:51:28,400 --> 00:51:33,319
+induce a program that can generate all
+
+00:51:30,319 --> 00:51:35,599
+the other examples properly so in this
+
+00:51:33,319 --> 00:51:38,440
+particular case that would be um you know like
+
+00:51:35,599 --> 00:51:40,480
+split take the first character from the
+
+00:51:38,440 --> 00:51:43,280
+first one and all of the last one and
+
+00:51:40,480 --> 00:51:45,280
+then concatenate them or something
+
+00:51:43,280 --> 00:51:48,280
+like that right
+
+00:51:45,280 --> 00:51:50,079
+um and so this is useful in some cases
+
+00:51:48,280 --> 00:51:51,599
+like you know in Excel when you have
+
+00:51:50,079 --> 00:51:53,359
+this long sheet and you want to fill in
+
+00:51:51,599 --> 00:51:56,160
+the rest of it and this has actually
+
+00:51:53,359 --> 00:51:57,720
+been deployed uh you know in Excel and is
+
+00:51:56,160 --> 00:52:00,960
+widely
+
+00:51:57,720 --> 00:52:02,559
+used um if you're interested in this
+
+00:52:00,960 --> 00:52:06,040
+topic there's a fair amount of work in
+
+00:52:02,559 --> 00:52:08,839
+it um there's a little bit less work
+
+00:52:06,040 --> 00:52:10,240
+now because most people are focusing on
+
+00:52:08,839 --> 00:52:12,400
+uh learning programs from natural
+
+00:52:10,240 --> 00:52:14,839
+language and other stuff like this but
+
+00:52:12,400 --> 00:52:16,480
+uh this slightly older paper called
+
+00:52:14,839 --> 00:52:19,359
+Interpret explains a bunch of the
+
+00:52:16,480 --> 00:52:22,880
+different methods that people used and
+
+00:52:19,359 --> 00:52:25,920
+um how you uh like how they compare and
+
+00:52:22,880 --> 00:52:28,119
+stuff and also um Joshua Tenenbaum's
+
+00:52:25,920 --> 00:52:29,880
+group from MIT has done a lot on program
+
+00:52:28,119 --> 00:52:31,319
+synthesis from input-output examples so
+
+00:52:29,880 --> 00:52:32,359
+you could also take a look at that if that
+
+00:52:31,319 --> 00:52:35,079
+sounds
+
+00:52:32,359 --> 00:52:38,240
+interesting um one thing about this is
+
+00:52:35,079 --> 00:52:40,280
+these generally are mostly done on
+
+00:52:38,240 --> 00:52:43,319
+domain-specific languages so they're
+
+00:52:40,280 --> 00:52:46,839
+mostly done like only for regexes or
+
+00:52:43,319 --> 00:52:48,480
+they're done only for you know SQL or
+
+00:52:46,839 --> 00:52:50,079
+something like that not for the more
+
+00:52:48,480 --> 00:52:51,960
+general-purpose languages just because
+
+00:52:50,079 --> 00:52:54,079
+the problem without any natural language
+
+00:52:51,960 --> 00:52:56,520
+specification is harder and so you need
+
+00:52:54,079 --> 00:52:57,520
+to like make the search space smaller
+
+00:52:56,520 --> 00:53:01,559
+to make the
+
+00:52:57,520 --> 00:53:04,440
+search tractable so um that's
+
+00:53:01,559 --> 00:53:04,440
+another thing to know
+
+00:53:04,799 --> 00:53:09,440
+about cool um any questions about
+
+00:53:09,480 --> 00:53:14,440
+these nice okay so finally in the
+
+00:53:12,559 --> 00:53:15,599
+last few minutes I'd like to talk about
+
+00:53:14,440 --> 00:53:18,480
+um code
+
+00:53:15,599 --> 00:53:22,880
+LMs and I'm going to go through about
+
+00:53:18,480 --> 00:53:24,599
+four of them the first one is Codex and
+
+00:53:22,880 --> 00:53:26,200
+so yeah actually what I should mention
+
+00:53:24,599 --> 00:53:28,079
+is all of the LMs that I talked about up
+
+00:53:26,200 --> 00:53:30,640
+until this point are code LMs because
+
+00:53:28,079 --> 00:53:31,680
+every LM trains on code so I'm mainly
+
+00:53:30,640 --> 00:53:36,119
+going to be talking about ones
+
+00:53:31,680 --> 00:53:39,200
+specifically for code this time um so
+
+00:53:36,119 --> 00:53:42,480
+Codex is the first and kind of like the
+
+00:53:39,200 --> 00:53:45,880
+first really big-impact code LM um it was
+
+00:53:42,480 --> 00:53:47,720
+created by OpenAI um originally I don't
+
+00:53:45,880 --> 00:53:49,079
+know about the deployed model now
+
+00:53:47,720 --> 00:53:51,599
+because you know they don't release the
+
+00:53:49,079 --> 00:53:53,799
+details of it but originally this was
+
+00:53:51,599 --> 00:53:57,920
+trained by continued training from
+
+00:53:53,799 --> 00:53:59,799
+GPT-3 so they had a text LM and then they
+
+00:53:57,920 --> 00:54:03,079
+just continued training it on lots and
+
+00:53:59,799 --> 00:54:05,680
+lots of code from GitHub um so yeah the
+
+00:54:03,079 --> 00:54:08,799
+data was lots of data from GitHub um if
+
+00:54:05,680 --> 00:54:11,280
+you did anything on GitHub at any point
+
+00:54:08,799 --> 00:54:14,119
+in your life uh you might be uh
+
+00:54:11,280 --> 00:54:17,720
+contributing to Codex so thank you on
+
+00:54:14,119 --> 00:54:22,440
+behalf of OpenAI an 80-billion-dollar
+
+00:54:17,720 --> 00:54:24,599
+company and uh importantly it powers, I
+
+00:54:22,440 --> 00:54:27,599
+believe it still powers, GitHub
+
+00:54:24,599 --> 00:54:31,160
+Copilot one interesting thing is they
+
+00:54:27,599 --> 00:54:33,119
+had a large version of Codex um and then
+
+00:54:31,160 --> 00:54:35,799
+they had a smaller version of Codex
+
+00:54:33,119 --> 00:54:38,359
+called code-cushman and the thing
+
+00:54:35,799 --> 00:54:40,040
+actually powering GitHub Copilot is not
+
+00:54:38,359 --> 00:54:42,839
+the largest version it's not
+
+00:54:40,040 --> 00:54:46,359
+code-davinci it's code-cushman which is uh
+
+00:54:42,839 --> 00:54:48,680
+smaller and much faster and the reason
+
+00:54:46,359 --> 00:54:50,640
+why is probably twofold number one um
+
+00:54:48,680 --> 00:54:54,160
+you need really fast responses when
+
+00:54:50,640 --> 00:54:55,760
+you're you know working on code and
+
+00:54:54,160 --> 00:54:57,440
+there's actually in Copilot there's
+
+00:54:55,760 --> 00:55:00,280
+some caching and other stuff like that to
+
+00:54:57,440 --> 00:55:01,960
+make your responses very fast as well um
+
+00:55:00,280 --> 00:55:03,400
+the second reason is probably it'd just
+
+00:55:01,960 --> 00:55:05,040
+be too expensive for them to run
+
+00:55:03,400 --> 00:55:06,760
+davinci over all the code bases for how
+
+00:55:05,040 --> 00:55:10,400
+much they're charging you for Copilot
+
+00:55:06,760 --> 00:55:12,119
+so like every single time you like
+
+00:55:10,400 --> 00:55:14,280
+change something in one of your files if
+
+00:55:12,119 --> 00:55:17,079
+you're using Copilot it's rerunning an
+
+00:55:14,280 --> 00:55:19,359
+LLM and that would become very expensive
+
+00:55:17,079 --> 00:55:20,599
+if you look at the token count so I
+
+00:55:19,359 --> 00:55:21,839
+think they're using a smaller model
+
+00:55:20,599 --> 00:55:22,920
+because of that but nonetheless it's
+
+00:55:21,839 --> 00:55:27,039
+very
+
+00:55:22,920 --> 00:55:28,640
+good um cool
+
+00:55:27,039 --> 00:55:30,680
+so now I want to get into some more
+
+00:55:28,640 --> 00:55:33,880
+modern models uh the first one I want to
+
+00:55:30,680 --> 00:55:35,520
+get into is uh StarCoder 2.
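(Editor's note: a sketch of the "continued training" recipe just described, written with Hugging Face transformers. This is an illustration under stated assumptions, not OpenAI's actual pipeline: "gpt2" stands in for the base text LM, `code_files` is a hypothetical corpus, and the hyperparameters are placeholders.)

    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)
    from datasets import Dataset

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token             # GPT-2 has no pad token
    model = AutoModelForCausalLM.from_pretrained("gpt2")  # start from the text LM

    code_files = ["repo/a.py", "repo/b.py"]               # placeholder corpus
    ds = Dataset.from_dict({"text": [open(p).read() for p in code_files]})
    ds = ds.map(lambda b: tokenizer(b["text"], truncation=True, max_length=1024),
                batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="continued-code-lm",
                               per_device_train_batch_size=2, num_train_epochs=1),
        train_dataset=ds,
        # mlm=False gives the standard next-token (causal LM) objective.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()  # same objective as pretraining, just on code now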
+
+00:55:33,880 --> 00:55:38,359
+and the reason why I want to talk about
+
+00:55:35,520 --> 00:55:40,160
+this first is because uh not necessarily
+
+00:55:38,359 --> 00:55:41,880
+that it's like absolutely the best one
+
+00:55:40,160 --> 00:55:43,400
+although it's very good but it's one of
+
+00:55:41,880 --> 00:55:45,319
+the models that actually tells us
+
+00:55:43,400 --> 00:55:47,240
+everything about their training data and
+
+00:55:45,319 --> 00:55:50,400
+training process and stuff so we know uh
+
+00:55:47,240 --> 00:55:53,039
+everything about them so the creator of
+
+00:55:50,400 --> 00:55:54,440
+this was um the BigCode project
+
+00:55:53,039 --> 00:55:56,880
+which was led by Hugging Face and
+
+00:55:54,440 --> 00:55:58,680
+ServiceNow um
+
+00:55:56,880 --> 00:56:02,079
+and includes lots and lots of people
+
+00:55:58,680 --> 00:56:04,960
+from various universities and things um
+
+00:56:02,079 --> 00:56:09,319
+the architecture is mostly Llama-style
+
+00:56:04,960 --> 00:56:11,960
+it has 3B, 7B, and 15B variants um one
+
+00:56:09,319 --> 00:56:15,480
+interesting thing about all code LMs is
+
+00:56:11,960 --> 00:56:17,680
+that they all do long context they all
+
+00:56:15,480 --> 00:56:20,359
+do longer context and they all
+
+00:56:17,680 --> 00:56:23,200
+reconfigure RoPE for longer context
+
+00:56:20,359 --> 00:56:25,280
+specifically so you know RoPE has a
+
+00:56:23,200 --> 00:56:28,599
+theta parameter that allows you to tell
+
+00:56:25,280 --> 00:56:31,720
+how long the um like sine waves and
+
+00:56:28,599 --> 00:56:33,720
+stuff like that are and they always
+
+00:56:31,720 --> 00:56:36,079
+um change the parameters so that the
+
+00:56:33,720 --> 00:56:38,599
+context is longer so that's another good
+
+00:56:36,079 --> 00:56:38,599
+thing to know
+
+00:56:38,640 --> 00:56:44,559
+about the training data section of
+
+00:56:42,000 --> 00:56:48,799
+this paper is really fascinating
+
+00:56:44,559 --> 00:56:51,240
+it's a really good way to look
+
+00:56:48,799 --> 00:56:54,160
+at you know how much data engineering
+
+00:56:51,240 --> 00:56:55,960
+goes into making a good model um and
+
+00:56:54,160 --> 00:56:57,960
+just very shortly, they give a lot more
+
+00:56:55,960 --> 00:57:00,640
+detail in the paper, but it's trained on
+
+00:56:57,960 --> 00:57:04,839
+code uh including The Stack which is
+
+00:57:00,640 --> 00:57:06,920
+just a huge uh like repository of
+
+00:57:04,839 --> 00:57:08,359
+code that I'll talk about in a second
+
+00:57:06,920 --> 00:57:10,559
+separately from that it was trained on
+
+00:57:08,359 --> 00:57:13,079
+GitHub issues it was trained on pull
+
+00:57:10,559 --> 00:57:16,000
+requests, Jupyter notebooks, Kaggle
+
+00:57:13,079 --> 00:57:18,319
+notebooks, documentation, and also
+
+00:57:16,000 --> 00:57:23,440
+intermediate representations from uh
+
+00:57:18,319 --> 00:57:26,440
+LLVM so LLVM is a uh you know like an
+
+00:57:23,440 --> 00:57:28,920
+intermediate uh compiler-style thing
+
+00:57:26,440 --> 00:57:30,839
+that is used for compiling code and it
+
+00:57:28,920 --> 00:57:34,400
+was also trained on a few code-relevant
+
+00:57:30,839 --> 00:57:38,440
+natural language data sets
+
+00:57:34,400 --> 00:57:39,960
+um so for pre-processing they do
+
+00:57:38,440 --> 00:57:42,640
+something pretty interesting which is
+
+00:57:39,960 --> 00:57:44,240
+they add metadata tags such as the repo
+
+00:57:42,640 --> 00:57:48,119
+name and the file name and other stuff
+
+00:57:44,240 --> 00:57:49,799
+like this uh 50% of the time and they do
+
+00:57:48,119 --> 00:57:51,599
+this 50% of the time because they want
+
+00:57:49,799 --> 00:57:54,400
+the model to work with them but also be
+
+00:57:51,599 --> 00:57:57,079
+robust without them um and so you can
+
+00:57:54,400 --> 00:57:59,839
+either add them or not add them at test
+
+00:57:57,079 --> 00:58:03,079
+time uh they also do infilling every
+
+00:57:59,839 --> 00:58:05,960
+serious code LM does infilling-based
+
+00:58:03,079 --> 00:58:07,480
+training um one interesting thing about
+
+00:58:05,960 --> 00:58:08,960
+this from the training perspective is
+
+00:58:07,480 --> 00:58:12,000
+they actually trained it for four to
+
+00:58:08,960 --> 00:58:14,359
+five epochs um which is much more than
+
+00:58:12,000 --> 00:58:17,160
+we normally do so normally we only train
+
+00:58:14,359 --> 00:58:18,359
+for like one epoch over you know all of
+
+00:58:17,160 --> 00:58:20,079
+the data we have but here they were
+
+00:58:18,359 --> 00:58:21,319
+training for longer and that's just
+
+00:58:20,079 --> 00:58:23,359
+because the amount of data they can get
+
+00:58:21,319 --> 00:58:24,400
+for code is less than the amount of data
+
+00:58:23,359 --> 00:58:27,200
+they can get for you know all of natural
+
+00:58:24,400 --> 00:58:30,039
+language
+
+00:58:27,200 --> 00:58:33,200
+so the data set that they created is uh
+
+00:58:30,039 --> 00:58:36,119
+The Stack 2 and this is a code
+
+00:58:33,200 --> 00:58:37,839
+pre-training data set um one interesting
+
+00:58:36,119 --> 00:58:40,039
+thing that they thought about was uh
+
+00:58:37,839 --> 00:58:42,960
+license considerations so I talked about
+
+00:58:40,039 --> 00:58:44,480
+um how copyright is a problem when
+
+00:58:42,960 --> 00:58:46,640
+training large language models two
+
+00:58:44,480 --> 00:58:48,880
+classes ago and so here they
+
+00:58:46,640 --> 00:58:50,119
+specifically tried to find things with
+
+00:58:48,880 --> 00:58:52,520
+permissive
+
+00:58:50,119 --> 00:58:53,880
+licenses and so what they did is they
+
+00:58:52,520 --> 00:58:57,000
+basically looked at the license on
+
+00:58:53,880 --> 00:58:59,520
+GitHub um and if the GitHub license was
+
+00:58:57,000 --> 00:59:01,440
+permissive they marked it as permissive
+
+00:58:59,520 --> 00:59:02,880
+um then they tried to detect licenses
+
+00:59:01,440 --> 00:59:05,720
+and then um if all of them were
+
+00:59:02,880 --> 00:59:08,000
+permissive they marked it as
+
+00:59:05,720 --> 00:59:10,480
+permissive this is a huge table that
+
+00:59:08,000 --> 00:59:14,160
+they have in the paper of all of the
+
+00:59:10,480 --> 00:59:15,480
+data that they have and um you know I'm
+
+00:59:14,160 --> 00:59:16,920
+not going to go through all of this
+
+00:59:15,480 --> 00:59:18,920
+obviously but what you can see is some
+
+00:59:16,920 --> 00:59:22,480
+of the biggest data sets are like
+
+00:59:18,920 --> 00:59:26,280
+Java um
+
+00:59:22,480 --> 00:59:28,640
+PHP, Markdown,
+
+00:59:26,280 --> 00:59:30,039
+and uh Python and other stuff like that
+
+00:59:28,640 --> 00:59:32,240
+so you can see the major programming
+
+00:59:30,039 --> 00:59:35,559
+languages have lots of data but there's
+
+00:59:32,240 --> 00:59:38,400
+also a long tail so if you like your uh
+
+00:59:35,559 --> 00:59:40,440
+you know more esoteric uh but cool
+
+00:59:38,400 --> 00:59:43,960
+programming languages like Rust, yes, it
+
+00:59:40,440 --> 00:59:46,160
+has Rust too so um we can do all of
+
+00:59:43,960 --> 00:59:46,160
+those things.
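(Editor's note: a sketch of the two preprocessing ideas described above, prepending repository metadata only half the time, and fill-in-the-middle ("infilling") reformatting. The FIM tag strings follow the StarCoder convention, but check the actual model's tokenizer before relying on them; everything else here is a simplified illustration.)

    import random

    def format_example(code, repo, filename, p_meta=0.5):
        # Add metadata only 50% of the time, so the model learns to use the
        # tags when present but stays robust when they are absent at test time.
        prefix = f"<repo_name>{repo}<file_name>{filename}\n" if random.random() < p_meta else ""
        return prefix + code

    def fim_transform(code):
        """Fill-in-the-middle: move a random middle span to the end of the example,
        so the model learns to complete code given both a prefix and a suffix."""
        assert len(code) >= 2  # sketch only; real pipelines handle edge cases
        i, j = sorted(random.sample(range(len(code)), 2))
        prefix, middle, suffix = code[:i], code[i:j], code[j:]
        return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>{middle}"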
+
+00:59:46,480 --> 00:59:53,079
+so the next model that I'd like
+
+00:59:49,799 --> 00:59:55,200
+to talk about is Code Llama and Code Llama
+
+00:59:53,079 --> 00:59:57,920
+is another competitive model it came out
+
+00:59:55,200 --> 00:59:59,480
+a little bit before StarCoder and
+
+00:59:57,920 --> 01:00:02,680
+StarCoder 2 and DeepSeek Coder which I'm
+
+00:59:59,480 --> 01:00:04,079
+going to talk about um this was created
+
+01:00:02,680 --> 01:00:08,319
+by
+
+01:00:04,079 --> 01:00:11,160
+Meta and um the architecture is the same
+
+01:00:08,319 --> 01:00:14,280
+as Llama 2 uh basically and they did
+
+01:00:11,160 --> 01:00:16,400
+continued training from Llama 2 um but
+
+01:00:14,280 --> 01:00:18,000
+they trained it on longer input contexts
+
+01:00:16,400 --> 01:00:21,720
+and they also extended the length of
+
+01:00:18,000 --> 01:00:23,559
+RoPE so uh those are you know standard
+
+01:00:21,720 --> 01:00:26,680
+things for code language
+
+01:00:23,559 --> 01:00:28,680
+models it was trained on deduplicated code and
+
+01:00:26,680 --> 01:00:30,400
+also synthetically created instruction
+
+01:00:28,680 --> 01:00:33,280
+data so they created like instruction
+
+01:00:30,400 --> 01:00:37,920
+tuning data specifically for
+
+01:00:33,280 --> 01:00:39,480
+code um and the training was incremental
+
+01:00:37,920 --> 01:00:42,559
+with various data sets and what I mean
+
+01:00:39,480 --> 01:00:45,599
+by this is they trained on 500 billion
+
+01:00:42,559 --> 01:00:47,599
+uh I believe tokens of code and then
+
+01:00:45,599 --> 01:00:50,400
+they did long-context fine-tuning on 20
+
+01:00:47,599 --> 01:00:52,599
+billion tokens and then they also did
+
+01:00:50,400 --> 01:00:55,400
+instruction tuning they also have a
+
+01:00:52,599 --> 01:00:57,079
+Python-specific one and the reason why
+
+01:00:55,400 --> 01:00:59,640
+they have a Python-specific one is not
+
+01:00:57,079 --> 01:01:02,319
+because Python is more important
+
+01:00:59,640 --> 01:01:03,839
+uh necessarily but because a lot of
+
+01:01:02,319 --> 01:01:05,559
+the benchmarks are in Python because
+
+01:01:03,839 --> 01:01:06,920
+machine learning people like who are
+
+01:01:05,559 --> 01:01:09,240
+creating benchmarks they also like
+
+01:01:06,920 --> 01:01:11,200
+Python so Python is more common in the
+
+01:01:09,240 --> 01:01:14,240
+benchmarks so they basically wanted to
+
+01:01:11,200 --> 01:01:15,720
+do well on the benchmarks I think uh
+
+01:01:14,240 --> 01:01:17,920
+and created a data set that does well in
+
+01:01:15,720 --> 01:01:19,240
+the benchmarks but um if you are
+
+01:01:17,920 --> 01:01:23,160
+writing Python you can use the Code
+
+01:01:19,240 --> 01:01:25,280
+Llama Python one, it's better at Python so
+
+01:01:23,160 --> 01:01:28,000
+um and then the final one I'd like to
+
+01:01:25,280 --> 01:01:29,839
+talk about is DeepSeek Coder uh
+
+01:01:28,000 --> 01:01:32,079
+this is notable because it's a very
+
+01:01:29,839 --> 01:01:34,599
+strong model it's maybe the strongest
+
+01:01:32,079 --> 01:01:38,799
+model on average over all the code
+
+01:01:34,599 --> 01:01:41,599
+models um the data is not
+
+01:01:38,799 --> 01:01:44,640
+super clear but they did 87% source code,
+
+01:01:41,599 --> 01:01:46,359
+10% English um from Markdown and Stack
+
+01:01:44,640 --> 01:01:51,160
+Exchange, and 3% Chinese because it's
+
+01:01:46,359 --> 01:01:53,559
+from a Chinese company, DeepSeek um and
+
+01:01:51,160 --> 01:01:54,960
+they did standard preprocessing uh but one
+
+01:01:53,559 --> 01:01:57,319
+interesting thing they did is they
+
+01:01:54,960 --> 01:01:59,200
+included library dependencies so they
+
+01:01:57,319 --> 01:02:01,799
+basically crawled the dependency graph
+
+01:01:59,200 --> 01:02:03,640
+of libraries pulled out files from the
+
+01:02:01,799 --> 01:02:06,000
+libraries that were referenced and then
+
+01:02:03,640 --> 01:02:07,440
+used them in training and so that's
+
+01:02:06,000 --> 01:02:09,319
+particularly useful if you want the
+
+01:02:07,440 --> 01:02:12,920
+model to be able to reference external
+
+01:02:09,319 --> 01:02:14,039
+libraries well um so that's kind of an
+
+01:02:12,920 --> 01:02:17,279
+interesting
+
+01:02:14,039 --> 01:02:19,599
+thing um the architecture is pretty
+
+01:02:17,279 --> 01:02:22,960
+standard it's Llama-like with 1.3
+
+01:02:19,599 --> 01:02:24,599
+billion, 6.7 billion, and 33B variants and
+
+01:02:22,960 --> 01:02:27,279
+it has a reconfigured RoPE like the
+
+01:02:24,599 --> 01:02:30,520
+others and they trained on two trillion
+
+01:02:27,279 --> 01:02:34,200
+tokens um so then a question becomes
+
+01:02:30,520 --> 01:02:36,680
+which one to use um and I created a
+
+01:02:34,200 --> 01:02:39,160
+summary here um all of them have
+
+01:02:36,680 --> 01:02:40,760
+somewhat similar performance uh
+
+01:02:39,160 --> 01:02:42,760
+they're compared in the StarCoder 2
+
+01:02:40,760 --> 01:02:45,640
+paper so you can go in and look at
+
+01:02:42,760 --> 01:02:48,160
+details in the StarCoder 2 paper um
+
+01:02:45,640 --> 01:02:51,119
+DeepSeek Coder seems to be strong on
+
+01:02:48,160 --> 01:02:52,799
+standard programming tasks um whereas
+
+01:02:51,119 --> 01:02:54,799
+StarCoder seems to be strong on data
+
+01:02:52,799 --> 01:02:56,680
+science notebooks so like on average
+
+01:02:54,799 --> 01:02:59,160
+it's better at those kinds of notebooks
+
+01:02:56,680 --> 01:03:02,079
+but all of them are good models um all
+
+01:02:59,160 --> 01:03:05,440
+of them are not quite as good as uh like
+
+01:03:02,079 --> 01:03:08,920
+GPT-4 or Claude on like the more
+
+01:03:05,440 --> 01:03:10,799
+you know complex tasks but uh they're
+
+01:03:08,920 --> 01:03:12,359
+available and you can fine-tune them and
+
+01:03:10,799 --> 01:03:16,880
+do other things like that as
+
+01:03:12,359 --> 01:03:21,599
+well one caveat about the DeepSeek
+
+01:03:16,880 --> 01:03:24,640
+thing is actually if I go back to this
+
+01:03:21,599 --> 01:03:27,559
+slide um a lot of the models up here are
+
+01:03:24,640 --> 01:03:29,640
+DeepSeek um so you do need to be a
+
+01:03:27,559 --> 01:03:31,400
+little bit careful about like
+
+01:03:29,640 --> 01:03:34,400
+interpreting their HumanEval results
+
+01:03:31,400 --> 01:03:36,319
+because it's possible that the model uh
+
+01:03:34,400 --> 01:03:38,799
+was trained on data very similar to
+
+01:03:36,319 --> 01:03:40,279
+HumanEval or something like that so do
+
+01:03:38,799 --> 01:03:42,880
+take that with a grain of salt but even
+
+01:03:40,279 --> 01:03:44,520
+on other data sets where presumably the
+
+01:03:42,880 --> 01:03:46,760
+model has not seen those data sets it
+
+01:03:44,520 --> 01:03:49,920
+still does very well so it's not like
+
+01:03:46,760 --> 01:03:51,480
+it's um you know, as you can see it's
+
+01:03:49,920 --> 01:03:54,640
+still one of the most competitive code
+
+01:03:51,480 --> 01:03:57,680
+models even on this new LCB um data set
+
+01:03:54,640 --> 01:04:01,359
+so uh that's worth taking into account.
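(Editor's note: a sketch of the library-dependency preprocessing described for DeepSeek Coder above: parse each file's imports, build a file-level dependency graph, and topologically sort so that dependencies appear before the files that use them when the repository is concatenated for training. The module-to-file resolution here is deliberately simplified and hypothetical.)

    import ast
    from graphlib import TopologicalSorter  # Python 3.9+

    def local_imports(path, known_modules):
        """Modules imported by `path` that live in the same repository."""
        tree = ast.parse(open(path).read())
        deps = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                deps |= {a.name for a in node.names}
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
        return {known_modules[m] for m in deps if m in known_modules}

    def dependency_order(paths):
        # Naive mapping from module name to file, e.g. "utils" -> "repo/utils.py".
        known = {p.rsplit("/", 1)[-1].removesuffix(".py"): p for p in paths}
        graph = {p: local_imports(p, known) for p in paths}
        # static_order yields each file after all of its dependencies.
        return list(TopologicalSorter(graph).static_order())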
+
+01:03:57,680 --> 01:04:03,000
+cool um that's all I have for today
+
+01:04:01,359 --> 01:04:04,359
+you know I love to talk about this topic
+
+01:04:03,000 --> 01:04:06,480
+I've done a lot of research on it so I'm
+
+01:04:04,359 --> 01:04:11,200
+happy to discuss any questions if people
+
+01:04:06,480 --> 01:04:14,720
+have them either in front of everyone or
+
+01:04:11,200 --> 01:04:14,720
+after any
+
+01:04:16,480 --> 01:04:24,160
+questions uh yeah just wondering is there
+
+01:04:20,359 --> 01:04:27,720
+like a way to enforce the output during decoding using things
+
+01:04:24,160 --> 01:04:27,720
+other than the model?
+
+01:04:30,599 --> 01:04:36,599
+yeah great question is there a way to
+
+01:04:33,640 --> 01:04:38,200
+enforce uh restrictions at decoding time
+
+01:04:36,599 --> 01:04:39,760
+other than using the model's uh
+
+01:04:38,200 --> 01:04:42,240
+probabilities because this is code and
+
+01:04:39,760 --> 01:04:42,240
+we know the
+
+01:04:42,440 --> 01:04:51,079
+syntax yes and no um there
+
+01:04:46,319 --> 01:04:53,200
+are for code it's not always immediately
+
+01:04:51,079 --> 01:04:54,400
+obvious like I mean one thing you
+
+01:04:53,200 --> 01:04:55,960
+could do is just generate a bunch of
+
+01:04:54,400 --> 01:04:58,520
+results and throw out all the syntactically
+
+01:04:55,960 --> 01:04:59,480
+incorrect ones that's easy right um but if
+
+01:04:58,520 --> 01:05:02,520
+you don't want to do that and you want
+
+01:04:59,480 --> 01:05:04,839
+to do it at decoding time it's dependent
+
+01:05:02,520 --> 01:05:07,480
+on you being able to have an incremental
+
+01:05:04,839 --> 01:05:09,079
+syntax parser that allows you to like
+
+01:05:07,480 --> 01:05:12,400
+throw out bad
+
+01:05:09,079 --> 01:05:14,160
+hypotheses like incrementally and that's
+
+01:05:12,400 --> 01:05:16,240
+possible, that's very easy for some
+
+01:05:14,160 --> 01:05:17,200
+languages and not as easy
+
+01:05:16,240 --> 01:05:20,559
+for other
+
+01:05:17,200 --> 01:05:23,720
+languages um one really big thing right
+
+01:05:20,559 --> 01:05:26,599
+now is JSON so like a lot of the time
+
+01:05:23,720 --> 01:05:28,319
+people want to output JSON uh and you
+
+01:05:26,599 --> 01:05:31,559
+know then parse the JSON and use it in
+
+01:05:28,319 --> 01:05:36,640
+some downstream task and there actually
+
+01:05:31,559 --> 01:05:36,640
+are libraries um just to give a
+
+01:05:38,559 --> 01:05:45,839
+few um here's one this library called
+
+01:05:42,640 --> 01:05:48,799
+Outlines um is one that basically allows
+
+01:05:45,839 --> 01:05:50,440
+you to incorporate syntactic constraints
+
+01:05:48,799 --> 01:05:53,240
+through like weighted finite-state
+
+01:05:50,440 --> 01:05:55,160
+automata and other stuff like this um to
+
+01:05:53,240 --> 01:05:57,680
+allow you to throw away anything that
+
+01:05:55,160 --> 01:06:02,039
+doesn't adhere to your grammar another
+
+01:05:57,680 --> 01:06:02,039
+popular one which
+
+01:06:02,720 --> 01:06:06,880
+is nice but a little bit more
+
+01:06:07,160 --> 01:06:12,760
+complicated is
+
+01:06:09,799 --> 01:06:15,160
+um this one uh
+
+01:06:12,760 --> 01:06:17,200
+Guidance so if you want to look at like
+
+01:06:15,160 --> 01:06:19,720
+constrained generation of outputs I
+
+01:06:17,200 --> 01:06:21,640
+would definitely recommend uh looking at
+
+01:06:19,720 --> 01:06:22,839
+one of these two either Outlines or
+
+01:06:21,640 --> 01:06:24,440
+Guidance and they both give you
+
+01:06:22,839 --> 01:06:26,520
+different ways to add constraints to output.
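(Editor's note: a sketch of the simple option mentioned first in this answer, sample many outputs and discard the syntactically invalid ones, using Python's standard-library ast module as the checker. `sample_llm` is a stand-in for whatever generation call you actually use; true constrained decoding, of the kind Outlines and Guidance provide, instead masks invalid next tokens during generation rather than filtering afterwards.)

    import ast

    def is_valid_python(code):
        """Cheap post-hoc syntax filter; a parser or compiler front-end plays
        the same role for other languages."""
        try:
            ast.parse(code)
            return True
        except SyntaxError:
            return False

    def generate_syntactic(sample_llm, prompt, n=20):
        # sample_llm is a placeholder for your model's sampling function.
        candidates = [sample_llm(prompt) for _ in range(n)]
        return [c for c in candidates if is_valid_python(c)]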
+
+01:06:24,440 --> 01:06:29,079
+um we did actually talk about
+
+01:06:26,520 --> 01:06:31,200
+Outlines a little bit during the like uh
+
+01:06:29,079 --> 01:06:34,599
+generation class but um we didn't go
+
+01:06:31,200 --> 01:06:35,760
+into a lot of details so uh yeah I
+
+01:06:34,599 --> 01:06:39,559
+would recommend
+
+01:06:35,760 --> 01:06:39,559
+this cool any other
+
+01:06:39,599 --> 01:06:43,920
+questions okay if not uh I guess we can
+
+01:06:42,079 --> 01:06:47,880
+finish up and I'm happy to talk we have
+
+01:06:43,920 --> 01:06:47,880
+a little bit of extra time
diff --git a/CMU Advanced NLP 2024 (18) Knowledge and Language Models/CMU Advanced NLP 2024 (18) Knowledge and Language Models.mp4 b/CMU Advanced NLP 2024 (18) Knowledge and Language Models/CMU Advanced NLP 2024 (18) Knowledge and Language Models.mp4
new file mode 100644
index 0000000000000000000000000000000000000000..b145cd501fd840c61a2541c6e5e7e40e77020730
--- /dev/null
+++ b/CMU Advanced NLP 2024 (18) Knowledge and Language Models/CMU Advanced NLP 2024 (18) Knowledge and Language Models.mp4	
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b246f116f9c543f9cc995a334954a8947064a1bc3950d9acdc34b8bf42b8771
+size 74113017
diff --git a/CMU Advanced NLP 2024 (18) Knowledge and Language Models/metadata.json b/CMU Advanced NLP 2024 (18) Knowledge and Language Models/metadata.json
new file mode 100644
index 0000000000000000000000000000000000000000..f66225a1b3488254bd298f85fc38983c38a562f6
--- /dev/null
+++ b/CMU Advanced NLP 2024 (18) Knowledge and Language Models/metadata.json
@@ -0,0 +1,4 @@
+{
+    "url": "https://www.youtube.com/watch?v=IwEYCbdgJ9U",
+    "title": "CMU Advanced NLP 2024 (18) Knowledge and Language Models"
+}
\ No newline at end of file
diff --git a/CMU Advanced NLP 2024 (18) Knowledge and Language Models/transcript.srt b/CMU Advanced NLP 2024 (18) Knowledge and Language Models/transcript.srt
new file mode 100644
index 0000000000000000000000000000000000000000..d1d3375be628dad36e47ed181d35e4ccaeadd7f1
--- /dev/null
+++ b/CMU Advanced NLP 2024 (18) Knowledge and Language Models/transcript.srt
@@ -0,0 +1,6803 @@
+1
+00:00:00,120 --> 00:00:04,880
+everyone uh today I'd like to talk about
+
+2
+00:00:02,760 --> 00:00:07,399
+uh learning from knowledge bases uh
+
+3
+00:00:04,880 --> 00:00:11,440
+learning from and for knowledge bases
+
+4
+00:00:07,399 --> 00:00:14,799
+this is kind of a shift uh from a lot
+
+5
+00:00:11,440 --> 00:00:16,480
+of the stuff that we've done so far uh
+
+6
+00:00:14,799 --> 00:00:18,439
+and I'm going to be talking about like a
+
+7
+00:00:16,480 --> 00:00:20,480
+different information source some
+
+8
+00:00:18,439 --> 00:00:21,960
+relatively different algorithms compared
+
+9
+00:00:20,480 --> 00:00:26,080
+to the stuff that we talked about up
+
+10
+00:00:21,960 --> 00:00:28,880
+until this point so um you know it might
+
+11
+00:00:26,080 --> 00:00:32,360
+be uh interesting it might be different
+
+12
+00:00:28,880 --> 00:00:35,640
+so uh let's get started with
+
+13
+00:00:32,360 --> 00:00:37,360
+that so I'm going to be talking about
+
+14
+00:00:35,640 --> 00:00:40,000
+knowledge bases and knowledge bases are
+
+15
+00:00:37,360 --> 00:00:43,039
+basically structured databases of
+
+16
+00:00:40,000 --> 00:00:46,079
+knowledge and they can contain a lot of
+
+17
+00:00:43,039 --> 00:00:48,559
+things but most commonly when people are
+
+18
+00:00:46,079 --> 00:00:50,600
+talking about them they are talking
+
+19
+00:00:48,559 --> 00:00:53,160
+about relational knowledge bases that
+
+20
+00:00:50,600 --> 00:00:55,559
+include things like entities which are
+
+21
+00:00:53,160 --> 00:00:57,399
+nodes in a graph and relations which are
+
+22
+00:00:55,559 --> 00:01:00,239
+edges between
+
+23
+00:00:57,399 --> 00:01:02,079
+nodes and
+
+24
+00:01:00,239 --> 00:01:03,879
+I'll talk about some examples of
+
+25
+00:01:02,079 --> 00:01:05,479
+this in a little bit to make that a
+
+26
+00:01:03,879 --> 00:01:08,040
+little bit more concrete and then some
+
+27
+00:01:05,479 --> 00:01:11,240
+of the questions that we ask about these
+
+28
+00:01:08,040 --> 00:01:14,400
+are how can we learn to create and
+
+29
+00:01:11,240 --> 00:01:16,799
+expand knowledge bases with uh you know
+
+30
+00:01:14,400 --> 00:01:18,439
+neural-network-based methods and then
+
+31
+00:01:16,799 --> 00:01:20,200
+the second question is how can we learn
+
+32
+00:01:18,439 --> 00:01:22,600
+from the information in knowledge bases
+
+33
+00:01:20,200 --> 00:01:24,720
+to improve like neural network models or
+
+34
+00:01:22,600 --> 00:01:27,560
+uh use them in effective
+
+35
+00:01:24,720 --> 00:01:31,479
+ways and how can we use uh structured
+
+36
+00:01:27,560 --> 00:01:31,479
+knowledge to answer questions
+
+37
+00:01:32,200 --> 00:01:37,159
+so the first uh thing I'd like to talk
+
+38
+00:01:35,000 --> 00:01:40,960
+about a little bit is types of knowledge
+
+39
+00:01:37,159 --> 00:01:43,079
+bases and they come in several different
+
+40
+00:01:40,960 --> 00:01:46,119
+varieties the first one I'd like to talk
+
+41
+00:01:43,079 --> 00:01:48,560
+about is a very uh classical one called
+
+42
+00:01:46,119 --> 00:01:50,960
+WordNet has anyone actually ever used
+
+43
+00:01:48,560 --> 00:01:53,479
+WordNet
+
+44
+00:01:50,960 --> 00:01:55,520
+before I see at least one person raising
+
+45
+00:01:53,479 --> 00:01:57,640
+their hand so it
+
+46
+00:01:55,520 --> 00:02:00,119
+hasn't entirely disappeared has anyone
+
+47
+00:01:57,640 --> 00:02:03,240
+heard of WordNet before
+
+48
+00:02:00,119 --> 00:02:05,079
+okay more people um so basically
+
+49
+00:02:03,240 --> 00:02:06,960
+this used to be a really big thing in
+
+50
+00:02:05,079 --> 00:02:10,440
+natural language processing it's not so
+
+51
+00:02:06,960 --> 00:02:12,319
+much anymore um but I want to explain
+
+52
+00:02:10,440 --> 00:02:14,800
+about it because I want to explain why
+
+53
+00:02:12,319 --> 00:02:17,360
+this is maybe like less necessary to use
+
+54
+00:02:14,800 --> 00:02:19,599
+but actual knowledge bases are still
+
+55
+00:02:17,360 --> 00:02:23,160
+more necessary to
+
+56
+00:02:19,599 --> 00:02:26,280
+use and so WordNet is a large database
+
+57
+00:02:23,160 --> 00:02:29,560
+of words and specifically what it does
+
+58
+00:02:26,280 --> 00:02:32,720
+is each word or something they call a
+
+59
+00:02:29,560 --> 00:02:37,120
+synset is a node and then there are
+
+60
+00:02:32,720 --> 00:02:42,560
+relationships between nodes and the
+
+61
+00:02:37,120 --> 00:02:44,319
+nodes can correspond to nouns um and or
+
+62
+00:02:42,560 --> 00:02:45,920
+verbs or
+
+63
+00:02:44,319 --> 00:02:48,360
+adjectives
+
+64
+00:02:45,920 --> 00:02:49,959
+and nouns have different types of
+
+65
+00:02:48,360 --> 00:02:53,360
+relations between them so they have
+
+66
+00:02:49,959 --> 00:02:56,280
+things like an is-a relation so like a
+
+67
+00:02:53,360 --> 00:03:00,040
+hatchback is a type of car there are part-of
+
+68
+00:02:56,280 --> 00:03:02,840
+relations uh where a wheel is a part
+
+69
+00:03:00,040 --> 00:03:05,720
+of a car um and they also make
+
+70
+00:03:02,840 --> 00:03:09,799
+distinctions between types and instances
+
+71
+00:03:05,720 --> 00:03:12,400
+so like Joe Biden is an instance of a
+
+72
+00:03:09,799 --> 00:03:16,560
+president and president is the
+
+73
+00:03:12,400 --> 00:03:19,239
+type so um verb relations are ordered by
+
+74
+00:03:16,560 --> 00:03:22,680
+specificity so like communicate is more
+
+75
+00:03:19,239 --> 00:03:25,799
+broad than talk so talk is you know
+
+76
+00:03:22,680 --> 00:03:27,519
+generally a subclass of communicate and
+
+77
+00:03:25,799 --> 00:03:30,720
+then whisper is generally a subclass of
+
+78
+00:03:27,519 --> 00:03:33,159
+talk so it's ordered in this way
+
+79
+00:03:30,720 --> 00:03:35,920
+and then adjective relations are mostly
+
+80
+00:03:33,159 --> 00:03:37,720
+antonyms so like wet versus dry
+
+81
+00:03:35,920 --> 00:03:43,599
+and other things like
+
+82
+00:03:37,720 --> 00:03:47,080
+this um when I said synsets uh actually
+
+83
+00:03:43,599 --> 00:03:50,239
+each node is not a word despite the
+
+84
+00:03:47,080 --> 00:03:53,239
+name WordNet it's a set of words that
+
+85
+00:03:50,239 --> 00:03:56,200
+all have the same meaning so you might
+
+86
+00:03:53,239 --> 00:03:59,120
+have artifact and thing would both
+
+87
+00:03:56,200 --> 00:04:00,879
+correspond to this um node because they
+
+88
+00:03:59,120 --> 00:04:02,599
+both mean basically the same thing so
+
+89
+00:04:00,879 --> 00:04:04,159
+it's like sets of synonyms and this is
+
+90
+00:04:02,599 --> 00:04:07,599
+also important when we talk about other
+
+91
+00:04:04,159 --> 00:04:09,920
+types of uh knowledge bases as well and
+
+92
+00:04:07,599 --> 00:04:13,920
+so what was this used for um this was
+
+93
+00:04:09,920 --> 00:04:17,160
+used, for example, for trying to find all the cars
+
+95
+00:04:17,160 --> 00:04:24,440
+that were mentioned in like a in a large
+
+96
+00:04:22,400 --> 00:04:27,440
+set of text so you would go through you
+
+97
+00:04:24,440 --> 00:04:30,280
+would identify all
+
+98
+00:04:27,440 --> 00:04:32,120
+synsets or you would identify all words
+
+99
+00:04:30,280 --> 00:04:34,120
+that corresponded to these synsets and
+
+100
+00:04:32,120 --> 00:04:35,720
+then you would take a step up and find
+
+101
+00:04:34,120 --> 00:04:38,800
+motor car and you would know that like
+
+102
+00:04:35,720 --> 00:04:42,320
+all of those were mentions of cars so
+
+103
+00:04:38,800 --> 00:04:45,520
+like why don't we use WordNet very much
+
+104
+00:04:42,320 --> 00:04:45,520
+anymore any
+
+105
+00:04:49,160 --> 00:04:52,840
+ideas what would you do
+
+106
+00:04:51,080 --> 00:04:55,560
+instead if I told you find all the cars
+
+107
+00:04:52,840 --> 00:04:55,560
+in a big piece of
+
+108
+00:04:55,960 --> 00:05:00,160
+text yeah just do something with the
+
+109
+00:04:58,280 --> 00:05:02,880
+embedding just do something with
+
+110
+00:05:00,160 --> 00:05:04,560
+embeddings yeah so you might get um you
+
+111
+00:05:02,880 --> 00:05:06,720
+might get something and find all things
+
+112
+00:05:04,560 --> 00:05:10,360
+that were close in embedding space to a
+
+113
+00:05:06,720 --> 00:05:10,360
+car. what's another thing you might
+
+114
+00:05:11,560 --> 00:05:15,520
+do?
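(Editor's note: the "find all the cars" recipe just described, written with NLTK's WordNet interface as a sketch; it requires `nltk` plus its wordnet data, and "car.n.01" is the motor-car synset.)

    from nltk.corpus import wordnet as wn  # needs: nltk.download("wordnet")

    # The "motor car" synset: step up to it, then walk down to every hyponym.
    car = wn.synset("car.n.01")
    hyponyms = car.closure(lambda s: s.hyponyms())  # transitive closure

    # Every word form that counts as a kind of car.
    car_words = {lemma.name() for s in hyponyms for lemma in s.lemmas()}
    car_words |= {lemma.name() for lemma in car.lemmas()}

    print("hatchback" in car_words)  # True: a hatchback is a type of car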
+
+115
+00:05:13,639 --> 00:05:17,080
+like what I would do is I would download
+
+116
+00:05:15,520 --> 00:05:19,880
+Mistral and say does this sentence talk
+
+117
+00:05:17,080 --> 00:05:22,199
+about a car and it would say yes or no
+
+118
+00:05:19,880 --> 00:05:23,479
+or I would say find all the cars that
+
+119
+00:05:22,199 --> 00:05:25,319
+are mentioned in this sentence and
+
+120
+00:05:23,479 --> 00:05:28,720
+it would get them and sure that's like
+
+121
+00:05:25,319 --> 00:05:31,319
+expensive but it's really easy so um you
+
+122
+00:05:28,720 --> 00:05:32,919
+know there are other options that might
+
+123
+00:05:31,319 --> 00:05:36,720
+be less expensive but that could solve a
+
+124
+00:05:32,919 --> 00:05:39,520
+lot of the things so WordNet you know
+
+125
+00:05:36,720 --> 00:05:41,039
+started out
+
+126
+00:05:39,520 --> 00:05:42,600
+being very popular in
+
+127
+00:05:41,039 --> 00:05:44,039
+natural language processing but now it's
+
+128
+00:05:42,600 --> 00:05:45,440
+less so because we can get a lot of it
+
+129
+00:05:44,039 --> 00:05:47,639
+from embeddings we can get a lot of it
+
+130
+00:05:45,440 --> 00:05:50,520
+from language models
+
+131
+00:05:47,639 --> 00:05:52,759
+themselves um another thing that started
+
+132
+00:05:50,520 --> 00:05:55,759
+maybe before WordNet or even around the
+
+133
+00:05:52,759 --> 00:05:58,840
+same time as WordNet was this uh
+
+134
+00:05:55,759 --> 00:06:00,800
+database called Cyc and it was a manually
+
+135
+00:05:58,840 --> 00:06:04,160
+curated database attempting to encode
+
+136
+00:06:00,800 --> 00:06:06,280
+all common-sense knowledge um and the
+
+137
+00:06:04,160 --> 00:06:08,759
+project itself lasted for about 30 to 40
+
+138
+00:06:06,280 --> 00:06:11,840
+years it might even still
+
+139
+00:06:08,759 --> 00:06:13,319
+exist um and so they had this huge uh
+
+140
+00:06:11,840 --> 00:06:15,199
+like hierarchy of all the different
+
+141
+00:06:13,319 --> 00:06:17,680
+types of knowledge you could have it
+
+142
+00:06:15,199 --> 00:06:19,680
+encoded knowledge about like events and
+
+143
+00:06:17,680 --> 00:06:21,479
+like which events happened before other
+
+144
+00:06:19,680 --> 00:06:26,840
+events and all this other stuff like
+
+145
+00:06:21,479 --> 00:06:29,039
+this um but the problem with this is uh
+
+146
+00:06:26,840 --> 00:06:31,000
+this was just too ambitious basically it
+
+147
+00:06:29,039 --> 00:06:35,680
+was not possible to encode all of this
+
+148
+00:06:31,000 --> 00:06:37,440
+manually by hand so um like it
+
+149
+00:06:35,680 --> 00:06:38,840
+did get part of the way there but
+
+150
+00:06:37,440 --> 00:06:40,240
+that part of the way there was not
+
+151
+00:06:38,840 --> 00:06:42,560
+enough for it to be really useful in
+
+152
+00:06:40,240 --> 00:06:45,199
+practical systems so this sort
+
+153
+00:06:42,560 --> 00:06:47,800
+of method is not used as frequently
+
+154
+00:06:45,199 --> 00:06:51,240
+now
+
+155
+00:06:47,800 --> 00:06:56,000
+um a follow-up one
+
+156
+00:06:51,240 --> 00:06:57,479
+um which is its successor is now uh
+
+157
+00:06:56,000 --> 00:06:59,879
+the most widely used knowledge base is
+
+158
+00:06:57,479 --> 00:07:03,240
+something called DBpedia and the basic
+
+159
+00:06:59,879 --> 00:07:06,120
+idea behind DBpedia is that while Cyc
+
+160
+00:07:03,240 --> 00:07:07,840
+is too difficult because they had people
+
+161
+00:07:06,120 --> 00:07:12,400
+on the Cyc project who would go in and
+
+162
+00:07:07,840 --> 00:07:12,400
+curate rules um for
+
+163
+00:07:13,280 --> 00:07:19,080
+machines, Wikipedia basically they have a
+
+164
+00:07:17,160 --> 00:07:21,080
+very very large number of humans
+
+165
+00:07:19,080 --> 00:07:23,639
+curating this structured data about
+
+166
+00:07:21,080 --> 00:07:25,199
+entities in the world for humans they're
+
+167
+00:07:23,639 --> 00:07:27,879
+creating it for humans because then you
+
+168
+00:07:25,199 --> 00:07:29,599
+can put it on a Wikipedia page and you
+
+169
+00:07:27,879 --> 00:07:31,440
+can look and see it says Carnegie Mellon
+
+170
+00:07:29,599 --> 00:07:34,160
+University it has the former names of
+
+171
+00:07:31,440 --> 00:07:36,919
+Carnegie Mellon um it has the motto of
+
+172
+00:07:34,160 --> 00:07:38,759
+Carnegie Mellon the type of entity who it
+
+173
+00:07:36,919 --> 00:07:41,360
+was established by and when and other
+
+174
+00:07:38,759 --> 00:07:42,840
+stuff like that and because people are
+
+175
+00:07:41,360 --> 00:07:44,280
+no longer creating it for machines
+
+176
+00:07:42,840 --> 00:07:46,280
+they're creating it for humans people
+
+177
+00:07:44,280 --> 00:07:47,840
+are like motivated to do this so like
+
+178
+00:07:46,280 --> 00:07:49,960
+lots of people will do it for free so
+
+179
+00:07:47,840 --> 00:07:51,960
+you can actually get a reasonably sized
+
+180
+00:07:49,960 --> 00:07:53,639
+amount of data from this and actually
+
+181
+00:07:51,960 --> 00:07:55,720
+cover you know like most of the entities
+
+182
+00:07:53,639 --> 00:07:57,080
+in the world or not most of the entities
+
+183
+00:07:55,720 --> 00:08:00,120
+in the world but most of the notable
+
+184
+00:07:57,080 --> 00:08:03,319
+entities in uh the part of the world that
+
+185
+00:08:00,120 --> 00:08:03,319
+has high participation in
+
+186
+00:08:03,479 --> 00:08:09,800
+Wikipedia um so now the thing that a
+
+187
+00:08:08,039 --> 00:08:13,319
+lot of people use is something called
+
+188
+00:08:09,800 --> 00:08:14,919
+Wikidata this name is a
+
+189
+00:08:13,319 --> 00:08:17,039
+little bit of a misnomer because it's
+
+190
+00:08:14,919 --> 00:08:18,960
+not actually that closely connected to
+
+191
+00:08:17,039 --> 00:08:20,639
+Wikipedia they extract data from
+
+192
+00:08:18,960 --> 00:08:21,720
+Wikipedia but they also extract it from
+
+193
+00:08:20,639 --> 00:08:24,400
+lots of other
+
+194
+00:08:21,720 --> 00:08:27,520
+sources and this is a curated database
+
+195
+00:08:24,400 --> 00:08:30,360
+of entities um it's linked it's
+
+196
+00:08:27,520 --> 00:08:33,959
+extremely large scale and it's
+
+197
+00:08:30,360 --> 00:08:38,080
+multilingual and um this is an example
+
+198
+00:08:33,959 --> 00:08:39,680
+of the entry for Richard Feynman um where
+
+199
+00:08:38,080 --> 00:08:42,680
+people can go in and they can actually
+
+200
+00:08:39,680 --> 00:08:45,320
+like add information and stuff like that
+
+201
+00:08:42,680 --> 00:08:47,440
+um and you know it gives information
+
+202
+00:08:45,320 --> 00:08:50,959
+about education and all kinds of other
+
+203
+00:08:47,440 --> 00:08:52,600
+stuff so um for fun I can go to the
+
+204
+00:08:50,959 --> 00:08:55,040
+Wikidata
+
+205
+00:08:52,600 --> 00:08:59,360
+site does anyone have an entity they'd
+
+206
+00:08:55,040 --> 00:08:59,360
+like to know more about
+
+207
+00:09:01,640 --> 00:09:07,320
+any ideas maybe something that has
+
+208
+00:09:03,959 --> 00:09:07,320
+been in the news recently
+
+209
+00:09:10,680 --> 00:09:16,160
+or nobody brave enough to come up with
+
+210
+00:09:13,040 --> 00:09:18,360
+an entity? yeah
+
+211
+00:09:16,160 --> 00:09:20,640
+Mamba that's a good one I'm actually not
+
+212
+00:09:18,360 --> 00:09:23,800
+sure if that one's going to be in here
+
+213
+00:09:20,640 --> 00:09:27,720
+um there's lots of Mambas but I don't
+
+214
+00:09:23,800 --> 00:09:27,720
+know about that particular Mamba let me
+
+215
+00:09:27,839 --> 00:09:31,200
+see do you want to know about a
+
+216
+00:09:29,720 --> 00:09:33,399
+different Mamba do you want to know
+
+217
+00:09:31,200 --> 00:09:36,040
+about Mamba the research
+
+218
+00:09:33,399 --> 00:09:38,399
+group so Mamba is a research group it's
+
+219
+00:09:36,040 --> 00:09:41,800
+the Modeling and Analysis for Medicine
+
+220
+00:09:38,399 --> 00:09:44,800
+research group um it focuses on
+
+221
+00:09:41,800 --> 00:09:48,000
+mathematical biology and it's in the uh
+
+222
+00:09:44,800 --> 00:09:51,120
+in this National Center for Scientific
+
+223
+00:09:48,000 --> 00:09:52,519
+Research in France um the chairperson is
+
+224
+00:09:51,120 --> 00:09:55,360
+this person and stuff like that so you
+
+225
+00:09:52,519 --> 00:10:00,200
+can see it has all of these things so
+
+226
+00:09:55,360 --> 00:10:03,920
+Mamba, this Mamba, is a node in the graph
+
+227
+00:10:00,200 --> 00:10:06,839
+and then the edges are pointing um the
+
+228
+00:10:03,920 --> 00:10:09,440
+edges are labeled with like instance of
+
+229
+00:10:06,839 --> 00:10:11,200
+and then the next node is research group
+
+230
+00:10:09,440 --> 00:10:13,000
+so research group is like another node
+
+231
+00:10:11,200 --> 00:10:17,120
+in the graph and so you can click
+
+232
+00:10:13,000 --> 00:10:18,680
+through this and it has its own ID and
+
+233
+00:10:17,120 --> 00:10:21,200
+other things like
+
+234
+00:10:18,680 --> 00:10:22,839
+this also you'll notice that research
+
+235
+00:10:21,200 --> 00:10:24,160
+group is translated into lots of
+
+236
+00:10:22,839 --> 00:10:27,440
+different languages in the world so you
+
+237
+00:10:24,160 --> 00:10:30,120
+can use it multilingually and um
+
+238
+00:10:27,440 --> 00:10:33,880
+and other things like that
+
+239
+00:10:30,120 --> 00:10:37,000
+um even minor entities like Graham
+
+240
+00:10:33,880 --> 00:10:40,160
+Neubig are included in this and it has a
+
+241
+00:10:37,000 --> 00:10:42,240
+little bit of um like information about
+
+242
+00:10:40,160 --> 00:10:45,480
+me like my PhD was at Kyoto University
+
+243
+00:10:42,240 --> 00:10:45,480
+in 2012 I am a
+
+244
+00:10:45,600 --> 00:10:52,079
+human I am male uh and first name last
+
+245
+00:10:50,519 --> 00:10:53,720
+name university teacher computer
+
+246
+00:10:52,079 --> 00:10:56,279
+scientist natural language processing
+
+247
+00:10:53,720 --> 00:10:58,639
+this is all right um because this is
+
+248
+00:10:56,279 --> 00:11:00,240
+mostly hand curated it even has the IDs
+
+249
+00:10:58,639 --> 00:11:04,240
+of my
+
+250
+00:11:00,240 --> 00:11:06,519
+advisers um the reason why it has all of
+
+251
+00:11:04,240 --> 00:11:09,839
+this stuff actually is because like 15
+
+252
+00:11:06,519 --> 00:11:12,160
+years ago or like 10 years ago I entered
+
+253
+00:11:09,839 --> 00:11:14,399
+my uh my information into the
+
+254
+00:11:12,160 --> 00:11:16,240
+Mathematical Genealogy Project uh which
+
+255
+00:11:14,399 --> 00:11:18,880
+is this project about who your advisers
+
+256
+00:11:16,240 --> 00:11:20,680
+were because I wanted to see like who my
+mathematical like siblings were and
+
+258
+00:11:20,680 --> 00:11:24,519
+stuff like that and uh somehow they
+
+259
+00:11:22,800 --> 00:11:27,360
+managed to pull that out and keep this
+
+260
+00:11:24,519 --> 00:11:28,760
+like 10 years later so um basically
+
+261
+00:11:27,360 --> 00:11:30,519
+they're pulling information from like
+
+262
+00:11:28,760 --> 00:11:32,800
+many many different structured data
+
+263
+00:11:30,519 --> 00:11:34,160
+sources that they can use so uh they can
+
+264
+00:11:32,800 --> 00:11:37,480
+pull it in there I don't know where they
+
+265
+00:11:34,160 --> 00:11:39,440
+got that I'm human uh but maybe that was
+
+266
+00:11:37,480 --> 00:11:43,240
+inferred from some piece of data
+
+267
+00:11:39,440 --> 00:11:44,760
+somewhere online or something cool um
+
+268
+00:11:43,240 --> 00:11:46,839
+another good thing about this that
+
+269
+00:11:44,760 --> 00:11:52,680
+actually I didn't mention directly in
+
+270
+00:11:46,839 --> 00:11:52,680
+the um in the lecture notes or
+
+271
+00:11:54,680 --> 00:12:01,120
+slides is that there's a query language
+
+272
+00:11:57,360 --> 00:12:04,320
+for this yeah and a query language this
+
+273
+00:12:01,120 --> 00:12:06,839
+query language is called SPARQL so
+
+274
+00:12:04,320 --> 00:12:10,680
+there's SQL for querying relational
+
+275
+00:12:06,839 --> 00:12:14,399
+databases and SPARQL is for querying
+
+276
+00:12:10,680 --> 00:12:15,240
+these uh knowledge bases and let me see
+
+277
+00:12:14,399 --> 00:12:18,279
+if I
+
+278
+00:12:15,240 --> 00:12:22,560
+can I asked Chat
+
+279
+00:12:18,279 --> 00:12:24,560
+GPT to write me a SPARQL query to find
+
+280
+00:12:22,560 --> 00:12:26,839
+all presidents of Carnegie Mellon
+
+281
+00:12:24,560 --> 00:12:31,160
+University so let's see if ChatGPT is
+
+282
+00:12:26,839 --> 00:12:31,160
+capable of doing that um
+
+283
+00:12:35,639 --> 00:12:39,680
+okay that's a problem let me
+
+284
+00:12:41,279 --> 00:12:47,000
+see okay there's an error in there
+
+285
+00:12:43,880 --> 00:12:48,360
+but like uh, I
+
+286
+00:12:47,000 --> 00:12:50,160
+don't want to waste time in class like
+
+287
+00:12:48,360 --> 00:12:52,079
+finding a working query but basically
+
+288
+00:12:50,160 --> 00:12:53,399
+you can put in a query and it allows
+
+289
+00:12:52,079 --> 00:12:56,120
+you to do a lot of things that are
+
+290
+00:12:53,399 --> 00:13:00,519
+similar to what you can do in SQL so you
+
+291
+00:12:56,120 --> 00:13:02,720
+can find like all of the edges of nodes
+
+292
+00:13:00,519 --> 00:13:05,279
+that satisfy a particular relation so
+
+293
+00:13:02,720 --> 00:13:07,360
+you could say I want for Carnegie Mellon
+
+294
+00:13:05,279 --> 00:13:10,160
+University to find all things that
+
+295
+00:13:07,360 --> 00:13:13,519
+follow the like president-of relation
+
+296
+00:13:10,160 --> 00:13:14,959
+and that would give me all um you know
+
+297
+00:13:13,519 --> 00:13:18,680
+all presidents of Carnegie Mellon
+
+298
+00:13:14,959 --> 00:13:20,440
+University you can also like filter um
+
+299
+00:13:18,680 --> 00:13:22,160
+filter by their start date and end date
+
+300
+00:13:20,440 --> 00:13:24,120
+so find all of the presidents between a
+
+301
+00:13:22,160 --> 00:13:25,839
+certain time and another time or
+
+302
+00:13:24,120 --> 00:13:30,480
+things like
+
+303
+00:13:25,839 --> 00:13:34,199
+that.
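(Editor's note: a hedged sketch of the kind of SPARQL query being described, sent to the public Wikidata endpoint via the SPARQLWrapper package. QXXXXXX is a placeholder for Carnegie Mellon University's entity ID, and P488 ("chairperson") is an assumed choice for the relevant relation; check both on wikidata.org before relying on this.)

    from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

    query = """
    SELECT ?person ?personLabel WHERE {
      wd:QXXXXXX wdt:P488 ?person .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    """

    endpoint = SPARQLWrapper("https://query.wikidata.org/sparql")
    endpoint.setQuery(query)
    endpoint.setReturnFormat(JSON)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["personLabel"]["value"])

Adding a FILTER clause on the relation's start and end qualifiers gives the "between a certain time and another time" variant mentioned in the lecture.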
+
+304
+00:13:30,480 --> 00:13:36,600
+so this is good if you want to get
+like high-reliability data um
+
+305
+00:13:34,199 --> 00:13:39,839
+in a scalable way because like if I ask
+
+306
+00:13:36,600 --> 00:13:41,920
+ChatGPT like one of my favorite um one
+
+307
+00:13:39,839 --> 00:13:45,720
+of my favorite queries for ChatGPT is
+
+308
+00:13:41,920 --> 00:13:48,600
+like name all of the
+
+309
+00:13:45,720 --> 00:13:51,959
+presidents that were born uh east of the
+
+310
+00:13:48,600 --> 00:13:53,880
+Mississippi River um and I've never
+
+311
+00:13:51,959 --> 00:13:56,519
+successfully gotten ChatGPT to be able
+
+312
+00:13:53,880 --> 00:13:57,800
+to do this um because there's lots of
+
+313
+00:13:56,519 --> 00:13:59,560
+presidents who were born east of the
+
+314
+00:13:57,800 --> 00:14:02,320
+Mississippi River and it starts counting
+
+315
+00:13:59,560 --> 00:14:04,079
+them it can't distinguish what position
+
+316
+00:14:02,320 --> 00:14:05,639
+is east of the Mississippi and what
+
+317
+00:14:04,079 --> 00:14:09,120
+position is west of the
+
+318
+00:14:05,639 --> 00:14:11,279
+Mississippi but if you write a uh like a
+
+319
+00:14:09,120 --> 00:14:14,759
+SPARQL query it's not that hard to do
+
+320
+00:14:11,279 --> 00:14:16,480
+that so there are um you know there are
+
+321
+00:14:14,759 --> 00:14:18,639
+certain types of questions especially
+
+322
+00:14:16,480 --> 00:14:20,399
+information aggregation and complex
+
+323
+00:14:18,639 --> 00:14:22,839
+relations and stuff that uh language
+
+324
+00:14:20,399 --> 00:14:26,600
+models are not very good
+
+325
+00:14:22,839 --> 00:14:28,120
+at cool um so that's kind of an intro to
+
+326
+00:14:26,600 --> 00:14:31,240
+knowledge bases why you might want to
+
+327
+00:14:28,120 --> 00:14:33,759
+think about them any questions so far
+
+328
+00:14:31,240 --> 00:14:33,759
+for
+
+329
+00:14:34,759 --> 00:14:39,720
+discussion okay um I will move on next
+
+330
+00:14:38,320 --> 00:14:41,199
+so the next thing I'd like to talk about
+
+331
+00:14:39,720 --> 00:14:43,839
+is learning representations for
+
+332
+00:14:41,199 --> 00:14:45,519
+knowledge bases um so knowledge bases
+
+333
+00:14:43,839 --> 00:14:48,000
+are great but one problem is they're
+
+334
+00:14:45,519 --> 00:14:51,040
+like inherently
+
+335
+00:14:48,000 --> 00:14:55,040
+incomplete and even with extremely large
+
+336
+00:14:51,040 --> 00:14:58,279
+scale uh it becomes impossible to have
+
+337
+00:14:55,040 --> 00:15:00,360
+them be complete and the reason why is
+
+338
+00:14:58,279 --> 00:15:03,639
+uh for example in Freebase which
+
+339
+00:15:00,360 --> 00:15:05,480
+was the predecessor to Wikidata um 71%
+
+340
+00:15:03,639 --> 00:15:08,560
+of humans didn't have a date of
+
+341
+00:15:05,480 --> 00:15:10,560
+birth um and probably every human
+
+342
+00:15:08,560 --> 00:15:12,079
+actually has a date of birth right um
+
+343
+00:15:10,560 --> 00:15:15,880
+you know we're pretty much guaranteed
+
+344
+00:15:12,079 --> 00:15:17,639
+for that to be the case so the issue is
+
+345
+00:15:15,880 --> 00:15:19,160
+like for very famous entities you want
+
+346
+00:15:17,639 --> 00:15:21,040
+lots of detailed information like you
+
+347
+00:15:19,160 --> 00:15:24,000
+can know absolutely everything about Joe
+
+348
+00:15:21,040 --> 00:15:25,759
+Biden or Barack Obama but you know at
+
+349
+00:15:24,000 --> 00:15:26,880
+the same time for less major entities
+
+350
+00:15:25,759 --> 00:15:28,079
+you still want them in the knowledge
+
+351
+00:15:26,880 --> 00:15:30,079
+base but you're not going to be able to
+
+352
+00:15:28,079 --> 00:15:31,519
+get all that information or should you
+
+353
+00:15:30,079 --> 00:15:35,600
+for privacy
+
+354
+00:15:31,519 --> 00:15:36,680
+purposes and so the idea is um for
+
+355
+00:15:35,600 --> 00:15:38,079
+information that's written on the
+
+356
+00:15:36,680 --> 00:15:40,600
+internet somewhere can you perform
+
+357
+00:15:38,079 --> 00:15:42,759
+relation extraction which essentially
+
+358
+00:15:40,600 --> 00:15:44,600
+allows you to extract this information
+
+359
+00:15:42,759 --> 00:15:46,360
+and create your own knowledge bases and
+
+360
+00:15:44,600 --> 00:15:47,680
+stuff like this and this can also be
+
+361
+00:15:46,360 --> 00:15:50,079
+useful if you want to create it for like
+
+362
+00:15:47,680 --> 00:15:52,199
+a specialized domain or um or other
+
+363
+00:15:50,079 --> 00:15:55,000
+stuff like
+
+364
+00:15:52,199 --> 00:15:59,519
+that so there's a bunch of ways that
+
+365
+00:15:55,000 --> 00:16:03,079
+people do this um and one kind of
+
+366
+00:15:59,519 --> 00:16:06,120
+popular way that people have tried to do
+
+367
+00:16:03,079 --> 00:16:09,199
+relation extraction is through uh
+
+368
+00:16:06,120 --> 00:16:12,560
+leveraging consistency in embedding
+
+369
+00:16:09,199 --> 00:16:15,319
+space and so this is the most famous
+
+370
+00:16:12,560 --> 00:16:17,959
+example from word2vec uh what seems like
+
+371
+00:16:15,319 --> 00:16:21,880
+ages ago uh in
+
+372
+00:16:17,959 --> 00:16:23,920
+2013 and in the word2vec paper one of
+
+373
+00:16:21,880 --> 00:16:26,279
+the big you know exciting things was
+
+374
+00:16:23,920 --> 00:16:28,639
+essentially they demonstrated that
+
+375
+00:16:26,279 --> 00:16:30,120
+vectors in embedding space had kind of
+
+376
+00:16:28,639 --> 00:16:31,839
+
+
+377
+00:16:30,120 --> 00:16:33,160
+you know meaning and actually the
+
+378
+00:16:31,839 --> 00:16:34,600
+vectors in embedding space could
+
+379
+00:16:33,160 --> 00:16:37,639
+correspond to relations between
+
+380
+00:16:34,600 --> 00:16:39,480
+embeddings so like uh we would have man
+
+381
+00:16:37,639 --> 00:16:41,000
+pointing to woman in approximately the
+
+382
+00:16:39,480 --> 00:16:42,920
+same direction that we had uncle
+
+383
+00:16:41,000 --> 00:16:46,600
+pointing to aunt and king pointing to
+
+384
+00:16:42,920 --> 00:16:49,680
+queen and so um then you could do things
+
+385
+00:16:46,600 --> 00:16:51,440
+like you could take "kings", subtract out
+
+386
+00:16:49,680 --> 00:16:53,560
+the vector that corresponded to
+
+387
+00:16:51,440 --> 00:16:58,360
+plurality uh add the vector that
+
+388
+00:16:53,560 --> 00:17:00,839
+corresponded to um you know uh to going
+
+389
+00:16:58,360 --> 00:17:04,319
+from masculine to feminine words and
+
+390
+00:17:00,839 --> 00:17:05,559
+then um like re-add the vector for words that
+
+391
+00:17:04,319 --> 00:17:07,160
+were plural and you'd be able to
+
+392
+00:17:05,559 --> 00:17:09,439
+identify the plural by just knowing
+
+393
+00:17:07,160 --> 00:17:11,000
+these two uh vectors the plural of "queen"
+
+394
+00:17:09,439 --> 00:17:14,000
+by just knowing those two
+
+395
+00:17:11,000 --> 00:17:14,000
+vectors
+
+396
+00:17:14,160 --> 00:17:21,880
+um but it turns out that you can either
+
+397
+00:17:18,199 --> 00:17:21,880
+learn embeddings
+
+398
+00:17:22,720 --> 00:17:28,240
+from, like, uh, you can either learn
+
+399
+00:17:25,000 --> 00:17:30,400
+embeddings from text.
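(Editor's note: the analogy arithmetic just described, sketched with gensim's pretrained word2vec vectors. The pretrained GoogleNews model is a large download, so this is purely illustrative.)

    import gensim.downloader  # pip install gensim

    vectors = gensim.downloader.load("word2vec-google-news-300")

    # king - man + woman  ~=  queen: the offset encodes the
    # masculine-to-feminine relation.
    print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

    # The same trick with an explicit relation vector applied to a new word:
    offset = vectors["woman"] - vectors["man"]
    print(vectors.similar_by_vector(vectors["kings"] + offset, topn=3))  # ~ "queens"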
+400
+00:17:28,240 --> 00:17:32,039
+or you can use the fact that you have a
+
+401
+00:17:30,400 --> 00:17:34,880
+big knowledge base that was curated by
+
+402
+00:17:32,039 --> 00:17:36,120
+humans, like Wikidata, to improve the
+
+403
+00:17:34,880 --> 00:17:39,559
+embeddings of a neural model
+
+404
+00:17:36,120 --> 00:17:41,799
+itself and so another pretty large uh
+
+405
+00:17:39,559 --> 00:17:43,600
+research area that a lot of people have
+
+406
+00:17:41,799 --> 00:17:47,120
+focused on is how do you get good
+
+407
+00:17:43,600 --> 00:17:48,720
+embeddings of a knowledge graph and this
+
+408
+00:17:47,120 --> 00:17:50,600
+is important if you want to do any sort
+
+409
+00:17:48,720 --> 00:17:52,799
+of like knowledge graph search or other
+
+410
+00:17:50,600 --> 00:17:54,160
+things like this like for example one of
+
+411
+00:17:52,799 --> 00:17:56,799
+the really nice things about knowledge
+
+412
+00:17:54,160 --> 00:17:58,880
+graphs is they have information about a
+
+413
+00:17:56,799 --> 00:18:00,200
+whole bunch of really sparse entities
+
+414
+00:17:58,880 --> 00:18:03,240
+that aren't mentioned very much on the
+
+415
+00:18:00,200 --> 00:18:05,679
+internet for example and so because of
+
+416
+00:18:03,240 --> 00:18:07,440
+that you can um you can leverage the
+
+417
+00:18:05,679 --> 00:18:10,720
+knowledge graph structure together with
+
+418
+00:18:07,440 --> 00:18:10,720
+text to learn better embeddings
+
+419
+00:18:11,240 --> 00:18:18,520
+overall and so this particular paper is
+
+420
+00:18:15,280 --> 00:18:20,960
+one example of it um and the way they do
+
+421
+00:18:18,520 --> 00:18:23,280
+this is they express uh knowledge graph
+
+422
+00:18:20,960 --> 00:18:25,919
+triples as additive
+
+423
+00:18:23,280 --> 00:18:28,480
+transformations and they minimize the
+
+424
+00:18:25,919 --> 00:18:31,640
+distance uh of existing triples with a
+
+425
+00:18:28,480 --> 00:18:35,039
+margin-based loss so the way they do
+
+426
+00:18:31,640 --> 00:18:38,240
+this is they have the head um and the
+
+427
+00:18:35,039 --> 00:18:40,799
+tail and L is the vector corresponding
+
+428
+00:18:38,240 --> 00:18:42,679
+to like the link between the things that
+
+429
+00:18:40,799 --> 00:18:47,960
+corresponds to a
+
+430
+00:18:42,679 --> 00:18:52,159
+relation and so you go uh you have H and
+
+431
+00:18:47,960 --> 00:18:53,559
+T and here um like this is L but here
+
+432
+00:18:52,159 --> 00:18:55,640
+it's written as R because I got this
+
+433
+00:18:53,559 --> 00:18:58,120
+from a different paper and basically
+
+434
+00:18:55,640 --> 00:18:59,480
+you try to go from H to T um according
+
+435
+00:18:58,120 --> 00:19:00,919
+to the relation
+
+436
+00:18:59,480 --> 00:19:05,120
+uh vector
+
+437
+00:19:00,919 --> 00:19:07,200
+R and you use a hinge loss where um
+
+438
+00:19:05,120 --> 00:19:10,039
+for the hinge loss you have a hinge
+
+439
+00:19:07,200 --> 00:19:12,640
+parameter and then you try to upweight
+
+440
+00:19:10,039 --> 00:19:15,760
+the example of a true triple and
+
+441
+00:19:12,640 --> 00:19:17,960
+downweight the example of a false
+
+442
+00:19:15,760 --> 00:19:19,880
+triple so this could be one that was
+
+443
+00:19:17,960 --> 00:19:22,080
+like randomly sampled to be incorrect
+
+444
+00:19:19,880 --> 00:19:22,080
+for
+
+445
+00:19:23,760 --> 00:19:29,080
+example um one interesting thing about
+
+446
+00:19:26,880 --> 00:19:31,559
+knowledge graph embeddings is like a lot
+
+447
+00:19:29,080 --> 00:19:33,600
+of famous AI
+448
+00:19:31,559 --> 00:19:36,000
+Richard Socher for example is one of

+449
+00:19:33,600 --> 00:19:39,760
+them if you know he's the CEO of the

+450
+00:19:36,000 --> 00:19:44,320
+you.com search engine now

+451
+00:19:39,760 --> 00:19:46,679
+um and this was a first attempt at

+452
+00:19:44,320 --> 00:19:49,679
+predicting relations they basically

+453
+00:19:46,679 --> 00:19:55,400
+created an MLP that tries to predict

+454
+00:19:49,679 --> 00:19:58,880
+whether a relation exists so they have

+455
+00:19:55,400 --> 00:20:00,760
+a matrix for the left side of the

+456
+00:19:58,880 --> 00:20:03,320
+relation a matrix for the right side of

+457
+00:20:00,760 --> 00:20:05,080
+the relation and then they feed in the

+458
+00:20:03,320 --> 00:20:07,559
+embeddings of each of the entities in

+459
+00:20:05,080 --> 00:20:08,919
+the relation they have a nonlinearity

+460
+00:20:07,559 --> 00:20:11,799
+and then they have another vector that

+461
+00:20:08,919 --> 00:20:14,720
+tries to predict the probability

+462
+00:20:11,799 --> 00:20:16,679
+of the actual relation being correct

+463
+00:20:14,720 --> 00:20:18,960
+so you would run this through a sigmoid

+464
+00:20:16,679 --> 00:20:21,000
+and then if it was one the relation

+465
+00:20:18,960 --> 00:20:24,039
+was likely to exist if it was zero then

+466
+00:20:21,000 --> 00:20:25,480
+the relation was likely to not exist and

+467
+00:20:24,039 --> 00:20:27,799
+then they also proposed something called a

+468
+00:20:25,480 --> 00:20:31,480
+neural tensor network and this adds a

+469
+00:20:27,799 --> 00:20:34,000
+bilinear feature extractor and so

+470
+00:20:31,480 --> 00:20:37,440
+basically what this is saying is we have

+471
+00:20:34,000 --> 00:20:40,000
+the embedding here the embedding here we

+472
+00:20:37,440 --> 00:20:41,840
+have a matrix and then we calculate the

+473
+00:20:40,000 --> 00:20:43,080
+dot product between the embeddings after

+474
+00:20:41,840 --> 00:20:45,799
+transformation it looks a lot like

+475
+00:20:43,080 --> 00:20:47,720
+attention actually in a way because

+476
+00:20:45,799 --> 00:20:50,000
+we had bilinear attention so it's

+477
+00:20:47,720 --> 00:20:53,640
+similar to that as well and then we also

+478
+00:20:50,000 --> 00:20:56,840
+have the MLP so this part corresponds to

+479
+00:20:53,640 --> 00:21:00,320
+the MLP and then we have a bias

+480
+00:20:56,840 --> 00:21:02,200
+term and this is a powerful model but

+481
+00:21:00,320 --> 00:21:05,400
+it's a bit overparameterized so

+482
+00:21:02,200 --> 00:21:08,120
+actually later this kind of fell

+483
+00:21:05,400 --> 00:21:10,360
+out of favor toward more

+484
+00:21:08,120 --> 00:21:14,520
+simple models that were using kind

+485
+00:21:10,360 --> 00:21:14,520
+of just linear projections between the

+486
+00:21:17,600 --> 00:21:22,279
+two so there's a lot of

+487
+00:21:20,120 --> 00:21:25,320
+methods like this these methods are

+488
+00:21:22,279 --> 00:21:27,039
+basically assuming either that we have

+489
+00:21:25,320 --> 00:21:29,080
+knowledge graph

+490
+00:21:27,039 --> 00:21:30,799
+embeddings and we want to learn

+491
+00:21:29,080 --> 00:21:32,480
+relations or they're assuming that we

+492
+00:21:30,799 --> 00:21:34,320
+have no information about the knowledge

+493
+00:21:32,480 --> 00:21:36,840
+graph and want to learn those embeddings themselves
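+Here is a rough sketch of the neural tensor network scorer just described,
+combining the MLP features over the two entity embeddings with the bilinear
+slices; the dimensions and initialization are illustrative assumptions.

+import torch
+
+d, k = 100, 4                      # embedding size, number of tensor slices
+V = torch.randn(k, 2 * d) * 0.01   # MLP part: left and right matrices side by side
+W = torch.randn(k, d, d) * 0.01    # bilinear tensor: one d x d slice per feature
+u = torch.randn(k) * 0.01          # output vector
+b = torch.zeros(k)                 # bias term
+
+def ntn_score(e1, e2):
+    # Bilinear features e1^T W[i] e2, similar in spirit to bilinear attention.
+    bilinear = torch.einsum("d,kde,e->k", e1, W, e2)
+    # Standard MLP features over the concatenated entity embeddings.
+    mlp = V @ torch.cat([e1, e2])
+    # Sigmoid output: near 1 means the relation likely holds, near 0 means not.
+    return torch.sigmoid(u @ torch.tanh(bilinear + mlp + b))
+
+print(ntn_score(torch.randn(d), torch.randn(d)))

+In the original formulation a separate set of these parameters is kept for
+every relation type, which is part of why the model is so heavily parameterized.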
+494
+00:21:34,320 --> 00:21:40,039
+this approach has been used for both but

+495
+00:21:36,840 --> 00:21:42,400
+I'd say now it's probably most useful

+496
+00:21:40,039 --> 00:21:44,000
+for learning knowledge graph embeddings

+497
+00:21:42,400 --> 00:21:45,520
+if you want to do any sort of knowledge

+498
+00:21:44,000 --> 00:21:50,480
+graph based modeling which can be

+499
+00:21:45,520 --> 00:21:50,480
+useful

+500
+00:21:51,240 --> 00:21:55,919
+cool any questions about these

+501
+00:21:57,360 --> 00:22:01,679
+ones okay

+502
+00:21:59,520 --> 00:22:04,360
+next um actually this part might be a

+503
+00:22:01,679 --> 00:22:06,600
+little bit simpler than the

+504
+00:22:04,360 --> 00:22:09,000
+knowledge graph based approaches so

+505
+00:22:06,600 --> 00:22:10,960
+another method for relation extraction

+506
+00:22:09,000 --> 00:22:13,440
+is learning from text

+507
+00:22:10,960 --> 00:22:16,120
+directly

+508
+00:22:13,440 --> 00:22:19,080
+and the first question about this is how

+509
+00:22:16,120 --> 00:22:22,200
+do you get training data to learn

+510
+00:22:19,080 --> 00:22:24,480
+relation extraction

+511
+00:22:22,200 --> 00:22:26,720
+and so there was this very influential

+512
+00:22:24,480 --> 00:22:28,279
+paper on distant supervision for relation

+513
+00:22:26,720 --> 00:22:31,120
+extraction I would say it's almost one

+514
+00:22:28,279 --> 00:22:32,880
+of the first or certainly one of the

+515
+00:22:31,120 --> 00:22:34,559
+most influential papers on data

+516
+00:22:32,880 --> 00:22:35,960
+augmentation or synthetic data for

+517
+00:22:34,559 --> 00:22:38,400
+natural language

+518
+00:22:35,960 --> 00:22:40,440
+processing and basically the idea is you

+519
+00:22:38,400 --> 00:22:44,279
+already have a knowledge base that has

+520
+00:22:40,440 --> 00:22:47,440
+some entries in it like Wikidata and so

+521
+00:22:44,279 --> 00:22:50,919
+then given entity relation entity

+522
+00:22:47,440 --> 00:22:52,919
+triples can you extract all text that

+523
+00:22:50,919 --> 00:22:54,799
+matches this particular relation type

+524
+00:22:52,919 --> 00:22:56,480
+and use it to train a relation extractor

+525
+00:22:54,799 --> 00:22:59,640
+a supervised relation

+526
+00:22:56,480 --> 00:23:01,880
+extractor so the way this works

+527
+00:22:59,640 --> 00:23:04,039
+is as follows this is an old

+528
+00:23:01,880 --> 00:23:06,120
+paper so the examples are also old but

+529
+00:23:04,039 --> 00:23:08,039
+let's say we have Steven Spielberg

+530
+00:23:06,120 --> 00:23:10,159
+being the director of the film Saving

+531
+00:23:08,039 --> 00:23:12,840
+Private Ryan and that's included in our

+532
+00:23:10,159 --> 00:23:14,840
+knowledge base so what it would

+533
+00:23:12,840 --> 00:23:17,080
+do is it would find all sentences that

+534
+00:23:14,840 --> 00:23:19,400
+have Steven Spielberg and Saving Private

+535
+00:23:17,080 --> 00:23:22,080
+Ryan included in them and it would label

+536
+00:23:19,400 --> 00:23:24,159
+these as positive examples of that

+537
+00:23:22,080 --> 00:23:28,240
+relation so this

+538
+00:23:24,159 --> 00:23:30,760
+is in general often okay it

+539
+00:23:28,240 --> 00:23:34,480
+works reasonably well but the problem

+540
+00:23:30,760 --> 00:23:37,200
+with this is that there are also negative examples of this
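+Here is a minimal sketch of that distant supervision labeling step, pairing
+knowledge base triples with sentences that mention both entities; the toy
+knowledge base and corpus are illustrative, and as discussed next, the matches
+it produces can be noisy.

+# Toy knowledge base of (head, relation, tail) triples.
+kb = [("Steven Spielberg", "director_of", "Saving Private Ryan")]
+
+corpus = [
+    "Steven Spielberg directed Saving Private Ryan in 1998.",
+    "Steven Spielberg's film Saving Private Ryan won five Oscars.",  # noisy match
+]
+
+def distant_labels(kb, corpus):
+    """Label any sentence containing both entities as expressing the relation."""
+    data = []
+    for head, rel, tail in kb:
+        for sent in corpus:
+            if head in sent and tail in sent:
+                data.append((sent, head, tail, rel))  # may be a false positive
+    return data
+
+for example in distant_labels(kb, corpus):
+    print(example)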
+541
+00:23:34,480 --> 00:23:38,840
+so like for example

+542
+00:23:37,200 --> 00:23:40,480
+here I think the first one is kind of a

+543
+00:23:38,840 --> 00:23:43,240
+negative example for the director

+544
+00:23:40,480 --> 00:23:45,880
+relation because Steven Spielberg's film

+545
+00:23:43,240 --> 00:23:48,120
+Saving Private Ryan doesn't actually

+546
+00:23:45,880 --> 00:23:50,000
+tell you he's the director it just tells

+547
+00:23:48,120 --> 00:23:52,520
+you that he's somehow affiliated with it

+548
+00:23:50,000 --> 00:23:54,840
+he could be the writer or he could be

+549
+00:23:52,520 --> 00:23:57,679
+the actor or something else like that

+550
+00:23:54,840 --> 00:24:00,440
+so this is a nice way to create data for

+551
+00:23:57,679 --> 00:24:03,640
+basically free but at the same time

+552
+00:24:00,440 --> 00:24:06,159
+you can create noisy examples and

+553
+00:24:03,640 --> 00:24:06,159
+that can be a

+554
+00:24:07,159 --> 00:24:14,600
+problem so there's been a lot of work

+555
+00:24:11,400 --> 00:24:16,000
+on relation

+556
+00:24:14,600 --> 00:24:17,840
+classification with neural networks

+557
+00:24:16,000 --> 00:24:20,840
+there are a lot of different methods

+558
+00:24:17,840 --> 00:24:23,159
+for doing this most of them

+559
+00:24:20,840 --> 00:24:24,919
+work by extracting features and then

+560
+00:24:23,159 --> 00:24:27,039
+classifying somehow although there are

+561
+00:24:24,919 --> 00:24:29,960
+some large language model based

+562
+00:24:27,039 --> 00:24:33,120
+methods now um one thing about

+563
+00:24:29,960 --> 00:24:35,440
+relation extraction or really

+564
+00:24:33,120 --> 00:24:36,799
+information extraction in general is

+565
+00:24:35,440 --> 00:24:38,559
+that very often you want to run this

+566
+00:24:36,799 --> 00:24:40,200
+over a huge corpus you want to run

+567
+00:24:38,559 --> 00:24:42,320
+it over the whole internet or other

+568
+00:24:40,200 --> 00:24:45,000
+things like that so from that point of

+569
+00:24:42,320 --> 00:24:47,159
+view like I said I could just ask

+570
+00:24:45,000 --> 00:24:49,480
+Mistral to give me the answer about

+571
+00:24:47,159 --> 00:24:52,440
+whether cars are included in sentences

+572
+00:24:49,480 --> 00:24:55,120
+but if you want to run GPT-4 over

+573
+00:24:52,440 --> 00:24:56,799
+the whole internet that's a pretty big

+574
+00:24:55,120 --> 00:25:00,159
+budget and you might want to reconsider

+575
+00:24:56,799 --> 00:25:02,440
+that so there is also

+576
+00:25:00,159 --> 00:25:04,440
+some benefit in having cheap

+577
+00:25:02,440 --> 00:25:07,200
+and lightweight

+578
+00:25:04,440 --> 00:25:09,159
+methods so basically what this

+579
+00:25:07,200 --> 00:25:11,279
+particular paper did is it extracted

+580
+00:25:09,159 --> 00:25:12,760
+features and classified so it

+581
+00:25:11,279 --> 00:25:15,600
+extracted lexical features of the

+582
+00:25:12,760 --> 00:25:20,240
+entities themselves and features of the

+583
+00:25:15,600 --> 00:25:22,360
+whole span and the way most

+584
+00:25:20,240 --> 00:25:26,960
+modern methods for this do this is they

+585
+00:25:22,360 --> 00:25:29,399
+basically extract features from the

+586
+00:25:26,960 --> 00:25:31,679
+first part of the first entity the

+587
+00:25:29,399 --> 00:25:33,760
+second part of 
the the first entity the + +588 +00:25:31,679 --> 00:25:36,360 +first part of the second entity and the + +589 +00:25:33,760 --> 00:25:37,720 +last part of the uh second entity and + +590 +00:25:36,360 --> 00:25:39,600 +take all of those embeddings feed them + +591 +00:25:37,720 --> 00:25:41,440 +into like an MLP or something like that + +592 +00:25:39,600 --> 00:25:44,039 +and then make a prediction about whether + +593 +00:25:41,440 --> 00:25:45,760 +that relation exists so if you have an + +594 +00:25:44,039 --> 00:25:47,840 +embedding model this is relatively easy + +595 +00:25:45,760 --> 00:25:50,360 +to do you feed it through like uh + +596 +00:25:47,840 --> 00:25:51,919 +Roberta or you feed it through mistol + +597 +00:25:50,360 --> 00:25:54,559 +and get the embeddings for each of the + +598 +00:25:51,919 --> 00:25:55,840 +tokens and um and then you make a + +599 +00:25:54,559 --> 00:25:58,840 +prediction based on those four + +600 +00:25:55,840 --> 00:25:58,840 +embeddings + +601 +00:26:00,600 --> 00:26:04,840 +um the details of that are like not + +602 +00:26:03,520 --> 00:26:07,320 +super important unless you're going to + +603 +00:26:04,840 --> 00:26:09,279 +go in and implement it yourself so you + +604 +00:26:07,320 --> 00:26:10,919 +can um like if you're actually going to + +605 +00:26:09,279 --> 00:26:12,120 +be doing relation extraction obviously + +606 +00:26:10,919 --> 00:26:14,279 +the details are important but I'm + +607 +00:26:12,120 --> 00:26:16,000 +assuming that most people won't be uh + +608 +00:26:14,279 --> 00:26:19,720 +you know doing that as your final + +609 +00:26:16,000 --> 00:26:21,240 +project but um one really interesting + +610 +00:26:19,720 --> 00:26:22,919 +thing that is relevant even if you're + +611 +00:26:21,240 --> 00:26:26,360 +not doing relationship relation + +612 +00:26:22,919 --> 00:26:29,360 +extraction is how you can model noise + +613 +00:26:26,360 --> 00:26:32,600 +because this um as I said they're + +614 +00:26:29,360 --> 00:26:35,720 +creating lots of like semi noisy data + +615 +00:26:32,600 --> 00:26:38,919 +and a lot of the work in getting good + +616 +00:26:35,720 --> 00:26:40,360 +bottles for relation extraction has been + +617 +00:26:38,919 --> 00:26:41,799 +how do we deal with this distant + +618 +00:26:40,360 --> 00:26:43,799 +supervision noise and I'm just going to + +619 +00:26:41,799 --> 00:26:45,760 +give one example here but there's like a + +620 +00:26:43,799 --> 00:26:49,120 +series of papers after this that also + +621 +00:26:45,760 --> 00:26:50,600 +tried to do similar things so the idea + +622 +00:26:49,120 --> 00:26:53,600 +is that there's noise in the distant + +623 +00:26:50,600 --> 00:26:56,559 +supervision labels um and so we want to + +624 +00:26:53,600 --> 00:27:01,039 +model and mitigate that noise and the + +625 +00:26:56,559 --> 00:27:03,919 +way this paper does this is they have an + +626 +00:27:01,039 --> 00:27:06,679 +encoder and from the encoder you + +627 +00:27:03,919 --> 00:27:10,960 +calculate embeddings and make + +628 +00:27:06,679 --> 00:27:14,279 +predictions and so you have a small set + +629 +00:27:10,960 --> 00:27:16,080 +of like very high quality data and this + +630 +00:27:14,279 --> 00:27:17,760 +small set of very high quality data you + +631 +00:27:16,080 --> 00:27:19,880 +can basically trust that all of the data + +632 +00:27:17,760 --> 00:27:22,320 +is not noisy like maybe it's manually + +633 +00:27:19,880 --> 00:27:23,720 +annotated data and you have like 5,000 + +634 +00:27:22,320 --> 
00:27:25,000 +examples of it or something like that + +635 +00:27:23,720 --> 00:27:26,880 +and then separately from that you have + +636 +00:27:25,000 --> 00:27:28,440 +like 5 million examples of automatically + +637 +00:27:26,880 --> 00:27:30,799 +labeled data that might be good might + +638 +00:27:28,440 --> 00:27:32,679 +not be good and so what they do is + +639 +00:27:30,799 --> 00:27:34,200 +essentially at the beginning they take + +640 +00:27:32,679 --> 00:27:36,520 +this encoder get embeddings make + +641 +00:27:34,200 --> 00:27:38,000 +predictions over the high quality data + +642 +00:27:36,520 --> 00:27:40,320 +and then they have a separate noise + +643 +00:27:38,000 --> 00:27:43,440 +modeling layer where what this noise + +644 +00:27:40,320 --> 00:27:46,919 +modeling layer does is it has a + +645 +00:27:43,440 --> 00:27:50,039 +transition Matrix which says given that + +646 +00:27:46,919 --> 00:27:53,279 +this given that we made a particular + +647 +00:27:50,039 --> 00:27:55,159 +prediction over classes because this is + +648 +00:27:53,279 --> 00:27:59,919 +essentially a multiclass classification + +649 +00:27:55,159 --> 00:28:01,519 +problem they transform the + +650 +00:27:59,919 --> 00:28:03,159 +sorry I don't remember if they transform + +651 +00:28:01,519 --> 00:28:04,640 +the probabilities or the low Jets I + +652 +00:28:03,159 --> 00:28:07,320 +think it's the probabilities but they + +653 +00:28:04,640 --> 00:28:12,799 +transform the probabilities and get a + +654 +00:28:07,320 --> 00:28:14,720 +final uh distribution after noise and so + +655 +00:28:12,799 --> 00:28:17,399 +that means that you can basically smooth + +656 +00:28:14,720 --> 00:28:19,240 +out this uh distribution and account for + +657 +00:28:17,399 --> 00:28:20,880 +the fact that the labels may be noisy or + +658 +00:28:19,240 --> 00:28:24,399 +may may not be + +659 +00:28:20,880 --> 00:28:26,600 +noisy um then they add additional + +660 +00:28:24,399 --> 00:28:28,559 +normalization on this transition Matrix + +661 +00:28:26,600 --> 00:28:32,440 +using something called Trace normal + +662 +00:28:28,559 --> 00:28:35,840 +ization to move this Matrix closer to + +663 +00:28:32,440 --> 00:28:38,480 +the identity function which says that + +664 +00:28:35,840 --> 00:28:40,720 +the predictions are probably not wrong + +665 +00:28:38,480 --> 00:28:43,159 +all the time uh the predictions are + +666 +00:28:40,720 --> 00:28:45,360 +probably correct you know a lot of the + +667 +00:28:43,159 --> 00:28:46,600 +time they're not correct all the time uh + +668 +00:28:45,360 --> 00:28:49,720 +so then you have that Trace + +669 +00:28:46,600 --> 00:28:51,880 +normalization competing with um this uh + +670 +00:28:49,720 --> 00:28:55,440 +trying to give you like a more smooth + +671 +00:28:51,880 --> 00:28:58,760 +distribution and and reduce your uh L + +672 +00:28:55,440 --> 00:29:00,320 +like reduce your loss so um I I think + +673 +00:28:58,760 --> 00:29:02,559 +this is actually a pretty interesting + +674 +00:29:00,320 --> 00:29:04,480 +idea and it can be used not just for + +675 +00:29:02,559 --> 00:29:08,600 +relation extraction but also in cases + +676 +00:29:04,480 --> 00:29:08,600 +where um you might have noisy labels + +677 +00:29:08,799 --> 00:29:14,320 +overall um so are there any questions + +678 +00:29:12,360 --> 00:29:15,720 +about this or any of the things that are + +679 +00:29:14,320 --> 00:29:18,480 +going on + +680 +00:29:15,720 --> 00:29:20,279 +here um even if you're completely + +681 +00:29:18,480 --> 
00:29:21,960
+uninterested in relation extraction I'd

+682
+00:29:20,279 --> 00:29:23,720
+encourage you to think about like what

+683
+00:29:21,960 --> 00:29:26,159
+are

+684
+00:29:23,720 --> 00:29:27,360
+some examples of things that you are

+685
+00:29:26,159 --> 00:29:29,519
+interested in where you could

+686
+00:29:27,360 --> 00:29:31,840
+potentially get labels and how you could

+687
+00:29:29,519 --> 00:29:34,880
+form those there like that might be a

+688
+00:29:31,840 --> 00:29:34,880
+thing to think

+689
+00:29:35,679 --> 00:29:39,919
+about okay so this was a very brief

+690
+00:29:38,320 --> 00:29:42,679
+overview of how we create knowledge

+691
+00:29:39,919 --> 00:29:44,080
+bases from textual data or from

+692
+00:29:42,679 --> 00:29:47,159
+structured

+693
+00:29:44,080 --> 00:29:48,840
+knowledge graph data so now I'd like to

+694
+00:29:47,159 --> 00:29:51,519
+talk a little bit about how to use

+695
+00:29:48,840 --> 00:29:53,960
+knowledge bases to inform neural

+696
+00:29:51,519 --> 00:29:56,159
+models and there's a bunch of different

+697
+00:29:53,960 --> 00:29:59,519
+ways to do this

+698
+00:29:56,159 --> 00:30:02,600
+um the

+699
+00:29:59,519 --> 00:30:06,960
+first way is to

+700
+00:30:02,600 --> 00:30:09,840
+improve embeddings

+701
+00:30:06,960 --> 00:30:11,960
+with existing lexicons and this example

+702
+00:30:09,840 --> 00:30:14,679
+is using non-contextual embeddings

+703
+00:30:11,960 --> 00:30:16,240
+not the ones we get from neural

+704
+00:30:14,679 --> 00:30:17,919
+language models but ones we get from

+705
+00:30:16,240 --> 00:30:20,919
+just running an embedding model like

+706
+00:30:17,919 --> 00:30:22,960
+word2vec or something like this and what

+707
+00:30:20,919 --> 00:30:25,640
+they did in this paper is they

+708
+00:30:22,960 --> 00:30:27,600
+essentially retrofitted embeddings to

+709
+00:30:25,640 --> 00:30:30,840
+existing lexicons by doing a post-hoc

+710
+00:30:27,600 --> 00:30:34,080
+transformation of the embeddings so that they

+711
+00:30:30,840 --> 00:30:36,840
+matched the knowledge graph or

+712
+00:30:34,080 --> 00:30:39,080
+lexicon better and so the way they did

+713
+00:30:36,840 --> 00:30:41,880
+this is

+714
+00:30:39,080 --> 00:30:43,720
+they started out with pre-trained

+715
+00:30:41,880 --> 00:30:45,399
+embeddings and they had a double

+716
+00:30:43,720 --> 00:30:47,240
+objective of making the transformed

+717
+00:30:45,399 --> 00:30:49,120
+embeddings close to their neighbors and

+718
+00:30:47,240 --> 00:30:52,519
+close to the original

+719
+00:30:49,120 --> 00:30:58,840
+embedding and the way they did this is

+720
+00:30:52,519 --> 00:30:58,840
+they essentially had

+721
+00:30:59,799 --> 00:31:03,720
+this regularization term over here so

+722
+00:31:01,880 --> 00:31:06,200
+this regularization term is basically

+723
+00:31:03,720 --> 00:31:08,279
+saying I don't want you to move your

+724
+00:31:06,200 --> 00:31:09,360
+embeddings too far away from how they

+725
+00:31:08,279 --> 00:31:11,679
+were

+726
+00:31:09,360 --> 00:31:14,799
+initialized and then at the same time I

+727
+00:31:11,679 --> 00:31:17,279
+would like you to make these

+728
+00:31:14,799 --> 00:31:19,600
+embeddings closer to each other if they

+729
+00:31:17,279 --> 00:31:21,240
+are synonyms of each other so they did

+730
+00:31:19,600 --> 00:31:23,600
+this using WordNet
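+A minimal sketch of that retrofitting idea, in the spirit of the double
+objective just described: each vector is pulled toward its lexicon neighbors
+while staying anchored to its original value. The closed-form iterative update
+and the alpha and beta weights here are illustrative assumptions.

+import numpy as np
+
+def retrofit(emb, neighbors, alpha=1.0, beta=1.0, iters=10):
+    """emb: dict word -> vector; neighbors: dict word -> list of synonym words."""
+    new = {w: v.copy() for w, v in emb.items()}
+    for _ in range(iters):
+        for w, nbrs in neighbors.items():
+            nbrs = [n for n in nbrs if n in new]
+            if not nbrs:
+                continue
+            # Weighted average of the original vector (stay close to the
+            # initialization) and the current neighbor vectors (pull synonyms
+            # together); alpha and beta trade off the two objectives.
+            new[w] = (alpha * emb[w] + beta * sum(new[n] for n in nbrs)) / (
+                alpha + beta * len(nbrs))
+    return new
+
+rng = np.random.default_rng(0)
+emb = {w: rng.standard_normal(5) for w in ["happy", "glad", "joyful"]}
+out = retrofit(emb, {"happy": ["glad", "joyful"], "glad": ["happy"], "joyful": ["happy"]})
+print(out["happy"])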
+731
+00:31:21,240 --> 00:31:26,200
+they basically took the words that were

+732
+00:31:23,600 --> 00:31:28,679
+synonyms of each other in the same

+733
+00:31:26,200 --> 00:31:30,000
+synsets and they tried to regularize the

+734
+00:31:28,679 --> 00:31:32,120
+synonyms to be closer together but also

+735
+00:31:30,000 --> 00:31:33,639
+the embeddings to be closer to how they

+736
+00:31:32,120 --> 00:31:35,960
+started

+737
+00:31:33,639 --> 00:31:38,799
+out and there were also examples of

+738
+00:31:35,960 --> 00:31:40,720
+forcing antonyms away from each other so

+739
+00:31:38,799 --> 00:31:42,480
+this is a little bit

+740
+00:31:40,720 --> 00:31:44,799
+of an older work so it was working on

+741
+00:31:42,480 --> 00:31:47,600
+non-contextualized embeddings but we

+742
+00:31:44,799 --> 00:31:49,399
+could do something very similar for

+743
+00:31:47,600 --> 00:31:52,000
+more modern models and knowledge

+744
+00:31:49,399 --> 00:31:55,320
+graph embeddings for example so let's

+745
+00:31:52,000 --> 00:31:58,960
+say we had

+746
+00:31:55,320 --> 00:32:03,240
+a model that identifies

+747
+00:31:58,960 --> 00:32:06,600
+entities and then different examples of

+748
+00:32:03,240 --> 00:32:06,600
+those entities across different

+749
+00:32:07,159 --> 00:32:11,480
+contexts so let's go back to the Wiki

+750
+00:32:20,639 --> 00:32:26,840
+data page and so if we had lots of

+751
+00:32:23,960 --> 00:32:29,360
+examples of Joe Biden Joe Biden is

+752
+00:32:26,840 --> 00:32:35,159
+referred to in a number of ways like Joe

+753
+00:32:29,360 --> 00:32:44,440
+Biden Joseph Biden Joseph R Biden

+754
+00:32:35,159 --> 00:32:47,880
+JRB I guess or POTUS 46 and

+755
+00:32:44,440 --> 00:32:50,799
+so you could find different examples of

+756
+00:32:47,880 --> 00:32:52,799
+things that match these strings and

+757
+00:32:50,799 --> 00:32:55,360
+even do entity linking which I'll

+758
+00:32:52,799 --> 00:32:57,200
+talk about in a little bit and then

+759
+00:32:55,360 --> 00:32:58,760
+encourage the embeddings for all of these

+760
+00:32:57,200 --> 00:33:01,360
+different instances to be closer

+761
+00:32:58,760 --> 00:33:04,039
+together to make your model

+762
+00:33:01,360 --> 00:33:06,799
+distinguish them less and ensure that

+763
+00:33:04,039 --> 00:33:08,399
+they get closer embeddings and that

+764
+00:33:06,799 --> 00:33:11,639
+could improve question answering

+765
+00:33:08,399 --> 00:33:11,639
+lookup and other stuff like

+766
+00:33:12,960 --> 00:33:19,880
+that

+767
+00:33:14,919 --> 00:33:23,399
+cool um yeah I have a question about

+768
+00:33:19,880 --> 00:33:25,399
+this so what happens if you do subword

+769
+00:33:23,399 --> 00:33:28,000
+modeling and then you don't have

+770
+00:33:25,399 --> 00:33:30,440
+the embedding for that entire string

+771
+00:33:28,000 --> 00:33:32,320
+that is supposed to be close yeah what

+772
+00:33:30,440 --> 00:33:34,279
+happens if you do subword modeling and

+773
+00:33:32,320 --> 00:33:35,480
+you don't have the embedding you

+774
+00:33:34,279 --> 00:33:37,159
+don't have a single embedding that

+775
+00:33:35,480 --> 00:33:40,360
+corresponds to an entity so that's a

+776
+00:33:37,159 --> 00:33:42,559
+really good question let me

+777
+00:33:40,360 --> 00:33:44,240
+check I don't think I actually have

+778
+00:33:42,559 --> 
00:33:46,600 +these on the slide so I might have to + +779 +00:33:44,240 --> 00:33:46,600 +open a + +780 +00:33:53,639 --> 00:33:59,720 +paper yeah okay so there's a lot of + +781 +00:33:56,440 --> 00:33:59,720 +different ways to handle this + +782 +00:34:11,520 --> 00:34:18,079 +so there there's two papers um the first + +783 +00:34:14,879 --> 00:34:20,000 +paper is uh a really nice paper very + +784 +00:34:18,079 --> 00:34:22,359 +influential on the subject of + +785 +00:34:20,000 --> 00:34:25,359 +co-reference resolution and co-reference + +786 +00:34:22,359 --> 00:34:27,240 +resolution um is essentially trying to + +787 +00:34:25,359 --> 00:34:30,000 +identify when two spans correspond to + +788 +00:34:27,240 --> 00:34:32,320 +each other so like if I say Joe B Joe + +789 +00:34:30,000 --> 00:34:34,359 +Biden early in a document and then later + +790 +00:34:32,320 --> 00:34:35,480 +in a document it just says Biden we want + +791 +00:34:34,359 --> 00:34:38,839 +to know that those two things are + +792 +00:34:35,480 --> 00:34:40,919 +referring to each other and then um we + +793 +00:34:38,839 --> 00:34:42,839 +had a paper later where we generalized + +794 +00:34:40,919 --> 00:34:44,839 +this and applied you know very similar + +795 +00:34:42,839 --> 00:34:48,079 +methodology to like lots and lots of + +796 +00:34:44,839 --> 00:34:50,760 +different analysis tasks but I can um I + +797 +00:34:48,079 --> 00:34:53,839 +can show the beginning here and + +798 +00:34:50,760 --> 00:34:59,320 +basically the methodology that they use + +799 +00:34:53,839 --> 00:35:02,440 +here um is they add + +800 +00:34:59,320 --> 00:35:04,440 +a and this is specifically for modeling + +801 +00:35:02,440 --> 00:35:08,240 +spans and getting embeddings out of + +802 +00:35:04,440 --> 00:35:09,040 +spans of uh tokens and what they did is + +803 +00:35:08,240 --> 00:35:13,079 +they + +804 +00:35:09,040 --> 00:35:14,920 +essentially have a model where you take + +805 +00:35:13,079 --> 00:35:16,440 +the thing from the beginning the + +806 +00:35:14,920 --> 00:35:18,760 +embedding from the beginning of the span + +807 +00:35:16,440 --> 00:35:22,040 +the embedding from the end of the span + +808 +00:35:18,760 --> 00:35:24,280 +and the average embedding of all of the + +809 +00:35:22,040 --> 00:35:26,280 +embeddings in the span and that gives + +810 +00:35:24,280 --> 00:35:27,480 +you three vectors for any span right + +811 +00:35:26,280 --> 00:35:30,160 +because you can always get the beginning + +812 +00:35:27,480 --> 00:35:33,280 +that and in the mean and then based on + +813 +00:35:30,160 --> 00:35:36,560 +that they feed that through um like a + +814 +00:35:33,280 --> 00:35:37,800 +neural network and get a new edting so + +815 +00:35:36,560 --> 00:35:40,000 +they feed that through a transformation + +816 +00:35:37,800 --> 00:35:42,520 +and get a new edting and so that's the + +817 +00:35:40,000 --> 00:35:44,200 +method that they used and I think our + +818 +00:35:42,520 --> 00:35:46,640 +paper actually has a + +819 +00:35:44,200 --> 00:35:49,640 +better + +820 +00:35:46,640 --> 00:35:52,640 +um a better figure of how you can + +821 +00:35:49,640 --> 00:35:56,680 +actually use that actually maybe it + +822 +00:35:52,640 --> 00:35:58,160 +doesn't okay but anyway um yeah because + +823 +00:35:56,680 --> 00:36:00,240 +uh yeah here's the figure + +824 +00:35:58,160 --> 00:36:01,520 +so then you can use that for a number of + +825 +00:36:00,240 --> 00:36:03,040 +things you could use that to like look + +826 +00:36:01,520 
--> 00:36:06,359
+up something in a knowledge base you

+827
+00:36:03,040 --> 00:36:08,599
+could also use that to decide whether

+828
+00:36:06,359 --> 00:36:10,440
+two spans are coreferent by feeding in

+829
+00:36:08,599 --> 00:36:12,800
+the first span and the second span

+830
+00:36:10,440 --> 00:36:14,960
+and then predicting whether those two

+831
+00:36:12,800 --> 00:36:19,640
+spans correspond to each other or

+832
+00:36:14,960 --> 00:36:21,240
+not so this general idea of modeling

+833
+00:36:19,640 --> 00:36:22,960
+spans and then modeling relations

+834
+00:36:21,240 --> 00:36:24,520
+between the spans allows you to solve

+835
+00:36:22,960 --> 00:36:26,119
+lots of different tasks like part

+836
+00:36:24,520 --> 00:36:27,920
+of speech tagging or named entity

+837
+00:36:26,119 --> 00:36:30,319
+recognition or relation extraction or

+838
+00:36:27,920 --> 00:36:31,920
+other stuff like that so yeah

+839
+00:36:30,319 --> 00:36:34,040
+actually I realize now that I should

+840
+00:36:31,920 --> 00:36:35,079
+have probably talked about these in the

+841
+00:36:34,040 --> 00:36:36,560
+slides where I was talking about

+842
+00:36:35,079 --> 00:36:38,599
+modeling but that would be my

+843
+00:36:36,560 --> 00:36:42,319
+recommended way of doing

+844
+00:36:38,599 --> 00:36:42,319
+it cool any other

+845
+00:36:43,839 --> 00:36:49,480
+questions nice okay

+846
+00:36:46,880 --> 00:36:52,880
+um

+847
+00:36:49,480 --> 00:36:55,119
+so another question is how can we inject

+848
+00:36:52,880 --> 00:36:56,640
+knowledge into language models

+849
+00:36:55,119 --> 00:36:58,720
+there's a bunch of different ways to do

+850
+00:36:56,640 --> 00:37:03,079
+this um

+851
+00:36:58,720 --> 00:37:05,000
+one very easy way is to somehow look up

+852
+00:37:03,079 --> 00:37:09,640
+relevant knowledge in your knowledge

+853
+00:37:05,000 --> 00:37:09,640
+graph and oh

+854
+00:37:10,280 --> 00:37:15,440
+sorry I was presenting on my own screen

+855
+00:37:13,040 --> 00:37:18,240
+not the screen that everybody can see so

+856
+00:37:15,440 --> 00:37:22,000
+um to look up all of the knowledge in

+857
+00:37:18,240 --> 00:37:24,000
+a knowledge graph and somehow provide

+858
+00:37:22,000 --> 00:37:26,800
+it to the model one way you can provide

+859
+00:37:24,000 --> 00:37:28,720
+it to the model is through prompting

+860
+00:37:26,800 --> 00:37:32,400
+but the problem with prompting is

+861
+00:37:28,720 --> 00:37:33,920
+that you're not necessarily going to

+862
+00:37:32,400 --> 00:37:37,319
+be able

+863
+00:37:33,920 --> 00:37:41,359
+to utilize knowledge that is kind of

+864
+00:37:37,319 --> 00:37:43,920
+like minority knowledge because the

+865
+00:37:41,359 --> 00:37:47,560
+embeddings of the entities that you're

+866
+00:37:43,920 --> 00:37:49,440
+presenting may not be you know well

+867
+00:37:47,560 --> 00:37:51,839
+learned so

+868
+00:37:49,440 --> 00:37:53,200
+you're requiring essentially the model

+869
+00:37:51,839 --> 00:37:55,359
+to be able to generalize from the

+870
+00:37:53,200 --> 00:37:57,880
+knowledge you provide in

+871
+00:37:55,359 --> 00:38:00,839
+the prompt despite the fact that the

+872
+00:37:57,880 --> 00:38:02,240
+prompt mentions minor entities or other

+873
+00:38:00,839 --> 00:38:07,040
+things like that that are not as well

+874
+00:38:02,240 --> 00:38:10,400
+learned

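+Here is a minimal sketch of that prompting route: look up triples for the
+entities in a question and prepend them to the model input. The toy triple
+store and prompt format are illustrative assumptions; in practice the lookups
+might come from Wikidata.

+# Toy triple store; in practice this would be Wikidata lookups.
+triples = {
+    "Barack Obama": [("birth name", "Barack Hussein Obama II"),
+                     ("born", "August 4, 1961")],
+}
+
+def build_prompt(question, entities, triples):
+    """Prepend retrieved facts so the model need not rely on its parameters."""
+    facts = [f"{e} -- {r}: {v}" for e in entities for r, v in triples.get(e, [])]
+    return "Known facts:\n" + "\n".join(facts) + f"\n\nQuestion: {question}\nAnswer:"
+
+print(build_prompt("When was Barack Obama born?", ["Barack Obama"], triples))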
+875
+00:38:07,040 --> 00:38:13,440
+so as another method to handle this we

+876
+00:38:10,400 --> 00:38:15,599
+previously proposed a method that allows

+877
+00:38:13,440 --> 00:38:18,319
+you to essentially

+878
+00:38:15,599 --> 00:38:21,319
+instead of predicting directly

+879
+00:38:18,319 --> 00:38:24,920
+the words predict a tag

+880
+00:38:21,319 --> 00:38:27,200
+that says birth name or given name or

+881
+00:38:24,920 --> 00:38:31,480
+family name or something like that and

+882
+00:38:27,200 --> 00:38:32,839
+then post hoc the model will fill in

+883
+00:38:31,480 --> 00:38:36,720
+that birth

+884
+00:38:32,839 --> 00:38:39,400
+name text based on a knowledge base so

+885
+00:38:36,720 --> 00:38:41,079
+you know if you have a Wikipedia

+886
+00:38:39,400 --> 00:38:44,240
+article about Barack Obama that you're

+887
+00:38:41,079 --> 00:38:48,680
+trying to write it could predict

+888
+00:38:44,240 --> 00:38:52,040
+birth name comma born

+889
+00:38:48,680 --> 00:38:55,359
+in birth date and that's like a very

+890
+00:38:52,040 --> 00:38:56,880
+common thing in Wikipedia right so

+891
+00:38:55,359 --> 00:39:00,960
+because of that it can predict it very

+892
+00:38:56,880 --> 00:39:03,160
+consistently very formulaically and

+893
+00:39:00,960 --> 00:39:04,599
+that allows you to with high

+894
+00:39:03,160 --> 00:39:06,079
+confidence get something that makes

+895
+00:39:04,599 --> 00:39:08,599
+sense and is factual and reduce

+896
+00:39:06,079 --> 00:39:11,400
+hallucination and other stuff like that

+897
+00:39:08,599 --> 00:39:12,599
+so basically how could you inject

+898
+00:39:11,400 --> 00:39:14,280
+this into language models there are

+899
+00:39:12,599 --> 00:39:16,240
+multiple ways one is prompting that's

+900
+00:39:14,280 --> 00:39:18,160
+maybe the easier way another way is

+901
+00:39:16,240 --> 00:39:21,520
+through templatic generation like

+902
+00:39:18,160 --> 00:39:23,200
+this where you generate placeholders

+903
+00:39:21,520 --> 00:39:25,200
+for all the information you want to add

+904
+00:39:23,200 --> 00:39:26,480
+and then you add the information

+905
+00:39:25,200 --> 00:39:29,359
+directly from the knowledge base through

+906
+00:39:26,480 --> 00:39:29,359
+the placeholders

+907
+00:39:30,680 --> 00:39:36,800
+cool um there are details about this

+908
+00:39:34,240 --> 00:39:38,920
+in the paper like how we formulate a

+909
+00:39:36,800 --> 00:39:41,319
+training objective for something like

+910
+00:39:38,920 --> 00:39:43,480
+this and the difficulty in formulating a

+911
+00:39:41,319 --> 00:39:46,400
+training objective is that you need to

+912
+00:39:43,480 --> 00:39:48,280
+figure out when you want to replace

+913
+00:39:46,400 --> 00:39:49,720
+things so like you might not always want

+914
+00:39:48,280 --> 00:39:51,000
+to replace with birth name you might

+915
+00:39:49,720 --> 00:39:53,920
+want to replace with given name and

+916
+00:39:51,000 --> 00:39:55,839
+family name and we demonstrate that you

+917
+00:39:53,920 --> 00:39:58,400
+can figure out how to do this by

+918
+00:39:55,839 --> 00:40:00,960
+essentially marginalizing over the

+919
+00:39:58,400 --> 00:40:03,520
+various ways of doing this but

+920
+00:40:00,960 --> 00:40:05,880
+that's a more complex detail

+921
+00:40:03,520 --> 00:40:05,880
+that's in the

+922
+00:40:08,440 --> 
00:40:15,480 +paper another really interesting + +923 +00:40:11,000 --> 00:40:17,319 +question um that uh we this is a also a + +924 +00:40:15,480 --> 00:40:19,440 +paper that I was involved in from uh + +925 +00:40:17,319 --> 00:40:22,040 +four years ago but I feel like this is + +926 +00:40:19,440 --> 00:40:25,040 +not entirely solved even in like modern + +927 +00:40:22,040 --> 00:40:26,920 +rag systems uh today is how can we + +928 +00:40:25,040 --> 00:40:28,880 +reason over a lot of text that's + +929 +00:40:26,920 --> 00:40:32,440 +included in a knowledge + +930 +00:40:28,880 --> 00:40:35,839 +base um oh sorry reason over Text corpus + +931 +00:40:32,440 --> 00:40:40,480 +like we reason over knowledge bases + +932 +00:40:35,839 --> 00:40:43,280 +and basically uh what we did was we + +933 +00:40:40,480 --> 00:40:44,960 +answered questions using text corpora as + +934 +00:40:43,280 --> 00:40:48,680 +a traceable knowledge + +935 +00:40:44,960 --> 00:40:52,800 +bases and we did relevance matching over + +936 +00:40:48,680 --> 00:40:54,920 +mentions um and the way we did this is + +937 +00:40:52,800 --> 00:40:57,440 +we created mentioned + +938 +00:40:54,920 --> 00:40:59,480 +vectors and the mentioned vectors + +939 +00:40:57,440 --> 00:41:01,720 +vectors of all of the mentions in the + +940 +00:40:59,480 --> 00:41:04,920 +knowledge base of particular + +941 +00:41:01,720 --> 00:41:05,920 +entities um and then we retrieved + +942 +00:41:04,920 --> 00:41:09,599 +relevant + +943 +00:41:05,920 --> 00:41:13,440 +mentions um from pre-trained Models uh + +944 +00:41:09,599 --> 00:41:15,040 +so we we ran embeddings and generated uh + +945 +00:41:13,440 --> 00:41:16,000 +embeddings for each of the mentions in + +946 +00:41:15,040 --> 00:41:20,440 +the whole + +947 +00:41:16,000 --> 00:41:25,440 +Corpus and based on this let let + +948 +00:41:20,440 --> 00:41:29,119 +me find the place over here so based on + +949 +00:41:25,440 --> 00:41:32,720 +this we basically um encoded all of + +950 +00:41:29,119 --> 00:41:35,040 +these uh in here and then we had a dense + +951 +00:41:32,720 --> 00:41:37,359 +query vector and the dense query Vector + +952 +00:41:35,040 --> 00:41:41,640 +was specifically trained so that it + +953 +00:41:37,359 --> 00:41:44,280 +would be able to identify entity + +954 +00:41:41,640 --> 00:41:46,760 +mentions that answered the problem so if + +955 +00:41:44,280 --> 00:41:50,240 +we had like when was The Grateful Dead + +956 +00:41:46,760 --> 00:41:52,520 +and uh Bob Dylan album released uh we + +957 +00:41:50,240 --> 00:41:54,760 +would have Bob Dylan be one vector The + +958 +00:41:52,520 --> 00:41:56,560 +Grateful Dead be another vector and the + +959 +00:41:54,760 --> 00:41:58,200 +model would be specifically trained so + +960 +00:41:56,560 --> 00:42:00,040 +that when you took took the entity + +961 +00:41:58,200 --> 00:42:03,319 +embedding of this and matched it with an + +962 +00:42:00,040 --> 00:42:05,400 +entity embedding in this big Corpus of + +963 +00:42:03,319 --> 00:42:07,920 +encoded things here it would be most + +964 +00:42:05,400 --> 00:42:10,400 +likely to return relevant information to + +965 +00:42:07,920 --> 00:42:13,160 +answer these like entity relation + +966 +00:42:10,400 --> 00:42:14,680 +questions so then the question is how do + +967 +00:42:13,160 --> 00:42:18,040 +we train a model like this how do we + +968 +00:42:14,680 --> 00:42:20,280 +train like a dense uh embedding model so + +969 +00:42:18,040 --> 00:42:21,520 +that it gets relevant information 
for + +970 +00:42:20,280 --> 00:42:23,800 +answering + +971 +00:42:21,520 --> 00:42:26,920 +questions and basically the way we did + +972 +00:42:23,800 --> 00:42:29,280 +this was through week supervision uh + +973 +00:42:26,920 --> 00:42:31,640 +just like I talked about for relation + +974 +00:42:29,280 --> 00:42:33,599 +extraction in relation extraction we can + +975 +00:42:31,640 --> 00:42:35,680 +create weak supervision by taking a big + +976 +00:42:33,599 --> 00:42:37,960 +existing knowledge base and identifying + +977 +00:42:35,680 --> 00:42:40,920 +all of the sentences where the answer is + +978 +00:42:37,960 --> 00:42:43,319 +included and so what we did is we took + +979 +00:42:40,920 --> 00:42:45,880 +this big existing knowledge base and + +980 +00:42:43,319 --> 00:42:47,920 +said okay what are some of the relations + +981 +00:42:45,880 --> 00:42:49,800 +in the knowledge base one example of a + +982 +00:42:47,920 --> 00:42:51,559 +relation in the knowledge base is Steven + +983 +00:42:49,800 --> 00:42:54,359 +Spielberg is the director of Saving + +984 +00:42:51,559 --> 00:42:57,319 +Private Ryan so we created questions + +985 +00:42:54,359 --> 00:42:59,119 +that said um + +986 +00:42:57,319 --> 00:43:01,079 +was the director of Saving Private Ryan + +987 +00:42:59,119 --> 00:43:03,920 +we can create those with templates uh + +988 +00:43:01,079 --> 00:43:06,359 +easily for many different relations and + +989 +00:43:03,920 --> 00:43:09,480 +then we took the embedding for Saving + +990 +00:43:06,359 --> 00:43:10,760 +Private Ryan in that question and we + +991 +00:43:09,480 --> 00:43:14,200 +tried to + +992 +00:43:10,760 --> 00:43:17,119 +upweight all of the Saving Private Ryan + +993 +00:43:14,200 --> 00:43:19,680 +embeddings over all of Wikipedia where + +994 +00:43:17,119 --> 00:43:23,160 +Steven Spielberg cooccurred in that + +995 +00:43:19,680 --> 00:43:25,640 +sentence so that tries to match um you + +996 +00:43:23,160 --> 00:43:27,079 +know artificially created questions with + +997 +00:43:25,640 --> 00:43:29,040 +sentences that would be the answer + +998 +00:43:27,079 --> 00:43:31,040 +answer to that question and so that + +999 +00:43:29,040 --> 00:43:32,480 +gives you like supervision it gives you + +1000 +00:43:31,040 --> 00:43:35,079 +a lot of data to train over it gives you + +1001 +00:43:32,480 --> 00:43:38,920 +a good model so that that allowed us to + +1002 +00:43:35,079 --> 00:43:41,319 +learn this model well so um this is one + +1003 +00:43:38,920 --> 00:43:43,160 +example of how you can do like rag spe + +1004 +00:43:41,319 --> 00:43:46,200 +specifically like informed by knowledge + +1005 +00:43:43,160 --> 00:43:46,200 +bases and stuff like + +1006 +00:43:47,280 --> 00:43:52,160 +that um any any questions about this + +1007 +00:43:53,480 --> 00:43:57,680 +or + +1008 +00:43:55,079 --> 00:44:00,079 +okay so another thing that I I'd like to + +1009 +00:43:57,680 --> 00:44:03,599 +go into is uh something we call schema + +1010 +00:44:00,079 --> 00:44:06,240 +free extraction and so if I go back to + +1011 +00:44:03,599 --> 00:44:09,960 +the wiki Data + +1012 +00:44:06,240 --> 00:44:10,760 +Page um Wiki data has something we call + +1013 +00:44:09,960 --> 00:44:13,599 +a + +1014 +00:44:10,760 --> 00:44:16,880 +schema and the schema is basically like + +1015 +00:44:13,599 --> 00:44:19,640 +what are the relations that are included + +1016 +00:44:16,880 --> 00:44:21,000 +in the database so one of the relations + +1017 +00:44:19,640 --> 00:44:25,079 +that's included in the 
databas is + +1018 +00:44:21,000 --> 00:44:25,079 +instance of I guess also + +1019 +00:44:25,200 --> 00:44:29,040 +image lots of images + +1020 +00:44:29,079 --> 00:44:33,880 +um + +1021 +00:44:30,440 --> 00:44:35,680 +signature uh sex or gender country of + +1022 +00:44:33,880 --> 00:44:38,319 +citizenship and these relations are like + +1023 +00:44:35,680 --> 00:44:41,079 +decided a priori by the people who + +1024 +00:44:38,319 --> 00:44:43,200 +created Wiki data um and there's lots + +1025 +00:44:41,079 --> 00:44:45,880 +and lots of them but that doesn't + +1026 +00:44:43,200 --> 00:44:48,880 +necessarily mean + +1027 +00:44:45,880 --> 00:44:50,400 +that like similarly to the problem of + +1028 +00:44:48,880 --> 00:44:51,839 +not having all of the entities we can't + +1029 +00:44:50,400 --> 00:44:55,119 +have all of the relations and just to + +1030 +00:44:51,839 --> 00:44:57,280 +give one example I was um in preparation + +1031 +00:44:55,119 --> 00:44:59,680 +for our large language models lecture I + +1032 +00:44:57,280 --> 00:45:02,640 +actually created some structured data + +1033 +00:44:59,680 --> 00:45:04,319 +about large language models and some of + +1034 +00:45:02,640 --> 00:45:06,119 +the instru the structured data about + +1035 +00:45:04,319 --> 00:45:09,319 +large language models that I created was + +1036 +00:45:06,119 --> 00:45:11,440 +like what is the variety of positional + +1037 +00:45:09,319 --> 00:45:13,079 +embedding that they're using or + +1038 +00:45:11,440 --> 00:45:15,800 +positional embedding variety and + +1039 +00:45:13,079 --> 00:45:18,720 +positional embedding variety is not in + +1040 +00:45:15,800 --> 00:45:20,359 +Wiki data I think um I'd be surprised if + +1041 +00:45:18,720 --> 00:45:23,200 +it was in Wiki data but I think it's not + +1042 +00:45:20,359 --> 00:45:25,760 +in Wiki data um so like as you go down + +1043 +00:45:23,200 --> 00:45:27,760 +to like more esoteric Concepts or like + +1044 +00:45:25,760 --> 00:45:29,599 +specialized domains or stuff like that + +1045 +00:45:27,760 --> 00:45:31,359 +you're almost always guaranteed to not + +1046 +00:45:29,599 --> 00:45:34,040 +you know have all the entities you need + +1047 +00:45:31,359 --> 00:45:36,680 +or not have all the relations you need + +1048 +00:45:34,040 --> 00:45:38,160 +so that's the problem that schema free + +1049 +00:45:36,680 --> 00:45:39,920 +extraction is trying to solve it's + +1050 +00:45:38,160 --> 00:45:41,680 +trying to figure out how we can like + +1051 +00:45:39,920 --> 00:45:45,920 +jointly figure out the schema together + +1052 +00:45:41,680 --> 00:45:45,920 +with uh the information you want to + +1053 +00:45:48,480 --> 00:45:54,040 +extract and the um the most famous + +1054 +00:45:52,319 --> 00:45:55,599 +example of this is something called open + +1055 +00:45:54,040 --> 00:45:57,200 +information extraction in open + +1056 +00:45:55,599 --> 00:46:01,160 +information extraction basically what + +1057 +00:45:57,200 --> 00:46:04,040 +it's saying is um we don't need a schema + +1058 +00:46:01,160 --> 00:46:06,359 +uh there's no there's no schema um the + +1059 +00:46:04,040 --> 00:46:08,720 +only schema that we have is the actual + +1060 +00:46:06,359 --> 00:46:12,200 +text in the sentences that we're + +1061 +00:46:08,720 --> 00:46:14,520 +referring to um the entities so if we + +1062 +00:46:12,200 --> 00:46:16,040 +have United United has a Hub in Chicago + +1063 +00:46:14,520 --> 00:46:17,359 +which is the headquarters of United + +1064 +00:46:16,040 --> 00:46:21,200 
+Continental + +1065 +00:46:17,359 --> 00:46:25,880 +Holdings um the relation is literally + +1066 +00:46:21,200 --> 00:46:29,359 +has a Hub in um that that's the relation + +1067 +00:46:25,880 --> 00:46:33,359 +um and then for this we have Chicago is + +1068 +00:46:29,359 --> 00:46:35,559 +the headquarters of um but the problem + +1069 +00:46:33,359 --> 00:46:37,520 +with this uh is that this cannot + +1070 +00:46:35,559 --> 00:46:40,359 +abstract away so if we had another + +1071 +00:46:37,520 --> 00:46:42,000 +sentence that said Chicago or United + +1072 +00:46:40,359 --> 00:46:44,319 +Continental Holdings has its + +1073 +00:46:42,000 --> 00:46:45,720 +headquarters in Chicago that would be + +1074 +00:46:44,319 --> 00:46:49,800 +treated as completely different you + +1075 +00:46:45,720 --> 00:46:49,800 +wouldn't be able to like group those two + +1076 +00:46:51,119 --> 00:46:57,720 +together so um in open information + +1077 +00:46:55,000 --> 00:47:00,079 +extraction actually a lot of the methods + +1078 +00:46:57,720 --> 00:47:02,800 +this is one of the few things where + +1079 +00:47:00,079 --> 00:47:05,480 +people still use rule-based systems as + +1080 +00:47:02,800 --> 00:47:07,640 +kind of like uh you know almost + +1081 +00:47:05,480 --> 00:47:09,319 +state-of-the-art systems but basically + +1082 +00:47:07,640 --> 00:47:11,559 +the reason why you're able to do this is + +1083 +00:47:09,319 --> 00:47:14,440 +it's not actually that hard to extract + +1084 +00:47:11,559 --> 00:47:16,839 +kind of the relevant strings between uh + +1085 +00:47:14,440 --> 00:47:19,599 +two entities and so the both the + +1086 +00:47:16,839 --> 00:47:21,359 +Precision and recall are pretty high and + +1087 +00:47:19,599 --> 00:47:24,079 +another reason why people use rule-based + +1088 +00:47:21,359 --> 00:47:25,760 +systems is because they um like you want + +1089 +00:47:24,079 --> 00:47:27,440 +to run it over the whole web and running + +1090 +00:47:25,760 --> 00:47:29,079 +a neural model over the whole web is + +1091 +00:47:27,440 --> 00:47:32,000 +expensive so you can use a role-based + +1092 +00:47:29,079 --> 00:47:35,319 +model so some examples of this include + +1093 +00:47:32,000 --> 00:47:37,640 +text Runner and Reverb um the basic + +1094 +00:47:35,319 --> 00:47:41,000 +ideas behind them is that you use a + +1095 +00:47:37,640 --> 00:47:43,720 +parser to extract um to do a syntactic + +1096 +00:47:41,000 --> 00:47:45,760 +analysis of the sentence um in extract + +1097 +00:47:43,720 --> 00:47:47,640 +during according to rules so for example + +1098 +00:47:45,760 --> 00:47:50,160 +the relation must contain a + +1099 +00:47:47,640 --> 00:47:52,720 +predicate um the subject and object must + +1100 +00:47:50,160 --> 00:47:56,040 +be noun phrases other things like + +1101 +00:47:52,720 --> 00:47:57,640 +this um and then what they did later is + +1102 +00:47:56,040 --> 00:47:59,240 +what they did in this this paper + +1103 +00:47:57,640 --> 00:48:00,800 +arguably this is maybe no longer + +1104 +00:47:59,240 --> 00:48:02,280 +necessary with the compute power we have + +1105 +00:48:00,800 --> 00:48:04,000 +now but they trained an even faster + +1106 +00:48:02,280 --> 00:48:06,960 +model to extract over large amounts of + +1107 +00:48:04,000 --> 00:48:08,720 +data so they basically um use this as a + +1108 +00:48:06,960 --> 00:48:10,599 +su weak supervision and then train a + +1109 +00:48:08,720 --> 00:48:12,160 +model that could do it even faster with + +1110 +00:48:10,599 --> 00:48:14,680 +the 
sequence base + +1111 +00:48:12,160 --> 00:48:18,119 +model + +1112 +00:48:14,680 --> 00:48:19,880 +um another thing that they did was um + +1113 +00:48:18,119 --> 00:48:22,280 +they aggregated multiple pieces of + +1114 +00:48:19,880 --> 00:48:24,480 +evidence heris to find common and + +1115 +00:48:22,280 --> 00:48:28,760 +therefore potentially reliable + +1116 +00:48:24,480 --> 00:48:28,760 +extractions so like + +1117 +00:48:29,800 --> 00:48:36,960 +any piece of text on the internet like + +1118 +00:48:31,559 --> 00:48:40,200 +could be a lie right so um you know + +1119 +00:48:36,960 --> 00:48:43,400 +if I I might write on my blog United has + +1120 +00:48:40,200 --> 00:48:45,119 +a Hub in like Denver or on the other + +1121 +00:48:43,400 --> 00:48:48,240 +hand + +1122 +00:48:45,119 --> 00:48:50,839 +um wait a set + +1123 +00:48:48,240 --> 00:48:52,680 +right some something has a Hub in Denver + +1124 +00:48:50,839 --> 00:48:54,960 +but United has a Hub in Pittsburgh is + +1125 +00:48:52,680 --> 00:48:58,040 +definitely wrong so let's uh let's go + +1126 +00:48:54,960 --> 00:49:00,000 +with that um uh so somebody could write + +1127 +00:48:58,040 --> 00:49:02,359 +that on the internet and in fact because + +1128 +00:49:00,000 --> 00:49:06,440 +I just said it it's probably in YouTube + +1129 +00:49:02,359 --> 00:49:09,119 +comments somewhere but um uh + +1130 +00:49:06,440 --> 00:49:10,760 +like any any piece of information on the + +1131 +00:49:09,119 --> 00:49:13,079 +internet could be wrong so basically + +1132 +00:49:10,760 --> 00:49:16,680 +they had um heuristic methods to filter + +1133 +00:49:13,079 --> 00:49:19,559 +these out and usually these were + +1134 +00:49:16,680 --> 00:49:21,559 +frequency based so it's like um if both + +1135 +00:49:19,559 --> 00:49:23,520 +United and Pittsburgh are very common + +1136 +00:49:21,559 --> 00:49:26,000 +but it's very rare for somebody to says + +1137 +00:49:23,520 --> 00:49:27,799 +say United has a Hub in Pittsburgh then + +1138 +00:49:26,000 --> 00:49:29,200 +that means it's statistically unlikely + +1139 +00:49:27,799 --> 00:49:30,799 +for this to be correct because if it + +1140 +00:49:29,200 --> 00:49:33,280 +were correct we'd expect to see it much + +1141 +00:49:30,799 --> 00:49:36,799 +more frequently so um those were the + +1142 +00:49:33,280 --> 00:49:36,799 +kind of things that they they did + +1143 +00:49:37,520 --> 00:49:44,440 +here there's also some neural models for + +1144 +00:49:40,400 --> 00:49:46,839 +open IE um I I think these are uh used + +1145 +00:49:44,440 --> 00:49:48,440 +maybe a little bit less often um but + +1146 +00:49:46,839 --> 00:49:52,559 +basically heuristics are still not + +1147 +00:49:48,440 --> 00:49:55,280 +perfect and so what they did the problem + +1148 +00:49:52,559 --> 00:49:56,720 +with um like not relying on heuristics + +1149 +00:49:55,280 --> 00:49:58,880 +is you need to get training data from + +1150 +00:49:56,720 --> 00:50:01,880 +somewhere so there's a rather clever + +1151 +00:49:58,880 --> 00:50:03,599 +paper um and again if you're not + +1152 +00:50:01,880 --> 00:50:05,119 +interested in relation extraction in + +1153 +00:50:03,599 --> 00:50:07,559 +particular I think this is one thing + +1154 +00:50:05,119 --> 00:50:10,000 +that's still worth paying attention to + +1155 +00:50:07,559 --> 00:50:12,680 +um which is + +1156 +00:50:10,000 --> 00:50:14,559 +they demonstrated that it's possible to + +1157 +00:50:12,680 --> 00:50:16,319 +create relatively large data sets by + +1158 
+00:50:14,559 --> 00:50:18,160 +asking people simple + +1159 +00:50:16,319 --> 00:50:21,440 +questions + +1160 +00:50:18,160 --> 00:50:24,480 +and in particular they wanted to + +1161 +00:50:21,440 --> 00:50:27,119 +get relation extraction data sets that + +1162 +00:50:24,480 --> 00:50:30,799 +are like um + +1163 +00:50:27,119 --> 00:50:34,200 +who finished something like UCD finished + +1164 +00:50:30,799 --> 00:50:37,760 +the two 2006 championships and if you + +1165 +00:50:34,200 --> 00:50:40,720 +ask people like okay select this span um + +1166 +00:50:37,760 --> 00:50:44,559 +select the entity span the relations + +1167 +00:50:40,720 --> 00:50:46,160 +span and the um in the second entity the + +1168 +00:50:44,559 --> 00:50:49,079 +head entity the relation and the tail + +1169 +00:50:46,160 --> 00:50:51,839 +entity select it on this interface and + +1170 +00:50:49,079 --> 00:50:54,200 +then uh tell me is it this relation or + +1171 +00:50:51,839 --> 00:50:55,640 +this relation or this relation that's + +1172 +00:50:54,200 --> 00:50:58,160 +actually pretty hard and getting like + +1173 +00:50:55,640 --> 00:51:01,280 +crowd workers to start learning how to + +1174 +00:50:58,160 --> 00:51:03,280 +do that task is a bit tricky and it + +1175 +00:51:01,280 --> 00:51:06,400 +takes some you know it takes some time + +1176 +00:51:03,280 --> 00:51:07,799 +to get them onboarded basically um but + +1177 +00:51:06,400 --> 00:51:09,760 +basically what they said is instead + +1178 +00:51:07,799 --> 00:51:11,359 +we'll just ask them questions where the + +1179 +00:51:09,760 --> 00:51:14,240 +answer to the question basically gives + +1180 +00:51:11,359 --> 00:51:17,160 +us the answer to what the relation is so + +1181 +00:51:14,240 --> 00:51:20,319 +they ask like who finished something and + +1182 +00:51:17,160 --> 00:51:23,680 +the answer is like UCD and um what did + +1183 +00:51:20,319 --> 00:51:25,359 +someone finish the 2006 Championship + +1184 +00:51:23,680 --> 00:51:28,920 +what did someone fish some finish + +1185 +00:51:25,359 --> 00:51:31,760 +something as and basically um in doing + +1186 +00:51:28,920 --> 00:51:33,319 +this they created uh something called + +1187 +00:51:31,760 --> 00:51:34,359 +semantic roles which we're actually + +1188 +00:51:33,319 --> 00:51:35,960 +probably going to talk about a little + +1189 +00:51:34,359 --> 00:51:37,559 +bit later but you can take the semantic + +1190 +00:51:35,960 --> 00:51:41,200 +roles and then you can use them to + +1191 +00:51:37,559 --> 00:51:43,920 +annotate uh relation extraction data and + +1192 +00:51:41,200 --> 00:51:46,720 +then they trained a supervised neural + +1193 +00:51:43,920 --> 00:51:46,720 +tager for + +1194 +00:51:48,799 --> 00:51:53,480 +this + +1195 +00:51:50,480 --> 00:51:56,040 +cool um so another thing I'd like to + +1196 +00:51:53,480 --> 00:51:57,880 +talk about is I talked about learning um + +1197 +00:51:56,040 --> 00:51:59,920 +information about entities from entity + +1198 +00:51:57,880 --> 00:52:02,079 +embeddings but you can actually learn + +1199 +00:51:59,920 --> 00:52:04,520 +information about relations from + +1200 +00:52:02,079 --> 00:52:07,680 +relation information about other + +1201 +00:52:04,520 --> 00:52:12,359 +relations and this can help solve the + +1202 +00:52:07,680 --> 00:52:16,119 +problem um of like essentially the fact + +1203 +00:52:12,359 --> 00:52:18,760 +that open IE is not able to abstract and + +1204 +00:52:16,119 --> 00:52:20,680 +generalize so word embeddings or entity + +1205 
+00:52:18,760 --> 00:52:23,079
+embeddings give information about the word

+1206
+00:52:20,680 --> 00:52:26,920
+in context which can be indicative

+1207
+00:52:23,079 --> 00:52:29,640
+for knowledge bases

+1208
+00:52:26,920 --> 00:52:32,640
+but other relations or combinations

+1209
+00:52:29,640 --> 00:52:34,960
+thereof are also indicative of them and

+1210
+00:52:32,640 --> 00:52:36,960
+if anybody is familiar with graphs or

+1211
+00:52:34,960 --> 00:52:39,520
+graph processing there's the whole idea

+1212
+00:52:36,960 --> 00:52:41,400
+of link prediction where you're given

+1213
+00:52:39,520 --> 00:52:42,680
+a small number of links in a

+1214
+00:52:41,400 --> 00:52:45,760
+graph and you want to predict what other

+1215
+00:52:42,680 --> 00:52:50,559
+links are likely to

+1216
+00:52:45,760 --> 00:52:52,920
+exist and as I said a lot of

+1217
+00:52:50,559 --> 00:52:54,839
+very prominent AI researchers

+1218
+00:52:52,920 --> 00:52:57,440
+got their start in relation

+1219
+00:52:54,839 --> 00:53:01,480
+extraction and Ilya Sutskever is another one

+1220
+00:52:57,440 --> 00:53:04,319
+of them actually and basically

+1221
+00:53:01,480 --> 00:53:07,880
+this 2009 paper proposed to use tensor

+1222
+00:53:04,319 --> 00:53:09,400
+decomposition to do induction of

+1223
+00:53:07,880 --> 00:53:13,520
+relations

+1224
+00:53:09,400 --> 00:53:15,319
+and the way it worked is you model

+1225
+00:53:13,520 --> 00:53:18,400
+relations by decomposing a tensor

+1226
+00:53:15,319 --> 00:53:21,599
+containing entity relation entity triples

+1227
+00:53:18,400 --> 00:53:24,000
+so you have the left entity the right

+1228
+00:53:21,599 --> 00:53:27,160
+entity and whether the relation exists

+1229
+00:53:24,000 --> 00:53:31,319
+in this big tensor in the

+1230
+00:53:27,160 --> 00:53:33,160
+middle where these are embeddings of the

+1231
+00:53:31,319 --> 00:53:35,760
+left entity these are embeddings of the

+1232
+00:53:33,160 --> 00:53:38,839
+right entity and then the depth of

+1233
+00:53:35,760 --> 00:53:40,680
+the tensor is which relations exist

+1234
+00:53:38,839 --> 00:53:43,760
+and so we know that some exist so we

+1235
+00:53:40,680 --> 00:53:46,640
+give them a one we know others

+1236
+00:53:43,760 --> 00:53:48,680
+don't exist so we give them a zero

+1237
+00:53:46,640 --> 00:53:51,040
+and then we do a low-rank approximation

+1238
+00:53:48,680 --> 00:53:52,559
+of this tensor and if we do a low-rank

+1239
+00:53:51,040 --> 00:53:55,720
+approximation of the tensor we have

+1240
+00:53:52,559 --> 00:53:57,280
+reconstruction error basically so when we

+1241
+00:53:55,720 --> 00:53:59,960
+reconstruct there are some things that

+1242
+00:53:57,280 --> 00:54:01,960
+were previously zero that become one and so

+1243
+00:53:59,960 --> 00:54:04,760
+the things that were previously zero and

+1244
+00:54:01,960 --> 00:54:07,880
+then become close to one are the ones

+1245
+00:54:04,760 --> 00:54:10,559
+that we think actually might exist

+1246
+00:54:07,880 --> 00:54:12,000
+they might be real

+1247
+00:54:10,559 --> 00:54:13,640
+relations that we were just missing

+1248
+00:54:12,000 --> 00:54:16,599
+because our previous knowledge base was

+1249
+00:54:13,640 --> 00:54:16,599
+simply

+1250
+00:54:18,640 --> 00:54:26,880
+incomplete and one thing that takes

+1251
+00:54:21,799 
1251
+00:54:21,799 --> 00:54:28,559
+and um one thing that takes us a step
+further is what if we
+
1252
+00:54:26,880 --> 00:54:30,079
+actually do have a knowledge base or
+
1253
+00:54:28,559 --> 00:54:31,839
+what if we even have multiple knowledge
+
1254
+00:54:30,079 --> 00:54:35,520
+bases like what if we have Wikidata and
+
1255
+00:54:31,839 --> 00:54:36,640
+we have WordNet and we have um other
+
1256
+00:54:35,520 --> 00:54:38,920
+things like
+
1257
+00:54:36,640 --> 00:54:40,680
+this and in addition to that we also
+
1258
+00:54:38,920 --> 00:54:43,400
+have OpenIE
+
1259
+00:54:40,680 --> 00:54:45,960
+extractions so there's an idea of
+
1260
+00:54:43,400 --> 00:54:47,880
+something called universal schema and
+
1261
+00:54:45,960 --> 00:54:50,200
+what universal schemas do is they embed
+
1262
+00:54:47,880 --> 00:54:55,119
+relations from multiple schemas or
+
1263
+00:54:50,200 --> 00:54:56,960
+schemata in the same space and based on
+
1264
+00:54:55,119 --> 00:54:59,559
+this they then
+
1265
+00:54:56,960 --> 00:55:01,359
+predict which ones exist or are likely to
+
1266
+00:54:59,559 --> 00:55:04,400
+exist or which ones are not likely to
+
1267
+00:55:01,359 --> 00:55:06,680
+exist so here we might have Freebase
+
1268
+00:55:04,400 --> 00:55:08,640
+or Wikidata we might have another
+
1269
+00:55:06,680 --> 00:55:11,559
+kind of relation extraction data set
+
1270
+00:55:08,640 --> 00:55:15,480
+called TAC and then on the training data
+
1271
+00:55:11,559 --> 00:55:17,040
+set we have um like all of these
+
1272
+00:55:15,480 --> 00:55:20,240
+things that are like positive or
+
1273
+00:55:17,040 --> 00:55:23,960
+negative or something like this and then
+
1274
+00:55:20,240 --> 00:55:26,960
+on the held-out data set we have only
+
1275
+00:55:23,960 --> 00:55:29,480
+information about like OpenIE
+
1276
+00:55:26,960 --> 00:55:30,920
+for example so um for all of the
+
1277
+00:55:29,480 --> 00:55:33,079
+entities that exist in the knowledge
+
1278
+00:55:30,920 --> 00:55:34,839
+base we know you know whether the
+
1279
+00:55:33,079 --> 00:55:36,039
+relations exist but for all the
+
1280
+00:55:34,839 --> 00:55:39,640
+entities that don't exist in the
+
1281
+00:55:36,039 --> 00:55:41,760
+database we don't know and so uh then
+
1282
+00:55:39,640 --> 00:55:43,839
+just from the existence of OpenIE
+
1283
+00:55:41,760 --> 00:55:45,480
+relations or non-existence of OpenIE
+
1284
+00:55:43,839 --> 00:55:47,920
+relations we can predict that other
+
1285
+00:55:45,480 --> 00:55:49,359
+relations might exist for example so
+
1286
+00:55:47,920 --> 00:55:51,079
+this is a great way to combine the two
+
1287
+00:55:49,359 --> 00:55:53,920
+together like OpenIE you can run it
+
1288
+00:55:51,079 --> 00:55:55,880
+over you know very large data sets um
+
1289
+00:55:53,920 --> 00:55:58,000
+but it doesn't have a good schema
+
1290
+00:55:55,880 --> 00:56:00,400
+uh Wikidata has a good schema but
+
1291
+00:55:58,000 --> 00:56:02,960
+it's all manually created
+
1292
+00:56:00,400 --> 00:56:04,720
+so you can suggest other ones and one
+
1293
+00:56:02,960 --> 00:56:07,960
+other interesting thing is you can
+
1294
+00:56:04,720 --> 00:56:09,640
+suggest other um things that might exist
+
1295
+00:56:07,960 --> 00:56:13,039
+in Wikidata but you could also track
+
1296
+00:56:09,640 --> 00:56:15,039
+that back to the original text that
+
1297
+00:56:13,039 --> 00:56:17,000
+indicated that it might exist in Wikidata
+
1298
+00:56:15,039 --> 00:56:18,720
+so then you could have a human go
+
1299
+00:56:17,000 --> 00:56:20,520
+back and check it to make sure that
+
1300
+00:56:18,720 --> 00:56:24,200
+it's actually true and trustworthy and
+
1301
+00:56:20,520 --> 00:56:24,200
+other things like that
+
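A sketch of the universal schema idea just described, assuming a toy matrix whose rows are entity pairs and whose columns mix one KB relation with OpenIE-style surface patterns. The real work trains with a ranking (BPR) loss; the masked logistic factorization below is an illustrative stand-in.

```python
import numpy as np

# Rows: entity pairs; columns: one KB relation plus OpenIE-style patterns,
# all embedded in the same space. All data here is invented.
pairs = ["(CMU, Pittsburgh)", "(MIT, Cambridge)", "(Intel, Santa Clara)"]
cols  = ["kb:located_in", "pat:'X is in Y'", "pat:'X, visited in Y'"]
Y = np.array([[0., 1., 0.],      # CMU pair: KB cell unobserved, a pattern observed
              [1., 1., 1.],
              [1., 1., 0.]])
M = np.array([[0., 1., 1.],      # mask: do not train on the held-out KB cell
              [1., 1., 1.],
              [1., 1., 1.]])

rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(3, 2))   # entity-pair embeddings
Q = rng.normal(scale=0.1, size=(3, 2))   # relation/pattern embeddings
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

for _ in range(5000):                    # masked logistic matrix factorization
    G = (sigmoid(P @ Q.T) - Y) * M
    P, Q = P - 0.1 * (G @ Q), Q - 0.1 * (G.T @ P)

# Because the CMU pair behaves like the Intel pair on the pattern columns,
# the unobserved KB cell located_in(CMU, Pittsburgh) should come out high.
print(sigmoid(P @ Q.T)[0, 0])
```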
1302
+00:56:26,400 --> 00:56:31,400
+cool um so if you like uh you like
+
1303
+00:56:29,400 --> 00:56:33,160
+tensors or you like linear algebra or
+
1304
+00:56:31,400 --> 00:56:34,720
+things like this this is maybe something
+
1305
+00:56:33,160 --> 00:56:37,880
+that you could take a look at and think
+
1306
+00:56:34,720 --> 00:56:40,240
+a little bit more about um any any
+
1307
+00:56:37,880 --> 00:56:40,240
+questions
+
1308
+00:56:42,799 --> 00:56:46,240
+here okay
+
1309
+00:56:46,880 --> 00:56:53,680
+cool um so another thing I'd like to
+
1310
+00:56:50,640 --> 00:56:56,920
+talk about is uh modeling relation paths
+
1311
+00:56:53,680 --> 00:57:00,359
+so this is a really nice idea
+
1312
+00:56:56,920 --> 00:57:00,359
+which is you
+
1313
+00:57:00,440 --> 00:57:05,000
+can make inferences across multiple hops
+
1314
+00:57:04,240 --> 00:57:08,400
+of
+
1315
+00:57:05,000 --> 00:57:12,280
+relations um based on particular
+
1316
+00:57:08,400 --> 00:57:14,200
+relations existing and so um multi-step
+
1317
+00:57:12,280 --> 00:57:17,280
+paths can be informative for indicating
+
1318
+00:57:14,200 --> 00:57:20,000
+whether individual relations exist so um
+
1319
+00:57:17,280 --> 00:57:24,400
+for example uh given a particular
+
1320
+00:57:20,000 --> 00:57:27,960
+word in a paper title
+
1321
+00:57:24,400 --> 00:57:29,880
+recommend a venue in which to publish the paper
+
1322
+00:57:27,960 --> 00:57:32,559
+and so this is the problem that they
+
1323
+00:57:29,880 --> 00:57:36,079
+were trying to solve and then basically
+
1324
+00:57:32,559 --> 00:57:38,440
+you have a word um you
+
1325
+00:57:36,079 --> 00:57:41,119
+find if you have that word in your paper
+
1326
+00:57:38,440 --> 00:57:42,920
+title you then find other papers that
+
1327
+00:57:41,119 --> 00:57:45,280
+have that word
+
1328
+00:57:42,920 --> 00:57:48,359
+in their title and those papers are in a
+
1329
+00:57:45,280 --> 00:57:52,039
+journal and that gets a high weight with
+
1330
+00:57:48,359 --> 00:57:54,119
+respect to your paper being
+
1331
+00:57:52,039 --> 00:57:56,839
+you know relevant to that particular
+
1332
+00:57:54,119 --> 00:57:59,880
+journal you can also say
+
1333
+00:57:56,839 --> 00:58:01,000
+okay I have a word find papers with
+
1334
+00:57:59,880 --> 00:58:03,240
+that word in the
+
1335
+00:58:01,000 --> 00:58:07,240
+title find the first author of that
+
1336
+00:58:03,240 --> 00:58:09,280
+paper find another paper that had
+
1337
+00:58:07,240 --> 00:58:11,599
+that author as a first author and then
+
1338
+00:58:09,280 --> 00:58:13,240
+find the journal of it and they
+
1339
+00:58:11,599 --> 00:58:15,839
+demonstrate a way where you can
+
1340
+00:58:13,240 --> 00:58:18,280
+expand these paths and feed them into a
+
1341
+00:58:15,839 --> 00:58:22,400
+prediction model and use that to predict
+
1342
+00:58:18,280 --> 00:58:25,480
+um you know additional relations
+
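The path-as-feature idea can be sketched as follows. The graph, relation names, and paths are all invented for illustration; in the actual work, random-walk probabilities over such paths feed a trained prediction model.

```python
# Toy bibliographic graph for the relation-path idea described above.
edges = {
    ("attention", "word_in_title_of"): ["paperA", "paperB"],
    ("paperA", "in_journal"): ["TACL"],
    ("paperB", "in_journal"): ["CL"],
    ("paperA", "first_author"): ["alice"],
    ("alice", "first_author_of"): ["paperC"],
    ("paperC", "in_journal"): ["TACL"],
}

def follow(nodes, relation):
    """One hop: all nodes reachable from `nodes` via `relation`."""
    return [t for n in nodes for t in edges.get((n, relation), [])]

def path_feature(start, path):
    """Count how often each endpoint is reached by walking `path` from `start`."""
    nodes = [start]
    for rel in path:
        nodes = follow(nodes, rel)
    counts = {}
    for n in nodes:
        counts[n] = counts.get(n, 0) + 1
    return counts

# Two of the relation paths described above; the endpoint counts would become
# features in a venue-recommendation model.
print(path_feature("attention", ["word_in_title_of", "in_journal"]))
print(path_feature("attention",
                   ["word_in_title_of", "first_author", "first_author_of", "in_journal"]))
```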
1343
+00:58:22,400 --> 00:58:26,680
+so unlike this method here this method was
+
1344
+00:58:25,480 --> 00:58:29,240
+saying
+
1345
+00:58:26,680 --> 00:58:30,920
+other single relations are indicative of
+
1346
+00:58:29,240 --> 00:58:34,160
+a particular relation
+
1347
+00:58:30,920 --> 00:58:36,880
+existing this paper is saying not just
+
1348
+00:58:34,160 --> 00:58:38,720
+individual relations are indicative of
+
1349
+00:58:36,880 --> 00:58:40,640
+another relation existing but actually
+
1350
+00:58:38,720 --> 00:58:43,839
+relation paths are indicative of a
+
1351
+00:58:40,640 --> 00:58:46,400
+relation existing so this is more
+
1352
+00:58:43,839 --> 00:58:46,400
+expressive
+
1353
+00:58:47,520 --> 00:58:55,359
+basically um and this follow-up paper
+
1354
+00:58:52,640 --> 00:58:57,480
+uh using differentiable logic rules
+
1355
+00:58:55,359 --> 00:59:00,799
+actually made this end-to-end
+
1356
+00:58:57,480 --> 00:59:03,079
+trainable so this allows you to consider
+
1357
+00:59:00,799 --> 00:59:07,599
+whole paths in a differentiable
+
1358
+00:59:03,079 --> 00:59:09,960
+framework and so the way they did this
+
1359
+00:59:07,599 --> 00:59:13,359
+is like if you have you know city in
+
1360
+00:59:09,960 --> 00:59:16,440
+country and has office in country um
+
1361
+00:59:13,359 --> 00:59:18,920
+or sorry city in country and has
+
1362
+00:59:16,440 --> 00:59:22,200
+office in city that indicates has office
+
1363
+00:59:18,920 --> 00:59:24,160
+in country and I'm sure you know many
+
1364
+00:59:22,200 --> 00:59:26,760
+people here have learned
+
1365
+00:59:24,160 --> 00:59:29,520
+about logic and you know induction
+
1366
+00:59:26,760 --> 00:59:32,720
+from or deduction from logic rules
+
1367
+00:59:29,520 --> 00:59:34,359
+and stuff like this but the problem is
+
1368
+00:59:32,720 --> 00:59:37,079
+deduction from logic rules is very
+
1369
+00:59:34,359 --> 00:59:39,039
+fragile like there are cases where there
+
1370
+00:59:37,079 --> 00:59:41,119
+are counterexamples so if you say that
+
1371
+00:59:39,039 --> 00:59:43,280
+something is always true deductively
+
1372
+00:59:41,119 --> 00:59:45,839
+then um that can cause problems so in
+
1373
+00:59:43,280 --> 00:59:47,839
+reality it's like if you have two pieces
+
1374
+00:59:45,839 --> 00:59:52,400
+of information something can become much
+
1375
+00:59:47,839 --> 00:59:56,920
+much more likely um and so you know just
+
1376
+00:59:52,400 --> 00:59:59,880
+to give an example um somebody
+
1377
+00:59:56,920 --> 01:00:01,280
+studying at CMU makes it very likely
+
1378
+00:59:59,880 --> 01:00:03,799
+much more likely that they're studying
+
1379
+01:00:01,280 --> 01:00:06,359
+computer science and much less likely
+
1380
+01:00:03,799 --> 01:00:08,000
+that they're studying medicine or
+
1381
+01:00:06,359 --> 01:00:09,520
+something like that but that doesn't
+
1382
+01:00:08,000 --> 01:00:11,720
+mean that
+
1383
+01:00:09,520 --> 01:00:13,559
+the first one is definitely not
+
1384
+01:00:11,720 --> 01:00:15,480
+entirely implied and I'm sure there's
+
1385
+01:00:13,559 --> 01:00:16,760
+like a few people at CMU who are somehow
+
1386
+01:00:15,480 --> 01:00:18,440
+studying medicine through a joint
+
1387
+01:00:16,760 --> 01:00:21,480
+program with Pitt or something like that
+
1388
+01:00:18,440 --> 01:00:24,400
+so you know it's very rare
+
1389
+01:00:21,480 --> 01:00:26,799
+that logic rules are hard and fast and
+
1390
+01:00:24,400 --> 01:00:28,480
+so basically what they do is they treat
+
1391
+01:00:26,799 --> 01:00:30,559
+each path as a sequence of matrix
+
1392
+01:00:28,480 --> 01:00:34,839
+multiplies where they have a rule
+
1393
+01:00:30,559 --> 01:00:36,599
+weight um like this and um in the end
+
1394
+01:00:34,839 --> 01:00:38,359
+that allows you to make a prediction
+
1395
+01:00:36,599 --> 01:00:40,839
+about whether a particular logic rule is
+
1396
+01:00:38,359 --> 01:00:40,839
+correct or
+
1397
+01:00:40,880 --> 01:00:49,319
+not um so this is uh I've been
+
1398
+01:00:46,880 --> 01:00:51,119
+working mostly in like structured
+
1399
+01:00:49,319 --> 01:00:54,480
+knowledge space structured knowledge
+
1400
+01:00:51,119 --> 01:00:56,599
+graphs other uh other things like this
+
1401
+01:00:54,480 --> 01:00:59,760
+um I don't
+
1402
+01:00:56,599 --> 01:01:02,720
+think there's a whole lot of work that
+
1403
+01:00:59,760 --> 01:01:05,640
+directly applies this to language models
+
1404
+01:01:02,720 --> 01:01:07,319
+um like differentiable logic rules and
+
1405
+01:01:05,640 --> 01:01:10,079
+language models or things like that just
+
1406
+01:01:07,319 --> 01:01:12,440
+because it's less clean it's you know
+
1407
+01:01:10,079 --> 01:01:13,839
+harder um there's a little bit of
+
1408
+01:01:12,440 --> 01:01:16,079
+work which I'm going to talk about now
+
1409
+01:01:13,839 --> 01:01:18,599
+but I think this kind of work is
+
1410
+01:01:16,079 --> 01:01:21,440
+interesting because a lot of models are
+
1411
+01:01:18,599 --> 01:01:23,119
+not super great at reasoning and how to
+
1412
+01:01:21,440 --> 01:01:25,119
+allow them to be better at
+
1413
+01:01:23,119 --> 01:01:26,559
+reasoning is kind of an open problem so
+
1414
+01:01:25,119 --> 01:01:28,039
+learning from these older works that
+
1415
+01:01:26,559 --> 01:01:30,200
+did it in a more structured space and
+
1416
+01:01:28,039 --> 01:01:32,160
+trying to figure out how to apply them
+
1417
+01:01:30,200 --> 01:01:34,400
+to less structured spaces is still
+
1418
+01:01:32,160 --> 01:01:36,240
+interesting I think
+
1419
+01:01:34,400 --> 01:01:39,160
+so
+
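A sketch of the rules-as-matrix-multiplies view just described, assuming one-hot entity vectors and one adjacency matrix per relation. In the differentiable-logic-rules work the rule weights are learned end-to-end; here one is hard-coded for illustration.

```python
import numpy as np

# Relations as adjacency matrices over 3 entities: 0=company, 1=Paris, 2=France.
E = 3
has_office_in_city = np.zeros((E, E)); has_office_in_city[0, 1] = 1
city_in_country    = np.zeros((E, E)); city_in_country[1, 2] = 1

# The rule "has_office_in_country(X,Z) <- has_office_in_city(X,Y), city_in_country(Y,Z)"
# becomes a product of adjacency matrices, weighted by a (learnable) rule weight.
rule_weight = 0.9                      # would be learned end-to-end in practice
score_matrix = rule_weight * (has_office_in_city @ city_in_country)

x = np.eye(E)[0]                       # one-hot vector for the company
z = np.eye(E)[2]                       # one-hot vector for France
print(x @ score_matrix @ z)            # soft confidence that the relation holds
```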
1420
+01:01:36,240 --> 01:01:40,720
+cool um then the final topic I want
+
1421
+01:01:39,160 --> 01:01:42,920
+to talk about is probing knowledge in
+
1422
+01:01:40,720 --> 01:01:44,920
+LMs and so we have these knowledge bases
+
1423
+01:01:42,920 --> 01:01:47,319
+that encode you know tons and tons of
+
1424
+01:01:44,920 --> 01:01:49,880
+knowledge um which allows us to figure
+
1425
+01:01:47,319 --> 01:01:52,200
+out you know oh well how well do
+
1426
+01:01:49,880 --> 01:01:56,200
+language models know about these
+
1427
+01:01:52,200 --> 01:01:59,079
+things and so
+
1428
+01:01:56,200 --> 01:02:02,760
+traditional um kind of QA machine
+
1429
+01:01:59,079 --> 01:02:04,799
+reading comprehension RAG models um
+
1430
+01:02:02,760 --> 01:02:06,359
+usually referred to external resources
+
1431
+01:02:04,799 --> 01:02:10,039
+to answer questions like Wikipedia
+
1432
+01:02:06,359 --> 01:02:14,359
+articles um or things like this but then
+
1433
+01:02:10,039 --> 01:02:16,119
+the question is without doing RAG can we
+
1434
+01:02:14,359 --> 01:02:18,160
+you know answer questions like what
+
1435
+01:02:16,119 --> 01:02:20,920
+knowledge is
+
1436
+01:02:18,160 --> 01:02:24,079
+encoded and so the first paper that kind
+
1437
+01:02:20,920 --> 01:02:26,520
+of handled this sort of problem is
+
1438
+01:02:24,079 --> 01:02:29,200
+this paper which actually was also
+
1439
+01:02:26,520 --> 01:02:33,359
+called uh
+
1440
+01:02:29,200 --> 01:02:35,960
+LAMA surprisingly um or released a
+
1441
+01:02:33,359 --> 01:02:41,000
+resource called LAMA except it was L-A-M-A
+
1442
+01:02:35,960 --> 01:02:44,880
+um but what they did is they
+
1443
+01:02:41,000 --> 01:02:46,960
+uh in contrast to using
+
1444
+01:02:44,880 --> 01:02:50,000
+structural queries like SQL or
+
1445
+01:02:46,960 --> 01:02:52,119
+SPARQL to query KBs they tried to use
+
1446
+01:02:50,000 --> 01:02:54,240
+natural language prompts to query LMs so
+
1447
+01:02:52,119 --> 01:02:58,160
+this was actually one of the first
+
1448
+01:02:54,240 --> 01:03:02,359
+uh kind of papers on prompting
+
1449
+01:02:58,160 --> 01:03:05,079
+for uh language models in a way and the
+
1450
+01:03:02,359 --> 01:03:08,359
+way they did this is they had um they
+
1451
+01:03:05,079 --> 01:03:10,039
+did like Dante was born in [MASK] and then
+
1452
+01:03:08,359 --> 01:03:13,279
+they tried to fill in the mask using a
+
1453
+01:03:10,039 --> 01:03:15,839
+masked language model and uh and output
+
1454
+01:03:13,279 --> 01:03:18,559
+Florence so
+
1455
+01:03:15,839 --> 01:03:19,960
+um when they did this work now we
+
1456
+01:03:18,559 --> 01:03:21,359
+don't do this quite as much but when
+
1457
+01:03:19,960 --> 01:03:23,520
+they did this work they basically used
+
1458
+01:03:21,359 --> 01:03:25,440
+the knowledge base as the ground truth
+
1459
+01:03:23,520 --> 01:03:28,880
+and tried to probe whether the knowledge
+
1460
+01:03:25,440 --> 01:03:31,520
+in um in the knowledge base was also
+
1461
+01:03:28,880 --> 01:03:34,880
+recoverable from the neural
+
1462
+01:03:31,520 --> 01:03:37,720
+model um and they proposed the LAMA
+
1463
+01:03:34,880 --> 01:03:39,760
+benchmark um basically it was manual
+
1464
+01:03:37,720 --> 01:03:42,480
+prompts for 41 relations they created
+
1465
+01:03:39,760 --> 01:03:44,839
+the prompts manually uh so like X was
+
1466
+01:03:42,480 --> 01:03:46,480
+founded in Y as the prompt template and
+
1467
+01:03:44,839 --> 01:03:49,400
+they filled in the subjects and had the
+
1468
+01:03:46,480 --> 01:03:52,160
+LMs such as BERT predict the
+
1469
+01:03:49,400 --> 01:03:55,839
+objects uh like Bloomberg L.P. was founded
+
1470
+01:03:52,160 --> 01:03:59,000
+in [MASK] and they demonstrated that
+
1471
+01:03:55,839 --> 01:04:02,440
+basically ELMo uh Transformer-XL and
+
1472
+01:03:59,000 --> 01:04:04,960
+BERT-base got uh you know up to 31%
+
1473
+01:04:02,440 --> 01:04:06,480
+accuracy now I'm sure the modern
+
1474
+01:04:04,960 --> 01:04:09,200
+language models would have much higher
+
1475
+01:04:06,480 --> 01:04:11,279
+accuracy than
+
1476
+01:04:09,200 --> 01:04:13,920
+that
+
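This kind of LAMA-style probe is easy to reproduce with the HuggingFace transformers fill-mask pipeline; the model choice below is illustrative.

```python
# Reproduce the masked-LM probe described above ("Dante was born in [MASK].").
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")
for pred in fill("Dante was born in [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```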
1477
+01:04:11,279 --> 01:04:17,839
+um this is a follow-up paper that we
+
1478
+01:04:13,920 --> 01:04:21,160
+did to this um where we tried to do this
+
1479
+01:04:17,839 --> 01:04:23,400
+multilingually um I think this is
+
1480
+01:04:21,160 --> 01:04:25,680
+really let
+
1481
+01:04:23,400 --> 01:04:29,520
+me I think one thing that's
+
1482
+01:04:25,680 --> 01:04:31,960
+interesting about this paper is um even
+
1483
+01:04:29,520 --> 01:04:37,240
+if you're not interested in multilingual
+
1484
+01:04:31,960 --> 01:04:38,920
+stuff per se there is an interesting
+
1485
+01:04:37,240 --> 01:04:40,760
+dichotomy about what knowledge is
+
1486
+01:04:38,920 --> 01:04:43,079
+included in LMs and whether we can
+
1487
+01:04:40,760 --> 01:04:46,000
+retrieve it and the reason why I'm
+
1488
+01:04:43,079 --> 01:04:48,359
+saying this is because in this paper
+
1489
+01:04:46,000 --> 01:04:51,200
+we created
+
1490
+01:04:48,359 --> 01:04:52,599
+queries from a knowledge base and
+
1491
+01:04:51,200 --> 01:04:54,160
+because we created queries from a
+
1492
+01:04:52,599 --> 01:04:55,760
+knowledge base and knowledge bases are
+
1493
+01:04:54,160 --> 01:04:57,240
+multilingual we can also create
+
1494
+01:04:55,760 --> 01:05:00,039
+multilingual queries from knowledge
+
1495
+01:04:57,240 --> 01:05:01,720
+bases right so we can use exactly the
+
1496
+01:05:00,039 --> 01:05:03,359
+same entities but just ask the same
+
1497
+01:05:01,720 --> 01:05:05,920
+question in different languages and so
+
1498
+01:05:03,359 --> 01:05:07,480
+we had a bunch of people manually
+
1499
+01:05:05,920 --> 01:05:10,119
+create prompts for all of these
+
1500
+01:05:07,480 --> 01:05:13,000
+languages here and you can see that in
+
1501
+01:05:10,119 --> 01:05:15,960
+English it's much better at responding
+
1502
+01:05:13,000 --> 01:05:19,000
+uh to these queries than it is in any
+
1503
+01:05:15,960 --> 01:05:21,039
+other language and in particular
+
1504
+01:05:19,000 --> 01:05:22,880
+lower-resource languages or languages
+
1505
+01:05:21,039 --> 01:05:26,400
+that are less similar to English it did
+
1506
+01:05:22,880 --> 01:05:29,079
+much worse and notably we counted the
+
1507
+01:05:26,400 --> 01:05:32,160
+answer correct if it got it
+
1508
+01:05:29,079 --> 01:05:34,279
+um we had two settings one setting is
+
1509
+01:05:32,160 --> 01:05:35,799
+we counted the answer correct only
+
1510
+01:05:34,279 --> 01:05:38,359
+if it answered in the language we
+
1511
+01:05:35,799 --> 01:05:39,680
+queried it in but in the other setting we
+
1512
+01:05:38,359 --> 01:05:42,640
+also counted the answer correct if it
+
1513
+01:05:39,680 --> 01:05:44,200
+answered in any language so it
+
1514
+01:05:42,640 --> 01:05:46,640
+didn't necessarily have to even know the
+
1515
+01:05:44,200 --> 01:05:48,200
+name of the entity in that language
+
1516
+01:05:46,640 --> 01:05:50,520
+and we would still count it
+
1517
+01:05:48,200 --> 01:05:54,720
+correct and so what I mean by there's a
+
1518
+01:05:50,520 --> 01:05:56,440
+dichotomy between the information that
+
1519
+01:05:54,720 --> 01:05:59,240
+language models have
+
1520
+01:05:56,440 --> 01:06:02,480
+encoded and whether they're able to
+
1521
+01:05:59,240 --> 01:06:02,480
+retrieve it
+
1522
+01:06:02,680 --> 01:06:07,640
+is in English the
+
1523
+01:06:06,000 --> 01:06:10,799
+models we tested were able to answer
+
1524
+01:06:07,640 --> 01:06:13,000
+like 17% of queries
+
1525
+01:06:10,799 --> 01:06:14,359
+but the fact that they're able to
+
1526
+01:06:13,000 --> 01:06:16,160
+answer in English means that the
+
1527
+01:06:14,359 --> 01:06:18,520
+language model quote unquote knows the
+
1528
+01:06:16,160 --> 01:06:20,200
+answer right like it knows the answer in
+
1529
+01:06:18,520 --> 01:06:22,680
+English we're asking exactly the same
+
1530
+01:06:20,200 --> 01:06:24,400
+question in all the other languages so
+
1531
+01:06:22,680 --> 01:06:26,079
+you know it should know the answer in
+
1532
+01:06:24,400 --> 01:06:27,680
+the other languages too
+
1533
+01:06:26,079 --> 01:06:30,000
+but it's not able to retrieve the answer
+
1534
+01:06:27,680 --> 01:06:33,079
+because we asked in another language + +1535 +01:06:30,000 --> 01:06:35,920 +so um that brings up some interesting + +1536 +01:06:33,079 --> 01:06:38,079 +questions about how we can make models + +1537 +01:06:35,920 --> 01:06:39,680 +better at retrieving the the knowledge + +1538 +01:06:38,079 --> 01:06:43,559 +that they already know in English when + +1539 +01:06:39,680 --> 01:06:45,520 +you query them in other languages or um + +1540 +01:06:43,559 --> 01:06:48,119 +and there was another paper recently I + +1541 +01:06:45,520 --> 01:06:52,720 +don't know if I'd be able to find it um + +1542 +01:06:48,119 --> 01:06:56,119 +exactly which is um they + +1543 +01:06:52,720 --> 01:07:01,799 +prompted models with personas and so + +1544 +01:06:56,119 --> 01:07:04,599 +they said I um you know I am a old man I + +1545 +01:07:01,799 --> 01:07:07,160 +am an old woman I am a young man I am + +1546 +01:07:04,599 --> 01:07:10,039 +young woman I am a child or something + +1547 +01:07:07,160 --> 01:07:12,799 +like that um or they also talked about + +1548 +01:07:10,039 --> 01:07:15,640 +things like uh physical disabilities and + +1549 +01:07:12,799 --> 01:07:17,200 +things and they said um please answer + +1550 +01:07:15,640 --> 01:07:19,640 +this question after they prompted with a + +1551 +01:07:17,200 --> 01:07:22,680 +Persona and just having that Persona + +1552 +01:07:19,640 --> 01:07:24,839 +greatly changed the ability of the model + +1553 +01:07:22,680 --> 01:07:26,400 +to answer questions so it's this very + +1554 +01:07:24,839 --> 01:07:28,200 +weird thing which which is like the + +1555 +01:07:26,400 --> 01:07:29,799 +models are actually capable of answering + +1556 +01:07:28,200 --> 01:07:31,520 +the questions but based on how you probe + +1557 +01:07:29,799 --> 01:07:32,880 +them whether it's in like different + +1558 +01:07:31,520 --> 01:07:34,599 +languages or if you give them a + +1559 +01:07:32,880 --> 01:07:36,839 +different Persona they manage to answer + +1560 +01:07:34,599 --> 01:07:39,000 +things differently and so on the plus + +1561 +01:07:36,839 --> 01:07:42,920 +side like you can create you can make + +1562 +01:07:39,000 --> 01:07:44,799 +ways to reduce the language models + +1563 +01:07:42,920 --> 01:07:45,920 +performance by giving it like a Persona + +1564 +01:07:44,799 --> 01:07:49,839 +that shouldn't be good at answering + +1565 +01:07:45,920 --> 01:07:53,279 +questions or something like that um + +1566 +01:07:49,839 --> 01:07:54,839 +but on the plus side um like when you're + +1567 +01:07:53,279 --> 01:07:57,279 +doing code generation there was this + +1568 +01:07:54,839 --> 01:07:58,960 +magic prompt which is like um I have + +1569 +01:07:57,279 --> 01:08:01,319 +checked this carefully in all the unit + +1570 +01:07:58,960 --> 01:08:03,240 +tests pass and that would improve your + +1571 +01:08:01,319 --> 01:08:05,760 +code generation accuracy by like five + +1572 +01:08:03,240 --> 01:08:07,559 +five points or something like that so um + +1573 +01:08:05,760 --> 01:08:09,240 +you just get the the model in the right + +1574 +01:08:07,559 --> 01:08:11,359 +mood to answer the question accurately + +1575 +01:08:09,240 --> 01:08:13,319 +and it does a better job at doing it so + +1576 +01:08:11,359 --> 01:08:15,960 +it's kind of uh it goes in both + +1577 +01:08:13,319 --> 01:08:15,960 +directions I + +1578 +01:08:16,679 --> 01:08:27,080 +guess cool um yeah uh any any questions + +1579 +01:08:23,679 --> 01:08:30,120 +here um another thing that you can do uh + +1580 +01:08:27,080 
--> 01:08:31,000
+is fine-tune models specifically so
+
1581
+01:08:30,120 --> 01:08:34,080
+they're good at answering
+
1582
+01:08:31,000 --> 01:08:35,560
+knowledge-based questions so um this
+
1583
+01:08:34,080 --> 01:08:38,080
+paper demonstrated that you could
+
1584
+01:08:35,560 --> 01:08:39,480
+fine-tune models uh on synthetically created
+
1585
+01:08:38,080 --> 01:08:41,159
+knowledge-base questions and that would
+
1586
+01:08:39,480 --> 01:08:42,920
+improve the ability of the model to
+
1587
+01:08:41,159 --> 01:08:47,679
+answer questions about knowledge
+
1588
+01:08:42,920 --> 01:08:47,679
+bases um it's
+
1589
+01:08:49,120 --> 01:08:57,440
+uh yeah um it's pretty straightforward
+
1590
+01:08:53,199 --> 01:08:57,440
+so uh there's that
+
1591
+01:08:57,799 --> 01:09:03,120
+um yeah we already talked about this in
+
1592
+01:09:00,000 --> 01:09:07,560
+the RAG class so I think I might skip
+
1593
+01:09:03,120 --> 01:09:10,239
+that um a final paper that I'd like to
+
1594
+01:09:07,560 --> 01:09:12,600
+talk about this is also a paper done
+
1595
+01:09:10,239 --> 01:09:13,759
+by my student Zhengbao Jiang and this is
+
1596
+01:09:12,600 --> 01:09:16,080
+interesting from the point of view of
+
1597
+01:09:13,759 --> 01:09:18,000
+multihop reasoning and so I talked a
+
1598
+01:09:16,080 --> 01:09:19,679
+little bit about multihop reasoning
+
1599
+01:09:18,000 --> 01:09:23,239
+along reasoning
+
1600
+01:09:19,679 --> 01:09:26,159
+chains um in knowledge bases and this is
+
1601
+01:09:23,239 --> 01:09:28,520
+one example of multihop reasoning
+
1602
+01:09:26,159 --> 01:09:30,080
+along reasoning chains within the
+
1603
+01:09:28,520 --> 01:09:33,400
+parameters of the model so testing
+
1604
+01:09:30,080 --> 01:09:36,759
+whether models can answer
+
1605
+01:09:33,400 --> 01:09:38,480
+um can it answer multihop questions and
+
1606
+01:09:36,759 --> 01:09:40,839
+basically what we did here is we took a
+
1607
+01:09:38,480 --> 01:09:42,679
+knowledge base and a knowledge base can
+
1608
+01:09:40,839 --> 01:09:44,279
+have
+
1609
+01:09:42,679 --> 01:09:49,480
+um
+
1610
+01:09:44,279 --> 01:09:49,480
+like uh country country is
+
1611
+01:09:49,600 --> 01:09:52,600
+US
+
1612
+01:09:53,480 --> 01:09:58,600
+president um and then a
+
1613
+01:10:00,880 --> 01:10:06,560
+birthday um and so we can create these
+
1614
+01:10:04,280 --> 01:10:08,640
+multihop questions right uh and just
+
1615
+01:10:06,560 --> 01:10:10,280
+follow the relation links and then we
+
1616
+01:10:08,640 --> 01:10:11,440
+know the answer to the multihop question
+
1617
+01:10:10,280 --> 01:10:13,560
+by following the links and we can
+
1618
+01:10:11,440 --> 01:10:18,159
+generate you know the question given a
+
1619
+01:10:13,560 --> 01:10:19,800
+template um so we did this and had
+
1620
+01:10:18,159 --> 01:10:22,800
+question one which is return the artist
+
1621
+01:10:19,800 --> 01:10:25,719
+who recorded "Party Ain't Over" um and then
+
1622
+01:10:22,800 --> 01:10:28,159
+where in Georgia does uh Usher live and
+
1623
+01:10:25,719 --> 01:10:29,920
+then we can turn this into a question
+
1624
+01:10:28,159 --> 01:10:31,679
+in which part of
+
1625
+01:10:29,920 --> 01:10:34,239
+Georgia does the artist that recorded
+
1626
+01:10:31,679 --> 01:10:37,560
+"Party Ain't Over" live and so we now have a
+
1627
+01:10:34,239 --> 01:10:45,000
+multihop question
+
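A sketch of how such compound questions can be generated from a knowledge base with templates; the triples and templates below are invented for illustration.

```python
# Compose two KB hops into one question: the answer to hop 1 bridges into hop 2.
kb = {
    ("Party Ain't Over", "recorded_by"): "Usher",
    ("Usher", "lives_in"): "Atlanta",
}
templates = {
    "recorded_by": 'the artist who recorded "{}"',
    "lives_in": "Where does {} live?",
}

def compound_question(entity, rel1, rel2):
    bridge = kb[(entity, rel1)]               # e.g. Usher
    answer = kb[(bridge, rel2)]               # e.g. Atlanta (the gold answer)
    question = templates[rel2].format(templates[rel1].format(entity))
    return question, answer

q, a = compound_question("Party Ain't Over", "recorded_by", "lives_in")
print(q)   # Where does the artist who recorded "Party Ain't Over" live?
print(a)   # Atlanta
```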
1628
+01:10:37,560 --> 01:10:47,440
+and what we did is we measured whether um the model was
+
1629
+01:10:45,000 --> 01:10:49,760
+able to answer the first question the
+
1630
+01:10:47,440 --> 01:10:53,320
+second question and the
+
1631
+01:10:49,760 --> 01:10:56,120
+compound question and what we found is
+
1632
+01:10:53,320 --> 01:10:59,440
+like what we would expect
+
1633
+01:10:56,120 --> 01:11:01,719
+if models were like perfect knowledge
+
1634
+01:10:59,440 --> 01:11:04,360
+processors right
+
1635
+01:11:01,719 --> 01:11:08,120
+is we have
+
1636
+01:11:04,360 --> 01:11:10,800
+like yes on the first question
+
1637
+01:11:08,120 --> 01:11:14,000
+no
+
1638
+01:11:10,800 --> 01:11:16,560
+yes um yes on the second question and no
+
1639
+01:11:14,000 --> 01:11:16,560
+on the first
+
1640
+01:11:17,199 --> 01:11:24,760
+question and we would expect that
+
1641
+01:11:21,920 --> 01:11:26,080
+basically if it knew both of the answers
+
1642
+01:11:24,760 --> 01:11:27,239
+to the first question and the second
+
1643
+01:11:26,080 --> 01:11:30,600
+question it would get the compound
+
1644
+01:11:27,239 --> 01:11:31,800
+question right and if it got
+
1645
+01:11:30,600 --> 01:11:34,800
+either of them wrong it would get it
+
1646
+01:11:31,800 --> 01:11:37,120
+wrong right um you know in the
+
1647
+01:11:34,800 --> 01:11:39,400
+ideal world where the knowledge of the
+
1648
+01:11:37,120 --> 01:11:41,280
+two sub-questions is necessary to answer
+
1649
+01:11:39,400 --> 01:11:43,880
+the composite question and the
+
1650
+01:11:41,280 --> 01:11:45,840
+model is a perfect knowledge processor
+
1651
+01:11:43,880 --> 01:11:47,120
+and basically what we found we tried a
+
1652
+01:11:45,840 --> 01:11:49,280
+whole bunch of different types of
+
1653
+01:11:47,120 --> 01:11:51,199
+questions and what we found is this is
+
1654
+01:11:49,280 --> 01:11:55,960
+totally not the case like it's not the
+
1655
+01:11:51,199 --> 01:11:58,520
+case at all um and what we found instead
+
1656
+01:11:55,960 --> 01:12:01,560
+is if it's able to answer the second
+
1657
+01:11:58,520 --> 01:12:04,120
+question correctly it was much more
+
1658
+01:12:01,560 --> 01:12:07,480
+likely to be able to answer the
+
1659
+01:12:04,120 --> 01:12:08,840
+composite question um even if it can
+
1660
+01:12:07,480 --> 01:12:11,000
+answer the first question that has
+
1661
+01:12:08,840 --> 01:12:13,120
+almost no relation with whether it could
+
1662
+01:12:11,000 --> 01:12:15,520
+answer the composite question at all so
+
1663
+01:12:13,120 --> 01:12:17,679
+it's more like somehow from the answer
+
1664
+01:12:15,520 --> 01:12:19,320
+to the second question it was able to
+
1665
+01:12:17,679 --> 01:12:22,280
+get the answer right and it kind of
+
1666
+01:12:19,320 --> 01:12:24,040
+makes sense actually because um
+
1667
+01:12:22,280 --> 01:12:26,320
+let's say the answer to the second
+
1668
+01:12:24,040 --> 01:12:27,920
+question is some really long list
+
1669
+01:12:26,320 --> 01:12:30,719
+like who are all the presidents of the
+
1670
+01:12:27,920 --> 01:12:33,320
+United States um or something like that
+
1671
+01:12:30,719 --> 01:12:35,639
+that's just hard to answer um so if I
+
1672
+01:12:33,320 --> 01:12:38,000
+said who are all the presidents of the
+
1673
+01:12:35,639 --> 01:12:40,800
+country where Washington DC is located
+
1674
+01:12:38,000 --> 01:12:42,679
+in um you know like the second question
+
1675
+01:12:40,800 --> 01:12:44,040
+is
really hard so that's hard to get but + +1676 +01:12:42,679 --> 01:12:46,120 +if I say + +1677 +01:12:44,040 --> 01:12:49,920 +um + +1678 +01:12:46,120 --> 01:12:53,520 +uh what what is the + +1679 +01:12:49,920 --> 01:12:57,120 +capital what is the capital of the + +1680 +01:12:53,520 --> 01:12:57,120 +country uh + +1681 +01:12:57,400 --> 01:13:02,440 +what is what is the capital of the + +1682 +01:12:58,840 --> 01:13:05,400 +country where the most + +1683 +01:13:02,440 --> 01:13:06,800 +um people live or something like that + +1684 +01:13:05,400 --> 01:13:08,679 +even if you weren't sure about the + +1685 +01:13:06,800 --> 01:13:10,880 +country where the most people live you + +1686 +01:13:08,679 --> 01:13:13,040 +could pick a random capital and get it + +1687 +01:13:10,880 --> 01:13:16,199 +right some of the time or something like + +1688 +01:13:13,040 --> 01:13:18,239 +that so um that's what we found in this + +1689 +01:13:16,199 --> 01:13:19,800 +paper and I I think like another nice + +1690 +01:13:18,239 --> 01:13:22,360 +thing about knowledge bases is they + +1691 +01:13:19,800 --> 01:13:24,880 +allow you to ask like really interesting + +1692 +01:13:22,360 --> 01:13:26,400 +questions like this about what language + +1693 +01:13:24,880 --> 01:13:29,120 +model know or what language models don't + +1694 +01:13:26,400 --> 01:13:31,040 +know in a structured way so um I think + +1695 +01:13:29,120 --> 01:13:32,280 +if you're interested in probing language + +1696 +01:13:31,040 --> 01:13:35,320 +models and what they know and what they + +1697 +01:13:32,280 --> 01:13:38,639 +can infer what logic they can do that's + +1698 +01:13:35,320 --> 01:13:42,320 +good um cool yeah that's all I have for + +1699 +01:13:38,639 --> 01:13:44,920 +today um are there any questions or + +1700 +01:13:42,320 --> 01:13:48,679 +discussion or things like that or happy + +1701 +01:13:44,920 --> 01:13:48,679 +to talk up here too \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (18) Knowledge and Language Models/transcript.vtt b/CMU Advanced NLP 2024 (18) Knowledge and Language Models/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..3f473736e2e1300758a26e37a4dff0f53cb1fa1a --- /dev/null +++ b/CMU Advanced NLP 2024 (18) Knowledge and Language Models/transcript.vtt @@ -0,0 +1,5104 @@ +WEBVTT + +00:00:00.120 --> 00:00:04.880 +everyone I today I'd like to talk about + +00:00:02.760 --> 00:00:07.399 +uh learning from knowledge bases uh + +00:00:04.880 --> 00:00:11.440 +learning from in for knowledge bases + +00:00:07.399 --> 00:00:14.799 +this is kind of a a shift uh from a lot + +00:00:11.440 --> 00:00:16.480 +of the stuff that we've done so far uh + +00:00:14.799 --> 00:00:18.439 +and I'm going to be talking about like a + +00:00:16.480 --> 00:00:20.480 +different information Source some + +00:00:18.439 --> 00:00:21.960 +relatively different algorithms compared + +00:00:20.480 --> 00:00:26.080 +to the stuff that we talked about up + +00:00:21.960 --> 00:00:28.880 +until this point so um you know it might + +00:00:26.080 --> 00:00:32.360 +be uh interesting it might be different + +00:00:28.880 --> 00:00:35.640 +so uh get started with + +00:00:32.360 --> 00:00:37.360 +that so I'm going to be talking about + +00:00:35.640 --> 00:00:40.000 +knowledge bases and knowledge bases are + +00:00:37.360 --> 00:00:43.039 +basically a structured databases of + +00:00:40.000 --> 00:00:46.079 +knowledge and they can contain a lot of + +00:00:43.039 --> 00:00:48.559 +things but most 
commonly when people are + +00:00:46.079 --> 00:00:50.600 +talking about them they are talking + +00:00:48.559 --> 00:00:53.160 +about relational knowledge bases that + +00:00:50.600 --> 00:00:55.559 +include things like entities which are + +00:00:53.160 --> 00:00:57.399 +nodes in a graph and relations which are + +00:00:55.559 --> 00:01:00.239 +edges between + +00:00:57.399 --> 00:01:02.079 +nodes and + +00:01:00.239 --> 00:01:03.879 +I'll I'll talk about some examples of + +00:01:02.079 --> 00:01:05.479 +this in a little bit to make that a + +00:01:03.879 --> 00:01:08.040 +little bit more concrete and then some + +00:01:05.479 --> 00:01:11.240 +of the questions that we ask about these + +00:01:08.040 --> 00:01:14.400 +are how can we learn to create and + +00:01:11.240 --> 00:01:16.799 +expand knowledge bases with uh you know + +00:01:14.400 --> 00:01:18.439 +neural network based methods and then + +00:01:16.799 --> 00:01:20.200 +the second question is how can we learn + +00:01:18.439 --> 00:01:22.600 +from the information in knowledge bases + +00:01:20.200 --> 00:01:24.720 +to improve like neural network models or + +00:01:22.600 --> 00:01:27.560 +uh use them in effective + +00:01:24.720 --> 00:01:31.479 +ways and how can we use uh structured + +00:01:27.560 --> 00:01:31.479 +knowledge to answer questions + +00:01:32.200 --> 00:01:37.159 +so the first uh thing I'd like to talk + +00:01:35.000 --> 00:01:40.960 +about a little bit is types of knowledge + +00:01:37.159 --> 00:01:43.079 +bases and they come in several different + +00:01:40.960 --> 00:01:46.119 +varieties the first one I'd like to talk + +00:01:43.079 --> 00:01:48.560 +about is a very uh classical one called + +00:01:46.119 --> 00:01:50.960 +wordnet has anyone actually ever used + +00:01:48.560 --> 00:01:53.479 +wordnet + +00:01:50.960 --> 00:01:55.520 +before I see at least one person raising + +00:01:53.479 --> 00:01:57.640 +their hand so it's not entirely uh + +00:01:55.520 --> 00:02:00.119 +hasn't entirely disappeared has anyone + +00:01:57.640 --> 00:02:03.240 +heard of wordnet before + +00:02:00.119 --> 00:02:05.079 +okay more more people um so basically + +00:02:03.240 --> 00:02:06.960 +this used to be a really big thing in in + +00:02:05.079 --> 00:02:10.440 +natural language processing it's not So + +00:02:06.960 --> 00:02:12.319 +Much Anymore um but I I want to explain + +00:02:10.440 --> 00:02:14.800 +about it because I want to explain why + +00:02:12.319 --> 00:02:17.360 +this is maybe like less necessary to use + +00:02:14.800 --> 00:02:19.599 +but actual knowledge bases are still + +00:02:17.360 --> 00:02:23.160 +more necessary to + +00:02:19.599 --> 00:02:26.280 +use and so wordnet is a large database + +00:02:23.160 --> 00:02:29.560 +of words and specifically what it does + +00:02:26.280 --> 00:02:32.720 +is each word or something they call a + +00:02:29.560 --> 00:02:37.120 +syn set is a node and then there are + +00:02:32.720 --> 00:02:42.560 +relationships between nodes and the + +00:02:37.120 --> 00:02:44.319 +nodes can correspond to nouns um and or + +00:02:42.560 --> 00:02:45.920 +verbs or + +00:02:44.319 --> 00:02:48.360 +adjectives + +00:02:45.920 --> 00:02:49.959 +and nouns have different types of + +00:02:48.360 --> 00:02:53.360 +relations between them so they have + +00:02:49.959 --> 00:02:56.280 +things like an is a relation so like a + +00:02:53.360 --> 00:03:00.040 +hatchback is a type of car they are part + +00:02:56.280 --> 00:03:02.840 +of relations uh where a wheel is a part + +00:03:00.040 
--> 00:03:05.720
+of a car um and they also make
+
+00:03:02.840 --> 00:03:09.799
+distinctions between types and instances
+
+00:03:05.720 --> 00:03:12.400
+so like Joe Biden is an instance of a
+
+00:03:09.799 --> 00:03:16.560
+president and president is the
+
+00:03:12.400 --> 00:03:19.239
+type so um verb relations are ordered by
+
+00:03:16.560 --> 00:03:22.680
+specificity so like communicate is more
+
+00:03:19.239 --> 00:03:25.799
+broad than talk so talk is you know
+
+00:03:22.680 --> 00:03:27.519
+generally a subclass of communicate and
+
+00:03:25.799 --> 00:03:30.720
+then whisper is generally a subclass of
+
+00:03:27.519 --> 00:03:33.159
+talk so it's ordered in this way
+
+00:03:30.720 --> 00:03:35.920
+and then adjective relations are mostly
+
+00:03:33.159 --> 00:03:37.720
+antonyms so like wet versus dry
+
+00:03:35.920 --> 00:03:43.599
+and other things like
+
+00:03:37.720 --> 00:03:47.080
+this um when I said synsets uh actually
+
+00:03:43.599 --> 00:03:50.239
+each node is not a word despite the
+
+00:03:47.080 --> 00:03:53.239
+name WordNet it's a set of words that
+
+00:03:50.239 --> 00:03:56.200
+all have the same meaning so you might
+
+00:03:53.239 --> 00:03:59.120
+have artifact and thing both
+
+00:03:56.200 --> 00:04:00.879
+correspond to this um node because they
+
+00:03:59.120 --> 00:04:02.599
+both mean basically the same thing so
+
+00:04:00.879 --> 00:04:04.159
+it's like sets of synonyms and this is
+
+00:04:02.599 --> 00:04:07.599
+also important when we talk about other
+
+00:04:04.159 --> 00:04:09.920
+types of uh knowledge bases as well and
+
+00:04:07.599 --> 00:04:13.920
+so what was this used for um this was
+
+00:04:09.920 --> 00:04:17.160
+used for for example uh trying to
+
+00:04:13.920 --> 00:04:22.400
+find all the cars
+
+00:04:17.160 --> 00:04:24.440
+that were mentioned in like a large
+
+00:04:22.400 --> 00:04:27.440
+set of text so you would go through you
+
+00:04:24.440 --> 00:04:30.280
+would identify all
+
+00:04:27.440 --> 00:04:32.120
+synsets or you would identify all words
+
+00:04:30.280 --> 00:04:34.120
+that corresponded to these synsets and
+
+00:04:32.120 --> 00:04:35.720
+then you would take a step up and find
+
+00:04:34.120 --> 00:04:38.800
+motor car and you would know that
+
+00:04:35.720 --> 00:04:42.320
+all of those were mentions of cars so
+
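The "find all the cars" lookup just described, sketched with NLTK's WordNet interface (assumes nltk is installed and the wordnet corpus has been downloaded).

```python
# Collect every word that names some kind of car via WordNet's is-a hierarchy.
from nltk.corpus import wordnet as wn

car = wn.synset('car.n.01')                           # the 'motor car' synset
subtypes = set(car.closure(lambda s: s.hyponyms()))   # transitive is-a descendants

# Every lemma of every subtype refers to some kind of car, so matching these
# strings against text finds car mentions (hatchback, coupe, limousine, ...).
car_words = {lemma.name() for s in subtypes | {car} for lemma in s.lemmas()}
print(sorted(car_words)[:10])
```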
+00:04:38.800 --> 00:04:45.520
+like why don't we use WordNet very much
+
+00:04:42.320 --> 00:04:45.520
+anymore any
+
+00:04:49.160 --> 00:04:52.840
+ideas what would you do
+
+00:04:51.080 --> 00:04:55.560
+instead if I told you find all the cars
+
+00:04:52.840 --> 00:04:55.560
+in a big piece of
+
+00:04:55.960 --> 00:05:00.160
+text yeah just do something with the
+
+00:04:58.280 --> 00:05:02.880
+embedding just do something with
+
+00:05:00.160 --> 00:05:04.560
+embeddings yeah so you might
+
+00:05:02.880 --> 00:05:06.720
+find all things
+
+00:05:04.560 --> 00:05:10.360
+that were close in embedding space to a
+
+00:05:06.720 --> 00:05:10.360
+car what's another thing you might
+
+00:05:11.560 --> 00:05:15.520
+do like what I would do is I would
+
+00:05:13.639 --> 00:05:17.080
+download Mistral and say does this
+
+00:05:15.520 --> 00:05:19.880
+sentence talk about a car and it would
+
+00:05:17.080 --> 00:05:22.199
+say yes or no or
+
+00:05:19.880 --> 00:05:23.479
+I would say find all the cars in this
+
+00:05:22.199 --> 00:05:25.319
+that are mentioned in the sentence and
+
+00:05:23.479 --> 00:05:28.720
+it would get them and sure that's
+
+00:05:25.319 --> 00:05:31.319
+expensive but it's really easy so um you
+
+00:05:28.720 --> 00:05:32.919
+know there are other options that might
+
+00:05:31.319 --> 00:05:36.720
+be less expensive but that could solve a
+
+00:05:32.919 --> 00:05:39.520
+lot of the things so WordNet you know
+
+00:05:36.720 --> 00:05:41.039
+it
+
+00:05:39.520 --> 00:05:42.600
+started out being very popular in
+
+00:05:41.039 --> 00:05:44.039
+natural language processing but now it's
+
+00:05:42.600 --> 00:05:45.440
+less so because we can get a lot of it
+
+00:05:44.039 --> 00:05:47.639
+from embeddings we can get a lot of it
+
+00:05:45.440 --> 00:05:50.520
+from language models
+
+00:05:47.639 --> 00:05:52.759
+themselves um another thing that started
+
+00:05:50.520 --> 00:05:55.759
+maybe before WordNet or even around the
+
+00:05:52.759 --> 00:05:58.840
+same time as WordNet was this uh
+
+00:05:55.759 --> 00:06:00.800
+database called Cyc and it was a manually
+
+00:05:58.840 --> 00:06:04.160
+curated database attempting to encode
+
+00:06:00.800 --> 00:06:06.280
+all common sense knowledge um and the
+
+00:06:04.160 --> 00:06:08.759
+project itself lasted for about 30 to 40
+
+00:06:06.280 --> 00:06:11.840
+years it might even still
+
+00:06:08.759 --> 00:06:13.319
+exist um and so they had this huge
+
+00:06:11.840 --> 00:06:15.199
+hierarchy of all the different
+
+00:06:13.319 --> 00:06:17.680
+types of knowledge you could have it
+
+00:06:15.199 --> 00:06:19.680
+encoded knowledge about events and
+
+00:06:17.680 --> 00:06:21.479
+like which events happened before other
+
+00:06:19.680 --> 00:06:26.840
+events and all this other stuff like
+
+00:06:21.479 --> 00:06:29.039
+this um but the problem with this is
+
+00:06:26.840 --> 00:06:31.000
+this was just too ambitious basically it
+
+00:06:29.039 --> 00:06:35.680
+was not possible to encode all of this
+
+00:06:31.000 --> 00:06:37.440
+manually by hand so people um like it
+
+00:06:35.680 --> 00:06:38.840
+did get part of the way there but
+
+00:06:37.440 --> 00:06:40.240
+that part of the way there was not
+
+00:06:38.840 --> 00:06:42.560
+enough for it to be really useful in
+
+00:06:40.240 --> 00:06:45.199
+practical systems so this sort
+
+00:06:42.560 --> 00:06:47.800
+of method is not used as frequently
+
+00:06:45.199 --> 00:06:51.240
+now
+
+00:06:47.800 --> 00:06:56.000
+um a follow-up one
+
+00:06:51.240 --> 00:06:57.479
+um which is its successor is now uh
+
+00:06:56.000 --> 00:06:59.879
+the most widely used knowledge base is
+
+00:06:57.479 --> 00:07:03.240
+something called DBpedia and the basic
+
+00:06:59.879 --> 00:07:06.120
+idea behind DBpedia is that while Cyc
+
+00:07:03.240 --> 00:07:07.840
+is too difficult because they had people
+
+00:07:06.120 --> 00:07:12.400
+on the Cyc project who would go in and
+
+00:07:07.840 --> 00:07:12.400
+curate rules um for
+
+00:07:13.280 --> 00:07:19.080
+machines Wikipedia basically they have a
+
+00:07:17.160 --> 00:07:21.080
+very very large number of humans
+
+00:07:19.080 --> 00:07:23.639
+curating this structured data about
+
+00:07:21.080 --> 00:07:25.199
+entities in the world for humans they're
+
+00:07:23.639 --> 00:07:27.879
+creating it for humans because then you
+
+00:07:25.199 --> 00:07:29.599
+can put it on a Wikipedia page and you
+
+00:07:27.879 --> 00:07:31.440
+can look and see
it says Carnegie Mellon
+University it has the former names of
+
+00:07:29.599 --> 00:07:34.160
+Carnegie Mellon um it has the motto of
+
+00:07:31.440 --> 00:07:36.919
+Carnegie Mellon the type of entity who it
+
+00:07:34.160 --> 00:07:38.759
+was established by and when and other
+
+00:07:36.919 --> 00:07:41.360
+stuff like that and because people are
+
+00:07:38.759 --> 00:07:42.840
+no longer creating it for machines
+
+00:07:41.360 --> 00:07:44.280
+they're creating it for humans people
+
+00:07:42.840 --> 00:07:46.280
+are like motivated to do this so like
+
+00:07:44.280 --> 00:07:47.840
+lots of people will do it for free so
+
+00:07:46.280 --> 00:07:49.960
+you can actually get a reasonably sized
+
+00:07:47.840 --> 00:07:51.960
+amount of data from this and actually
+
+00:07:49.960 --> 00:07:53.639
+cover you know like most of the entities
+
+00:07:51.960 --> 00:07:55.720
+in the world or not most of the entities
+
+00:07:53.639 --> 00:07:57.080
+in the world but most of the notable
+
+00:07:55.720 --> 00:08:00.120
+entities in uh the part of the world that
+
+00:07:57.080 --> 00:08:03.319
+has high participation in
+
+00:08:00.120 --> 00:08:03.319
+Wikipedia
+
+00:08:03.479 --> 00:08:09.800
+um so now the thing that a
+
+00:08:08.039 --> 00:08:13.319
+lot of people use is something called
+
+00:08:09.800 --> 00:08:14.919
+Wikidata this name is a
+
+00:08:13.319 --> 00:08:17.039
+little bit of a misnomer because it's
+
+00:08:14.919 --> 00:08:18.960
+not actually that closely connected to
+
+00:08:17.039 --> 00:08:20.639
+Wikipedia they extract data from
+
+00:08:18.960 --> 00:08:21.720
+Wikipedia but they also extract it from
+
+00:08:20.639 --> 00:08:24.400
+lots of other
+
+00:08:21.720 --> 00:08:27.520
+sources and this is a curated database
+
+00:08:24.400 --> 00:08:30.360
+of entities um it's linked it's
+
+00:08:27.520 --> 00:08:33.959
+extremely large scale and it's
+
+00:08:30.360 --> 00:08:38.080
+multilingual and um this is an example
+
+00:08:33.959 --> 00:08:39.680
+of a page for Richard Feynman um where
+
+00:08:38.080 --> 00:08:42.680
+people can go in and they can actually
+
+00:08:39.680 --> 00:08:45.320
+like add information and stuff like that
+
+00:08:42.680 --> 00:08:47.440
+um and you know it gives information
+
+00:08:45.320 --> 00:08:50.959
+about education and all kinds of other
+
+00:08:47.440 --> 00:08:52.600
+stuff so um for fun I can go to the Wiki-
+
+00:08:50.959 --> 00:08:55.040
+data
+
+00:08:52.600 --> 00:08:59.360
+site does anyone have an entity they'd
+
+00:08:55.040 --> 00:08:59.360
+like to know more about
+
+00:09:01.640 --> 00:09:07.320
+any any ideas maybe something that has
+
+00:09:03.959 --> 00:09:07.320
+been in the news recently
+
+00:09:10.680 --> 00:09:16.160
+or nobody brave enough to come up with
+
+00:09:13.040 --> 00:09:18.360
+an entity yeah
+
+00:09:16.160 --> 00:09:20.640
+Mamba that's a good one I'm actually not
+
+00:09:18.360 --> 00:09:23.800
+sure if that one's going to be in here
+
+00:09:20.640 --> 00:09:27.720
+um there's lots of Mambas but I don't
+
+00:09:23.800 --> 00:09:27.720
+know about that particular Mamba let me
+
+00:09:27.839 --> 00:09:31.200
+see do you want to know about a
+
+00:09:29.720 --> 00:09:33.399
+different Mamba do you want to know
+
+00:09:31.200 --> 00:09:36.040
+about Mamba the research
+
+00:09:33.399 --> 00:09:38.399
+group so Mamba is a research group it's
+
+00:09:36.040 --> 00:09:41.800
+the modeling and analysis for medicine
+
+00:09:38.399 --> 00:09:44.800
+research group um
it focuses on
+mathematical biology and it's in the uh
+
+00:09:44.800 --> 00:09:51.120
+in the National Center for Scientific
+
+00:09:48.000 --> 00:09:52.519
+Research in France um the chairperson is
+
+00:09:51.120 --> 00:09:55.360
+this person and stuff like that so you
+
+00:09:52.519 --> 00:10:00.200
+can see it has all of these things so
+
+00:09:55.360 --> 00:10:03.920
+Mamba this Mamba is a node in the graph
+
+00:10:00.200 --> 00:10:06.839
+and then the edges are pointing um the
+
+00:10:03.920 --> 00:10:09.440
+edges are labeled with like instance of
+
+00:10:06.839 --> 00:10:11.200
+and then the next node is research group
+
+00:10:09.440 --> 00:10:13.000
+so research group is like another node
+
+00:10:11.200 --> 00:10:17.120
+in the graph and so you can click
+
+00:10:13.000 --> 00:10:18.680
+through this and it has its own ID and
+
+00:10:17.120 --> 00:10:21.200
+other things like
+
+00:10:18.680 --> 00:10:22.839
+this also you'll notice that research
+
+00:10:21.200 --> 00:10:24.160
+group is translated into lots of
+
+00:10:22.839 --> 00:10:27.440
+different languages in the world so you
+
+00:10:24.160 --> 00:10:30.120
+can use it multilingually and um
+
+00:10:27.440 --> 00:10:33.880
+and other things like that
+
+00:10:30.120 --> 00:10:37.000
+um even minor entities like Graham
+
+00:10:33.880 --> 00:10:40.160
+Neubig are included in this and it has a
+
+00:10:37.000 --> 00:10:42.240
+little bit of um like information about
+
+00:10:40.160 --> 00:10:45.480
+me like my PhD was at Kyoto University
+
+00:10:42.240 --> 00:10:45.480
+in 2012 I am a
+
+00:10:45.600 --> 00:10:52.079
+human I am male uh and first name last
+
+00:10:50.519 --> 00:10:53.720
+name university teacher computer
+
+00:10:52.079 --> 00:10:56.279
+scientist natural language processing
+
+00:10:53.720 --> 00:10:58.639
+this is all right um because this is
+
+00:10:56.279 --> 00:11:00.240
+mostly hand curated it even has the IDs
+
+00:10:58.639 --> 00:11:04.240
+of my
+
+00:11:00.240 --> 00:11:06.519
+advisers um the reason why it has all of
+
+00:11:04.240 --> 00:11:09.839
+this stuff actually is because like 15
+
+00:11:06.519 --> 00:11:12.160
+years ago or like 10 years ago I entered
+
+00:11:09.839 --> 00:11:14.399
+my uh my information into the
+
+00:11:12.160 --> 00:11:16.240
+Mathematics Genealogy Project uh which
+
+00:11:14.399 --> 00:11:18.880
+is this project about who your advisers
+
+00:11:16.240 --> 00:11:20.680
+were because I wanted to see like who my
+
+00:11:18.880 --> 00:11:22.800
+mathematical like siblings were and
+
+00:11:20.680 --> 00:11:24.519
+stuff like that and uh somehow they
+
+00:11:22.800 --> 00:11:27.360
+managed to pull that out and keep this
+
+00:11:24.519 --> 00:11:28.760
+like 10 years later so um basically
+
+00:11:27.360 --> 00:11:30.519
+they're pulling information from like
+
+00:11:28.760 --> 00:11:32.800
+many many different structured data
+
+00:11:30.519 --> 00:11:34.160
+sources that they can use so uh they can
+
+00:11:32.800 --> 00:11:37.480
+pull it in there I don't know where they
+
+00:11:34.160 --> 00:11:39.440
+got that I'm human uh but maybe that was
+
+00:11:37.480 --> 00:11:43.240
+inferred from some piece of data
+
+00:11:39.440 --> 00:11:44.760
+somewhere online or something cool
+
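For reference, entities like these can also be fetched programmatically from the public Wikidata API; Q42 (Douglas Adams) and P31 ("instance of") are real identifiers, but verify any IDs yourself before relying on this sketch.

```python
# Look up a Wikidata entity via the public MediaWiki API.
import requests

resp = requests.get(
    "https://www.wikidata.org/w/api.php",
    params={"action": "wbgetentities", "ids": "Q42", "format": "json"},
    headers={"User-Agent": "kb-lecture-demo/0.1"},
).json()

entity = resp["entities"]["Q42"]
print(entity["labels"]["en"]["value"])        # English label of the entity
print(list(entity["claims"].keys())[:5])      # first few property IDs it carries
```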
+00:11:43.240 --> 00:11:46.839
+um another good thing about this that
+
+00:11:44.760 --> 00:11:52.680
+actually I didn't mention directly in
+
+00:11:46.839 --> 00:11:52.680
+the um in the lecture notes or
+
+00:11:54.680 --> 00:12:01.120
+slides is that there's a query language
+
+00:11:57.360 --> 00:12:04.320
+for this yeah and a query language this
+
+00:12:01.120 --> 00:12:06.839
+query language is called SPARQL so
+
+00:12:04.320 --> 00:12:10.680
+there's SQL for querying relational
+
+00:12:06.839 --> 00:12:14.399
+databases and SPARQL is for querying
+
+00:12:10.680 --> 00:12:15.240
+these uh knowledge bases and let me see
+
+00:12:14.399 --> 00:12:18.279
+if I
+
+00:12:15.240 --> 00:12:22.560
+can I asked
+
+00:12:18.279 --> 00:12:24.560
+ChatGPT to write me a SPARQL query to find
+
+00:12:22.560 --> 00:12:26.839
+all presidents of Carnegie Mellon
+
+00:12:24.560 --> 00:12:31.160
+University so let's see if ChatGPT is
+
+00:12:26.839 --> 00:12:31.160
+capable of doing that um
+
+00:12:35.639 --> 00:12:39.680
+okay that's a problem let me
+
+00:12:41.279 --> 00:12:47.000
+see okay there's an error there
+
+00:12:43.880 --> 00:12:48.360
+but like if uh if I could find a I
+
+00:12:47.000 --> 00:12:50.160
+don't want to waste time in class
+
+00:12:48.360 --> 00:12:52.079
+finding a working query but basically
+
+00:12:50.160 --> 00:12:53.399
+you can put in a query and it allows
+
+00:12:52.079 --> 00:12:56.120
+you to do a lot of things that are
+
+00:12:53.399 --> 00:13:00.519
+similar to what you can do in SQL so you
+
+00:12:56.120 --> 00:13:02.720
+can find like all of the edges of nodes
+
+00:13:00.519 --> 00:13:05.279
+that satisfy a particular relation so
+
+00:13:02.720 --> 00:13:07.360
+you could say I want for Carnegie Mellon
+
+00:13:05.279 --> 00:13:10.160
+University to find all things that
+
+00:13:07.360 --> 00:13:13.519
+followed the like president-of relation
+
+00:13:10.160 --> 00:13:14.959
+and that would give me all um you know
+
+00:13:13.519 --> 00:13:18.680
+all presidents of Carnegie Mellon
+
+00:13:14.959 --> 00:13:20.440
+University you can also filter um
+
+00:13:18.680 --> 00:13:22.160
+filter by their start date and end date
+
+00:13:20.440 --> 00:13:24.120
+so find all of the presidents between a
+
+00:13:22.160 --> 00:13:25.839
+certain time and another time or
+
+00:13:24.120 --> 00:13:30.480
+things like
+
+00:13:25.839 --> 00:13:34.199
+that so this is good if you want to get
+
+00:13:30.480 --> 00:13:36.600
+like high-reli high-reliability data um
+
+00:13:34.199 --> 00:13:39.839
+in a scalable way because like if I ask
+
+00:13:36.600 --> 00:13:41.920
+ChatGPT like one of my favorite um one
+
+00:13:39.839 --> 00:13:45.720
+of my favorite queries for ChatGPT is
+
+00:13:41.920 --> 00:13:48.600
+like name all of the name all of the
+
+00:13:45.720 --> 00:13:51.959
+presidents that were born east of the
+
+00:13:48.600 --> 00:13:53.880
+Mississippi River um and I've never
+
+00:13:51.959 --> 00:13:56.519
+successfully gotten ChatGPT to be able
+
+00:13:53.880 --> 00:13:57.800
+to do this um because there's lots of
+
+00:13:56.519 --> 00:13:59.560
+presidents who were born east of the
+
+00:13:57.800 --> 00:14:02.320
+Mississippi River and it starts counting
+
+00:13:59.560 --> 00:14:04.079
+them it can't distinguish what position
+
+00:14:02.320 --> 00:14:05.639
+is east of the Mississippi and what
+
+00:14:04.079 --> 00:14:09.120
+position is west of the
+
+00:14:05.639 --> 00:14:11.279
+Mississippi but if you write a like a
+
+00:14:09.120 --> 00:14:14.759
+SPARQL query it's not that hard to do
+
+00:14:11.279 --> 00:14:16.480
+that
+
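A sketch of issuing such a query from Python against the real Wikidata SPARQL endpoint. P39 ("position held") is a real Wikidata property, but whether an item labeled exactly "President of Carnegie Mellon University" exists is an assumption to check before relying on this.

```python
# Query Wikidata's SPARQL endpoint (https://query.wikidata.org/sparql is real).
import requests

query = """
SELECT ?personLabel WHERE {
  ?person wdt:P39 ?position .                       # P39 = position held
  ?position rdfs:label "President of Carnegie Mellon University"@en .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""
resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "kb-lecture-demo/0.1"},
).json()
for row in resp["results"]["bindings"]:
    print(row["personLabel"]["value"])
```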
+certain types of questions especially + +00:14:16.480 --> 00:14:20.399 +information aggregation and complex + +00:14:18.639 --> 00:14:22.839 +relations and stuff that uh language + +00:14:20.399 --> 00:14:26.600 +models are not very good + +00:14:22.839 --> 00:14:28.120 +at cool um so that's kind of an intro to + +00:14:26.600 --> 00:14:31.240 +knowledge bases why you might want to + +00:14:28.120 --> 00:14:33.759 +think about them any questions so far + +00:14:31.240 --> 00:14:33.759 +for + +00:14:34.759 --> 00:14:39.720 +discussion okay um I will move on next + +00:14:38.320 --> 00:14:41.199 +so the next thing I'd like to talk about + +00:14:39.720 --> 00:14:43.839 +is learning representations for + +00:14:41.199 --> 00:14:45.519 +knowledge bases um so knowledge bases + +00:14:43.839 --> 00:14:48.000 +are great but one problem is they're + +00:14:45.519 --> 00:14:51.040 +like inherently + +00:14:48.000 --> 00:14:55.040 +incomplete and even with extremely large + +00:14:51.040 --> 00:14:58.279 +scale uh it becomes impossible to have + +00:14:55.040 --> 00:15:00.360 +them be complete and the reason why is + +00:14:58.279 --> 00:15:03.639 +uh for examp example in Freebase which + +00:15:00.360 --> 00:15:05.480 +was the predecessor to Wiki data um 71% + +00:15:03.639 --> 00:15:08.560 +of humans didn't have a date of + +00:15:05.480 --> 00:15:10.560 +birth um and probably every human + +00:15:08.560 --> 00:15:12.079 +actually has a date of birth right um + +00:15:10.560 --> 00:15:15.880 +you know we're pretty much guaranteed + +00:15:12.079 --> 00:15:17.639 +for that to be the case so the issue is + +00:15:15.880 --> 00:15:19.160 +like for very famous entities you want + +00:15:17.639 --> 00:15:21.040 +lots of detailed information like you + +00:15:19.160 --> 00:15:24.000 +can know absolutely everything about Joe + +00:15:21.040 --> 00:15:25.759 +Biden or Barack Obama but you know at + +00:15:24.000 --> 00:15:26.880 +the same time for Less major entities + +00:15:25.759 --> 00:15:28.079 +you still want them in the knowledge + +00:15:26.880 --> 00:15:30.079 +base but you're not going to be able to + +00:15:28.079 --> 00:15:31.519 +get all that information or should you + +00:15:30.079 --> 00:15:35.600 +for privacy + +00:15:31.519 --> 00:15:36.680 +purposes and so the idea is um for + +00:15:35.600 --> 00:15:38.079 +information that's written on the + +00:15:36.680 --> 00:15:40.600 +internet somewhere can you perform + +00:15:38.079 --> 00:15:42.759 +relation extraction which essentially + +00:15:40.600 --> 00:15:44.600 +allows you to extract this information + +00:15:42.759 --> 00:15:46.360 +and create your own knowledge bases and + +00:15:44.600 --> 00:15:47.680 +stuff like this and this can also be + +00:15:46.360 --> 00:15:50.079 +useful if you want to create it for like + +00:15:47.680 --> 00:15:52.199 +a specialized domain or um or other + +00:15:50.079 --> 00:15:55.000 +stuff like + +00:15:52.199 --> 00:15:59.519 +that so there's a bunch of ways that + +00:15:55.000 --> 00:16:03.079 +people do this um and one kind of + +00:15:59.519 --> 00:16:06.120 +popular way that people have tried to do + +00:16:03.079 --> 00:16:09.199 +relation extraction is through uh + +00:16:06.120 --> 00:16:12.560 +leveraging consistency in embedding + +00:16:09.199 --> 00:16:15.319 +space and so this is the most famous + +00:16:12.560 --> 00:16:17.959 +example from word de uh what seems like + +00:16:15.319 --> 00:16:21.880 +ages ago uh in + +00:16:17.959 --> 00:16:23.920 +2013 and in the word Toc paper one of 
+ +00:16:21.880 --> 00:16:26.279 +the big you know exciting things was + +00:16:23.920 --> 00:16:28.639 +essentially they demonstrated that + +00:16:26.279 --> 00:16:30.120 +vectors in embedding space had kind of + +00:16:28.639 --> 00:16:31.839 +in + +00:16:30.120 --> 00:16:33.160 +you know meaning and actually the + +00:16:31.839 --> 00:16:34.600 +vectors in embedding space could + +00:16:33.160 --> 00:16:37.639 +correspond to relations between + +00:16:34.600 --> 00:16:39.480 +embeddings so like uh we would have man + +00:16:37.639 --> 00:16:41.000 +pointing to woman in approximately the + +00:16:39.480 --> 00:16:42.920 +same direction that we had Uncle + +00:16:41.000 --> 00:16:46.600 +pointing to Aunt and King pointing to + +00:16:42.920 --> 00:16:49.680 +Queen and so um then you could do things + +00:16:46.600 --> 00:16:51.440 +like you could take Kings subtract out + +00:16:49.680 --> 00:16:53.560 +the vector that corresponded to + +00:16:51.440 --> 00:16:58.360 +plurality uh add the vector that + +00:16:53.560 --> 00:17:00.839 +corresponded to um you know uh to going + +00:16:58.360 --> 00:17:04.319 +from masculine to feminine words and + +00:17:00.839 --> 00:17:05.559 +then um like read the vector to that + +00:17:04.319 --> 00:17:07.160 +were plural and you'd be able to + +00:17:05.559 --> 00:17:09.439 +identify the plural by just knowing + +00:17:07.160 --> 00:17:11.000 +these two uh vectors the plural of green + +00:17:09.439 --> 00:17:14.000 +by just knowing those two + +00:17:11.000 --> 00:17:14.000 +vectors + +00:17:14.160 --> 00:17:21.880 +um but it turns out that you can either + +00:17:18.199 --> 00:17:21.880 +learn embeddings + +00:17:22.720 --> 00:17:28.240 +from like uh you can either learn + +00:17:25.000 --> 00:17:30.400 +embeddings from text or you can use the + +00:17:28.240 --> 00:17:32.039 +fact that you have a big knowledge base + +00:17:30.400 --> 00:17:34.880 +that was curated by humans like Wiki + +00:17:32.039 --> 00:17:36.120 +data to improve the embeddings of a + +00:17:34.880 --> 00:17:39.559 +neural model + +00:17:36.120 --> 00:17:41.799 +itself and so another pretty large uh + +00:17:39.559 --> 00:17:43.600 +research area that a lot of people have + +00:17:41.799 --> 00:17:47.120 +focused on is how do you get good + +00:17:43.600 --> 00:17:48.720 +embeddings of a Knowledge Graph and this + +00:17:47.120 --> 00:17:50.600 +is important if you want to do any sort + +00:17:48.720 --> 00:17:52.799 +of like Knowledge Graph Search or other + +00:17:50.600 --> 00:17:54.160 +things like this like for example one of + +00:17:52.799 --> 00:17:56.799 +the really nice things about knowledge + +00:17:54.160 --> 00:17:58.880 +graphs is they have information about a + +00:17:56.799 --> 00:18:00.200 +whole bunch of really sparse entities + +00:17:58.880 --> 00:18:03.240 +that aren't mentioned very much on the + +00:18:00.200 --> 00:18:05.679 +internet for example and so because of + +00:18:03.240 --> 00:18:07.440 +that you can um you can leverage the + +00:18:05.679 --> 00:18:10.720 +knowledge graph structure together with + +00:18:07.440 --> 00:18:10.720 +text to learn better embeddings + +00:18:11.240 --> 00:18:18.520 +overall and so this particular paper is + +00:18:15.280 --> 00:18:20.960 +one example of it um and the way they do + +00:18:18.520 --> 00:18:23.280 +this is they express uh Knowledge Graph + +00:18:20.960 --> 00:18:25.919 +triples is additive + +00:18:23.280 --> 00:18:28.480 +Transformations and they minimize the + +00:18:25.919 --> 00:18:31.640 
It turns out that you can either learn embeddings like these from text, or you can use the fact that you have a big knowledge base curated by humans, like Wikidata, to improve the embeddings of a neural model itself. So another pretty large research area that a lot of people have focused on is how to get good embeddings of a knowledge graph. This is important if you want to do any sort of knowledge graph search or similar: one of the really nice things about knowledge graphs is that they contain information about a whole bunch of really sparse entities that aren't mentioned very much on the internet, so you can leverage the knowledge graph structure together with text to learn better embeddings overall.

This particular paper is one example. The way they do it is to express knowledge graph triples as additive transformations, and they minimize the distance on existing triples with a margin-based loss. They have the head h and the tail t, and l is the vector corresponding to the link between them — the vector that corresponds to a relation. So you have h and t, and you try to go from h to t according to the relation vector (in this figure it's written r, because I took the figure from a different paper). Then you use a hinge loss: there's a margin parameter, and you try to upweight true triples and downweight false triples, where a false triple could be one that was randomly sampled to be incorrect, for example.
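As a rough sketch of what this additive objective looks like in code — this follows the general recipe just described, not any particular paper's implementation, and the entity counts, dimensions, and margin are arbitrary:

```python
# Sketch: head + relation should land near tail; train with a margin-based
# hinge loss against randomly corrupted triples.
import torch
import torch.nn as nn

n_entities, n_relations, dim = 1000, 50, 64
ent = nn.Embedding(n_entities, dim)
rel = nn.Embedding(n_relations, dim)

def score(h, r, t):
    # distance of h + r from t; smaller = more plausible triple
    return (ent(h) + rel(r) - ent(t)).norm(p=2, dim=-1)

def margin_loss(h, r, t, margin=1.0):
    t_neg = torch.randint(0, n_entities, t.shape)  # corrupt the tail at random
    # hinge: true triples should score at least `margin` better than corrupted ones
    return torch.relu(margin + score(h, r, t) - score(h, r, t_neg)).mean()

h = torch.tensor([0, 1]); r = torch.tensor([3, 4]); t = torch.tensor([2, 5])
loss = margin_loss(h, r, t)
loss.backward()  # gradients flow into both embedding tables
```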
One interesting aside about knowledge graph embeddings is that a lot of famous AI researchers got their start in them. Richard Socher is one of them — he's the CEO of the you.com search engine now — and this was a first attempt at predicting relations. They basically created an MLP that tries to predict whether a relation exists: they have a matrix for the left side of the relation and a matrix for the right side, they feed in the embeddings of each of the entities in the relation, apply a nonlinearity, and then a final vector predicts the probability of the relation actually being correct. You run this through a sigmoid; if it's near one the relation is likely to exist, and if it's near zero it's likely not to.

They also proposed something called a neural tensor network, which adds a bilinear feature extractor. Basically, we have the two entity embeddings and a matrix, and we calculate the dot product with the embedding after transformation — it looks a lot like attention, actually, since it's similar to bilinear attention. Then we also have the MLP part, which this term corresponds to, and a bias term. This is a powerful model, but it's a bit overparameterized, so it later fell out of favor relative to simpler models that just use linear projections between the two.
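Here's a small sketch of that kind of bilinear-plus-MLP scoring function. The sizes and exact arrangement of terms are illustrative assumptions in the spirit of the description, not the paper's exact architecture:

```python
# Sketch: bilinear term (e1^T W_k e2 per slice k) + MLP term + bias,
# squashed to a relation-plausibility score. Sizes are invented.
import torch

dim, k = 64, 4                      # embedding size, number of tensor slices
W = torch.randn(k, dim, dim)        # bilinear tensor
V = torch.randn(k, 2 * dim)         # MLP weights over [e1; e2]
b = torch.randn(k)                  # bias term
u = torch.randn(k)                  # output vector

def ntn_score(e1, e2):
    bilinear = torch.einsum("d,kde,e->k", e1, W, e2)  # k bilinear features
    mlp = V @ torch.cat([e1, e2])                     # k MLP features
    return u @ torch.tanh(bilinear + mlp + b)         # scalar score

e1, e2 = torch.randn(dim), torch.randn(dim)
print(torch.sigmoid(ntn_score(e1, e2)))  # ~probability the relation holds
```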
There are a lot of methods like this. They basically either assume that we already have knowledge graph embeddings and we want to learn relations, or they assume we have no information at all about the knowledge graph and we want to learn the knowledge graph embeddings themselves. They've been used for both, but I'd say now they're probably most useful for learning knowledge graph embeddings, if you want to do any sort of knowledge-graph-based modeling — which can be useful. Cool, any questions about these?

OK, next. This part might actually be a little simpler than the knowledge-graph-based approaches: another method for relation extraction is learning from text directly. The first question here is how you get training data to learn relation extraction, and there was a very influential paper on this, "Distant Supervision for Relation Extraction." I would say it's one of the first, and certainly one of the most influential, papers on data augmentation or synthetic data for natural language processing. The idea is that you already have a knowledge base with some entries in it, like Wikidata, and given its entity-relation-entity triples, you extract all text that matches a particular relation type and use it to train a supervised relation extractor.

The way this works — it's an old paper, so the examples are old too — is: say we have Steven Spielberg being the director of the film Saving Private Ryan, and that's included in our knowledge base. We find all sentences that contain both Steven Spielberg and Saving Private Ryan, and label each one as a positive example of that relation. In general this works reasonably well, but the problem is that there are also negative examples mixed in. For instance, the first example here is arguably a negative example for the director relation: "Steven Spielberg's film Saving Private Ryan" doesn't actually tell you he's the director, just that he's somehow affiliated with it — he could be the writer, or an actor, or something else. So this is a nice way to create data essentially for free, but at the same time you can create noisy examples, and that can be a problem.
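A minimal sketch of the distant supervision idea, with toy sentences in the spirit of the example above (a real system would run this matching over millions of sentences):

```python
# Sketch: sentences mentioning both entities of a known KB triple are
# (noisily) labeled as positive examples of that relation.
kb = [("Steven Spielberg", "director_of", "Saving Private Ryan")]

corpus = [
    "Steven Spielberg directed Saving Private Ryan in 1998.",
    "Steven Spielberg's film Saving Private Ryan won five Oscars.",  # noisy!
    "Saving Private Ryan is a war film.",
]

def distant_supervision(kb, corpus):
    examples = []
    for head, relation, tail in kb:
        for sentence in corpus:
            if head in sentence and tail in sentence:
                # Both entities co-occur, so assume the relation is expressed.
                # The second sentence shows why this is noisy: co-occurrence
                # does not guarantee the sentence asserts "director_of".
                examples.append((sentence, head, tail, relation))
    return examples

for ex in distant_supervision(kb, corpus):
    print(ex)
```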
There's been a lot of work on relation classification with neural networks, with a lot of different methods. Most of them work by extracting features and then classifying somehow, although there are some large-language-model-based methods now. One thing about relation extraction — and information extraction in general — is that very often you want to run it over a huge corpus; you might want to run it over the whole internet. From that point of view, like I said earlier, I could just ask Mistral whether cars are mentioned in a sentence, but if you want to run GPT-4 over the whole internet, that's a pretty big budget and you might want to reconsider. So there is also real benefit in having cheap, lightweight methods.

What this particular paper did is extract features and then classify: lexical features of the entities themselves, and features of the whole span. The way most modern methods do this is to extract embeddings at the first and last tokens of the first entity and the first and last tokens of the second entity, feed those four embeddings into an MLP or something like that, and make a prediction about whether the relation exists. If you have an embedding model this is relatively easy to do: you feed the text through RoBERTa, or through Mistral, get the embeddings for each of the tokens, and make a prediction based on those four embeddings. The details of that aren't super important unless you're going to go in and implement it yourself — if you're actually doing relation extraction the details obviously matter, but I'm assuming most of you won't be doing that for your final project.
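Still, as a sketch, that four-boundary-embedding classifier might look like the following; the encoder output here is random, as a stand-in for, e.g., RoBERTa token embeddings, and the sizes are arbitrary:

```python
# Sketch: concatenate the encoder embeddings at the boundary tokens of the
# two entity spans and classify the relation with an MLP.
import torch
import torch.nn as nn

dim, n_relations = 64, 10
mlp = nn.Sequential(nn.Linear(4 * dim, 256), nn.ReLU(),
                    nn.Linear(256, n_relations))

def classify_relation(token_embs, span1, span2):
    # token_embs: (seq_len, dim); each span = (start, end), inclusive indices
    feats = torch.cat([
        token_embs[span1[0]], token_embs[span1[1]],  # first entity boundaries
        token_embs[span2[0]], token_embs[span2[1]],  # second entity boundaries
    ])
    return mlp(feats)  # unnormalized scores over relation types

token_embs = torch.randn(20, dim)  # pretend this came from an encoder
logits = classify_relation(token_embs, (2, 3), (8, 10))
print(logits.softmax(-1))
```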
One really interesting thing that's relevant even if you're not doing relation extraction is how you can model noise, because, as I said, distant supervision creates lots of semi-noisy data, and a lot of the work in getting good models for relation extraction has been about dealing with that distant supervision noise. I'll give one example here, but there's a series of papers after this one that tried to do similar things.

The idea is that there's noise in the distant supervision labels, and we want to model and mitigate that noise. The way this paper does it: they have an encoder, and from the encoder you calculate embeddings and make predictions. You have a small set of very high-quality data that you can basically trust not to be noisy — maybe it's manually annotated and you have 5,000 examples of it — and then, separately, you have something like 5 million examples of automatically labeled data that might or might not be good. At the beginning they take the encoder, get embeddings, and make predictions over the high-quality data; then they have a separate noise modeling layer. This layer has a transition matrix which says: given that we made a particular prediction over classes — this is essentially a multiclass classification problem — transform the probabilities (I don't remember whether they transform the probabilities or the logits; I think it's the probabilities) to get a final distribution after noise. That means you can smooth out the distribution and account for the fact that the labels may or may not be noisy.

Then they add additional normalization on this transition matrix, something called trace normalization, to move the matrix closer to the identity. That encodes the belief that the labels are probably not wrong all the time — they're correct a lot of the time, just not always. So you have the trace normalization competing with the pressure toward a smoother distribution that reduces the loss. I think this is actually a pretty interesting idea, and it can be used not just for relation extraction but also in other cases where you might have noisy labels.
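Here's a small sketch of the noise-layer idea. The exact loss and normalization below are my assumptions in the spirit of the description, not the paper's equations: predictions are passed through a learned transition matrix, and a trace penalty pulls that matrix toward the identity:

```python
# Sketch: clean predictions -> learned noise transition matrix -> observed
# (possibly noisy) label distribution, with a trace penalty toward identity.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_classes = 5
classifier = nn.Linear(64, n_classes)        # stand-in encoder + classifier
T_raw = nn.Parameter(torch.eye(n_classes))   # noise transition parameters

def noisy_label_loss(x, noisy_labels, trace_weight=0.1):
    p_clean = classifier(x).softmax(-1)      # model's "true" distribution
    T = T_raw.softmax(-1)                    # rows are P(observed | true)
    p_observed = p_clean @ T                 # distribution after noise
    nll = F.nll_loss(torch.log(p_observed + 1e-9), noisy_labels)
    # maximizing the trace pushes T toward the identity: labels are
    # assumed right more often than not
    trace_penalty = -torch.trace(T)
    return nll + trace_weight * trace_penalty

x = torch.randn(8, 64)
y = torch.randint(0, n_classes, (8,))
print(noisy_label_loss(x, y))
```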
So, are there any questions about this or any of the things going on here? Even if you're completely uninterested in relation extraction, I'd encourage you to think about examples of things you are interested in where you could potentially get labels this way, and how you could handle the noise there — that might be a thing to think about.

OK, so that was a very, very brief overview of how we create knowledge bases from textual data or from structured knowledge graph data. Now I'd like to talk a little about how to use knowledge bases to inform neural models, and there are a bunch of different ways to do this.

The first way is to improve embeddings with existing lexicons. This example uses non-contextual embeddings — not the ones we get from neural language models, but the ones we get from just running an embedding model like word2vec. What they did in this paper is essentially retrofit embeddings to existing lexicons, doing a post-hoc transformation of the embeddings so that they match the knowledge graph or lexicon better. They started out with pre-trained embeddings, and they had a double objective: make the transformed embeddings close to their neighbors and close to the original embedding. The way they did this is with a regularization term over here, which basically says, "I don't want you to move your embeddings too far away from how they were initialized," and then, at the same time, "I would like you to make these
embeddings closer to each other if they are synonyms of each other." They did this using WordNet: they took the words that were in synsets with each other and regularized the synonyms to be closer together, while also keeping the embeddings close to how they started out. There were also variants that force antonyms away from each other. This is a little bit older work, so it operated on non-contextualized embeddings, but we could do something very similar for more modern models, and for knowledge graph embeddings, for example.

Say we had a model that identifies entities, and then different mentions of those entities across different contexts. Let's go back to the Wikidata page: if we had lots of examples of Joe Biden — Joe Biden is referred to in a number of ways, like Joe Biden, Joseph Biden, Joseph R. Biden, JRB I guess, POTUS 46 — you could find different pieces of text that match these strings, and even do entity linking, which I'll talk about in a little bit. Then you could encourage the embeddings for all of these different mentions to be closer together, to make your model distinguish them less and ensure that they get similar embeddings, and that could improve question answering, lookup, and other things like that.
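As a sketch, that retrofitting objective leads to a simple iterative update: move each vector toward the average of its lexicon neighbors while staying close to its original value. The two-word vocabulary and the alpha/beta weights below are toy values:

```python
# Sketch: iteratively pull lexicon neighbors together while a second term
# keeps each vector near its original pre-trained value.
import numpy as np

def retrofit(orig, synonyms, alpha=1.0, beta=1.0, iters=10):
    # orig: {word: vector}; synonyms: {word: [neighbor words]}
    q = {w: v.copy() for w, v in orig.items()}
    for _ in range(iters):
        for w, nbrs in synonyms.items():
            nbrs = [n for n in nbrs if n in q]
            if not nbrs:
                continue
            # weighted average of "stay near original" and "move to neighbors"
            num = alpha * orig[w] + beta * sum(q[n] for n in nbrs)
            q[w] = num / (alpha + beta * len(nbrs))
    return q

orig = {"happy": np.array([1.0, 0.0]), "glad": np.array([0.0, 1.0])}
syn = {"happy": ["glad"], "glad": ["happy"]}
print(retrofit(orig, syn))  # the two vectors drift toward each other
```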
Cool. Yes, a question? [The question was about subword modeling.] So what happens if you do subword modeling and you don't have a single embedding that corresponds to the entire entity string that's supposed to be close? That's a really good question. Let me check — I don't think I actually have these on the slides, so I might have to open a paper. OK, there are a lot of different ways to handle this; there are two papers.

The first paper is a really nice, very influential paper on the subject of coreference resolution. Coreference resolution is essentially trying to identify when two spans refer to each other: if I say "Joe Biden" early in a document, and later the document just says "Biden", we want to know that those two things are referring to each other. We then had a paper later where we generalized this and applied very similar methodology to lots and lots of different analysis tasks, but I can show the beginning here.

Basically, the methodology they use — and this is specifically for modeling spans and getting embeddings out of spans of tokens — is a model where you take the embedding from the beginning of the span, the embedding from the end of the span, and the average of all of the embeddings in the span. That gives you three vectors for any span, because you can always get the beginning, the end, and the mean. Then you feed those through a neural network — a transformation — and get a new embedding. That's the method they used.
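Here's a minimal sketch of that span representation — start embedding, end embedding, mean over the span, then a projection; the sizes are illustrative, and in the papers the inputs would be contextual embeddings from an encoder:

```python
# Sketch: represent any token span by [first token; last token; mean of span],
# projected down to a single vector.
import torch
import torch.nn as nn

dim = 64
project = nn.Linear(3 * dim, dim)  # turn the three vectors into one

def span_embedding(token_embs, start, end):
    # token_embs: (seq_len, dim); span covers tokens start..end inclusive
    first = token_embs[start]
    last = token_embs[end]
    mean = token_embs[start:end + 1].mean(dim=0)
    return project(torch.cat([first, last, mean]))

token_embs = torch.randn(12, dim)
span = span_embedding(token_embs, 3, 6)
print(span.shape)  # torch.Size([64]) -- one vector per span, any length
```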
I thought our paper had a better figure of how you can actually use that — actually, maybe it doesn't. OK, but anyway, here's the figure. You can use that span embedding for a number of things: you could use it to look something up in a knowledge base, or to decide whether two spans are coreferent, by feeding in the first span and the second span and then predicting whether those two spans correspond to each other or not. This general idea of modeling spans, and then modeling relations between the spans, lets you solve lots of different tasks, like part-of-speech tagging, named entity recognition, relation extraction, and other things like that. I realize now that I probably should have talked about these in the slides where I was talking about modeling, but that would be my recommended way of doing it. Cool, any other questions? Nice, OK.

So another question is how we can inject knowledge into language models, and there are a bunch of different ways to do this. One very easy way is to somehow look up relevant knowledge in your knowledge graph and — sorry, I was presenting on my own screen, not the screen everybody can see — to look up knowledge in a knowledge graph and somehow provide it to the model. One way to provide it is through prompting. The problem with prompting, though, is that you're not necessarily going to be able to utilize knowledge that is minority knowledge, because the embeddings of the entities you're presenting may not be well learned. You're essentially requiring the model to generalize from the knowledge you provide in the prompt, despite the fact that the prompt mentions minor entities or other things that are not as well learned.
As another method to handle this, we previously proposed an approach that essentially lets the model predict, instead of the words directly, a tag that says "birth name" or "given name" or "family name" or something like that; then, post hoc, the model fills in that birth-name text based on a knowledge base. So if you have a Wikipedia article about Barack Obama that you're trying to write, it could predict "[birth name] (born [birth date])" — a very, very common pattern in Wikipedia. Because of that, the model can predict it very consistently and formulaically, and that lets you get something that makes sense and is factual with high confidence, reducing hallucination and other problems like that.

So, basically, how could you inject this into language models? There are multiple ways: one is prompting, which is maybe the easier way, and another is templatic generation like this, where you generate placeholders for all the information you want to add and then add the information directly from the knowledge base through the placeholders.

There are details about this in the paper, like how we formulate a training objective for something like this. The difficulty in formulating a training objective is that you need to figure out when you want to replace things: you might not always want to replace with "birth name"; you might want to replace with "given name" and "family name". We demonstrate that you can figure out how to do this by essentially marginalizing over the various ways of doing the replacement, but that's a more complex detail that's in the paper.
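A minimal sketch of the placeholder-filling step; the knowledge base entries and the template are invented for illustration, and a real system would also handle entity linking and choosing which tags to emit:

```python
# Sketch: the model emits placeholder tags instead of surface strings, and
# the tags are filled in post hoc from a knowledge base.
kb = {
    "Barack Obama": {
        "[birth name]": "Barack Hussein Obama II",
        "[birth date]": "August 4, 1961",
    }
}

def fill_template(generated, entity):
    # `generated` is what the language model would predict, tags included
    for tag, value in kb[entity].items():
        generated = generated.replace(tag, value)
    return generated

generated = "[birth name] (born [birth date]) is an American politician."
print(fill_template(generated, "Barack Obama"))
# -> "Barack Hussein Obama II (born August 4, 1961) is an American politician."
```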
Another really interesting question — this is also a paper I was involved in, from four years ago, but I feel like it's not entirely solved even in modern RAG systems today — is how we can reason over a lot of text the way we reason over knowledge bases; that is, reasoning over a text corpus as if it were a knowledge base. Basically, what we did was answer questions using text corpora as a virtual knowledge base, and we did relevance matching over mentions. We created mention vectors — vectors for all of the mentions of particular entities in the knowledge base — by running pre-trained models and generating an embedding for each mention in the whole corpus.

Let me find the place over here. Based on this, we encoded all of the mentions, and then we had a dense query vector that was specifically trained to identify entity mentions that answer the question. So if we had "When was the Grateful Dead and Bob Dylan album released?", Bob Dylan would be one vector and the Grateful Dead another vector, and the model is specifically trained so that when you take the entity embedding from the question and match it against the entity mention embeddings in this big encoded corpus, it's most likely to return information relevant to answering these entity-relation questions.
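As a sketch, the retrieval step is just an inner-product search between a query vector and the matrix of mention vectors; here the vectors are random stand-ins for trained encoder outputs:

```python
# Sketch: dense relevance matching over entity mentions.
import numpy as np

n_mentions, dim = 10000, 128
mention_vecs = np.random.randn(n_mentions, dim)  # one vector per mention
query_vec = np.random.randn(dim)                 # e.g. the "Saving Private Ryan"
                                                 # embedding from a question

scores = mention_vecs @ query_vec                # inner-product relevance
top_k = np.argsort(-scores)[:5]                  # best-matching mentions
print(top_k, scores[top_k])
```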
So then the question is how you train a model like this: how do you train a dense embedding model so that it retrieves relevant information for answering questions? Basically, the way we did it was through weak supervision, just like I talked about for relation extraction. In relation extraction we can create weak supervision by taking a big existing knowledge base and identifying all of the sentences where the answer is included. So we took a big existing knowledge base and asked: what are some of the relations in it? One example is that Steven Spielberg is the director of Saving Private Ryan. We created questions like "___ was the director of Saving Private Ryan" — we can create those easily with templates for many different relations — and then we took the embedding for Saving Private Ryan in that question and tried to upweight all of the Saving Private Ryan mention embeddings across all of Wikipedia where Steven Spielberg co-occurred in the sentence. That matches artificially created questions with sentences that would answer them, so it gives you supervision, it gives you a lot of data to train over, and it gives you a good model; that's what allowed us to learn this model well. So this is one example of how you can do RAG specifically informed by knowledge bases and things like that. Any questions about this? OK.

Another thing I'd like to go into is something we call schema-free extraction. If I go back to the Wikidata page: Wikidata has something we call a schema, and the schema is basically the set of relations that are included in the database. One of the relations that's included is "instance of"; I guess also "image" (lots of images), "signature", "sex or gender", "country of citizenship". These relations were decided a priori by the people who created Wikidata, and there are lots and lots of them, but that doesn't mean we have them all: similarly to the problem of not having all of the entities, we can't have all of the relations. Just to give one example: in preparation for our large language models lecture, I actually created some structured data about large language models, and one of the fields I created was
the variety of positional embedding each model uses — "positional embedding variety" — and that is not in Wikidata; I'd be surprised if it were. So as you go down to more esoteric concepts or specialized domains, you're almost always guaranteed to be missing some of the entities or relations you need. That's the problem schema-free extraction is trying to solve: figuring out the schema jointly with the information you want to extract.

The most famous example of this is something called open information extraction. In open information extraction, basically, we say we don't need a schema: the only schema we have is the actual text of the sentences mentioning the entities. So if we have "United has a hub in Chicago, which is the headquarters of United Continental Holdings", the relation is literally "has a hub in" — that's the relation — and for the second pair the relation is "is the headquarters of". The problem with this is that it cannot abstract away: if we had another sentence that said "United Continental Holdings has its headquarters in Chicago", that would be treated as completely different, and you wouldn't be able to group those two together.
Open information extraction is actually one of the few places where people still use rule-based systems as almost state-of-the-art systems. The reason you're able to do this is that it's not actually that hard to extract the relevant strings between two entities, so both the precision and the recall are pretty high. Another reason people use rule-based systems is that you want to run this over the whole web, and running a neural model over the whole web is expensive, so you can use a rule-based model instead.

Some examples of this include TextRunner and ReVerb. The basic idea behind them is that you use a parser to do a syntactic analysis of the sentence and extract according to rules: for example, the relation must contain a predicate, the subject and object must be noun phrases, and other things like this. What they then did in this paper — arguably this is maybe no longer necessary with the compute power we have now — is train an even faster model to extract over large amounts of data: they used the rule-based output as weak supervision and trained a sequence-based model that could do the extraction even faster. Another thing they did was aggregate multiple pieces of evidence heuristically, to find common and therefore potentially reliable extractions.
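A toy sketch of the rule-based idea: take the token span between two noun phrases as the relation string, and keep it only if it contains a predicate. Real systems use a POS tagger or parser; here the tags are supplied by hand to keep the sketch self-contained:

```python
# Sketch: open IE as "the literal text between two entities is the relation,
# if it contains a verb".
tokens = [("United", "NOUN"), ("has", "VERB"), ("a", "DET"), ("hub", "NOUN"),
          ("in", "ADP"), ("Chicago", "NOUN")]
entities = [(0, 0), (5, 5)]  # token spans of "United" and "Chicago"

def extract(tokens, e1, e2):
    between = tokens[e1[1] + 1:e2[0]]
    if any(pos == "VERB" for _, pos in between):  # relation needs a predicate
        relation = " ".join(w for w, _ in between)
        return (tokens[e1[0]][0], relation, tokens[e2[0]][0])
    return None

print(extract(tokens, *entities))
# -> ('United', 'has a hub in', 'Chicago')
```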
Any piece of text on the internet could be a lie, right? I might write on my blog that United has a hub in, say, Denver — wait, something does have a hub in Denver, but "United has a hub in Pittsburgh" is definitely wrong, so let's go with that. Somebody could write that on the internet, and in fact, because I just said it, it's probably in the YouTube comments somewhere now. Any piece of information on the internet could be wrong, so they had heuristic methods to filter these out, and usually these were frequency-based: if both "United" and "Pittsburgh" are very common, but it's very rare for somebody to say "United has a hub in Pittsburgh", then it's statistically unlikely to be correct, because if it were correct, we'd expect to see it much more frequently. So those were the kinds of things they did here.
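A sketch of that frequency-based filtering heuristic; the counts and the threshold are invented, and the real systems used more careful statistics:

```python
# Sketch: an extraction whose arguments are both common, but which itself is
# very rare, is statistically suspicious.
from collections import Counter

extraction_counts = Counter({
    ("United", "has a hub in", "Chicago"): 480,
    ("United", "has a hub in", "Pittsburgh"): 1,  # probably a lie
})
entity_counts = Counter({"United": 5000, "Chicago": 8000, "Pittsburgh": 7000})

def plausible(triple, min_ratio=1e-5):
    h, _, t = triple
    # how often we saw the claim, relative to how often we saw its arguments
    ratio = extraction_counts[triple] / (entity_counts[h] * entity_counts[t])
    return ratio >= min_ratio

for triple in extraction_counts:
    print(triple, plausible(triple))
# Chicago passes; Pittsburgh is filtered out.
```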
There are also some neural models for open IE. I think these are used maybe a little less often, but heuristics are still not perfect, and the problem with not relying on heuristics is that you need to get training data from somewhere. There's a rather clever paper — and again, if you're not interested in relation extraction in particular, I think this is still worth paying attention to — which demonstrated that it's possible to create relatively large datasets by asking people simple questions. In particular, they wanted relation extraction data for sentences like "UCD finished the 2006 championship...". If you ask crowd workers, "OK, select the entity span, the relation span, and the second entity — the head entity, the relation, and the tail entity — on this interface, and then tell me whether it's this relation or that relation," that's actually pretty hard, and it takes some time to get them onboarded. So instead, they just asked questions where the answer basically gives you the relation: "Who finished something?" with the answer "UCD"; "What did someone finish?" — "the 2006 championship"; "What did someone finish something as?"; and so on. In doing this they created something called semantic roles, which we're probably going to talk about a little later; you can take the semantic roles and use them to annotate relation extraction data, and then they trained a supervised neural tagger for this. Cool.

So another thing I'd like to talk about: I talked about learning information about entities from entity embeddings, but you can actually learn information about relations from other relations, and this can help solve the problem that open IE is not able to abstract and generalize. Word embeddings or entity embeddings give information about the word in context, which can be indicative for knowledge bases, but other relations, or combinations thereof, are also indicative of them. If anybody is familiar with graphs or graph processing, there's the whole idea of link prediction, where you're given a small number of links in a graph and you want to predict which other links are likely to exist.

Like I said, a lot of very prominent AI researchers got their start in relation extraction, and Ilya Sutskever is actually another one of them. This 2009 paper proposed using tensor decomposition to do induction of relations. The way it works is that you model relations by decomposing a tensor containing entity-relation-entity triples: one mode of the tensor is the left entity, another mode is the right entity, and the depth of the tensor is which relation, with each cell saying whether that relation holds. The triples we know exist get a one, and the ones we assume don't exist get a zero. Then we do a low-rank approximation of this tensor, and the low-rank approximation has reconstruction error: when we reconstruct, some entries that were previously zero become close to one, and those are the triples we think might actually be real — relations we were just missing because our previous knowledge base was incomplete.
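Here's a rough sketch of low-rank tensor factorization for link prediction, using plain gradient descent on shared entity factors; the actual 2009 model is more sophisticated, so treat this as the general idea only:

```python
# Sketch: fit a low-rank approximation of a 0/1 (head, relation, tail)
# tensor; previously-zero cells reconstructed as high are candidate links.
import numpy as np

n_ent, n_rel, rank = 20, 3, 4
rng = np.random.default_rng(0)

X = np.zeros((n_ent, n_rel, n_ent))
for _ in range(60):  # some "known" triples, placed at random for the demo
    X[rng.integers(n_ent), rng.integers(n_rel), rng.integers(n_ent)] = 1.0

E = rng.normal(0, 0.1, (n_ent, rank))  # entity factors (shared head/tail)
R = rng.normal(0, 0.1, (n_rel, rank))  # relation factors

for _ in range(500):  # gradient descent on squared reconstruction error
    X_hat = np.einsum("hk,rk,tk->hrt", E, R, E)
    G = X_hat - X
    E_grad = (np.einsum("hrt,rk,tk->hk", G, R, E)
              + np.einsum("hrt,hk,rk->tk", G, E, R))
    R_grad = np.einsum("hrt,hk,tk->rk", G, E, E)
    E -= 0.01 * E_grad
    R -= 0.01 * R_grad

X_hat = np.einsum("hk,rk,tk->hrt", E, R, E)
candidates = (X == 0) & (X_hat > 0.5)  # zeros reconstructed as ~one
print(int(candidates.sum()), "suggested new triples")
```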
+00:53:51,040 --> 00:53:55,720
+approximation of the tensor we have
+
+00:53:52,559 --> 00:53:57,280
+reconstruction error basically so when we
+
+00:53:55,720 --> 00:53:59,960
+reconstruct there are some things that
+
+00:53:57,280 --> 00:54:01,960
+were previously zero that become one and so
+
+00:53:59,960 --> 00:54:04,760
+the things that were previously zero and
+
+00:54:01,960 --> 00:54:07,880
+then become close to one are the ones
+
+00:54:04,760 --> 00:54:10,559
+that we think actually might exist
+
+00:54:07,880 --> 00:54:12,000
+they might be real
+
+00:54:10,559 --> 00:54:13,640
+relations that we were just missing
+
+00:54:12,000 --> 00:54:16,599
+because our previous knowledge base was
+
+00:54:13,640 --> 00:54:16,599
+complete uh
+
+00:54:18,640 --> 00:54:26,880
+incomplete
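To make the low-rank reconstruction idea concrete, here is a minimal RESCAL-style sketch in Python; the toy tensor, embedding size, learning rate, and step count are all invented for illustration, and the exact model in the 2009 paper differs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, dim = 20, 3, 4

# X[r, i, j] = 1 if relation r is known to hold between entities i and j
X = (rng.random((n_rel, n_ent, n_ent)) < 0.05).astype(float)

E = rng.normal(0, 0.1, (n_ent, dim))       # one shared embedding per entity
R = rng.normal(0, 0.1, (n_rel, dim, dim))  # one mixing matrix per relation

for _ in range(300):  # crude gradient descent on squared reconstruction error
    for r in range(n_rel):
        err = E @ R[r] @ E.T - X[r]
        gE = err @ E @ R[r].T + err.T @ E @ R[r]  # d/dE, constant 2 folded into lr
        R[r] -= 0.005 * (E.T @ err @ E)
        E -= 0.005 * gE

# cells that were 0 in X but now score close to 1 are candidate missing facts
scores = E @ R[0] @ E.T
```

A real implementation would use alternating least squares or a modern optimizer rather than this bare loop, but the point is the same: the reconstruction fills in zeros that "should" be ones.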
+00:54:21,799 --> 00:54:28,559
+and one thing that takes us a step further is what if we
+
+00:54:26,880 --> 00:54:30,079
+actually do have a knowledge base or
+
+00:54:28,559 --> 00:54:31,839
+what if we even have multiple knowledge
+
+00:54:30,079 --> 00:54:35,520
+bases like what if we have Wikidata and
+
+00:54:31,839 --> 00:54:36,640
+we have WordNet and we have other
+
+00:54:35,520 --> 00:54:38,920
+things like
+
+00:54:36,640 --> 00:54:40,680
+this and in addition to that we also
+
+00:54:38,920 --> 00:54:43,400
+have OpenIE
+
+00:54:40,680 --> 00:54:45,960
+extractions so there's an idea of
+
+00:54:43,400 --> 00:54:47,880
+something called universal schema and
+
+00:54:45,960 --> 00:54:50,200
+what universal schemas do is they embed
+
+00:54:47,880 --> 00:54:55,119
+relations from multiple schemas or
+
+00:54:50,200 --> 00:54:56,960
+schemata in the same space and based on
+
+00:54:55,119 --> 00:54:59,559
+this they then
+
+00:54:56,960 --> 00:55:01,359
+predict which ones exist or are likely to
+
+00:54:59,559 --> 00:55:04,400
+exist or which ones are not likely to
+
+00:55:01,359 --> 00:55:06,680
+exist so here we might have Freebase
+
+00:55:04,400 --> 00:55:08,640
+or Wikidata we might have another
+
+00:55:06,680 --> 00:55:11,559
+kind of relation extraction data set
+
+00:55:08,640 --> 00:55:15,480
+called TAC and then on the training data
+
+00:55:11,559 --> 00:55:17,040
+set we have all of these
+
+00:55:15,480 --> 00:55:20,240
+things that are positive or
+
+00:55:17,040 --> 00:55:23,960
+negative or something like this and then
+
+00:55:20,240 --> 00:55:26,960
+on the held-out data set we have only
+
+00:55:23,960 --> 00:55:29,480
+information about OpenIE
+
+00:55:26,960 --> 00:55:30,920
+for example so for all of the
+
+00:55:29,480 --> 00:55:33,079
+entities that exist in the knowledge
+
+00:55:30,920 --> 00:55:34,839
+base we know whether the
+
+00:55:33,079 --> 00:55:36,039
+relations exist but for all the
+
+00:55:34,839 --> 00:55:39,640
+entities that don't exist in the
+
+00:55:36,039 --> 00:55:41,760
+database we don't know and so then
+
+00:55:39,640 --> 00:55:43,839
+just from the existence of OpenIE
+
+00:55:41,760 --> 00:55:45,480
+relations or non-existence of OpenIE
+
+00:55:43,839 --> 00:55:47,920
+relations we can predict that other
+
+00:55:45,480 --> 00:55:49,359
+relations might exist for example so
+
+00:55:47,920 --> 00:55:51,079
+this is a great way to combine the two
+
+00:55:49,359 --> 00:55:53,920
+together like OpenIE you can run it
+
+00:55:51,079 --> 00:55:55,880
+over very large data sets
+
+00:55:53,920 --> 00:55:58,000
+but it doesn't have a good schema
+
+00:55:55,880 --> 00:56:00,400
+whereas Wikidata has a good schema but you
+
+00:55:58,000 --> 00:56:02,960
+can't you know it's all manually created
+
+00:56:00,400 --> 00:56:04,720
+so you can suggest other ones and one
+
+00:56:02,960 --> 00:56:07,960
+other interesting thing is you can
+
+00:56:04,720 --> 00:56:09,640
+suggest other things that might exist
+
+00:56:07,960 --> 00:56:13,039
+in Wikidata but you could also track
+
+00:56:09,640 --> 00:56:15,039
+that back to the original text that
+
+00:56:13,039 --> 00:56:17,000
+indicated that it might exist in Wikidata
+
+00:56:15,039 --> 00:56:18,720
+so then you could have a human go
+
+00:56:17,000 --> 00:56:20,520
+back and check it to make sure that
+
+00:56:18,720 --> 00:56:24,200
+that's actually true and trustworthy and
+
+00:56:20,520 --> 00:56:24,200
+other things like that
+
+00:56:26,400 --> 00:56:31,400
+cool um so if you like
+
+00:56:29,400 --> 00:56:33,160
+tensors or you like linear algebra or
+
+00:56:31,400 --> 00:56:34,720
+things like this this is maybe something
+
+00:56:33,160 --> 00:56:37,880
+that you could take a look at and think
+
+00:56:34,720 --> 00:56:40,240
+a little bit more about um any
+
+00:56:37,880 --> 00:56:40,240
+questions
+
+00:56:42,799 --> 00:56:46,240
+here okay
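A minimal sketch of the universal schema idea, assuming a made-up matrix whose rows are entity pairs and whose columns mix KB relations with OpenIE surface patterns; factorizing it scores the empty cells:

```python
import numpy as np

# columns: relations from different schemas embedded in one shared space
rels = ['freebase:founder_of', 'tac:top_member', 'openie:started', 'openie:leads']
X = np.array([[1., 0., 1., 0.],   # observed (entity pair, relation) cells
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])  # held-out pair: only an OpenIE observation

rng = np.random.default_rng(0)
P = rng.normal(0, 0.1, (X.shape[0], 2))  # entity-pair embeddings
R = rng.normal(0, 0.1, (X.shape[1], 2))  # relation embeddings

for _ in range(2000):  # plain squared-loss matrix factorization
    err = P @ R.T - X
    P, R = P - 0.05 * err @ R, R - 0.05 * err.T @ P

print(np.round(P @ R.T, 2))  # a high score in cell [2, 0] would suggest
                             # founder_of also holds for the held-out pair
```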
+00:56:46,880 --> 00:56:53,680
+cool um so another thing I'd like to
+
+00:56:50,640 --> 00:56:56,920
+talk about is modeling relation paths
+
+00:56:53,680 --> 00:57:00,359
+so this is a really nice idea
+
+00:56:56,920 --> 00:57:00,359
+which is you
+
+00:57:00,440 --> 00:57:05,000
+can make inferences across multiple hops
+
+00:57:04,240 --> 00:57:08,400
+of
+
+00:57:05,000 --> 00:57:12,280
+relations based on particular
+
+00:57:08,400 --> 00:57:14,200
+relations existing and so multi-step
+
+00:57:12,280 --> 00:57:17,280
+paths can be informative for indicating
+
+00:57:14,200 --> 00:57:20,000
+whether individual relations exist so
+
+00:57:17,280 --> 00:57:24,400
+for example given a
+
+00:57:20,000 --> 00:57:27,960
+particular word in a paper title
+
+00:57:24,400 --> 00:57:29,880
+recommend a venue in which to publish the paper
+
+00:57:27,960 --> 00:57:32,559
+and so this is the problem that they
+
+00:57:29,880 --> 00:57:36,079
+were trying to solve and then basically
+
+00:57:32,559 --> 00:57:38,440
+you have a word you
+
+00:57:36,079 --> 00:57:41,119
+find if you have that word in your paper
+
+00:57:38,440 --> 00:57:42,920
+title you then find other papers that
+
+00:57:41,119 --> 00:57:45,280
+have that word
+
+00:57:42,920 --> 00:57:48,359
+in their title and those papers are in a
+
+00:57:45,280 --> 00:57:52,039
+journal and that gets a high weight with
+
+00:57:48,359 --> 00:57:54,119
+respect to your paper being
+
+00:57:52,039 --> 00:57:56,839
+relevant to that particular
+
+00:57:54,119 --> 00:57:59,880
+journal you can also say
+
+00:57:56,839 --> 00:58:01,000
+okay I have a word find papers with
+
+00:57:59,880 --> 00:58:03,240
+that word in the
+
+00:58:01,000 --> 00:58:07,240
+title find the first author of that
+
+00:58:03,240 --> 00:58:09,280
+paper find another paper that had
+
+00:58:07,240 --> 00:58:11,599
+that author as a first author and then
+
+00:58:09,280 --> 00:58:13,240
+find the journal of it and they
+
+00:58:11,599 --> 00:58:15,839
+demonstrate a way where you can
+
+00:58:13,240 --> 00:58:18,280
+expand these paths and feed them into a
+
+00:58:15,839 --> 00:58:22,400
+prediction model and use that to predict
+
+00:58:18,280 --> 00:58:25,480
+additional relations so
+
+00:58:22,400 --> 00:58:26,680
+unlike this method here this method was
+
+00:58:25,480 --> 00:58:29,240
+saying
+
+00:58:26,680 --> 00:58:30,920
+other single relations are indicative of
+
+00:58:29,240 --> 00:58:34,160
+a particular relation
+
+00:58:30,920 --> 00:58:36,880
+existing this paper is saying not just
+
+00:58:34,160 --> 00:58:38,720
+individual relations are indicative of
+
+00:58:36,880 --> 00:58:40,640
+another relation existing but actually
+
+00:58:38,720 --> 00:58:43,839
+relation paths are indicative of a
+
+00:58:40,640 --> 00:58:46,400
+relation existing so this is more
+
+00:58:43,839 --> 00:58:46,400
+expressive
+
+00:58:47,520 --> 00:58:55,359
+basically
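A tiny sketch of the path-feature idea, in the spirit of the Path Ranking Algorithm; the graph and relation names are made up. Each relation path is a feature whose value depends on what it reaches from the query node, and those features then feed an ordinary classifier:

```python
edges = {  # toy graph: relation -> set of (head, tail) pairs
    'word_in_title': {('deep', 'paperA'), ('deep', 'paperB')},
    'first_author':  {('paperA', 'alice')},
    'author_of':     {('alice', 'paperC')},
    'in_journal':    {('paperA', 'jmlr'), ('paperC', 'jmlr')},
}

def follow(nodes, rel):
    return {t for h, t in edges[rel] if h in nodes}

def reach(start, path):
    # walk a relation path from `start` and return everything reachable
    nodes = {start}
    for rel in path:
        nodes = follow(nodes, rel)
    return nodes

# the two paths from the example: word -> papers -> journal, and
# word -> papers -> first author -> their papers -> journal
print(reach('deep', ['word_in_title', 'in_journal']))
print(reach('deep', ['word_in_title', 'first_author', 'author_of', 'in_journal']))
```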
+00:58:52,640 --> 00:58:57,480
+and this follow-up paper using differentiable logic rules
+
+00:58:55,359 --> 00:59:00,799
+actually made this end-to-end
+
+00:58:57,480 --> 00:59:03,079
+trainable so this allows you to consider
+
+00:59:00,799 --> 00:59:07,599
+whole paths in a differentiable
+
+00:59:03,079 --> 00:59:09,960
+framework and so the way they did this
+
+00:59:07,599 --> 00:59:13,359
+is if you have city in
+
+00:59:09,960 --> 00:59:16,440
+country and has office in country
+
+00:59:13,359 --> 00:59:18,920
+or sorry city in country and has
+
+00:59:16,440 --> 00:59:22,200
+office in city that indicates has office
+
+00:59:18,920 --> 00:59:24,160
+in country and I'm sure many
+
+00:59:22,200 --> 00:59:26,760
+people here have learned
+
+00:59:24,160 --> 00:59:29,520
+about logic and induction
+
+00:59:26,760 --> 00:59:32,720
+from or deduction from logic rules
+
+00:59:29,520 --> 00:59:34,359
+and stuff like this but the problem is
+
+00:59:32,720 --> 00:59:37,079
+deduction from logic rules is very
+
+00:59:34,359 --> 00:59:39,039
+fragile like there are cases where there
+
+00:59:37,079 --> 00:59:41,119
+are counterexamples so if you say that
+
+00:59:39,039 --> 00:59:43,280
+something is always true deductively
+
+00:59:41,119 --> 00:59:45,839
+then that can cause problems so in
+
+00:59:43,280 --> 00:59:47,839
+reality it's like if you have two pieces
+
+00:59:45,839 --> 00:59:52,400
+of information something can become much
+
+00:59:47,839 --> 00:59:56,920
+much more likely and so just
+
+00:59:52,400 --> 00:59:59,880
+to give an example somebody
+
+00:59:56,920 --> 01:00:01,280
+studying at CMU makes it
+
+00:59:59,880 --> 01:00:03,799
+much more likely that they're studying
+
+01:00:01,280 --> 01:00:06,359
+computer science and much less likely
+
+01:00:03,799 --> 01:00:08,000
+that they're studying medicine or
+
+01:00:06,359 --> 01:00:09,520
+something like that but that doesn't
+
+01:00:08,000 --> 01:00:11,720
+mean that it's
+
+01:00:09,520 --> 01:00:13,559
+entirely implied the first one is definitely not
+
+01:00:11,720 --> 01:00:15,480
+entirely implied and I'm sure there's
+
+01:00:13,559 --> 01:00:16,760
+a few people at CMU who are somehow
+
+01:00:15,480 --> 01:00:18,440
+studying medicine through a joint
+
+01:00:16,760 --> 01:00:21,480
+program with Pitt or something like that
+
+01:00:18,440 --> 01:00:24,400
+so it's very rare
+
+01:00:21,480 --> 01:00:26,799
+that logic rules are hard and fast and
+
+01:00:24,400 --> 01:00:28,480
+so basically what they do is they treat
+
+01:00:26,799 --> 01:00:30,559
+each path as a sequence of matrix
+
+01:00:28,480 --> 01:00:34,839
+multiplies where they have a rule
+
+01:00:30,559 --> 01:00:36,599
+weight like this and in the end
+
+01:00:34,839 --> 01:00:38,359
+that allows you to make a prediction
+
+01:00:36,599 --> 01:00:40,839
+about whether a predicted logic rule is
+
+01:00:38,359 --> 01:00:40,839
+correct or
+
+01:00:40,880 --> 01:00:49,319
+not
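The matrix-multiply view fits in a few lines; here adjacency matrices stand in for relations, and a scalar rule weight (hand-set below, but learned in the actual differentiable-logic-rules approach) softens the deduction from a hard implication to a confidence:

```python
import numpy as np

n = 3  # entities: 0 = Pittsburgh, 1 = USA, 2 = SomeCompany (all invented)
city_in_country = np.zeros((n, n)); city_in_country[0, 1] = 1.0
has_office_in_city = np.zeros((n, n)); has_office_in_city[2, 0] = 1.0

w = 0.9  # rule weight; fixed by hand in this sketch rather than learned
# soft rule: has_office_in_country ~= w * (has_office_in_city . city_in_country)
has_office_in_country = w * has_office_in_city @ city_in_country
print(has_office_in_country[2, 1])  # 0.9: a soft confidence, not a hard True
```

Chaining more multiplications corresponds to longer relation paths, and because everything is a product of matrices and weights, gradients flow through the whole rule.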
+um so I've been
+
+01:00:46,880 --> 01:00:51,119
+working mostly in structured
+
+01:00:49,319 --> 01:00:54,480
+knowledge space structured knowledge
+
+01:00:51,119 --> 01:00:56,599
+graphs other things like this
+
+01:00:54,480 --> 01:00:59,760
+um I don't
+
+01:00:56,599 --> 01:01:02,720
+think there's a whole lot of work that
+
+01:00:59,760 --> 01:01:05,640
+directly applies this to language models
+
+01:01:02,720 --> 01:01:07,319
+like differentiable logic rules and
+
+01:01:05,640 --> 01:01:10,079
+language models or things like that just
+
+01:01:07,319 --> 01:01:12,440
+because it's less clean it's
+
+01:01:10,079 --> 01:01:13,839
+harder um there's a little bit of
+
+01:01:12,440 --> 01:01:16,079
+work which I'm going to talk about now
+
+01:01:13,839 --> 01:01:18,599
+but I think this kind of work is
+
+01:01:16,079 --> 01:01:21,440
+interesting because a lot of models are
+
+01:01:18,599 --> 01:01:23,119
+not super great at reasoning and how to
+
+01:01:21,440 --> 01:01:25,119
+allow them to be better at
+
+01:01:23,119 --> 01:01:26,559
+reasoning is kind of an open problem so
+
+01:01:25,119 --> 01:01:28,039
+learning from these older works that
+
+01:01:26,559 --> 01:01:30,200
+did it in a more structured space and
+
+01:01:28,039 --> 01:01:32,160
+trying to figure out how to apply them
+
+01:01:30,200 --> 01:01:34,400
+to less structured spaces is still
+
+01:01:32,160 --> 01:01:36,240
+interesting I think
+
+01:01:34,400 --> 01:01:39,160
+so
+
+01:01:36,240 --> 01:01:40,720
+cool um then the final topic I want
+
+01:01:39,160 --> 01:01:42,920
+to talk about is probing knowledge in
+
+01:01:40,720 --> 01:01:44,920
+LMs and so we have these knowledge bases
+
+01:01:42,920 --> 01:01:47,319
+that encode tons and tons of
+
+01:01:44,920 --> 01:01:49,880
+knowledge um which allows us to figure
+
+01:01:47,319 --> 01:01:52,200
+out well how well do
+
+01:01:49,880 --> 01:01:56,200
+language models know about these
+
+01:01:52,200 --> 01:01:59,079
+things and so
+
+01:01:56,200 --> 01:02:02,760
+traditional QA machine
+
+01:01:59,079 --> 01:02:04,799
+reading comprehension RAG models
+
+01:02:02,760 --> 01:02:06,359
+usually referred to external resources
+
+01:02:04,799 --> 01:02:10,039
+to answer questions like Wikipedia
+
+01:02:06,359 --> 01:02:14,359
+articles or things like this but then
+
+01:02:10,039 --> 01:02:16,119
+the question is without doing RAG can we
+
+01:02:14,359 --> 01:02:18,160
+answer questions like what
+
+01:02:16,119 --> 01:02:20,920
+knowledge is
+
+01:02:18,160 --> 01:02:24,079
+encoded and so the first paper that kind
+
+01:02:20,920 --> 01:02:26,520
+of handled this sort of problem is
+
+01:02:24,079 --> 01:02:29,200
+this paper which actually was also
+
+01:02:26,520 --> 01:02:33,359
+called
+
+01:02:29,200 --> 01:02:35,960
+LAMA surprisingly or released a
+
+01:02:33,359 --> 01:02:41,000
+resource called LAMA except it was L-A-M-A
+
+01:02:35,960 --> 01:02:44,880
+um and what they did is
+
+01:02:41,000 --> 01:02:46,960
+in contrast to using
+
+01:02:44,880 --> 01:02:50,000
+structural queries like SQL or
+
+01:02:46,960 --> 01:02:52,119
+SPARQL to query KBs they tried to use
+
+01:02:50,000 --> 01:02:54,240
+natural language prompts to query LMs so
+
+01:02:52,119 --> 01:02:58,160
+this was actually one of the first
+
+01:02:54,240 --> 01:03:02,359
+papers on prompting
+
+01:02:58,160 --> 01:03:05,079
+for language models in a way and the
+
+01:03:02,359 --> 01:03:08,359
+way they did this is they
+
+01:03:05,079 --> 01:03:10,039
+did like Dante was born in [MASK] and then
+
+01:03:08,359 --> 01:03:13,279
+they tried to fill in the mask using a
+
+01:03:10,039 --> 01:03:15,839
+masked language model and output
+
+01:03:13,279 --> 01:03:18,559
+Florence so
+
+01:03:15,839 --> 01:03:19,960
+when they did this work now we
+
+01:03:18,559 --> 01:03:21,359
+don't do this quite as much but when
+
+01:03:19,960 --> 01:03:23,520
+they did this work they basically used
+
+01:03:21,359 --> 01:03:25,440
+the knowledge base as the ground truth
+
+01:03:23,520 --> 01:03:28,880
+and tried to probe whether the knowledge
+
+01:03:25,440 --> 01:03:31,520
+in the knowledge base was also
+
+01:03:28,880 --> 01:03:34,880
+recoverable from the neural
+
+01:03:31,520 --> 01:03:37,720
+model um and they proposed the LAMA
+
+01:03:34,880 --> 01:03:39,760
+benchmark um basically it was manual
+
+01:03:37,720 --> 01:03:42,480
+prompts for 41 relations they created
+
+01:03:39,760 --> 01:03:44,839
+the prompts manually so like X was
+
+01:03:42,480 --> 01:03:46,480
+founded in Y as the prompt template and
+
+01:03:44,839 --> 01:03:49,400
+they filled in the subjects and had the
+
+01:03:46,480 --> 01:03:52,160
+LMs such as BERT predict the
+
+01:03:49,400 --> 01:03:55,839
+objects like Bloomberg LP was founded
+
+01:03:52,160 --> 01:03:59,000
+in [MASK] and they demonstrated that
+
+01:03:55,839 --> 01:04:02,440
+basically ELMo, Transformer-XL, and
+
+01:03:59,000 --> 01:04:04,960
+BERT-base got up to 31%
+
+01:04:02,440 --> 01:04:06,480
+accuracy now I'm sure the modern
+
+01:04:04,960 --> 01:04:09,200
+language models would have much higher
+
+01:04:06,480 --> 01:04:11,279
+accuracy than
+
+01:04:09,200 --> 01:04:13,920
+that
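LAMA-style probing is easy to reproduce with any masked language model; a sketch using the HuggingFace transformers fill-mask pipeline (the model choice and prompt here are just examples, not the paper's exact setup):

```python
from transformers import pipeline

fill = pipeline('fill-mask', model='bert-base-cased')
for cand in fill('Dante was born in [MASK].')[:5]:
    print(cand['token_str'], round(cand['score'], 3))
```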
+01:04:11,279 --> 01:04:17,839
+um this is a follow-up paper that we
+
+01:04:13,920 --> 01:04:21,160
+did to this where we tried to do this
+
+01:04:17,839 --> 01:04:23,400
+multilingually um I think
+
+01:04:21,160 --> 01:04:25,680
+let me put it this way
+
+01:04:23,400 --> 01:04:29,520
+I think one thing that's interesting
+
+01:04:25,680 --> 01:04:31,960
+about this paper is even
+
+01:04:29,520 --> 01:04:37,240
+if you're not interested in multilingual
+
+01:04:31,960 --> 01:04:38,920
+stuff per se there is an interesting
+
+01:04:37,240 --> 01:04:40,760
+dichotomy about what knowledge is
+
+01:04:38,920 --> 01:04:43,079
+included in LMs and whether we can
+
+01:04:40,760 --> 01:04:46,000
+retrieve it and the reason why I'm
+
+01:04:43,079 --> 01:04:48,359
+saying this is because in this paper
+
+01:04:46,000 --> 01:04:51,200
+we created
+
+01:04:48,359 --> 01:04:52,599
+queries from a knowledge base and
+
+01:04:51,200 --> 01:04:54,160
+because we created queries from a
+
+01:04:52,599 --> 01:04:55,760
+knowledge base and knowledge bases are
+
+01:04:54,160 --> 01:04:57,240
+multilingual we can also create
+
+01:04:55,760 --> 01:05:00,039
+multilingual queries from knowledge
+
+01:04:57,240 --> 01:05:01,720
+bases right so we can use exactly the
+
+01:05:00,039 --> 01:05:03,359
+same entities but just ask the same
+
+01:05:01,720 --> 01:05:05,920
+question in different languages and so
+
+01:05:03,359 --> 01:05:07,480
+we had a bunch of people manually
+
+01:05:05,920 --> 01:05:10,119
+create prompts for all of these
+
+01:05:07,480 --> 01:05:13,000
+languages here and you can see that in
+
+01:05:10,119 --> 01:05:15,960
+English it's much better at responding
+
+01:05:13,000 --> 01:05:19,000
+to these queries than it is in any
+
+01:05:15,960 --> 01:05:21,039
+other language and in particular
+
+01:05:19,000 --> 01:05:22,880
+lower resource languages or languages
+
+01:05:21,039 --> 01:05:26,400
+that are less similar to English it did
+
+01:05:22,880 --> 01:05:29,079
+much worse on and notably we counted the
+
+01:05:26,400 --> 01:05:32,160
+answer correct if it got it
+
+01:05:29,079 --> 01:05:34,279
+um we had two settings one setting is
+
+01:05:32,160 --> 01:05:35,799
+we counted the answer correct only
+
+01:05:34,279 --> 01:05:38,359
+if it answered in the language we
+
+01:05:35,799 --> 01:05:39,680
+queried it in but in the other setting we
+
+01:05:38,359 --> 01:05:42,640
+also counted the answer correct if it
+
+01:05:39,680 --> 01:05:44,200
+answered in any language so it
+
+01:05:42,640 --> 01:05:46,640
+didn't necessarily have to even know the
+
+01:05:44,200 --> 01:05:48,200
+name of the entity in that language
+
+01:05:46,640 --> 01:05:50,520
+and we would still count it
+
+01:05:48,200 --> 01:05:54,720
+correct and so what I mean by there's a
+
+01:05:50,520 --> 01:05:56,440
+dichotomy between the information that
+
+01:05:54,720 --> 01:05:59,240
+language models have
+
+01:05:56,440 --> 01:06:02,480
+encoded and whether they're able to
+
+01:05:59,240 --> 01:06:02,480
+retrieve it
+
+01:06:02,680 --> 01:06:07,640
+is in English
+
+01:06:06,000 --> 01:06:10,799
+the models we tested were able to answer
+
+01:06:07,640 --> 01:06:13,000
+like 17% of queries
+
+01:06:10,799 --> 01:06:14,359
+but if the fact that they're able to
+
+01:06:13,000 --> 01:06:16,160
+answer in English means that the
+
+01:06:14,359 --> 01:06:18,520
+language model quote unquote knows the
+
+01:06:16,160 --> 01:06:20,200
+answer right like it knows the answer in
+
+01:06:18,520 --> 01:06:22,680
+English we're asking exactly the same
+
+01:06:20,200 --> 01:06:24,400
+question in all the other languages so
+
+01:06:22,680 --> 01:06:26,079
+it should know the answer in
+
+01:06:24,400 --> 01:06:27,680
+the other languages too
+
+01:06:26,079 --> 01:06:30,000
+but it's not able to retrieve the answer
+
+01:06:27,680 --> 01:06:33,079
+because we asked in another language
+
+01:06:30,000 --> 01:06:35,920
+so that brings up some interesting
+
+01:06:33,079 --> 01:06:38,079
+questions about how we can make models
+
+01:06:35,920 --> 01:06:39,680
+better at retrieving the knowledge
+
+01:06:38,079 --> 01:06:43,559
+that they already know in English when
+
+01:06:39,680 --> 01:06:45,520
+you query them in other languages
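The two scoring settings are simple to state in code; the gold answers below are invented for illustration:

```python
def score(prediction, gold_by_lang, query_lang):
    # setting 1: correct only if answered in the language of the query
    in_language = prediction == gold_by_lang[query_lang]
    # setting 2: correct if answered in any language
    any_language = prediction in gold_by_lang.values()
    return in_language, any_language

gold = {'en': 'Florence', 'it': 'Firenze', 'fr': 'Florence'}
print(score('Firenze', gold, 'it'))   # (True, True)
print(score('Florence', gold, 'it'))  # (False, True): knows the fact, wrong language
```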
+or um
+
+01:06:43,559 --> 01:06:48,119
+and there was another paper recently I
+
+01:06:45,520 --> 01:06:52,720
+don't know if I'd be able to find it
+
+01:06:48,119 --> 01:06:56,119
+exactly which is they
+
+01:06:52,720 --> 01:07:01,799
+prompted models with personas and so
+
+01:06:56,119 --> 01:07:04,599
+they said I am an old man, I
+
+01:07:01,799 --> 01:07:07,160
+am an old woman, I am a young
+
+01:07:04,599 --> 01:07:10,039
+man, I am a young woman, I am a child or something
+
+01:07:07,160 --> 01:07:12,799
+like that or they also talked about
+
+01:07:10,039 --> 01:07:15,640
+things like physical disabilities and
+
+01:07:12,799 --> 01:07:17,200
+things and they said please answer
+
+01:07:15,640 --> 01:07:19,640
+this question after they prompted with a
+
+01:07:17,200 --> 01:07:22,680
+persona and just having that persona
+
+01:07:19,640 --> 01:07:24,839
+greatly changed the ability of the model
+
+01:07:22,680 --> 01:07:26,400
+to answer questions so it's this very
+
+01:07:24,839 --> 01:07:28,200
+weird thing which is like the
+
+01:07:26,400 --> 01:07:29,799
+models are actually capable of answering
+
+01:07:28,200 --> 01:07:31,520
+the questions but based on how you probe
+
+01:07:29,799 --> 01:07:32,880
+them whether it's in different
+
+01:07:31,520 --> 01:07:34,599
+languages or if you give them a
+
+01:07:32,880 --> 01:07:36,839
+different persona they manage to answer
+
+01:07:34,599 --> 01:07:39,000
+things differently and so on the minus
+
+01:07:36,839 --> 01:07:42,920
+side you can make
+
+01:07:39,000 --> 01:07:44,799
+ways to reduce the language model's
+
+01:07:42,920 --> 01:07:45,920
+performance by giving it a persona
+
+01:07:44,799 --> 01:07:49,839
+that shouldn't be good at answering
+
+01:07:45,920 --> 01:07:53,279
+questions or something like that
+
+01:07:49,839 --> 01:07:54,839
+but on the plus side when you're
+
+01:07:53,279 --> 01:07:57,279
+doing code generation there was this
+
+01:07:54,839 --> 01:07:58,960
+magic prompt which is like I have
+
+01:07:57,279 --> 01:08:01,319
+checked this carefully and all the unit
+
+01:07:58,960 --> 01:08:03,240
+tests pass and that would improve your
+
+01:08:01,319 --> 01:08:05,760
+code generation accuracy by like
+
+01:08:03,240 --> 01:08:07,559
+five points or something like that so
+
+01:08:05,760 --> 01:08:09,240
+you just get the model in the right
+
+01:08:07,559 --> 01:08:11,359
+mood to answer the question accurately
+
+01:08:09,240 --> 01:08:13,319
+and it does a better job at doing it so
+
+01:08:11,359 --> 01:08:15,960
+it goes in both
+
+01:08:13,319 --> 01:08:15,960
+directions I
+
+01:08:16,679 --> 01:08:27,080
+guess cool um yeah any questions
+
+01:08:23,679 --> 01:08:30,120
+here um another thing that you can do
+
+01:08:27,080 --> 01:08:31,000
+is fine-tune models specifically so
+
+01:08:30,120 --> 01:08:34,080
+they're good at answering
+
+01:08:31,000 --> 01:08:35,560
+knowledge-base questions so this
+
+01:08:34,080 --> 01:08:38,080
+paper demonstrated that you could
+
+01:08:35,560 --> 01:08:39,480
+fine-tune models on synthetically created
+
+01:08:38,080 --> 01:08:41,159
+knowledge-base questions and that would
+
+01:08:39,480 --> 01:08:42,920
+improve the ability of the model to
+
+01:08:41,159 --> 01:08:47,679
+answer questions about knowledge
+
+01:08:42,920 --> 01:08:47,679
+bases um it's
+
+01:08:49,120 --> 01:08:57,440
+yeah it's pretty straightforward
+
+01:08:53,199 --> 01:08:57,440
+so there's that
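Generating that kind of synthetic training data is mostly templating over KB triples; a sketch with made-up templates and triples (the actual paper's procedure is more involved):

```python
templates = {
    'founded_in':  'In what year was {s} founded?',
    'has_capital': 'What is the capital of {s}?',
}
triples = [('Bloomberg LP', 'founded_in', '1981'),
           ('Italy', 'has_capital', 'Rome')]

train = [{'question': templates[r].format(s=s), 'answer': o}
         for s, r, o in triples if r in templates]
print(train[0])
# these (question, answer) pairs then go into ordinary supervised fine-tuning
```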
+01:08:57,799 --> 01:09:03,120
+um yeah we already talked about this in
+
+01:09:00,000 --> 01:09:07,560
+the RAG class so I think I might skip
+
+01:09:03,120 --> 01:09:10,239
+that um a final paper that I'd like to
+
+01:09:07,560 --> 01:09:12,600
+talk about this is also a paper done
+
+01:09:10,239 --> 01:09:13,759
+by my student Zhengbao Jiang and this is
+
+01:09:12,600 --> 01:09:16,080
+interesting from the point of view of
+
+01:09:13,759 --> 01:09:18,000
+multihop reasoning and so I talked a
+
+01:09:16,080 --> 01:09:19,679
+little bit about multihop reasoning
+
+01:09:18,000 --> 01:09:23,239
+along reasoning
+
+01:09:19,679 --> 01:09:26,159
+chains in knowledge bases and this is
+
+01:09:23,239 --> 01:09:28,520
+one example of multihop reasoning
+
+01:09:26,159 --> 01:09:30,080
+along reasoning chains within the
+
+01:09:28,520 --> 01:09:33,400
+parameters of the model so testing
+
+01:09:30,080 --> 01:09:36,759
+whether models can answer
+
+01:09:33,400 --> 01:09:38,480
+multihop questions and
+
+01:09:36,759 --> 01:09:40,839
+basically what we did here is we took a
+
+01:09:38,480 --> 01:09:42,679
+knowledge base and a knowledge base can
+
+01:09:40,839 --> 01:09:44,279
+have
+
+01:09:42,679 --> 01:09:49,480
+um
+
+01:09:44,279 --> 01:09:49,480
+like a country the country is the
+
+01:09:49,600 --> 01:09:52,600
+US
+
+01:09:53,480 --> 01:09:58,600
+its president and then their
+
+01:10:00,880 --> 01:10:06,560
+birthday and so we can create these
+
+01:10:04,280 --> 01:10:08,640
+multihop questions right and just
+
+01:10:06,560 --> 01:10:10,280
+follow the relation links and then we
+
+01:10:08,640 --> 01:10:11,440
+know the answer to the multihop question
+
+01:10:10,280 --> 01:10:13,560
+by following the links and we can
+
+01:10:11,440 --> 01:10:18,159
+generate the question given a
+
+01:10:13,560 --> 01:10:19,800
+template um so we did this and had
+
+01:10:18,159 --> 01:10:22,800
+question one which is return the artist
+
+01:10:19,800 --> 01:10:25,719
+who recorded Party Ain't Over and then
+
+01:10:22,800 --> 01:10:28,159
+where in Georgia does Usher live and
+
+01:10:25,719 --> 01:10:29,920
+then we can turn this into a question
+
+01:10:28,159 --> 01:10:31,679
+in which part of
+
+01:10:29,920 --> 01:10:34,239
+Georgia does the artist that recorded
+
+01:10:31,679 --> 01:10:37,560
+Party Ain't Over live and so we now have a
+
+01:10:34,239 --> 01:10:45,000
+multihop question and what we did
+
+01:10:37,560 --> 01:10:47,440
+is we measured whether the model was
+
+01:10:45,000 --> 01:10:49,760
+able to answer the first question the
+
+01:10:47,440 --> 01:10:53,320
+second question and the
+
+01:10:49,760 --> 01:10:56,120
+compound question and what we found is
+
+01:10:53,320 --> 01:10:59,440
+like what we would expect
+
+01:10:56,120 --> 01:11:01,719
+if models were perfect knowledge
+
+01:10:59,440 --> 01:11:04,360
+processors right
+
+01:11:01,719 --> 01:11:08,120
+is we have
+
+01:11:04,360 --> 01:11:10,800
+like yes on the first question
+
+01:11:08,120 --> 01:11:14,000
+no
+
+01:11:10,800 --> 01:11:16,560
+yes on the first question and no
+
+01:11:14,000 --> 01:11:16,560
+on the second
+
+01:11:17,199 --> 01:11:24,760
+question and we would expect that
+
+01:11:21,920 --> 01:11:26,080
+basically if it knew both of the answers
+
+01:11:24,760 --> 01:11:27,239
+to the first question and the second
+
+01:11:26,080 --> 01:11:30,600
+question it would get the compound
+
+01:11:27,239 --> 01:11:31,800
+question right and if it got
+
+01:11:30,600 --> 01:11:34,800
+either of them wrong it would get it
+
+01:11:31,800 --> 01:11:37,120
+wrong right um you know in the
+
+01:11:34,800 --> 01:11:39,400
+ideal world where the knowledge of the
+
+01:11:37,120 --> 01:11:41,280
+two sub questions is necessary to answer
+01:11:39,400 --> 01:11:43,880
+the composite question and the
+
+01:11:41,280 --> 01:11:45,840
+model is a perfect knowledge processor
+
+01:11:43,880 --> 01:11:47,120
+and basically what we found we tried a
+
+01:11:45,840 --> 01:11:49,280
+whole bunch of different types of
+
+01:11:47,120 --> 01:11:51,199
+questions and what we found is this is
+
+01:11:49,280 --> 01:11:55,960
+totally not the case like it's not the
+
+01:11:51,199 --> 01:11:58,520
+case at all um and what we found instead
+
+01:11:55,960 --> 01:12:01,560
+is if it's able to answer the second
+
+01:11:58,520 --> 01:12:04,120
+question correctly it was much more
+
+01:12:01,560 --> 01:12:07,480
+likely to be able to answer the
+
+01:12:04,120 --> 01:12:08,840
+composite question um even if it can
+
+01:12:07,480 --> 01:12:11,000
+answer the first question that has
+
+01:12:08,840 --> 01:12:13,120
+almost no relation with whether it could
+
+01:12:11,000 --> 01:12:15,520
+answer the composite question at all so
+
+01:12:13,120 --> 01:12:17,679
+it's more like somehow from the answer
+
+01:12:15,520 --> 01:12:19,320
+to the second question it was able to
+
+01:12:17,679 --> 01:12:22,280
+get the answer right and it kind of
+
+01:12:19,320 --> 01:12:24,040
+makes sense actually because
+
+01:12:22,280 --> 01:12:26,320
+let's say the answer to the second
+
+01:12:24,040 --> 01:12:27,920
+question is some really long list
+
+01:12:26,320 --> 01:12:30,719
+like who are all the presidents of the
+
+01:12:27,920 --> 01:12:33,320
+United States or something like that
+
+01:12:30,719 --> 01:12:35,639
+that's just hard to answer um so if I
+
+01:12:33,320 --> 01:12:38,000
+said who are all the presidents of the
+
+01:12:35,639 --> 01:12:40,800
+country where Washington DC is located
+
+01:12:38,000 --> 01:12:42,679
+you know the second question
+
+01:12:40,800 --> 01:12:44,040
+is really hard so that's hard to get but
+
+01:12:42,679 --> 01:12:46,120
+if I say
+
+01:12:44,040 --> 01:12:49,920
+um
+
+01:12:46,120 --> 01:12:53,520
+what is the
+
+01:12:49,920 --> 01:12:57,120
+capital what is the capital of the
+
+01:12:53,520 --> 01:12:57,120
+country
+
+01:12:57,400 --> 01:13:02,440
+what is the capital of the
+
+01:12:58,840 --> 01:13:05,400
+country where the most
+
+01:13:02,440 --> 01:13:06,800
+people live or something like that
+
+01:13:05,400 --> 01:13:08,679
+even if you weren't sure about the
+
+01:13:06,800 --> 01:13:10,880
+country where the most people live you
+
+01:13:08,679 --> 01:13:13,040
+could pick a random capital and get it
+
+01:13:10,880 --> 01:13:16,199
+right some of the time or something like
+
+01:13:13,040 --> 01:13:18,239
+that so that's what we found in this
+
+01:13:16,199 --> 01:13:19,800
+paper and I think another nice
+
+01:13:18,239 --> 01:13:22,360
+thing about knowledge bases is they
+
+01:13:19,800 --> 01:13:24,880
+allow you to ask really interesting
+
+01:13:22,360 --> 01:13:26,400
+questions like this about what language
+
+01:13:24,880 --> 01:13:29,120
+models know or what language models don't
+
+01:13:26,400 --> 01:13:31,040
+know in a structured way so I think
+
+01:13:29,120 --> 01:13:32,280
+if you're interested in probing language
+
+01:13:31,040 --> 01:13:35,320
+models and what they know and what they
+
+01:13:32,280 --> 01:13:38,639
+can infer what logic they can do that's
+
+01:13:35,320 --> 01:13:42,320
+good
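The analysis boils down to a contingency table; a sketch with fabricated records of whether each sub-question and the composite question were answered correctly:

```python
from collections import Counter

# (first sub-question right?, second right?, composite right?) per example
records = [(True, True, True), (True, True, False), (False, True, True),
           (True, False, False), (False, False, False), (False, True, True)]

totals = Counter((q1, q2) for q1, q2, _ in records)
right = Counter((q1, q2) for q1, q2, comp in records if comp)
for key in sorted(totals):
    print(key, right[key] / totals[key])
# a "perfect knowledge processor" would print 1.0 only for (True, True);
# the paper's finding is that the second column dominates instead
```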
+um cool yeah that's all I have for
+
+01:13:38,639 --> 01:13:44,920
+today um are there any questions or
+
+01:13:42,320 --> 01:13:48,679
+discussion or things like that or happy
+
+01:13:44,920 --> 01:13:48,679
+to talk up here too
diff --git a/CMU Advanced NLP 2024 (2) Word Representation and Text Classification/CMU Advanced NLP 2024 (2) Word Representation and Text Classification.mp4 b/CMU Advanced NLP 2024 (2) Word Representation and Text Classification/CMU Advanced NLP 2024 (2) Word Representation and Text Classification.mp4
new file mode 100644
index 0000000000000000000000000000000000000000..bf0404ca621773194393bdbddebabb5f8046c6d2
--- /dev/null
+++ b/CMU Advanced NLP 2024 (2) Word Representation and Text Classification/CMU Advanced NLP 2024 (2) Word Representation and Text Classification.mp4
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35d3a9cb9cc7d1aedba6a0742b46cfd3e24c4999c46b59ae1df22321775c9102
+size 82455565
diff --git a/CMU Advanced NLP 2024 (2) Word Representation and Text Classification/metadata.json b/CMU Advanced NLP 2024 (2) Word Representation and Text Classification/metadata.json
new file mode 100644
index 0000000000000000000000000000000000000000..24af40f7f30d48b3538426095e6cd3cc362e5f5e
--- /dev/null
+++ b/CMU Advanced NLP 2024 (2) Word Representation and Text Classification/metadata.json
@@ -0,0 +1,4 @@
+{
+    "url": "https://www.youtube.com/watch?v=wa61zdcKWyU",
+    "title": "CMU Advanced NLP 2024 (2) Word Representation and Text Classification"
+}
\ No newline at end of file
diff --git a/CMU Advanced NLP 2024 (2) Word Representation and Text Classification/transcript.srt b/CMU Advanced NLP 2024 (2) Word Representation and Text Classification/transcript.srt
new file mode 100644
index 0000000000000000000000000000000000000000..29e932b166fbf3fb7bcbfd3815e9cc20dc998392
--- /dev/null
+++ b/CMU Advanced NLP 2024 (2) Word Representation and Text Classification/transcript.srt
@@ -0,0 +1,6119 @@
+1
+00:00:03,879 --> 00:00:07,480
+cool um so this time I'm going to talk
+
+2
+00:00:05,480 --> 00:00:08,880
+about word representation and text
+
+3
+00:00:07,480 --> 00:00:11,480
+classifiers these are kind of the
+
+4
+00:00:08,880 --> 00:00:14,080
+foundations that you need to know in
+
+5
+00:00:11,480 --> 00:00:15,640
+order to move on to the more complex
+
+6
+00:00:14,080 --> 00:00:17,920
+things that we'll be talking about in future
+
+7
+00:00:15,640 --> 00:00:19,640
+classes uh but actually in
+
+8
+00:00:17,920 --> 00:00:22,760
+particular the word representation part
+
+9
+00:00:19,640 --> 00:00:25,439
+is pretty important it's a major
+
+10
+00:00:22,760 --> 00:00:31,800
+thing that we need to do for all NLP
+
+11
+00:00:25,439 --> 00:00:34,239
+models so let's go into it
+
+12
+00:00:31,800 --> 00:00:38,200
+so last class I talked about the bag of
+
+13
+00:00:34,239 --> 00:00:40,239
+words model and just to review this
+
+14
+00:00:38,200 --> 00:00:43,920
+was a model where basically we take each
+
+15
+00:00:40,239 --> 00:00:45,520
+word we represent it as a one-hot vector
+
+16
+00:00:43,920 --> 00:00:48,760
+like
+
+17
+00:00:45,520 --> 00:00:51,120
+this and we add all of these vectors
+
+18
+00:00:48,760 --> 00:00:53,160
+together we multiply the resulting
+
+19
+00:00:51,120 --> 00:00:55,160
+frequency vector by some weights and we
+
+20
+00:00:53,160 --> 00:00:57,239
+get a score out of this and we can use
+
+21
+00:00:55,160 --> 00:00:58,559
+this score for binary classification or
+
+22
+00:00:57,239 --> 00:01:00,239
+if we want to do multiclass
+
+23
+00:00:58,559 --> 00:01:02,519
+classification we get you know
+multiple
+
+24
+00:01:00,239 --> 00:01:05,720
+scores for each
+
+25
+00:01:02,519 --> 00:01:08,040
+class and the features F were just based
+
+26
+00:01:05,720 --> 00:01:08,920
+on our word identities and the weights
+
+27
+00:01:08,040 --> 00:01:12,159
+were
+
+28
+00:01:08,920 --> 00:01:14,680
+learned
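As a reminder of what that computation looks like, a minimal sketch with a toy vocabulary and hand-set weights (in practice the weights are learned, and the vocabulary is much larger):

```python
import numpy as np

vocab = {'good': 0, 'bad': 1, 'movie': 2, 'boring': 3}
weights = np.zeros(len(vocab))
weights[vocab['good']], weights[vocab['bad']] = 1.0, -1.0  # normally learned

def bow(sentence):
    f = np.zeros(len(vocab))
    for tok in sentence.lower().split():
        if tok in vocab:
            f[vocab[tok]] += 1.0  # sum of one-hot vectors = frequency vector
    return f

score = weights @ bow('a good good movie')
print('positive' if score > 0 else 'negative')
```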
+and um if we look at what's
+
+29
+00:01:12,159 --> 00:01:17,520
+missing in bag of words
+
+30
+00:01:14,680 --> 00:01:19,600
+models um we talked about handling of
+
+31
+00:01:17,520 --> 00:01:23,280
+conjugated or compound
+
+32
+00:01:19,600 --> 00:01:25,439
+words we talked about handling of word
+
+33
+00:01:23,280 --> 00:01:27,880
+similarity and we talked about handling
+
+34
+00:01:25,439 --> 00:01:30,240
+of combination features and handling of
+
+35
+00:01:27,880 --> 00:01:33,280
+sentence structure and so all of these
+
+36
+00:01:30,240 --> 00:01:35,000
+are tricky problems we saw that
+
+37
+00:01:33,280 --> 00:01:37,000
+creating a rule-based system to
+
+38
+00:01:35,000 --> 00:01:39,000
+solve these problems is non-trivial and
+
+39
+00:01:37,000 --> 00:01:41,399
+at the very least would take a lot of
+
+40
+00:01:39,000 --> 00:01:44,079
+time and so now I want to talk about
+
+41
+00:01:41,399 --> 00:01:47,119
+some solutions to the problems in this
+
+42
+00:01:44,079 --> 00:01:49,280
+class so a solution to the
+
+43
+00:01:47,119 --> 00:01:52,240
+first problem is subword or character-based
+
+44
+00:01:49,280 --> 00:01:54,880
+models and that's what I'll talk about
+
+45
+00:01:52,240 --> 00:01:57,520
+first handling of word similarity this
+
+46
+00:01:54,880 --> 00:02:00,719
+can be handled using word embeddings
+
+47
+00:01:57,520 --> 00:02:02,960
+and the word embeddings will be
+
+48
+00:02:00,719 --> 00:02:05,079
+another thing we'll talk about this time
+
+49
+00:02:02,960 --> 00:02:07,159
+handling of combination features we
+
+50
+00:02:05,079 --> 00:02:08,879
+can handle through neural networks which
+
+51
+00:02:07,159 --> 00:02:11,039
+we'll also talk about this time and then
+
+52
+00:02:08,879 --> 00:02:14,040
+handling of sentence structure the
+
+53
+00:02:11,039 --> 00:02:15,560
+kind of standard way of handling this
+
+54
+00:02:14,040 --> 00:02:17,720
+now is through sequence-based models and
+
+55
+00:02:15,560 --> 00:02:20,120
+that will be starting in a few
+
+56
+00:02:17,720 --> 00:02:24,879
+classes so let's jump into
+
+57
+00:02:20,120 --> 00:02:28,080
+it so subword models as I mentioned
+
+58
+00:02:24,879 --> 00:02:30,000
+this is a really really important part of
+
+59
+00:02:28,080 --> 00:02:31,840
+all of the models that we're building
+
+60
+00:02:30,000 --> 00:02:33,360
+nowadays including
+
+61
+00:02:31,840 --> 00:02:35,480
+state-of-the-art language models and
+
+62
+00:02:33,360 --> 00:02:38,239
+things like this and the basic idea
+
+63
+00:02:35,480 --> 00:02:42,200
+behind this is that we want to split
+
+64
+00:02:38,239 --> 00:02:44,720
+in particular split less common words up
+
+65
+00:02:42,200 --> 00:02:48,040
+into multiple subword tokens so to give
+
+66
+00:02:44,720 --> 00:02:50,200
+an example of this if we have
+
+67
+00:02:48,040 --> 00:02:52,200
+something like the companies are
+
+68
+00:02:50,200 --> 00:02:55,040
+expanding it might split companies
+
+69
+00:02:52,200 --> 00:02:57,000
+into compan
+
+70
+00:02:55,040 --> 00:03:02,120
+ies and expand ing like this and there
+
+71
+00:02:57,000 --> 00:03:05,000
+are
+
+72
+00:03:02,120 --> 00:03:08,480
+a few benefits of this the first
+
+73
+00:03:05,000 --> 00:03:10,760
+benefit is that this allows you to share
+
+74
+00:03:08,480 --> 00:03:13,360
+parameters between word varieties or
+
+75
+00:03:10,760 --> 00:03:15,200
+compound words and the other one is to
+
+76
+00:03:13,360 --> 00:03:17,400
+reduce parameter size and save compute
+
+77
+00:03:15,200 --> 00:03:19,720
+and memory and both of these are kind of
+
+78
+00:03:17,400 --> 00:03:23,239
+like equally important things that we
+
+79
+00:03:19,720 --> 00:03:25,519
+need to be considering
+
+80
+00:03:23,239 --> 00:03:26,440
+so does anyone know how many words there
+
+81
+00:03:25,519 --> 00:03:28,680
+are in
+
+82
+00:03:26,440 --> 00:03:31,680
+English any
+
+83
+00:03:28,680 --> 00:03:31,680
+ideas
+
+84
+00:03:36,799 --> 00:03:43,400
+yeah two
+
+85
+00:03:38,599 --> 00:03:45,560
+million pretty good um any other
+
+86
+00:03:43,400 --> 00:03:47,159
+ideas
+
+87
+00:03:45,560 --> 00:03:50,360
+yeah
+
+88
+00:03:47,159 --> 00:03:53,599
+60,000 some models use 60,000 I think
+
+89
+00:03:50,360 --> 00:03:56,200
+60,000 is probably these subword models
+
+90
+00:03:53,599 --> 00:03:58,079
+uh when you're talking about this so
+
+91
+00:03:56,200 --> 00:03:59,319
+they can use subword models to take the 2
+
+92
+00:03:58,079 --> 00:04:03,480
+million which I think is a reasonable
+
+93
+00:03:59,319 --> 00:04:07,400
+guess down to 60,000 any other
+
+94
+00:04:03,480 --> 00:04:08,840
+ideas 700,000 okay pretty good um so
+
+95
+00:04:07,400 --> 00:04:11,799
+this was a trick question it doesn't
+
+96
+00:04:08,840 --> 00:04:14,760
+really have a good answer um but 2
+
+97
+00:04:11,799 --> 00:04:17,479
+million's probably pretty good
+
+98
+00:04:14,760 --> 00:04:19,160
+700,000 is pretty good the reason why
+
+99
+00:04:17,479 --> 00:04:21,360
+this is a trick question is because are
+
+100
+00:04:19,160 --> 00:04:24,440
+company and companies different
+
+101
+00:04:21,360 --> 00:04:26,840
+words uh maybe maybe not right because
+
+102
+00:04:24,440 --> 00:04:30,120
+if we know the word company we can
+
+103
+00:04:26,840 --> 00:04:32,520
+guess what the word companies means
+
+104
+00:04:30,120 --> 00:04:35,720
+um what about automobile is that a
+
+105
+00:04:32,520 --> 00:04:37,400
+different word well maybe if we know
+
+106
+00:04:35,720 --> 00:04:39,400
+auto and mobile we can kind of guess
+
+107
+00:04:37,400 --> 00:04:41,160
+what automobile means but not really so
+
+108
+00:04:39,400 --> 00:04:43,479
+maybe that's a different word there's
+
+109
+00:04:41,160 --> 00:04:45,960
+all kinds of shades of gray there and
+
+110
+00:04:43,479 --> 00:04:48,120
+also we have really frequent words that
+
+111
+00:04:45,960 --> 00:04:50,360
+everybody can probably acknowledge are
+
+112
+00:04:48,120 --> 00:04:52,320
+words like
+
+113
+00:04:50,360 --> 00:04:55,639
+the and
+
+114
+00:04:52,320 --> 00:04:58,520
+a and maybe
+
+115
+00:04:55,639 --> 00:05:00,680
+car and then we have words down here
+
+116
+00:04:58,520 --> 00:05:02,320
+which are like misspellings or
+
+117
+00:05:00,680 --> 00:05:04,160
+something like that misspellings of
+
+118
+00:05:02,320 --> 00:05:06,520
+actual correct words or
+
+119
+00:05:04,160 --> 00:05:09,199
+slang or other things like that and
+
+120
+00:05:06,520 --> 00:05:12,520
+then it's questionable whether those are
+
+121
+00:05:09,199 --> 00:05:17,199
+actual words or not so there's a
+
+122
+00:05:12,520 --> 00:05:19,520
+famous law called Zipf's
+
+123
+00:05:17,199 --> 00:05:21,280
+law um which probably a lot of people
+
+124
+00:05:19,520 --> 00:05:23,360
+have heard of it's also the source of
+
+125
+00:05:21,280 --> 00:05:26,919
+your zip
+
+126
+00:05:23,360 --> 00:05:30,160
+file um which is using Zipf's law to
+
+127
+00:05:26,919 --> 00:05:32,400
+compress output by making
+
+128
+00:05:30,160 --> 00:05:34,880
+the more frequent words have shorter
+
+129
+00:05:32,400 --> 00:05:37,520
+byte strings and less frequent words
+
+130
+00:05:34,880 --> 00:05:38,800
+have longer byte
+
+131
+00:05:37,520 --> 00:05:43,120
+strings but basically we're going
+
+132
+00:05:38,800 --> 00:05:45,120
+to have an infinite number of words or
+
+133
+00:05:43,120 --> 00:05:46,360
+at least strings that are separated by
+
+134
+00:05:45,120 --> 00:05:49,280
+white space so we need to handle this
+
+135
+00:05:46,360 --> 00:05:53,199
+somehow and that's what subword units
+
+136
+00:05:49,280 --> 00:05:54,560
+do so 60,000 was a good guess for the
+
+137
+00:05:53,199 --> 00:05:57,160
+number of subword units you might use in
+
+138
+00:05:54,560 --> 00:06:00,759
+a model and so by using subword units we
+
+139
+00:05:57,160 --> 00:06:04,840
+can limit to about that much
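Zipf's law is easy to check on any text; 'corpus.txt' below is a placeholder for whatever corpus is at hand:

```python
from collections import Counter

freqs = sorted(Counter(open('corpus.txt').read().split()).values(), reverse=True)
for rank in (1, 10, 100, 1000):
    if rank <= len(freqs):
        # under Zipf's law, rank * frequency stays roughly constant
        print(rank, freqs[rank - 1], rank * freqs[rank - 1])
```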
+140
+00:06:00,759 --> 00:06:04,840
+so there's a couple of common ways to
+
+141
+00:06:04,840 --> 00:06:08,160
+create these subword units and basically
+
+142
+00:06:08,160 --> 00:06:10,440
+all of them rely on the fact that you
+
+143
+00:06:10,440 --> 00:06:14,560
+want more common strings to become
+
+144
+00:06:14,560 --> 00:06:16,039
+subword
+
+145
+00:06:16,039 --> 00:06:19,599
+units um or actually sorry I realize
+
+146
+00:06:19,599 --> 00:06:22,199
+maybe before doing that I could explain
+
+147
+00:06:22,199 --> 00:06:24,280
+an alternative to creating subword units
+
+148
+00:06:24,280 --> 00:06:26,360
+so the alternative to creating subword
+
+149
+00:06:26,360 --> 00:06:29,639
+units is to treat every character or
+
+150
+00:06:29,639 --> 00:06:33,560
+maybe every byte in a string as a single
+
+151
+00:06:33,560 --> 00:06:36,919
+thing that you encode independently so in
+
+152
+00:06:36,919 --> 00:06:38,560
+other words instead of trying to model
+
+153
+00:06:38,560 --> 00:06:42,520
+the companies are expanding we model t h
+
+154
+00:06:42,520 --> 00:06:47,919
+e space c o m etc etc can anyone
+
+155
+00:06:47,919 --> 00:06:50,199
+think of any downsides of
+
+156
+00:06:50,199 --> 00:06:53,199
+this
+
+157
+00:06:57,039 --> 00:07:01,879
+yeah yeah the set of these will be
+
+158
+00:07:00,080 --> 00:07:05,000
+very small but that's not
+
+159
+00:07:01,879 --> 00:07:05,000
+necessarily a problem
+
+160
+00:07:08,560 --> 00:07:15,599
+right yeah um any other
+
+161
+00:07:12,599 --> 00:07:15,599
+ideas
+
+162
+00:07:19,520 --> 00:07:24,360
+yeah yeah the resulting sequences will
+
+163
+00:07:22,080 --> 00:07:25,520
+be very long um and when you say
+
+164
+00:07:24,360 --> 00:07:27,160
+difficult to use it could be difficult
+
+165
+00:07:25,520 --> 00:07:29,560
+to use for a couple of reasons there's
+
+166
+00:07:27,160 --> 00:07:31,840
+mainly two reasons actually any ideas
+
+167
+00:07:29,560 --> 00:07:31,840
+about
+
+168
+00:07:33,479 --> 00:07:37,800
+this any
+
+169
+00:07:46,280 --> 00:07:50,599
+yeah yeah that's a little bit of a
+
+170
+00:07:49,000 --> 00:07:52,319
+separate problem than the character
+
+171
+00:07:50,599 --> 00:07:53,919
+based model so let me get back to that
+
+172
+00:07:52,319 --> 00:07:56,400
+but let's finish the discussion
+
+173
+00:07:53,919 --> 00:07:58,360
+of the character based models so if it's
+
+174
+00:07:56,400 --> 00:08:00,120
+really long maybe a
+
+175
+00:07:58,360 --> 00:08:01,879
+simple thing like let's say you have
+
+176
+00:08:00,120 --> 00:08:06,560
+a big neural network and it's processing
+
+177
+00:08:01,879 --> 00:08:06,560
+a really long sequence any ideas what
+
+178
+00:08:06,919 --> 00:08:10,879
+happens basically you run out of memory
+
+179
+00:08:09,280 --> 00:08:13,440
+or it takes a really long time right so
+
+180
+00:08:10,879 --> 00:08:16,840
+you have computational problems another
+
+181
+00:08:13,440 --> 00:08:18,479
+reason why is think of what a bag of
+
+182
+00:08:16,840 --> 00:08:21,400
+words model would look like if it was a
+
+183
+00:08:18,479 --> 00:08:21,400
+bag of characters
+
+184
+00:08:21,800 --> 00:08:25,919
+model it wouldn't be very informative
+
+185
+00:08:24,199 --> 00:08:27,599
+about whether a sentence is
+
+186
+00:08:25,919 --> 00:08:30,919
+positive sentiment or negative sentiment
+
+187
+00:08:27,599 --> 00:08:32,959
+right because instead of having
+
+188
+00:08:30,919 --> 00:08:35,039
+good you would have g o
+
+189
+00:08:32,959 --> 00:08:36,360
+o d and that doesn't
+
+190
+00:08:35,039 --> 00:08:38,560
+really directly tell you whether it's
+
+191
+00:08:36,360 --> 00:08:41,719
+positive sentiment or not so those are
+
+192
+00:08:38,560 --> 00:08:43,680
+basically the two problems compute
+
+193
+00:08:41,719 --> 00:08:45,320
+and lack of expressiveness in the
+
+194
+00:08:43,680 --> 00:08:50,720
+underlying representations so you need
+
+195
+00:08:45,320 --> 00:08:52,080
+to handle both of those yes so if we
+
+196
+00:08:50,720 --> 00:08:54,480
+move from
+
+197
+00:08:52,080 --> 00:08:56,440
+characters we get better expressiveness can we
+
+198
+00:08:54,480 --> 00:08:58,920
+assume that if we just get bigger
+
+199
+00:08:56,440 --> 00:09:00,120
+and bigger chunks we'll get even
+
+200
+00:08:58,920 --> 00:09:02,760
+better
+
+201
+00:09:00,120 --> 00:09:05,120
+yeah so a very good question I'll repeat
+
+202
+00:09:02,760 --> 00:09:06,560
+it um and actually this also goes back
+
+203
+00:09:05,120 --> 00:09:08,040
+to the other question you asked about
+
+204
+00:09:06,560 --> 00:09:09,519
+words that look the same but are
+
+205
+00:09:08,040 --> 00:09:12,160
+pronounced differently or have different
+
+206
+00:09:09,519 --> 00:09:14,360
+meanings and so let's say we just
+
+207
+00:09:12,160 --> 00:09:15,920
+remembered this whole sentence right the
+
+208
+00:09:14,360 --> 00:09:18,279
+companies are
+
+209
+00:09:15,920 --> 00:09:21,600
+expanding and that was like a single
+
+210
+00:09:18,279 --> 00:09:22,680
+embedding and we somehow embedded it the
+
+211
+00:09:21,600 --> 00:09:25,720
+problem would be we're never going to
+
+212
+00:09:22,680 --> 00:09:27,120
+see that sentence again or if we go
+
+213
+00:09:25,720 --> 00:09:29,480
+to longer sentences we're never going to
+
+214
+00:09:27,120 --> 00:09:31,839
+see the longer sentences again so it
+
+215
+00:09:29,480 --> 00:09:34,320
+becomes too sparse so there's kind of a
+
+216
+00:09:31,839 --> 00:09:37,240
+sweet spot between
+
+217
+00:09:34,320 --> 00:09:40,279
+long enough to be expressive and
+
+218
+00:09:37,240 --> 00:09:42,480
+short
+enough to occur many times so that
+
+219
+00:09:40,279 --> 00:09:43,959
+you can learn appropriately and that's
+
+220
+00:09:42,480 --> 00:09:47,120
+kind of what subword models are aiming
+
+221
+00:09:43,959 --> 00:09:48,360
+for and if you get longer subwords then
+
+222
+00:09:47,120 --> 00:09:50,200
+you'll get things that are more
+
+223
+00:09:48,360 --> 00:09:52,959
+expressive but more sparse with shorter
+
+224
+00:09:50,200 --> 00:09:55,440
+subwords you'll get things that are
+
+225
+00:09:52,959 --> 00:09:57,279
+less expressive but less sparse so you
+
+226
+00:09:55,440 --> 00:09:59,120
+need to balance between them and then
+
+227
+00:09:57,279 --> 00:10:00,600
+once we get into sequence modeling they
+
+228
+00:09:59,120 --> 00:10:02,600
+start being able to model which
+
+229
+00:10:00,600 --> 00:10:04,120
+words are next to each other which
+
+230
+00:10:02,600 --> 00:10:06,040
+tokens are next to each other and stuff
+
+231
+00:10:04,120 --> 00:10:07,800
+like that so even if they are less
+
+232
+00:10:06,040 --> 00:10:11,279
+expressive the combination between them
+
+233
+00:10:07,800 --> 00:10:12,600
+can be expressive so yeah that's kind
+
+234
+00:10:11,279 --> 00:10:13,440
+of a preview of what we're going to be
+
+235
+00:10:12,600 --> 00:10:17,320
+doing
+
+236
+00:10:13,440 --> 00:10:19,279
+next okay so let's assume that we
+
+237
+00:10:17,320 --> 00:10:21,320
+want to have some subwords that are
+
+238
+00:10:19,279 --> 00:10:23,000
+longer than characters but shorter than
+
+239
+00:10:21,320 --> 00:10:26,240
+tokens how do we make these in a
+
+240
+00:10:23,000 --> 00:10:28,680
+consistent way there's two major ways of
+
+241
+00:10:26,240 --> 00:10:31,480
+doing this the first one is byte pair
+
+242
+00:10:28,680 --> 00:10:32,839
+encoding and this is very very simple
+
+243
+00:10:31,480 --> 00:10:35,839
+in fact it's so
+
+244
+00:10:32,839 --> 00:10:35,839
+simple
+
+245
+00:10:36,600 --> 00:10:40,839
+that we can implement
+
+246
+00:10:41,839 --> 00:10:47,240
+it in this notebook here which you can
+
+247
+00:10:44,600 --> 00:10:51,720
+click through to on the
+
+248
+00:10:47,240 --> 00:10:55,440
+slides and it's
+
+249
+00:10:51,720 --> 00:10:58,040
+about 10 lines of code um and so
+
+250
+00:10:55,440 --> 00:11:01,040
+basically what byte pair encoding
+
+251
+00:10:58,040 --> 00:11:01,040
+does
+
+252
+00:11:04,600 --> 00:11:09,560
+is that you start out with all of the
+
+253
+00:11:07,000 --> 00:11:14,360
+vocabulary that you want to process
+
+254
+00:11:09,560 --> 00:11:17,560
+where each vocabulary item is split into
+
+255
+00:11:14,360 --> 00:11:21,240
+the characters and an end-of-word
+
+256
+00:11:17,560 --> 00:11:23,360
+symbol and you have a corresponding
+
+257
+00:11:21,240 --> 00:11:27,519
+frequency for
+
+258
+00:11:23,360 --> 00:11:31,120
+each you then get statistics about
+
+259
+00:11:27,519 --> 00:11:33,279
+the most common pairs of tokens that
+
+260
+00:11:31,120 --> 00:11:34,880
+occur next to each other and so here the
+
+261
+00:11:33,279 --> 00:11:38,240
+most common pairs of tokens that occur
+
+262
+00:11:34,880 --> 00:11:41,920
+next to each other are e and s because it
+
+263
+00:11:38,240 --> 00:11:46,560
+occurs nine times because it occurs in
+
+264
+00:11:41,920 --> 00:11:48,279
+newest and widest also s and t
+
+265
+00:11:46,560 --> 00:11:51,440
+because those occur there too and then
+
+266
+00:11:48,279 --> 00:11:53,519
+you have w and e and other things like that
+
+267
+00:11:51,440 --> 00:11:56,000
+so out of all the most frequent ones you
+
+268
+00:11:53,519 --> 00:11:59,920
+just merge them together and that gives
+
+269
+00:11:56,000 --> 00:12:02,720
+you new es t and
+
+270
+00:11:59,920 --> 00:12:05,200
+wid es
+
+271
+00:12:02,720 --> 00:12:09,360
+t and then you do the same thing this
+
+272
+00:12:05,200 --> 00:12:12,519
+time now you get est so now you get this
+
+273
+00:12:09,360 --> 00:12:14,279
+suffix est and that looks pretty
+
+274
+00:12:12,519 --> 00:12:16,399
+reasonable for English right you know
+
+275
+00:12:14,279 --> 00:12:19,040
+est is a common suffix that we use it
+
+276
+00:12:16,399 --> 00:12:22,399
+seems like it should be a single token
+
+277
+00:12:19,040 --> 00:12:25,880
+and so you just do this over and over
+
+278
+00:12:22,399 --> 00:12:29,279
+again if you want a vocabulary of 60,000
+
+279
+00:12:25,880 --> 00:12:31,120
+for example you would do 60,000 minus
+
+280
+00:12:29,279 --> 00:12:33,079
+number of characters merge operations
+
+281
+00:12:31,120 --> 00:12:37,160
+and eventually you would get a vocabulary of
+
+282
+00:12:33,079 --> 00:12:41,920
+60,000 um and yeah very very simple
+
+283
+00:12:37,160 --> 00:12:41,920
+method to do this um any questions about
+
+284
+00:12:43,160 --> 00:12:46,160
+that
+
+285
+00:12:57,839 --> 00:13:00,839
+yeah
+
+286
+00:13:15,600 --> 00:13:20,959
+yeah so just to repeat the
+
+287
+00:13:18,040 --> 00:13:23,560
+comment this seems like a greedy
+
+288
+00:13:20,959 --> 00:13:25,320
+version of Huffman encoding which is
+
+289
+00:13:23,560 --> 00:13:28,839
+similar to what you're using in
+
+290
+00:13:25,320 --> 00:13:32,000
+your zip file a way to shorten things by
+
+291
+00:13:28,839 --> 00:13:36,560
+having longer more frequent things
+
+292
+00:13:32,000 --> 00:13:39,120
+be encoded as a single token um I think
+
+293
+00:13:36,560 --> 00:13:40,760
+byte pair encoding did originally start
+
+294
+00:13:39,120 --> 00:13:43,720
+like that that's part of the reason why
+
+295
+00:13:40,760 --> 00:13:45,760
+the encoding part of the name is here I think it
+
+296
+00:13:43,720 --> 00:13:47,360
+originally started there I haven't read
+
+297
+00:13:45,760 --> 00:13:49,360
+really deeply into this but I can talk
+
+298
+00:13:47,360 --> 00:13:53,240
+more about how the next one corresponds
+
+299
+00:13:49,360 --> 00:13:54,440
+to information theory and Tuesday I'm
+
+300
+00:13:53,240 --> 00:13:55,720
+going to talk even more about how
+
+301
+00:13:54,440 --> 00:13:57,720
+language models correspond to
+
+302
+00:13:55,720 --> 00:14:00,040
+information theory so we can
+
+303
+00:13:57,720 --> 00:14:00,040
+discuss maybe in more detail
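The roughly-ten-line implementation the notebook refers to looks like the classic byte-pair-encoding sketch below; the word frequencies follow the newest/widest example (this is essentially the algorithm popularized by Sennrich et al.'s BPE paper):

```python
import re, collections

def get_stats(vocab):
    # count how often each adjacent symbol pair occurs, weighted by word frequency
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[a, b] += freq
    return pairs

def merge_vocab(pair, vocab):
    # replace every occurrence of the pair with its merged symbol
    pattern = re.compile(r'(?<!\S)' + re.escape(' '.join(pair)) + r'(?!\S)')
    return {pattern.sub(''.join(pair), w): f for w, f in vocab.items()}

vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
         'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
for _ in range(4):
    pairs = get_stats(vocab)
    best = max(pairs, key=pairs.get)
    vocab = merge_vocab(best, vocab)
    print(best)  # the first merge is ('e', 's'), with frequency 9
```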
+304
+00:14:00,040 --> 00:14:07,639
+so the alternative option is
+
+305
+00:14:04,519 --> 00:14:10,000
+to use unigram models and unigram models
+
+306
+00:14:07,639 --> 00:14:12,240
+are the simplest type of language model
+
+307
+00:14:10,000 --> 00:14:15,079
+I'm going to talk more in detail about
+
+308
+00:14:12,240 --> 00:14:18,279
+them next time but basically the way
+
+309
+00:14:15,079 --> 00:14:20,759
+it works is you create a model that
+
+310
+00:14:18,279 --> 00:14:23,600
+generates all words in the
+
+311
+00:14:20,759 --> 00:14:26,199
+sequence independently sorry I thought I
+
+312
+00:14:23,600 --> 00:14:26,199
+had a
+
+313
+00:14:26,320 --> 00:14:31,800
+um I thought I had an equation but
+
+314
+00:14:28,800 --> 00:14:31,800
+basically the
+
+315
+00:14:32,240 --> 00:14:35,759
+equation looks
+
+316
+00:14:38,079 --> 00:14:41,079
+like
+
+317
+00:14:47,720 --> 00:14:52,120
+this, P(w1 ... wn) = P(w1) x P(w2) x ... x P(wn),
+
+318
+00:14:50,360 --> 00:14:53,440
+so you say the probability of the
+
+319
+00:14:52,120 --> 00:14:54,279
+sequence is the product of the
+
+320
+00:14:53,440 --> 00:14:55,959
+probabilities of each of the words in
+
+321
+00:14:54,279 --> 00:14:55,959
+the
+
+322
+00:14:55,959 --> 00:15:00,079
+sequence
+
+323
+00:14:55,959 --> 00:15:04,079
+and then you try to pick a vocabulary
+
+324
+00:15:00,079 --> 00:15:06,839
+that maximizes the probability of the
+
+325
+00:15:04,079 --> 00:15:09,320
+corpus given a fixed vocabulary size so
+
+326
+00:15:06,839 --> 00:15:10,320
+you try to say okay you get a vocabulary
+
+327
+00:15:09,320 --> 00:15:14,440
+size of
+
+328
+00:15:10,320 --> 00:15:16,920
+60,000 how do you pick the
+
+329
+00:15:14,440 --> 00:15:19,680
+best 60,000 vocabulary to maximize the
+
+330
+00:15:16,920 --> 00:15:22,440
+probability of the corpus and that
+
+331
+00:15:19,680 --> 00:15:25,959
+will result in something very similar
+
+332
+00:15:22,440 --> 00:15:27,920
+it will also try to give longer
+
+333
+00:15:25,959 --> 00:15:29,880
+vocabulary items to more common
+
+334
+00:15:27,920 --> 00:15:32,240
+sequences because that
+
+335
+00:15:29,880 --> 00:15:35,560
+allows you to maximize this
+
+336
+00:15:32,240 --> 00:15:36,959
+objective um the optimization for this
+
+337
+00:15:35,560 --> 00:15:40,040
+is performed using something called the
+
+338
+00:15:36,959 --> 00:15:44,440
+EM algorithm where basically you
+
+339
+00:15:40,040 --> 00:15:48,560
+predict the probability of each
+
+340
+00:15:44,440 --> 00:15:51,600
+token showing up and then select the
+
+341
+00:15:48,560 --> 00:15:53,279
+most common tokens and then trim off the
+
+342
+00:15:51,600 --> 00:15:54,759
+ones that are less common and then just
+
+343
+00:15:53,279 --> 00:15:58,120
+do this over and over again until you
+
+344
+00:15:54,759 --> 00:15:59,839
+drop down to the 60,000-token vocabulary so the
+
+345
+00:15:58,120 --> 00:16:02,040
+details for this are not important for
+
+346
+00:15:59,839 --> 00:16:04,160
+most people in this class because
+
+347
+00:16:02,040 --> 00:16:07,480
+you're going to just be using a toolkit
+
+348
+00:16:04,160 --> 00:16:08,880
+that implements this for you um but if
+
+349
+00:16:07,480 --> 00:16:10,759
+you're interested in this I'm happy to
+
+350
+00:16:08,880 --> 00:16:14,199
+talk to you about it
+
+351
+00:16:10,759 --> 00:16:14,199
+yeah is there a
+
+352
+00:16:14,680 --> 00:16:18,959
+problem oh in unigram models there's a
+
+353
+00:16:17,199 --> 00:16:20,959
+huge problem with assuming independence
+
+354
+00:16:18,959 --> 00:16:22,720
+in language models because then you
+
+355
+00:16:20,959 --> 00:16:25,120
+could rearrange the order of words in
+
+356
+00:16:22,720 --> 00:16:26,600
+sentences um that's something we're
+
+357
+00:16:25,120 --> 00:16:27,519
+going to talk about in language modeling
+
+358
+00:16:26,600 --> 00:16:30,560
+next
+
+359
+00:16:27,519 --> 00:16:32,839
+time but the good thing about this
+
+360
+00:16:30,560 --> 00:16:34,519
+is the EM algorithm requires dynamic
+
+361
+00:16:32,839 --> 00:16:36,079
+programming in this case and you can't
+
+362
+00:16:34,519 --> 00:16:37,800
+easily do dynamic programming if you
+
+363
+00:16:36,079 --> 00:16:40,160
+don't make that
+
+364
+00:16:37,800 --> 00:16:40,160
+assumption um and then finally after
+
+365
+00:16:40,160 --> 00:16:43,560
+you've picked your vocabulary and you've
+
+366
+00:16:41,880 --> 00:16:45,720
+assigned a probability to each word in
+
+367
+00:16:43,560 --> 00:16:47,800
+the vocabulary you then find a
+
+368
+00:16:45,720 --> 00:16:49,639
+segmentation of the input that maximizes
+
+369
+00:16:47,800 --> 00:16:52,600
+the unigram
+
+370
+00:16:49,639 --> 00:16:54,880
+probabilities um so this is basically
+
+371
+00:16:52,600 --> 00:16:56,519
+the idea of what's going on here um I'm
+
+372
+00:16:54,880 --> 00:16:58,120
+not going to go into a lot of detail
+
+373
+00:16:56,519 --> 00:17:00,560
+about this because most people are just
+
+374
+00:16:58,120 --> 00:17:02,279
+going to be users of this algorithm so
+
+375
+00:17:00,560 --> 00:17:06,240
+it's not super
+
+376
+00:17:02,279 --> 00:17:09,400
+important
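Given per-piece probabilities, that final segmentation step is a small dynamic program; the toy log-probabilities below are invented for illustration:

```python
import math

def segment(text, logp, max_len=10):
    # best[i] = (score of best segmentation of text[:i], split point)
    best = [(-math.inf, 0)] * (len(text) + 1)
    best[0] = (0.0, 0)
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - max_len), i):
            piece = text[j:i]
            if piece in logp and best[j][0] + logp[piece] > best[i][0]:
                best[i] = (best[j][0] + logp[piece], j)
    pieces, i = [], len(text)
    while i > 0:          # trace back through the split points
        j = best[i][1]
        pieces.append(text[j:i])
        i = j
    return pieces[::-1]

logp = {'companies': -15.0, 'compan': -9.0, 'ies': -5.0,
        **{c: -8.0 for c in 'companies'}}
print(segment('companies', logp))  # ['compan', 'ies']: -14.0 beats -15.0
```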
+00:16:40,160 --> 00:16:43,560 +you've picked your vocabulary and you've + +365 +00:16:41,880 --> 00:16:45,720 +assigned a probability to each word in + +366 +00:16:43,560 --> 00:16:47,800 +the vocabulary you then find a + +367 +00:16:45,720 --> 00:16:49,639 +segmentation of the input that maximizes + +368 +00:16:47,800 --> 00:16:52,600 +the unigram + +369 +00:16:49,639 --> 00:16:54,880 +probabilities um so this is basically + +370 +00:16:52,600 --> 00:16:56,519 +the idea of what's going on here um I'm + +371 +00:16:54,880 --> 00:16:58,120 +not going to go into a lot of detail + +372 +00:16:56,519 --> 00:17:00,560 +about this because most people are just + +373 +00:16:58,120 --> 00:17:02,279 +going to be users of this algorithm so + +374 +00:17:00,560 --> 00:17:06,240 +it's not super super + +375 +00:17:02,279 --> 00:17:09,400 +important um the one important thing + +376 +00:17:06,240 --> 00:17:11,240 +about this is that there's a library + +377 +00:17:09,400 --> 00:17:15,520 +called sentence piece that's used very + +378 +00:17:11,240 --> 00:17:19,199 +widely in order to build these um in + +379 +00:17:15,520 --> 00:17:22,000 +order to build these subword units and + +380 +00:17:19,199 --> 00:17:23,720 +uh basically what you do is you run the + +381 +00:17:22,000 --> 00:17:27,600 +sentence piece + +382 +00:17:23,720 --> 00:17:30,200 +train uh model or sorry uh program and + +383 +00:17:27,600 --> 00:17:32,640 +that gives you uh you select your vocab + +384 +00:17:30,200 --> 00:17:34,240 +size uh this also this character + +385 +00:17:32,640 --> 00:17:36,120 +coverage is basically how well do you + +386 +00:17:34,240 --> 00:17:39,760 +need to cover all of the characters in + +387 +00:17:36,120 --> 00:17:41,840 +your vocabulary or in your input text um + +388 +00:17:39,760 --> 00:17:45,240 +what model type do you use and then you + +389 +00:17:41,840 --> 00:17:48,640 +run this uh sentence piece en code file + +390 +00:17:45,240 --> 00:17:51,039 +uh to uh encode the output and split the + +391 +00:17:48,640 --> 00:17:54,799 +output and there's also python bindings + +392 +00:17:51,039 --> 00:17:56,240 +available for this and by the one thing + +393 +00:17:54,799 --> 00:17:57,919 +that you should know is by default it + +394 +00:17:56,240 --> 00:18:00,600 +uses the unigram model but it also + +395 +00:17:57,919 --> 00:18:01,960 +supports EP in my experience it doesn't + +396 +00:18:00,600 --> 00:18:05,159 +make a huge difference about which one + +397 +00:18:01,960 --> 00:18:07,640 +you use the bigger thing is how um how + +398 +00:18:05,159 --> 00:18:10,159 +big is your vocabulary size and if your + +399 +00:18:07,640 --> 00:18:11,880 +vocabulary size is smaller then things + +400 +00:18:10,159 --> 00:18:13,760 +will be more efficient but less + +401 +00:18:11,880 --> 00:18:17,480 +expressive if your vocabulary size is + +402 +00:18:13,760 --> 00:18:21,280 +bigger things will be um will + +403 +00:18:17,480 --> 00:18:23,240 +be more expressive but less efficient + +404 +00:18:21,280 --> 00:18:25,360 +and A good rule of thumb is like + +405 +00:18:23,240 --> 00:18:26,960 +something like 60,000 to 80,000 is + +406 +00:18:25,360 --> 00:18:29,120 +pretty reasonable if you're only doing + +407 +00:18:26,960 --> 00:18:31,320 +English if you're spreading out to + +408 +00:18:29,120 --> 00:18:32,600 +things that do other languages um which + +409 +00:18:31,320 --> 00:18:35,960 +I'll talk about in a second then you + +410 +00:18:32,600 --> 00:18:38,720 +need a much bigger B regular + +411 
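
+A sketch of that sentencepiece workflow through the Python bindings (the file
+names and parameter values here are illustrative, not the lecture's):
+
+import sentencepiece as spm
+
+# train: pick a vocab size, a character coverage, and a model type
+spm.SentencePieceTrainer.train(
+    input="corpus.txt",          # raw text, one sentence per line
+    model_prefix="m",
+    vocab_size=8000,
+    character_coverage=0.9995,   # how much of the input characters to cover
+    model_type="unigram",        # the default; "bpe" is also supported
+)
+
+# encode: split new text with the trained model
+sp = spm.SentencePieceProcessor(model_file="m.model")
+print(sp.encode("the widest estimates", out_type=str))
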
+00:18:35,960 --> 00:18:40,559 +say so there's two considerations here + +412 +00:18:38,720 --> 00:18:42,440 +two important considerations when using + +413 +00:18:40,559 --> 00:18:46,320 +these models uh the first is + +414 +00:18:42,440 --> 00:18:48,760 +multilinguality as I said so when you're + +415 +00:18:46,320 --> 00:18:50,760 +using um subword + +416 +00:18:48,760 --> 00:18:54,710 +models they're hard to use + +417 +00:18:50,760 --> 00:18:55,840 +multilingually because as I said before + +418 +00:18:54,710 --> 00:18:59,799 +[Music] + +419 +00:18:55,840 --> 00:19:03,799 +they give longer strings to more + +420 +00:18:59,799 --> 00:19:06,520 +frequent strings basically so then + +421 +00:19:03,799 --> 00:19:09,559 +imagine what happens if 50% of your + +422 +00:19:06,520 --> 00:19:11,919 +Corpus is English another 30% of your + +423 +00:19:09,559 --> 00:19:15,400 +Corpus is + +424 +00:19:11,919 --> 00:19:17,200 +other languages written in Latin script + +425 +00:19:15,400 --> 00:19:21,720 +10% is + +426 +00:19:17,200 --> 00:19:25,480 +Chinese uh 5% is cerlic script languages + +427 +00:19:21,720 --> 00:19:27,240 +four 4% is 3% is Japanese and then you + +428 +00:19:25,480 --> 00:19:31,080 +have like + +429 +00:19:27,240 --> 00:19:33,320 +0.01% written in like burmes or + +430 +00:19:31,080 --> 00:19:35,520 +something like that suddenly burmes just + +431 +00:19:33,320 --> 00:19:37,400 +gets chunked up really really tiny + +432 +00:19:35,520 --> 00:19:38,360 +really long sequences and it doesn't + +433 +00:19:37,400 --> 00:19:45,559 +work as + +434 +00:19:38,360 --> 00:19:45,559 +well um so one way that people fix this + +435 +00:19:45,919 --> 00:19:50,520 +um and actually there's a really nice uh + +436 +00:19:48,760 --> 00:19:52,600 +blog post about this called exploring + +437 +00:19:50,520 --> 00:19:53,760 +B's vocabulary which I referenced here + +438 +00:19:52,600 --> 00:19:58,039 +if you're interested in learning more + +439 +00:19:53,760 --> 00:20:02,960 +about that um but one way that people + +440 +00:19:58,039 --> 00:20:05,240 +were around this is if your + +441 +00:20:02,960 --> 00:20:07,960 +actual uh data + +442 +00:20:05,240 --> 00:20:11,559 +distribution looks like this like + +443 +00:20:07,960 --> 00:20:11,559 +English uh + +444 +00:20:17,039 --> 00:20:23,159 +Ty we actually sorry I took out the + +445 +00:20:19,280 --> 00:20:23,159 +Indian languages in my example + +446 +00:20:24,960 --> 00:20:30,159 +apologies + +447 +00:20:27,159 --> 00:20:30,159 +so + +448 +00:20:30,400 --> 00:20:35,919 +um what you do is you essentially create + +449 +00:20:33,640 --> 00:20:40,000 +a different distribution that like + +450 +00:20:35,919 --> 00:20:43,559 +downweights English a little bit and up + +451 +00:20:40,000 --> 00:20:47,000 +weights up weights all of the other + +452 +00:20:43,559 --> 00:20:49,480 +languages um so that you get more of + +453 +00:20:47,000 --> 00:20:53,159 +other languages when creating so this is + +454 +00:20:49,480 --> 00:20:53,159 +a common work around that you can do for + +455 +00:20:54,200 --> 00:20:59,960 +this um the + +456 +00:20:56,799 --> 00:21:03,000 +second problem with these is + +457 +00:20:59,960 --> 00:21:08,000 +arbitrariness so as you saw in my + +458 +00:21:03,000 --> 00:21:11,240 +example with bpe e s s and t and of + +459 +00:21:08,000 --> 00:21:13,520 +board symbol all have the same probabil + +460 +00:21:11,240 --> 00:21:16,960 +or have the same frequency right so if + +461 +00:21:13,520 --> 00:21:21,520 +we get to that point do we 
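
+A sketch of that workaround: exponentiate and renormalize the language
+distribution so high-resource languages are downweighted (the alpha value is
+an assumption here, a common choice in multilingual setups, not from the
+lecture):
+
+import numpy as np
+
+def resampled_language_weights(counts, alpha=0.3):
+    # alpha < 1 flattens the distribution: English gets sampled a bit less,
+    # rare languages quite a lot more, relative to their raw share
+    p = np.asarray(counts, dtype=float)
+    p = p / p.sum()
+    q = p ** alpha
+    return q / q.sum()
+
+# raw shares like the example above: English, other Latin-script, Chinese, ...
+print(resampled_language_weights([50, 30, 10, 5, 3, 0.01]))
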
segment es or + +462 +00:21:16,960 --> 00:21:25,039 +do we seg uh EST or do we segment e + +463 +00:21:21,520 --> 00:21:26,559 +s and so this is also a problem and it + +464 +00:21:25,039 --> 00:21:29,000 +actually can affect your results + +465 +00:21:26,559 --> 00:21:30,480 +especially if you like don't have a + +466 +00:21:29,000 --> 00:21:31,760 +really strong vocabulary for the + +467 +00:21:30,480 --> 00:21:33,279 +language you're working in or you're + +468 +00:21:31,760 --> 00:21:37,200 +working in a new + +469 +00:21:33,279 --> 00:21:40,159 +domain and so there's a few workarounds + +470 +00:21:37,200 --> 00:21:41,520 +for this uh one workaround for this is + +471 +00:21:40,159 --> 00:21:44,000 +uh called subword + +472 +00:21:41,520 --> 00:21:46,279 +regularization and the way it works is + +473 +00:21:44,000 --> 00:21:49,400 +instead + +474 +00:21:46,279 --> 00:21:51,640 +of just having a single segmentation and + +475 +00:21:49,400 --> 00:21:54,679 +getting the kind of + +476 +00:21:51,640 --> 00:21:56,200 +maximally probable segmentation or the + +477 +00:21:54,679 --> 00:21:58,480 +one the greedy one that you get out of + +478 +00:21:56,200 --> 00:22:01,360 +BP instead you sample different + +479 +00:21:58,480 --> 00:22:03,000 +segmentations in training time and use + +480 +00:22:01,360 --> 00:22:05,720 +the different segmentations and that + +481 +00:22:03,000 --> 00:22:09,200 +makes your model more robust to this + +482 +00:22:05,720 --> 00:22:10,840 +kind of variation and that's also + +483 +00:22:09,200 --> 00:22:15,679 +actually the reason why sentence piece + +484 +00:22:10,840 --> 00:22:17,919 +was released was through this um subword + +485 +00:22:15,679 --> 00:22:19,559 +regularization paper so that's also + +486 +00:22:17,919 --> 00:22:22,720 +implemented in sentence piece if that's + +487 +00:22:19,559 --> 00:22:22,720 +something you're interested in + +488 +00:22:24,919 --> 00:22:32,520 +trying cool um are there any questions + +489 +00:22:28,480 --> 00:22:32,520 +or discussions about this + +490 +00:22:53,279 --> 00:22:56,279 +yeah + +491 +00:22:56,960 --> 00:22:59,960 +already + +492 +00:23:06,799 --> 00:23:11,080 +yeah so this is a good question um just + +493 +00:23:08,960 --> 00:23:12,760 +to repeat the question it was like let's + +494 +00:23:11,080 --> 00:23:16,080 +say we have a big + +495 +00:23:12,760 --> 00:23:19,640 +multilingual um subword + +496 +00:23:16,080 --> 00:23:23,440 +model and we want to add a new language + +497 +00:23:19,640 --> 00:23:26,240 +in some way uh how can we reuse the + +498 +00:23:23,440 --> 00:23:28,880 +existing model but add a new + +499 +00:23:26,240 --> 00:23:31,080 +language it's a good question if you're + +500 +00:23:28,880 --> 00:23:33,679 +only using it for subord + +501 +00:23:31,080 --> 00:23:36,320 +segmentation um one one nice thing about + +502 +00:23:33,679 --> 00:23:36,320 +the unigram + +503 +00:23:36,400 --> 00:23:41,799 +model here is this is kind of a + +504 +00:23:38,880 --> 00:23:43,679 +probabilistic model so it's very easy to + +505 +00:23:41,799 --> 00:23:46,360 +do the kind of standard things that we + +506 +00:23:43,679 --> 00:23:48,240 +do with probabilistic models which is + +507 +00:23:46,360 --> 00:23:50,559 +like let's say we had an + +508 +00:23:48,240 --> 00:23:53,919 +old uh an + +509 +00:23:50,559 --> 00:23:56,880 +old vocabulary for + +510 +00:23:53,919 --> 00:23:59,880 +this um we could just + +511 +00:23:56,880 --> 00:23:59,880 +interpolate + +512 +00:24:07,159 --> 00:24:12,320 +um we 
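
+A sketch of that training-time sampling with sentencepiece (the model file is
+the one from the earlier sketch; the sampling parameters are illustrative):
+
+import sentencepiece as spm
+
+sp = spm.SentencePieceProcessor(model_file="m.model")
+# instead of always taking the single best segmentation, draw a different
+# one each time an example is seen during training
+for _ in range(3):
+    print(sp.encode("the widest estimates", out_type=str,
+                    enable_sampling=True, alpha=0.1, nbest_size=-1))
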
could interpolate like this and

513
+00:24:09,559 --> 00:24:13,840
+just you know uh combine the

514
+00:24:12,320 --> 00:24:17,080
+probabilities of the two and then use

515
+00:24:13,840 --> 00:24:19,520
+that combined probability, p(x) = λ p_old(x) + (1 - λ) p_new(x), in order to

516
+00:24:17,080 --> 00:24:21,320
+segment the new language um things like

517
+00:24:19,520 --> 00:24:24,159
+this have been uh done before but I

518
+00:24:21,320 --> 00:24:26,159
+don't remember the exact references uh

519
+00:24:24,159 --> 00:24:30,440
+for them but that's what I would do

520
+00:24:26,159 --> 00:24:31,960
+here another interesting thing is um

521
+00:24:30,440 --> 00:24:35,399
+this might be getting a little ahead of

522
+00:24:31,960 --> 00:24:35,399
+myself but there's

523
+00:24:48,559 --> 00:24:58,279
+a there's a paper that talks about um

524
+00:24:55,360 --> 00:25:00,159
+how you can take things that were trained

525
+00:24:58,279 --> 00:25:03,360
+with another

526
+00:25:00,159 --> 00:25:05,480
+vocabulary and basically the idea is um

527
+00:25:03,360 --> 00:25:09,320
+you pre-train on whatever languages you

528
+00:25:05,480 --> 00:25:10,679
+have and then uh you learn embeddings in

529
+00:25:09,320 --> 00:25:11,880
+the new language you freeze the body of

530
+00:25:10,679 --> 00:25:14,360
+the model and learn embeddings in the

531
+00:25:11,880 --> 00:25:15,880
+new language so that's another uh method

532
+00:25:14,360 --> 00:25:19,080
+that's used it's called "On the

533
+00:25:15,880 --> 00:25:19,080
+Cross-lingual Transferability of Monolingual

534
+00:25:21,840 --> 00:25:26,159
+Representations" and I'll probably talk

535
+00:25:23,840 --> 00:25:28,480
+about that in the last class of this uh

536
+00:25:26,159 --> 00:25:30,720
+thing so you can remember that

537
+00:25:28,480 --> 00:25:33,720
+then cool any other

538
+00:25:30,720 --> 00:25:33,720
+questions

539
+00:25:38,480 --> 00:25:42,640
+yeah is bag of words a first step to

540
+00:25:41,039 --> 00:25:46,640
+process your data if you want to do

541
+00:25:42,640 --> 00:25:49,919
+generation um do you mean like

542
+00:25:46,640 --> 00:25:52,440
+uh a word based model or a subword based

543
+00:25:49,919 --> 00:25:52,440
+model

544
+00:25:56,679 --> 00:26:00,480
+or like is

545
+00:26:02,360 --> 00:26:08,000
+this so the subword segmentation is the

546
+00:26:05,919 --> 00:26:10,640
+first step of creating just about any

547
+00:26:08,000 --> 00:26:13,080
+model nowadays like every model every

548
+00:26:10,640 --> 00:26:16,600
+model uses this and they usually use

549
+00:26:13,080 --> 00:26:21,520
+this either to segment characters or

550
+00:26:16,600 --> 00:26:23,559
+bytes um characters are like Unicode code

551
+00:26:21,520 --> 00:26:25,799
+points so they actually correspond to an

552
+00:26:23,559 --> 00:26:28,279
+actual visual character and then bytes,

553
+00:26:25,799 --> 00:26:31,120
+many Unicode characters are like

554
+00:26:28,279 --> 00:26:35,000
+three bytes, like a Chinese character is

555
+00:26:31,120 --> 00:26:37,159
+three bytes if I remember correctly so um

556
+00:26:35,000 --> 00:26:38,640
+the byte-based segmentation is nice because

557
+00:26:37,159 --> 00:26:41,240
+you don't even need to worry about

558
+00:26:38,640 --> 00:26:43,880
+Unicode you can just you can

559
+00:26:41,240 --> 00:26:45,640
+just segment the Pile like literally as

560
+00:26:43,880 --> 00:26:49,440
+is and so a lot of
people do it that way
+too uh llama as far as I know is

562
+00:26:49,440 --> 00:26:55,720
+bytes I believe GPT is also bytes um but

563
+00:26:53,279 --> 00:26:58,799
+previous to like three or four years

564
+00:26:55,720 --> 00:27:02,799
+ago people used characters

565
+00:26:58,799 --> 00:27:05,000
+cool um okay so this is really really

566
+00:27:02,799 --> 00:27:05,919
+important it's not like super complex

567
+00:27:05,000 --> 00:27:09,760
+and

568
+00:27:05,919 --> 00:27:13,039
+practically uh you will just maybe

569
+00:27:09,760 --> 00:27:15,840
+train or maybe just use a tokenizer um

570
+00:27:13,039 --> 00:27:18,559
+but uh that's an important thing to

571
+00:27:15,840 --> 00:27:20,760
+me cool uh next I'd like to move on to

572
+00:27:18,559 --> 00:27:24,399
+continuous word embeddings

573
+00:27:20,760 --> 00:27:26,720
+so the basic idea is that previously we

574
+00:27:24,399 --> 00:27:28,240
+represented words with a sparse vector

575
+00:27:26,720 --> 00:27:30,120
+uh with a single one

576
+00:27:28,240 --> 00:27:31,960
+also known as a one-hot vector so it

577
+00:27:30,120 --> 00:27:35,720
+looked a little bit like

578
+00:27:31,960 --> 00:27:37,640
+this and instead what continuous word

579
+00:27:35,720 --> 00:27:39,640
+embeddings do is they look up a dense

580
+00:27:37,640 --> 00:27:42,320
+vector and so you get a dense

581
+00:27:39,640 --> 00:27:45,760
+representation where the entire vector

582
+00:27:42,320 --> 00:27:45,760
+has continuous values in

583
+00:27:46,000 --> 00:27:51,919
+it and I talked about a bag of words

584
+00:27:49,200 --> 00:27:54,320
+model but we could also create a

585
+00:27:51,919 --> 00:27:58,360
+continuous bag of words model and the

586
+00:27:54,320 --> 00:28:01,159
+way this works is you look up the

587
+00:27:58,360 --> 00:28:03,720
+values of each vector the embeddings of

588
+00:28:01,159 --> 00:28:06,320
+each vector this gives you an embedding

589
+00:28:03,720 --> 00:28:08,440
+vector for the entire sequence and then

590
+00:28:06,320 --> 00:28:15,120
+you multiply this by a weight

591
+00:28:08,440 --> 00:28:17,559
+matrix uh where, so,

592
+00:28:15,120 --> 00:28:19,960
+the rows of the weight matrix uh

593
+00:28:17,559 --> 00:28:22,919
+correspond to the size of this

594
+00:28:19,960 --> 00:28:24,760
+continuous embedding and the columns of

595
+00:28:22,919 --> 00:28:28,320
+the weight matrix would correspond to

596
+00:28:24,760 --> 00:28:30,919
+the uh overall um

597
+00:28:28,320 --> 00:28:32,559
+to the overall uh number of labels that

598
+00:28:30,919 --> 00:28:36,919
+you would have here and then that would

599
+00:28:32,559 --> 00:28:40,120
+give you scores and so this uh basically

600
+00:28:36,919 --> 00:28:41,679
+what this is saying is each vector now

601
+00:28:40,120 --> 00:28:43,440
+instead of having a single thing that

602
+00:28:41,679 --> 00:28:46,799
+represents which vocabulary item you're

603
+00:28:43,440 --> 00:28:48,679
+looking at uh you would kind of hope

604
+00:28:46,799 --> 00:28:52,120
+that you would get vectors where words

605
+00:28:48,679 --> 00:28:54,919
+that are similar uh by some notion of

606
+00:28:52,120 --> 00:28:57,760
+by some concept of similarity like syntactic

607
+00:28:54,919 --> 00:28:59,679
+uh syntax semantics whether they're in

608
+00:28:57,760 --> 00:29:03,120
+the same language or not are close in + +609 +00:28:59,679 --> 00:29:06,679 +the vector space and each Vector element + +610 +00:29:03,120 --> 00:29:09,399 +is a feature uh so for example each + +611 +00:29:06,679 --> 00:29:11,519 +Vector element corresponds to is this an + +612 +00:29:09,399 --> 00:29:14,960 +animate object or is this a positive + +613 +00:29:11,519 --> 00:29:17,399 +word or other Vector other things like + +614 +00:29:14,960 --> 00:29:19,399 +that so just to give an example here + +615 +00:29:17,399 --> 00:29:21,760 +this is totally made up I just made it + +616 +00:29:19,399 --> 00:29:24,360 +in keynote so it's not natural Vector + +617 +00:29:21,760 --> 00:29:26,279 +space but to Ill illustrate the concept + +618 +00:29:24,360 --> 00:29:27,960 +I showed here what if we had a + +619 +00:29:26,279 --> 00:29:30,240 +two-dimensional vector + +620 +00:29:27,960 --> 00:29:33,399 +space where the two-dimensional Vector + +621 +00:29:30,240 --> 00:29:36,240 +space the xais here is corresponding to + +622 +00:29:33,399 --> 00:29:38,679 +whether it's animate or not and the the + +623 +00:29:36,240 --> 00:29:41,480 +Y AIS here is corresponding to whether + +624 +00:29:38,679 --> 00:29:44,080 +it's like positive sentiment or not and + +625 +00:29:41,480 --> 00:29:46,399 +so this is kind of like our ideal uh + +626 +00:29:44,080 --> 00:29:49,799 +goal + +627 +00:29:46,399 --> 00:29:52,279 +here um so why would we want to do this + +628 +00:29:49,799 --> 00:29:52,279 +yeah sorry + +629 +00:29:56,320 --> 00:30:03,399 +guys what do the like in the one it's + +630 +00:30:00,919 --> 00:30:06,399 +one + +631 +00:30:03,399 --> 00:30:06,399 +yep + +632 +00:30:07,200 --> 00:30:12,519 +like so what would the four entries do + +633 +00:30:09,880 --> 00:30:14,799 +here the four entries here are learned + +634 +00:30:12,519 --> 00:30:17,039 +so they are um they're learned just + +635 +00:30:14,799 --> 00:30:18,519 +together with the model um and I'm going + +636 +00:30:17,039 --> 00:30:22,120 +to talk about exactly how we learn them + +637 +00:30:18,519 --> 00:30:24,000 +soon but the the final goal is that + +638 +00:30:22,120 --> 00:30:25,399 +after learning has happened they look + +639 +00:30:24,000 --> 00:30:26,799 +they have these two properties like + +640 +00:30:25,399 --> 00:30:28,600 +similar words are close together in the + +641 +00:30:26,799 --> 00:30:30,080 +vectorace + +642 +00:30:28,600 --> 00:30:32,640 +and + +643 +00:30:30,080 --> 00:30:35,679 +um that's like number one that's the + +644 +00:30:32,640 --> 00:30:37,600 +most important and then number two is + +645 +00:30:35,679 --> 00:30:39,279 +ideally these uh features would have + +646 +00:30:37,600 --> 00:30:41,200 +some meaning uh maybe human + +647 +00:30:39,279 --> 00:30:44,720 +interpretable meaning maybe not human + +648 +00:30:41,200 --> 00:30:47,880 +interpretable meaning but + +649 +00:30:44,720 --> 00:30:50,880 +yeah so um one thing that I should + +650 +00:30:47,880 --> 00:30:53,159 +mention is I I showed a contrast between + +651 +00:30:50,880 --> 00:30:55,159 +the bag of words uh the one hot + +652 +00:30:53,159 --> 00:30:57,000 +representations here and the dense + +653 +00:30:55,159 --> 00:31:00,880 +representations here and I used this + +654 +00:30:57,000 --> 00:31:03,880 +look look up operation for both of them + +655 +00:31:00,880 --> 00:31:07,399 +and this this lookup + +656 +00:31:03,880 --> 00:31:09,559 +operation actually um can be viewed as + +657 +00:31:07,399 --> 00:31:11,799 +grabbing a single 
Vector from a big + +658 +00:31:09,559 --> 00:31:14,919 +Matrix of word + +659 +00:31:11,799 --> 00:31:17,760 +embeddings and + +660 +00:31:14,919 --> 00:31:19,760 +so the way it can work is like we have + +661 +00:31:17,760 --> 00:31:22,919 +this big vector and then we look up word + +662 +00:31:19,760 --> 00:31:25,919 +number two in a zero index Matrix and it + +663 +00:31:22,919 --> 00:31:27,799 +would just grab this out of that Matrix + +664 +00:31:25,919 --> 00:31:29,880 +and that's practically what most like + +665 +00:31:27,799 --> 00:31:32,240 +deep learning libraries or or whatever + +666 +00:31:29,880 --> 00:31:35,840 +Library you use are going to be + +667 +00:31:32,240 --> 00:31:38,000 +doing but another uh way you can view it + +668 +00:31:35,840 --> 00:31:40,880 +is you can view it as multiplying by a + +669 +00:31:38,000 --> 00:31:43,880 +one hot vector and so you have this + +670 +00:31:40,880 --> 00:31:48,679 +Vector uh exactly the same Matrix uh but + +671 +00:31:43,880 --> 00:31:50,799 +you just multiply by a vector uh 0 1 z z + +672 +00:31:48,679 --> 00:31:55,720 +and that gives you exactly the same + +673 +00:31:50,799 --> 00:31:58,200 +things um so the Practical imple + +674 +00:31:55,720 --> 00:31:59,720 +implementations of this uh uh tend to be + +675 +00:31:58,200 --> 00:32:01,279 +the first one because the first one's a + +676 +00:31:59,720 --> 00:32:04,679 +lot faster to implement you don't need + +677 +00:32:01,279 --> 00:32:06,760 +to multiply like this big thing by a + +678 +00:32:04,679 --> 00:32:11,000 +huge Vector but there + +679 +00:32:06,760 --> 00:32:13,880 +are advantages of knowing the second one + +680 +00:32:11,000 --> 00:32:15,519 +uh just to give an example what if you + +681 +00:32:13,880 --> 00:32:19,600 +for whatever reason you came up with + +682 +00:32:15,519 --> 00:32:21,440 +like an a crazy model that predicts a + +683 +00:32:19,600 --> 00:32:24,120 +probability distribution over words + +684 +00:32:21,440 --> 00:32:25,720 +instead of just words maybe it's a + +685 +00:32:24,120 --> 00:32:27,679 +language model that has an idea of what + +686 +00:32:25,720 --> 00:32:30,200 +the next word is going to look like + +687 +00:32:27,679 --> 00:32:32,159 +and maybe your um maybe your model + +688 +00:32:30,200 --> 00:32:35,279 +thinks the next word has a 50% + +689 +00:32:32,159 --> 00:32:36,600 +probability of being capped 30% + +690 +00:32:35,279 --> 00:32:42,279 +probability of being + +691 +00:32:36,600 --> 00:32:44,960 +dog and uh 2% probability uh sorry uh + +692 +00:32:42,279 --> 00:32:47,200 +20% probability being + +693 +00:32:44,960 --> 00:32:50,000 +bir you can take this vector and + +694 +00:32:47,200 --> 00:32:51,480 +multiply it by The Matrix and get like a + +695 +00:32:50,000 --> 00:32:53,639 +word embedding that's kind of a mix of + +696 +00:32:51,480 --> 00:32:55,639 +all of those word which might be + +697 +00:32:53,639 --> 00:32:57,960 +interesting and let you do creative + +698 +00:32:55,639 --> 00:33:02,120 +things so um knowing that these two + +699 +00:32:57,960 --> 00:33:05,360 +things are the same are the same is kind + +700 +00:33:02,120 --> 00:33:05,360 +of useful for that kind of + +701 +00:33:05,919 --> 00:33:11,480 +thing um any any questions about this + +702 +00:33:09,120 --> 00:33:13,919 +I'm G to talk about how we train next so + +703 +00:33:11,480 --> 00:33:18,159 +maybe maybe I can goow into + +704 +00:33:13,919 --> 00:33:23,159 +that okay cool so how do we get the + +705 +00:33:18,159 --> 00:33:25,840 +vectors 
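
+A sketch of the two equivalent views, and of the mixture trick just
+mentioned, in PyTorch (the sizes are illustrative):
+
+import torch
+
+vocab_size, dim = 5, 4
+emb = torch.nn.Embedding(vocab_size, dim)   # the big matrix of word embeddings
+
+# view 1: index lookup, i.e. grab row 2 of the matrix (zero-indexed)
+v1 = emb(torch.tensor(2))
+
+# view 2: multiply a one-hot vector by the same matrix, identical result
+one_hot = torch.zeros(vocab_size)
+one_hot[2] = 1.0
+v2 = one_hot @ emb.weight                   # (V,) @ (V, D) -> (D,)
+assert torch.allclose(v1, v2)
+
+# the second view generalizes: a distribution over words gives a mixed
+# embedding, e.g. 50% cat, 30% dog, 20% bird as in the example above
+p = torch.tensor([0.5, 0.3, 0.2, 0.0, 0.0])
+mixed = p @ emb.weight
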
uh like the question uh so up + +706 +00:33:23,159 --> 00:33:27,519 +until now we trained a bag of words + +707 +00:33:25,840 --> 00:33:29,080 +model and the way we trained a bag of + +708 +00:33:27,519 --> 00:33:31,159 +words model was using the structured + +709 +00:33:29,080 --> 00:33:35,440 +perceptron algorithm where if the model + +710 +00:33:31,159 --> 00:33:39,639 +got the answer wrong we would either + +711 +00:33:35,440 --> 00:33:42,799 +increment or decrement the embeddings + +712 +00:33:39,639 --> 00:33:45,080 +based on whether uh whether the label + +713 +00:33:42,799 --> 00:33:46,559 +was positive or negative right so I + +714 +00:33:45,080 --> 00:33:48,919 +showed an example of this very simple + +715 +00:33:46,559 --> 00:33:51,039 +algorithm you don't even uh need to + +716 +00:33:48,919 --> 00:33:52,480 +write any like numpy or anything like + +717 +00:33:51,039 --> 00:33:55,919 +that to implement that + +718 +00:33:52,480 --> 00:33:59,559 +algorithm uh so here here it is so we + +719 +00:33:55,919 --> 00:34:02,320 +have like 4X why in uh data we extract + +720 +00:33:59,559 --> 00:34:04,639 +the features we run the classifier uh we + +721 +00:34:02,320 --> 00:34:07,440 +have the predicted why and then we + +722 +00:34:04,639 --> 00:34:09,480 +increment or decrement + +723 +00:34:07,440 --> 00:34:12,679 +features but how do we train more + +724 +00:34:09,480 --> 00:34:15,599 +complex models so I think most people + +725 +00:34:12,679 --> 00:34:17,079 +here have taken a uh machine learning + +726 +00:34:15,599 --> 00:34:19,159 +class of some kind so this will be + +727 +00:34:17,079 --> 00:34:21,079 +reviewed for a lot of people uh but + +728 +00:34:19,159 --> 00:34:22,280 +basically we do this uh by doing + +729 +00:34:21,079 --> 00:34:24,839 +gradient + +730 +00:34:22,280 --> 00:34:27,240 +descent and in order to do so we write + +731 +00:34:24,839 --> 00:34:29,919 +down a loss function calculate the + +732 +00:34:27,240 --> 00:34:30,919 +derivatives of the L function with + +733 +00:34:29,919 --> 00:34:35,079 +respect to the + +734 +00:34:30,919 --> 00:34:37,320 +parameters and move uh the parameters in + +735 +00:34:35,079 --> 00:34:40,839 +the direction that reduces the loss + +736 +00:34:37,320 --> 00:34:42,720 +mtion and so specifically for this bag + +737 +00:34:40,839 --> 00:34:45,560 +of words or continuous bag of words + +738 +00:34:42,720 --> 00:34:48,240 +model um we want this loss of function + +739 +00:34:45,560 --> 00:34:50,839 +to be a loss function that gets lower as + +740 +00:34:48,240 --> 00:34:52,240 +the model gets better and I'm going to + +741 +00:34:50,839 --> 00:34:54,000 +give two examples from binary + +742 +00:34:52,240 --> 00:34:57,400 +classification both of these are used in + +743 +00:34:54,000 --> 00:34:58,839 +NLP models uh reasonably frequently + +744 +00:34:57,400 --> 00:35:01,440 +uh there's a bunch of other loss + +745 +00:34:58,839 --> 00:35:02,800 +functions but these are kind of the two + +746 +00:35:01,440 --> 00:35:05,480 +major + +747 +00:35:02,800 --> 00:35:08,160 +ones so the first one um which is + +748 +00:35:05,480 --> 00:35:10,160 +actually less frequent is the hinge loss + +749 +00:35:08,160 --> 00:35:13,400 +and then the second one is taking a + +750 +00:35:10,160 --> 00:35:15,800 +sigmoid and then doing negative log + +751 +00:35:13,400 --> 00:35:19,760 +likelyhood so the hinge loss basically + +752 +00:35:15,800 --> 00:35:22,760 +what we do is we uh take the max of the + +753 +00:35:19,760 --> 00:35:26,119 +label times 
the score that is output by + +754 +00:35:22,760 --> 00:35:29,200 +the model and zero and what this looks + +755 +00:35:26,119 --> 00:35:33,480 +like is we have a hinged loss uh where + +756 +00:35:29,200 --> 00:35:36,880 +if Y is equal to one the loss if Y is + +757 +00:35:33,480 --> 00:35:39,520 +greater than zero is zero so as long as + +758 +00:35:36,880 --> 00:35:42,680 +we get basically as long as we get the + +759 +00:35:39,520 --> 00:35:45,079 +answer right there's no loss um as the + +760 +00:35:42,680 --> 00:35:47,400 +answer gets more wrong the loss gets + +761 +00:35:45,079 --> 00:35:49,880 +worse like this and then similarly if + +762 +00:35:47,400 --> 00:35:53,160 +the label is negative if we get a + +763 +00:35:49,880 --> 00:35:54,839 +negative score uh then we get zero loss + +764 +00:35:53,160 --> 00:35:55,800 +and the loss increases if we have a + +765 +00:35:54,839 --> 00:35:58,800 +positive + +766 +00:35:55,800 --> 00:36:00,800 +score so the sigmoid plus negative log + +767 +00:35:58,800 --> 00:36:05,440 +likelihood the way this works is you + +768 +00:36:00,800 --> 00:36:07,400 +multiply y * the score here and um then + +769 +00:36:05,440 --> 00:36:09,960 +we have the sigmoid function which is + +770 +00:36:07,400 --> 00:36:14,079 +just kind of a nice function that looks + +771 +00:36:09,960 --> 00:36:15,440 +like this with zero and one centered + +772 +00:36:14,079 --> 00:36:19,480 +around + +773 +00:36:15,440 --> 00:36:21,240 +zero and then we take the negative log + +774 +00:36:19,480 --> 00:36:22,319 +of this sigmoid function or the negative + +775 +00:36:21,240 --> 00:36:27,160 +log + +776 +00:36:22,319 --> 00:36:28,520 +likelihood and that gives us a uh L that + +777 +00:36:27,160 --> 00:36:30,440 +looks a little bit like this so + +778 +00:36:28,520 --> 00:36:32,640 +basically you can see that these look + +779 +00:36:30,440 --> 00:36:36,040 +very similar right the difference being + +780 +00:36:32,640 --> 00:36:37,760 +that the hinge loss is uh sharp and we + +781 +00:36:36,040 --> 00:36:41,119 +get exactly a zero loss if we get the + +782 +00:36:37,760 --> 00:36:44,319 +answer right and the sigmoid is smooth + +783 +00:36:41,119 --> 00:36:48,440 +uh and we never get a zero + +784 +00:36:44,319 --> 00:36:50,680 +loss um so does anyone have an idea of + +785 +00:36:48,440 --> 00:36:53,119 +the benefits and disadvantages of + +786 +00:36:50,680 --> 00:36:55,680 +these I kind of flashed one on the + +787 +00:36:53,119 --> 00:36:57,599 +screen already + +788 +00:36:55,680 --> 00:36:59,400 +but + +789 +00:36:57,599 --> 00:37:01,359 +so I flash that on the screen so I'll + +790 +00:36:59,400 --> 00:37:03,680 +give this one and then I can have a quiz + +791 +00:37:01,359 --> 00:37:06,319 +about the sign but the the hinge glass + +792 +00:37:03,680 --> 00:37:07,720 +is more closely linked to accuracy and + +793 +00:37:06,319 --> 00:37:10,400 +the reason why it's more closely linked + +794 +00:37:07,720 --> 00:37:13,640 +to accuracy is because basically we will + +795 +00:37:10,400 --> 00:37:16,079 +get a zero loss if the model gets the + +796 +00:37:13,640 --> 00:37:18,319 +answer right so when the model gets all + +797 +00:37:16,079 --> 00:37:20,240 +of the answers right we will just stop + +798 +00:37:18,319 --> 00:37:22,760 +updating our model whatsoever because we + +799 +00:37:20,240 --> 00:37:25,440 +never we don't have any loss whatsoever + +800 +00:37:22,760 --> 00:37:27,720 +and the gradient of the loss is zero um + +801 +00:37:25,440 --> 00:37:29,960 +what 
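
+A sketch of the two losses being compared, written so the hinge loss is zero
+exactly when the answer is right, i.e. max(0, -y * score) with y in {-1, +1}:
+
+import torch
+
+def hinge_loss(score, y):
+    # sharp: exactly zero as soon as y * score > 0
+    return torch.clamp(-y * score, min=0.0)
+
+def sigmoid_nll(score, y):
+    # smooth: -log sigmoid(y * score), never exactly zero
+    return -torch.nn.functional.logsigmoid(y * score)
+
+s = torch.tensor([-2.0, -0.5, 0.5, 2.0])
+print(hinge_loss(s, 1))    # tensor([2.0000, 0.5000, 0.0000, 0.0000])
+print(sigmoid_nll(s, 1))   # a smoothed version of the same shape
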
about the sigmoid uh a negative log

802
+00:37:27,720 --> 00:37:33,160
+likelihood uh there's kind of two

803
+00:37:29,960 --> 00:37:36,160
+major advantages of this anyone want to

804
+00:37:33,160 --> 00:37:36,160
+review their machine learning

805
+00:37:38,240 --> 00:37:41,800
+test sorry what was

806
+00:37:43,800 --> 00:37:49,960
+that for ROC uh yeah maybe there's a

807
+00:37:48,200 --> 00:37:51,319
+more direct I think I know what you're

808
+00:37:49,960 --> 00:37:54,560
+saying but maybe there's a more direct

809
+00:37:51,319 --> 00:37:54,560
+way to say that um

810
+00:37:54,839 --> 00:38:00,760
+yeah yeah so the gradient is nonzero

811
+00:37:57,560 --> 00:38:04,240
+everywhere and uh the gradient also kind

812
+00:38:00,760 --> 00:38:05,839
+of increases as your score gets worse so

813
+00:38:04,240 --> 00:38:08,440
+that's one advantage it makes

814
+00:38:05,839 --> 00:38:11,240
+it easier to optimize models um another

815
+00:38:08,440 --> 00:38:13,839
+one is linked to the ROC score but maybe we

816
+00:38:11,240 --> 00:38:13,839
+could say it more

817
+00:38:16,119 --> 00:38:19,400
+directly any

818
+00:38:20,040 --> 00:38:26,920
+ideas okay um basically the sigmoid can

819
+00:38:23,240 --> 00:38:30,160
+be interpreted as a probability so um

820
+00:38:26,920 --> 00:38:32,839
+the sigmoid is between zero and one

821
+00:38:30,160 --> 00:38:34,640
+uh and because it's between zero and one

822
+00:38:32,839 --> 00:38:36,720
+we can say the sigmoid is a

823
+00:38:34,640 --> 00:38:38,640
+probability um and that can be useful

824
+00:38:36,720 --> 00:38:40,119
+for various things like if we want a

825
+00:38:38,640 --> 00:38:41,960
+downstream model or if we want a

826
+00:38:40,119 --> 00:38:45,480
+confidence prediction out of the model

827
+00:38:41,960 --> 00:38:48,200
+so those are two uh advantages of using

828
+00:38:45,480 --> 00:38:49,920
+a sigmoid plus negative log likelihood there's

829
+00:38:48,200 --> 00:38:53,160
+no probabilistic interpretation to

830
+00:38:49,920 --> 00:38:56,560
+something trained with the hinge loss

831
+00:38:53,160 --> 00:38:59,200
+basically cool um so the next thing that

832
+00:38:56,560 --> 00:39:01,240
+we do is we calculate derivatives

833
+00:38:59,200 --> 00:39:04,040
+and we calculate the derivative of the

834
+00:39:01,240 --> 00:39:05,920
+parameter given the loss function um to

835
+00:39:04,040 --> 00:39:09,839
+give an example of the bag of words

836
+00:39:05,920 --> 00:39:13,480
+model and the hinge loss um the hinge

837
+00:39:09,839 --> 00:39:16,480
+loss as I said is the max of the score

838
+00:39:13,480 --> 00:39:19,359
+times y and zero in the bag of words model

839
+00:39:16,480 --> 00:39:22,640
+the score was the frequency of that

840
+00:39:19,359 --> 00:39:25,880
+vocabulary item in the input multiplied

841
+00:39:22,640 --> 00:39:27,680
+by the weight here and so this is

842
+00:39:25,880 --> 00:39:29,520
+a simple function that I can just do

843
+00:39:27,680 --> 00:39:34,440
+the derivative by hand and if I do the

844
+00:39:29,520 --> 00:39:36,920
+derivative by hand what comes out is if y times

845
+00:39:34,440 --> 00:39:39,319
+this value is greater than zero so in

846
+00:39:36,920 --> 00:39:44,640
+other words if this max uh picks this

847
+00:39:39,319 --> 00:39:48,319
+instead of this then the derivative is y

848
+00:39:44,640 --> 00:39:52,359
+times the frequency vector and
otherwise uh it + +849 +00:39:48,319 --> 00:39:52,359 +is in the opposite + +850 +00:39:55,400 --> 00:40:00,160 +direction + +851 +00:39:56,920 --> 00:40:02,839 +then uh optimizing gradients uh we do + +852 +00:40:00,160 --> 00:40:06,200 +standard uh in standard stochastic + +853 +00:40:02,839 --> 00:40:07,839 +gradient descent uh which is the most + +854 +00:40:06,200 --> 00:40:10,920 +standard optimization algorithm for + +855 +00:40:07,839 --> 00:40:14,440 +these models uh we basically have a + +856 +00:40:10,920 --> 00:40:17,440 +gradient over uh you take the gradient + +857 +00:40:14,440 --> 00:40:20,040 +over the parameter of the loss function + +858 +00:40:17,440 --> 00:40:22,480 +and we call it GT so here um sorry I + +859 +00:40:20,040 --> 00:40:25,599 +switched my terminology between W and + +860 +00:40:22,480 --> 00:40:28,280 +Theta so this could be W uh the previous + +861 +00:40:25,599 --> 00:40:31,000 +value of w + +862 +00:40:28,280 --> 00:40:35,440 +um and this is the gradient of the loss + +863 +00:40:31,000 --> 00:40:37,040 +and then uh we take the previous value + +864 +00:40:35,440 --> 00:40:39,680 +and then we subtract out the learning + +865 +00:40:37,040 --> 00:40:39,680 +rate times the + +866 +00:40:40,680 --> 00:40:45,720 +gradient and uh there are many many + +867 +00:40:43,200 --> 00:40:47,280 +other optimization options uh I'll cover + +868 +00:40:45,720 --> 00:40:50,960 +the more frequent one called Adam at the + +869 +00:40:47,280 --> 00:40:54,319 +end of this uh this lecture but um this + +870 +00:40:50,960 --> 00:40:57,160 +is the basic way of optimizing the + +871 +00:40:54,319 --> 00:41:00,599 +model so + +872 +00:40:57,160 --> 00:41:03,359 +then my question now is what is this + +873 +00:41:00,599 --> 00:41:07,000 +algorithm with respect + +874 +00:41:03,359 --> 00:41:10,119 +to this is an algorithm that is + +875 +00:41:07,000 --> 00:41:12,280 +taking that has a loss function it's + +876 +00:41:10,119 --> 00:41:14,079 +calculating derivatives and it's + +877 +00:41:12,280 --> 00:41:17,240 +optimizing gradients using stochastic + +878 +00:41:14,079 --> 00:41:18,839 +gradient descent so does anyone have a + +879 +00:41:17,240 --> 00:41:20,960 +guess about what the loss function is + +880 +00:41:18,839 --> 00:41:23,520 +here and maybe what is the learning rate + +881 +00:41:20,960 --> 00:41:23,520 +of stas + +882 +00:41:24,319 --> 00:41:29,480 +gradient I kind of gave you a hint about + +883 +00:41:26,599 --> 00:41:29,480 +the L one + +884 +00:41:31,640 --> 00:41:37,839 +actually and just to recap what this is + +885 +00:41:34,440 --> 00:41:41,440 +doing here it's um if predicted Y is + +886 +00:41:37,839 --> 00:41:44,560 +equal to Y then it is moving the uh the + +887 +00:41:41,440 --> 00:41:48,240 +future weights in the direction of Y + +888 +00:41:44,560 --> 00:41:48,240 +times the frequency + +889 +00:41:52,599 --> 00:41:56,960 +Vector + +890 +00:41:55,240 --> 00:41:59,079 +yeah + +891 +00:41:56,960 --> 00:42:01,640 +yeah exactly so the loss function is + +892 +00:41:59,079 --> 00:42:05,800 +hinge loss and the learning rate is one + +893 +00:42:01,640 --> 00:42:07,880 +um and just to show how that you know + +894 +00:42:05,800 --> 00:42:12,359 +corresponds we have this if statement + +895 +00:42:07,880 --> 00:42:12,359 +here and we have the increment of the + +896 +00:42:12,960 --> 00:42:20,240 +features and this is what the um what + +897 +00:42:16,920 --> 00:42:21,599 +the L sorry the derivative looked like + +898 +00:42:20,240 --> 00:42:24,240 +so we 
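
+The equivalence this quiz is pointing at, as a minimal sketch (names here are
+illustrative): the structured perceptron update is stochastic gradient
+descent on the hinge loss with a learning rate of one.
+
+import numpy as np
+
+def perceptron_as_sgd(data, vocab_size, epochs=1, lr=1.0):
+    # grad_w max(0, -y * (w @ x)) is -y * x when the example is scored
+    # wrongly, and zero otherwise; with lr = 1 the SGD step
+    # w <- w - lr * grad is exactly the perceptron increment
+    w = np.zeros(vocab_size)
+    for _ in range(epochs):
+        for x, y in data:            # x: frequency vector, y: +1 or -1
+            if y * (w @ x) <= 0:     # wrong (or zero) score -> nonzero gradient
+                w += lr * y * x
+    return w
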
have + +899 +00:42:21,599 --> 00:42:26,920 +if this is moving in the right direction + +900 +00:42:24,240 --> 00:42:29,520 +for the label uh then we increment + +901 +00:42:26,920 --> 00:42:31,599 +otherwise we do nothing so + +902 +00:42:29,520 --> 00:42:33,559 +basically you can see that even this + +903 +00:42:31,599 --> 00:42:35,200 +really simple algorithm that I you know + +904 +00:42:33,559 --> 00:42:37,480 +implemented with a few lines of python + +905 +00:42:35,200 --> 00:42:38,839 +is essentially equivalent to this uh + +906 +00:42:37,480 --> 00:42:40,760 +stochastic gradient descent that we + +907 +00:42:38,839 --> 00:42:44,559 +doing + +908 +00:42:40,760 --> 00:42:46,359 +models so the good news about this is + +909 +00:42:44,559 --> 00:42:48,359 +you know this this is really simple but + +910 +00:42:46,359 --> 00:42:50,599 +it only really works forit like a bag of + +911 +00:42:48,359 --> 00:42:55,400 +words model or a simple feature based + +912 +00:42:50,599 --> 00:42:57,200 +model uh but it opens up a lot of uh new + +913 +00:42:55,400 --> 00:43:00,440 +possibilities for how we can optimize + +914 +00:42:57,200 --> 00:43:01,599 +models and in particular I mentioned uh + +915 +00:43:00,440 --> 00:43:04,839 +that there was a problem with + +916 +00:43:01,599 --> 00:43:08,200 +combination features last class like + +917 +00:43:04,839 --> 00:43:11,200 +don't hate and don't love are not just + +918 +00:43:08,200 --> 00:43:12,760 +you know hate plus don't and love plus + +919 +00:43:11,200 --> 00:43:14,119 +don't it's actually the combination of + +920 +00:43:12,760 --> 00:43:17,680 +the two is really + +921 +00:43:14,119 --> 00:43:20,160 +important and so um yeah just to give an + +922 +00:43:17,680 --> 00:43:23,440 +example we have don't love is maybe bad + +923 +00:43:20,160 --> 00:43:26,960 +uh nothing I don't love is very + +924 +00:43:23,440 --> 00:43:30,960 +good and so in order + +925 +00:43:26,960 --> 00:43:34,040 +to solve this problem we turn to neural + +926 +00:43:30,960 --> 00:43:37,160 +networks and the way we do this is we + +927 +00:43:34,040 --> 00:43:39,119 +have a lookup of dense embeddings sorry + +928 +00:43:37,160 --> 00:43:41,839 +I actually I just realized my coloring + +929 +00:43:39,119 --> 00:43:44,119 +is off I was using red to indicate dense + +930 +00:43:41,839 --> 00:43:46,480 +embeddings so this should be maybe red + +931 +00:43:44,119 --> 00:43:49,319 +instead of blue but um we take these + +932 +00:43:46,480 --> 00:43:51,200 +stents embeddings and then we create + +933 +00:43:49,319 --> 00:43:53,720 +some complicated function to extract + +934 +00:43:51,200 --> 00:43:55,079 +combination features um and then use + +935 +00:43:53,720 --> 00:43:57,359 +those to calculate + +936 +00:43:55,079 --> 00:44:02,200 +scores + +937 +00:43:57,359 --> 00:44:04,480 +um and so we calculate these combination + +938 +00:44:02,200 --> 00:44:08,240 +features and what we want to do is we + +939 +00:44:04,480 --> 00:44:12,880 +want to extract vectors from the input + +940 +00:44:08,240 --> 00:44:12,880 +where each Vector has features + +941 +00:44:15,839 --> 00:44:21,040 +um sorry this is in the wrong order so + +942 +00:44:18,240 --> 00:44:22,559 +I'll I'll get back to this um so this + +943 +00:44:21,040 --> 00:44:25,319 +this was talking about the The + +944 +00:44:22,559 --> 00:44:27,200 +Continuous bag of words features so the + +945 +00:44:25,319 --> 00:44:30,960 +problem with the continuous bag of words + +946 +00:44:27,200 --> 00:44:30,960 +features was we 
were extracting + +947 +00:44:31,359 --> 00:44:36,359 +features + +948 +00:44:33,079 --> 00:44:36,359 +um like + +949 +00:44:36,839 --> 00:44:41,400 +this but then we were directly using the + +950 +00:44:39,760 --> 00:44:43,359 +the feature the dense features that we + +951 +00:44:41,400 --> 00:44:45,559 +extracted to make predictions without + +952 +00:44:43,359 --> 00:44:48,839 +actually allowing for any interactions + +953 +00:44:45,559 --> 00:44:51,839 +between the features um and + +954 +00:44:48,839 --> 00:44:55,160 +so uh neural networks the way we fix + +955 +00:44:51,839 --> 00:44:57,079 +this is we first extract these features + +956 +00:44:55,160 --> 00:44:59,440 +uh we take these these features of each + +957 +00:44:57,079 --> 00:45:04,000 +word embedding and then we run them + +958 +00:44:59,440 --> 00:45:07,240 +through uh kind of linear transforms in + +959 +00:45:04,000 --> 00:45:09,880 +nonlinear uh like linear multiplications + +960 +00:45:07,240 --> 00:45:10,880 +and then nonlinear transforms to extract + +961 +00:45:09,880 --> 00:45:13,920 +additional + +962 +00:45:10,880 --> 00:45:15,839 +features and uh finally run this through + +963 +00:45:13,920 --> 00:45:18,640 +several layers and then use the + +964 +00:45:15,839 --> 00:45:21,119 +resulting features to make our + +965 +00:45:18,640 --> 00:45:23,200 +predictions and when we do this this + +966 +00:45:21,119 --> 00:45:25,319 +allows us to do more uh interesting + +967 +00:45:23,200 --> 00:45:28,319 +things so like for example we could + +968 +00:45:25,319 --> 00:45:30,000 +learn feature combination a node in the + +969 +00:45:28,319 --> 00:45:32,599 +second layer might be feature one and + +970 +00:45:30,000 --> 00:45:35,240 +feature five are active so that could be + +971 +00:45:32,599 --> 00:45:38,680 +like feature one corresponds to negative + +972 +00:45:35,240 --> 00:45:43,640 +sentiment words like hate + +973 +00:45:38,680 --> 00:45:45,839 +despise um and other things like that so + +974 +00:45:43,640 --> 00:45:50,079 +for hate and despise feature one would + +975 +00:45:45,839 --> 00:45:53,119 +have a high value like 8.0 and then + +976 +00:45:50,079 --> 00:45:55,480 +7.2 and then we also have negation words + +977 +00:45:53,119 --> 00:45:57,040 +like don't or not or something like that + +978 +00:45:55,480 --> 00:46:00,040 +and those would + +979 +00:45:57,040 --> 00:46:00,040 +have + +980 +00:46:03,720 --> 00:46:08,640 +don't would have a high value for like 2 + +981 +00:46:11,880 --> 00:46:15,839 +five and so these would be the word + +982 +00:46:14,200 --> 00:46:18,040 +embeddings where each word embedding + +983 +00:46:15,839 --> 00:46:20,599 +corresponded to you know features of the + +984 +00:46:18,040 --> 00:46:23,480 +words and + +985 +00:46:20,599 --> 00:46:25,480 +then um after that we would extract + +986 +00:46:23,480 --> 00:46:29,319 +feature combinations in this second + +987 +00:46:25,480 --> 00:46:32,079 +layer that say oh we see at least one + +988 +00:46:29,319 --> 00:46:33,760 +word where the first feature is active + +989 +00:46:32,079 --> 00:46:36,359 +and we see at least one word where the + +990 +00:46:33,760 --> 00:46:37,920 +fifth feature is active so now that + +991 +00:46:36,359 --> 00:46:40,640 +allows us to capture the fact that we + +992 +00:46:37,920 --> 00:46:42,319 +saw like don't hate or don't despise or + +993 +00:46:40,640 --> 00:46:44,559 +not hate or not despise or something + +994 +00:46:42,319 --> 00:46:44,559 +like + +995 +00:46:45,079 --> 00:46:51,760 +that so this 
is the way uh kind of this + +996 +00:46:49,680 --> 00:46:54,839 +is a deep uh continuous bag of words + +997 +00:46:51,760 --> 00:46:56,839 +model um this actually was proposed in + +998 +00:46:54,839 --> 00:46:58,119 +205 15 I don't think I have the + +999 +00:46:56,839 --> 00:47:02,599 +reference on the slide but I think it's + +1000 +00:46:58,119 --> 00:47:05,040 +in the notes um on the website and + +1001 +00:47:02,599 --> 00:47:07,200 +actually at that point in time they + +1002 +00:47:05,040 --> 00:47:09,200 +demon there were several interesting + +1003 +00:47:07,200 --> 00:47:11,960 +results that showed that even this like + +1004 +00:47:09,200 --> 00:47:13,960 +really simple model did really well uh + +1005 +00:47:11,960 --> 00:47:16,319 +at text classification and other simple + +1006 +00:47:13,960 --> 00:47:18,640 +tasks like that because it was able to + +1007 +00:47:16,319 --> 00:47:21,720 +you know share features of the words and + +1008 +00:47:18,640 --> 00:47:23,800 +then extract combinations to the + +1009 +00:47:21,720 --> 00:47:28,200 +features + +1010 +00:47:23,800 --> 00:47:29,760 +so um in order order to learn these we + +1011 +00:47:28,200 --> 00:47:30,920 +need to start turning to neural networks + +1012 +00:47:29,760 --> 00:47:34,400 +and the reason why we need to start + +1013 +00:47:30,920 --> 00:47:38,040 +turning to neural networks is + +1014 +00:47:34,400 --> 00:47:41,920 +because while I can calculate the loss + +1015 +00:47:38,040 --> 00:47:43,280 +function of the while I can calculate + +1016 +00:47:41,920 --> 00:47:44,839 +the loss function of the hinged loss for + +1017 +00:47:43,280 --> 00:47:47,720 +a bag of words model by hand I + +1018 +00:47:44,839 --> 00:47:49,359 +definitely don't I probably could but + +1019 +00:47:47,720 --> 00:47:51,240 +don't want to do it for a model that + +1020 +00:47:49,359 --> 00:47:53,200 +starts become as complicated as this + +1021 +00:47:51,240 --> 00:47:57,440 +with multiple Matrix multiplications + +1022 +00:47:53,200 --> 00:48:00,520 +Andes and stuff like that so the way we + +1023 +00:47:57,440 --> 00:48:05,000 +do this just a very brief uh coverage of + +1024 +00:48:00,520 --> 00:48:06,200 +this uh for because um I think probably + +1025 +00:48:05,000 --> 00:48:08,400 +a lot of people have dealt with neural + +1026 +00:48:06,200 --> 00:48:10,200 +networks before um the original + +1027 +00:48:08,400 --> 00:48:12,880 +motivation was that we had neurons in + +1028 +00:48:10,200 --> 00:48:16,160 +the brain uh where + +1029 +00:48:12,880 --> 00:48:18,839 +the each of the neuron synapses took in + +1030 +00:48:16,160 --> 00:48:21,480 +an electrical signal and once they got + +1031 +00:48:18,839 --> 00:48:24,079 +enough electrical signal they would fire + +1032 +00:48:21,480 --> 00:48:25,960 +um but now the current conception of + +1033 +00:48:24,079 --> 00:48:28,160 +neural networks or deep learning models + +1034 +00:48:25,960 --> 00:48:30,440 +is basically computation + +1035 +00:48:28,160 --> 00:48:32,400 +graphs and the way a computation graph + +1036 +00:48:30,440 --> 00:48:34,760 +Works um and I'm especially going to + +1037 +00:48:32,400 --> 00:48:36,240 +talk about the way it works in natural + +1038 +00:48:34,760 --> 00:48:38,119 +language processing which might be a + +1039 +00:48:36,240 --> 00:48:42,319 +contrast to the way it works in computer + +1040 +00:48:38,119 --> 00:48:43,960 +vision is um we have an expression uh + +1041 +00:48:42,319 --> 00:48:46,480 +that looks like this and maybe maybe + +1042 
+00:48:43,960 --> 00:48:47,640
+it's the expression x corresponding to

1043
+00:48:46,480 --> 00:48:51,880
+uh a

1044
+00:48:47,640 --> 00:48:53,400
+scalar um and each node corresponds to

1045
+00:48:51,880 --> 00:48:55,599
+something like a tensor a matrix a

1046
+00:48:53,400 --> 00:48:57,599
+vector a scalar so a scalar is uh

1047
+00:48:55,599 --> 00:49:00,480
+kind of zero-dimensional it's a single

1048
+00:48:57,599 --> 00:49:01,720
+value one dimensional two dimensional or

1049
+00:49:00,480 --> 00:49:04,200
+arbitrary

1050
+00:49:01,720 --> 00:49:06,040
+dimensional um and then we also have

1051
+00:49:04,200 --> 00:49:08,000
+nodes that correspond to the result of

1052
+00:49:06,040 --> 00:49:11,480
+function applications so if we have x be

1053
+00:49:08,000 --> 00:49:14,079
+a vector uh we take the vector transpose

1054
+00:49:11,480 --> 00:49:18,160
+and so each edge represents a function

1055
+00:49:14,079 --> 00:49:20,559
+argument and also a data

1056
+00:49:18,160 --> 00:49:23,960
+dependency and a node with an incoming

1057
+00:49:20,559 --> 00:49:27,000
+edge is a function of that edge's tail

1058
+00:49:23,960 --> 00:49:29,040
+node and importantly each node knows how

1059
+00:49:27,000 --> 00:49:30,640
+to compute its value and the value of

1060
+00:49:29,040 --> 00:49:32,640
+its derivative with respect to each

1061
+00:49:30,640 --> 00:49:34,440
+argument times the derivative of an

1062
+00:49:32,640 --> 00:49:37,920
+arbitrary

1063
+00:49:34,440 --> 00:49:41,000
+input and functions could be basically

1064
+00:49:37,920 --> 00:49:45,400
+arbitrary functions they can be unary

1065
+00:49:41,000 --> 00:49:49,440
+binary or n-ary often unary or binary

1066
+00:49:45,400 --> 00:49:52,400
+and computation graphs are directed

1067
+00:49:49,440 --> 00:49:57,040
+acyclic and um one important thing to

1068
+00:49:52,400 --> 00:50:00,640
+note is that you can um have multiple

1069
+00:49:57,040 --> 00:50:02,559
+ways of expressing the same function so

1070
+00:50:00,640 --> 00:50:04,839
+this is actually really important as you

1071
+00:50:02,559 --> 00:50:06,920
+start implementing things and the reason

1072
+00:50:04,839 --> 00:50:09,359
+why is the left graph and the right

1073
+00:50:06,920 --> 00:50:12,960
+graph both express the same thing the

1074
+00:50:09,359 --> 00:50:18,640
+left graph expresses x

1075
+00:50:12,960 --> 00:50:22,559
+transpose times A times x whereas

1076
+00:50:18,640 --> 00:50:27,160
+this one has x and A and then it puts them

1077
+00:50:22,559 --> 00:50:28,760
+into a node that is x transpose A x

1078
+00:50:27,160 --> 00:50:30,319
+and so these express exactly the same

1079
+00:50:28,760 --> 00:50:32,319
+thing but the graph on the left is

1080
+00:50:30,319 --> 00:50:33,760
+larger and the reason why this is

1081
+00:50:32,319 --> 00:50:38,920
+important is for practical

1082
+00:50:33,760 --> 00:50:40,359
+implementation of neural networks um

1083
+00:50:38,920 --> 00:50:43,200
+the larger graphs are going to take more

1084
+00:50:40,359 --> 00:50:46,799
+memory and going to be slower usually

1085
+00:50:43,200 --> 00:50:48,200
+and so often um in a neural network we

1086
+00:50:46,799 --> 00:50:49,559
+look at like PyTorch which we're going

1087
+00:50:48,200 --> 00:50:52,160
+to look at in a

1088
+00:50:49,559 --> 00:50:55,520
+second

1089
+00:50:52,160 --> 00:50:57,920
+um you will
have something you will be + +1090 +00:50:55,520 --> 00:50:57,920 +able to + +1091 +00:50:58,680 --> 00:51:01,680 +do + +1092 +00:51:03,079 --> 00:51:07,880 +this or you'll be able to do + +1093 +00:51:18,760 --> 00:51:22,880 +like + +1094 +00:51:20,359 --> 00:51:24,839 +this so these are two different options + +1095 +00:51:22,880 --> 00:51:26,920 +this one is using more operations and + +1096 +00:51:24,839 --> 00:51:29,559 +this one is using using less operations + +1097 +00:51:26,920 --> 00:51:31,000 +and this is going to be faster because + +1098 +00:51:29,559 --> 00:51:33,119 +basically the implementation within + +1099 +00:51:31,000 --> 00:51:34,799 +Pythor will have been optimized for you + +1100 +00:51:33,119 --> 00:51:36,799 +it will only require one graph node + +1101 +00:51:34,799 --> 00:51:37,880 +instead of multiple graph nodes and + +1102 +00:51:36,799 --> 00:51:39,799 +that's even more important when you + +1103 +00:51:37,880 --> 00:51:41,040 +start talking about like attention or + +1104 +00:51:39,799 --> 00:51:43,920 +something like that which we're going to + +1105 +00:51:41,040 --> 00:51:46,079 +be covering very soon um attention is a + +1106 +00:51:43,920 --> 00:51:47,359 +very multi-head attention or something + +1107 +00:51:46,079 --> 00:51:49,839 +like that is a very complicated + +1108 +00:51:47,359 --> 00:51:52,079 +operation so you want to make sure that + +1109 +00:51:49,839 --> 00:51:54,359 +you're using the operators that are + +1110 +00:51:52,079 --> 00:51:57,359 +available to you to make this more + +1111 +00:51:54,359 --> 00:51:57,359 +efficient + +1112 +00:51:57,440 --> 00:52:00,760 +um and then finally we could like add + +1113 +00:51:59,280 --> 00:52:01,920 +all of these together at the end we + +1114 +00:52:00,760 --> 00:52:04,000 +could add a + +1115 +00:52:01,920 --> 00:52:05,880 +constant um and then we get this + +1116 +00:52:04,000 --> 00:52:09,520 +expression here which gives us kind of a + +1117 +00:52:05,880 --> 00:52:09,520 +polinomial polom + +1118 +00:52:09,680 --> 00:52:15,760 +expression um also another thing to note + +1119 +00:52:13,480 --> 00:52:17,599 +is within a neural network computation + +1120 +00:52:15,760 --> 00:52:21,920 +graph variable names are just labelings + +1121 +00:52:17,599 --> 00:52:25,359 +of nodes and so if you're using a a + +1122 +00:52:21,920 --> 00:52:27,680 +computation graph like this you might + +1123 +00:52:25,359 --> 00:52:29,240 +only be declaring one variable here but + +1124 +00:52:27,680 --> 00:52:30,839 +actually there's a whole bunch of stuff + +1125 +00:52:29,240 --> 00:52:32,359 +going on behind the scenes and all of + +1126 +00:52:30,839 --> 00:52:34,240 +that will take memory and computation + +1127 +00:52:32,359 --> 00:52:35,440 +time and stuff like that so it's + +1128 +00:52:34,240 --> 00:52:37,119 +important to be aware of that if you + +1129 +00:52:35,440 --> 00:52:40,400 +want to make your implementations more + +1130 +00:52:37,119 --> 00:52:40,400 +efficient than other other + +1131 +00:52:41,119 --> 00:52:46,680 +things so we have several algorithms + +1132 +00:52:44,480 --> 00:52:49,079 +that go into implementing neural nuts um + +1133 +00:52:46,680 --> 00:52:50,760 +the first one is graph construction uh + +1134 +00:52:49,079 --> 00:52:53,480 +the second one is forward + +1135 +00:52:50,760 --> 00:52:54,839 +propagation uh and graph construction is + +1136 +00:52:53,480 --> 00:52:56,359 +basically constructing the graph + +1137 +00:52:54,839 --> 00:52:58,680 +declaring ing all the 
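
+The on-screen code is not captured in the transcript; a plausible pair of
+options for the x-transpose-A-x example in PyTorch, one built from several
+graph nodes and one using a single fused operator, would be:
+
+import torch
+
+x = torch.randn(4)
+A = torch.randn(4, 4)
+
+# several nodes: two matmuls plus reshapes, each with an intermediate result
+y1 = torch.matmul(torch.matmul(x.unsqueeze(0), A), x.unsqueeze(1)).squeeze()
+
+# one node computing the same thing
+y2 = torch.einsum("i,ij,j->", x, A, x)
+
+assert torch.allclose(y1, y2)
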
variables stuff + +1138 +00:52:56,359 --> 00:53:01,520 +like this the second one is forward + +1139 +00:52:58,680 --> 00:53:03,880 +propagation and um the way you do this + +1140 +00:53:01,520 --> 00:53:06,480 +is in topological order uh you compute + +1141 +00:53:03,880 --> 00:53:08,280 +the value of a node given its inputs and + +1142 +00:53:06,480 --> 00:53:11,000 +so basically you start out with all of + +1143 +00:53:08,280 --> 00:53:12,680 +the nodes that you give is input and + +1144 +00:53:11,000 --> 00:53:16,040 +then you find any node in the graph + +1145 +00:53:12,680 --> 00:53:17,799 +where all of its uh all of its tail + +1146 +00:53:16,040 --> 00:53:20,280 +nodes or all of its children have been + +1147 +00:53:17,799 --> 00:53:22,119 +calculated so in this case that would be + +1148 +00:53:20,280 --> 00:53:24,640 +these two nodes and then in arbitrary + +1149 +00:53:22,119 --> 00:53:27,000 +order or even in parallel you calculate + +1150 +00:53:24,640 --> 00:53:28,280 +the value of all of the satisfied nodes + +1151 +00:53:27,000 --> 00:53:31,799 +until you get to the + +1152 +00:53:28,280 --> 00:53:34,280 +end and then uh the remaining algorithms + +1153 +00:53:31,799 --> 00:53:36,200 +are back propagation and parameter + +1154 +00:53:34,280 --> 00:53:38,240 +update I already talked about parameter + +1155 +00:53:36,200 --> 00:53:40,799 +update uh using stochastic gradient + +1156 +00:53:38,240 --> 00:53:42,760 +descent but for back propagation we then + +1157 +00:53:40,799 --> 00:53:45,400 +process examples in Reverse topological + +1158 +00:53:42,760 --> 00:53:47,640 +order uh calculate derivatives of + +1159 +00:53:45,400 --> 00:53:50,400 +parameters with respect to final + +1160 +00:53:47,640 --> 00:53:52,319 +value and so we start out with the very + +1161 +00:53:50,400 --> 00:53:54,200 +final value usually this is your loss + +1162 +00:53:52,319 --> 00:53:56,200 +function and then you just step + +1163 +00:53:54,200 --> 00:54:00,440 +backwards in top ological order to + +1164 +00:53:56,200 --> 00:54:04,160 +calculate the derivatives of all these + +1165 +00:54:00,440 --> 00:54:05,920 +so um this is pretty simple I think a + +1166 +00:54:04,160 --> 00:54:08,040 +lot of people may have seen this already + +1167 +00:54:05,920 --> 00:54:09,920 +but keeping this in mind as you're + +1168 +00:54:08,040 --> 00:54:12,480 +implementing NLP models especially + +1169 +00:54:09,920 --> 00:54:14,240 +models that are really memory intensive + +1170 +00:54:12,480 --> 00:54:16,559 +or things like that is pretty important + +1171 +00:54:14,240 --> 00:54:19,040 +because if you accidentally like for + +1172 +00:54:16,559 --> 00:54:21,799 +example calculate the same thing twice + +1173 +00:54:19,040 --> 00:54:23,559 +or accidentally create a graph that is + +1174 +00:54:21,799 --> 00:54:25,720 +manipulating very large tensors and + +1175 +00:54:23,559 --> 00:54:27,319 +creating very large intermediate States + +1176 +00:54:25,720 --> 00:54:29,720 +that can kill your memory and and cause + +1177 +00:54:27,319 --> 00:54:31,839 +big problems so it's an important thing + +1178 +00:54:29,720 --> 00:54:31,839 +to + +1179 +00:54:34,359 --> 00:54:38,880 +be um cool any any questions about + +1180 +00:54:39,040 --> 00:54:44,440 +this okay if not I will go on to the + +1181 +00:54:41,680 --> 00:54:45,680 +next one so neural network Frameworks + +1182 +00:54:44,440 --> 00:54:48,920 +there's several neural network + +1183 +00:54:45,680 --> 00:54:52,880 +Frameworks but in NLP nowadays I really + +1184 
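
+A minimal pass through the cycle just described (graph construction, forward
+propagation, back propagation, parameter update), assuming PyTorch autograd
+and the hinge loss from earlier:
+
+import torch
+
+w = torch.randn(3, requires_grad=True)     # parameters
+x = torch.tensor([1.0, 2.0, 0.0])          # features
+y = 1.0                                    # label
+
+loss = torch.clamp(-y * (w @ x), min=0.0)  # forward pass builds the graph
+loss.backward()                            # backprop in reverse topological order
+with torch.no_grad():
+    w -= 0.1 * w.grad                      # SGD: theta_t = theta_{t-1} - eta * g_t
+    w.grad.zero_()
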
+1181
+00:54:41,680 --> 00:54:45,680
+so neural network frameworks
+
+1182
+00:54:44,440 --> 00:54:48,920
+there's several neural network
+
+1183
+00:54:45,680 --> 00:54:52,880
+frameworks but in NLP nowadays I really
+
+1184
+00:54:48,920 --> 00:54:55,079
+only see two and mostly only see one um
+
+1185
+00:54:52,880 --> 00:54:57,960
+so the one that almost everybody
+
+1186
+00:54:55,079 --> 00:55:01,240
+uses is PyTorch um and I would
+
+1187
+00:54:57,960 --> 00:55:04,559
+recommend using it unless you
+
+1188
+00:55:01,240 --> 00:55:07,480
+know if you're a fan of like Rust or you
+
+1189
+00:55:04,559 --> 00:55:09,200
+know esoteric not esoteric but like
+
+1190
+00:55:07,480 --> 00:55:11,960
+unusual programming languages and you
+
+1191
+00:55:09,200 --> 00:55:14,720
+like beauty and things like this another
+
+1192
+00:55:11,960 --> 00:55:15,799
+option might be JAX uh so I'll explain
+
+1193
+00:55:14,720 --> 00:55:18,440
+a little bit about the difference
+
+1194
+00:55:15,799 --> 00:55:19,960
+between them and you can pick
+
+1195
+00:55:18,440 --> 00:55:23,559
+accordingly
+
+1196
+00:55:19,960 --> 00:55:25,359
+um first both of these frameworks
+
+1197
+00:55:23,559 --> 00:55:26,839
+are developed by big companies and they
+
+1198
+00:55:25,359 --> 00:55:28,520
+have a lot of engineering support behind
+
+1199
+00:55:26,839 --> 00:55:29,720
+them that's kind of an important thing
+
+1200
+00:55:28,520 --> 00:55:31,280
+to think about when you're deciding
+
+1201
+00:55:29,720 --> 00:55:32,599
+which framework to use because you know
+
+1202
+00:55:31,280 --> 00:55:36,000
+it'll be well
+
+1203
+00:55:32,599 --> 00:55:38,039
+supported um PyTorch is definitely most
+
+1204
+00:55:36,000 --> 00:55:40,400
+widely used in NLP especially NLP
+
+1205
+00:55:38,039 --> 00:55:44,240
+research um and it's used in most NLP
+
+1206
+00:55:40,400 --> 00:55:47,359
+projects JAX is used in some NLP
+
+1207
+00:55:44,240 --> 00:55:49,960
+projects um PyTorch favors dynamic
+
+1208
+00:55:47,359 --> 00:55:53,760
+execution so what dynamic execution
+
+1209
+00:55:49,960 --> 00:55:55,880
+means is you basically create a
+
+1210
+00:55:53,760 --> 00:55:59,760
+computation graph and then execute
+
+1211
+00:55:55,880 --> 00:56:02,760
+it every time you process an input
+
+1212
+00:55:59,760 --> 00:56:04,680
+in contrast there's also a style where you define the
+
+1213
+00:56:02,760 --> 00:56:07,200
+computation graph first and then execute
+
+1214
+00:56:04,680 --> 00:56:09,280
+it over and over again so in other words
+
+1215
+00:56:07,200 --> 00:56:10,680
+the graph construction step only happens
+
+1216
+00:56:09,280 --> 00:56:13,119
+once kind of at the beginning of
+
+1217
+00:56:10,680 --> 00:56:16,799
+computation and then you compile it
+
+1218
+00:56:13,119 --> 00:56:20,039
+afterwards and actually PyTorch
+
+1219
+00:56:16,799 --> 00:56:23,359
+supports kind of defining and compiling
+
+1220
+00:56:20,039 --> 00:56:27,480
+and JAX supports more dynamic things but
+
+1221
+00:56:23,359 --> 00:56:30,160
+the way they were designed is kind
+
+1222
+00:56:27,480 --> 00:56:32,960
+of favoring dynamic execution or
+
+1223
+00:56:30,160 --> 00:56:37,079
+favoring definition and compilation
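+
+As a rough illustration of that design difference, here is a sketch assuming PyTorch 2.x for torch.compile; jax.jit plays the analogous role in JAX:
+
+```python
+import torch
+
+# Dynamic ("define-by-run") execution: the computation graph is built
+# fresh every time an input is processed, so Python control flow just works.
+def f(x, w):
+    h = x @ w
+    return h.relu().sum()
+
+x = torch.randn(4, 8)
+w = torch.randn(8, 8, requires_grad=True)
+loss = f(x, w)       # the graph is constructed here, during execution
+loss.backward()      # and traversed in reverse for gradients
+
+# Define-and-compile style: trace and optimize the function once,
+# then execute the compiled version over and over again.
+f_compiled = torch.compile(f)
+loss = f_compiled(x, w)
+```
+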
+1224
+00:56:32,960 --> 00:56:39,200
+and the difference between these two is
+
+1225
+00:56:37,079 --> 00:56:41,760
+this one gives you more flexibility this
+
+1226
+00:56:39,200 --> 00:56:45,440
+one gives you better optimization and
+
+1227
+00:56:41,760 --> 00:56:49,760
+speed if you want to do
+
+1228
+00:56:45,440 --> 00:56:52,400
+that um another thing about JAX is
+
+1229
+00:56:49,760 --> 00:56:55,200
+it's kind of very close to numpy in a
+
+1230
+00:56:52,400 --> 00:56:57,440
+way like it uses something
+
+1231
+00:56:55,200 --> 00:56:59,960
+that's kind of close to numpy it's very
+
+1232
+00:56:57,440 --> 00:57:02,359
+heavily based on tensors and so because
+
+1233
+00:56:59,960 --> 00:57:04,640
+of this you can kind of easily do some
+
+1234
+00:57:02,359 --> 00:57:06,640
+interesting things like okay I want to
+
+1235
+00:57:04,640 --> 00:57:11,319
+take this tensor and I want to split it
+
+1236
+00:57:06,640 --> 00:57:14,000
+over two GPUs um and this is good if
+
+1237
+00:57:11,319 --> 00:57:17,119
+you're training like a very large model
+
+1238
+00:57:14,000 --> 00:57:20,920
+and you want to put kind
+
+1239
+00:57:17,119 --> 00:57:20,920
+of this part of the
+
+1240
+00:57:22,119 --> 00:57:26,520
+model uh you want to put this part of
+
+1241
+00:57:24,119 --> 00:57:30,079
+the model on GPU 1 this on GPU 2 this on
+
+1242
+00:57:26,520 --> 00:57:31,599
+GPU 3 this on GPU 4 it's slightly simpler
+
+1243
+00:57:30,079 --> 00:57:34,400
+conceptually to do in JAX but it's
+
+1244
+00:57:31,599 --> 00:57:37,160
+also possible to do in
+
+1245
+00:57:34,400 --> 00:57:39,119
+PyTorch and PyTorch by far has the most
+
+1246
+00:57:37,160 --> 00:57:41,640
+vibrant ecosystem so like as I said
+
+1247
+00:57:39,119 --> 00:57:44,200
+PyTorch is a good default choice but you
+
+1248
+00:57:41,640 --> 00:57:47,480
+can consider using JAX if you
+
+1249
+00:57:44,200 --> 00:57:47,480
+like new
+
+1250
+00:57:48,079 --> 00:57:55,480
+things cool um yeah actually I already
+
+1251
+00:57:51,599 --> 00:57:58,079
+talked about that so in the interest of
+
+1252
+00:57:55,480 --> 00:58:02,119
+time I may not go into these very deeply
+
+1253
+00:57:58,079 --> 00:58:05,799
+but it's important to note that we have
+
+1254
+00:58:02,119 --> 00:58:05,799
+examples of all of
+
+1255
+00:58:06,920 --> 00:58:12,520
+the models that I talked about in the
+
+1256
+00:58:09,359 --> 00:58:16,720
+class today these are created for
+
+1257
+00:58:12,520 --> 00:58:17,520
+simplicity not for speed or efficiency
+
+1258
+00:58:16,720 --> 00:58:20,480
+of
+
+1259
+00:58:17,520 --> 00:58:24,920
+implementation um so these are
+
+1260
+00:58:20,480 --> 00:58:27,760
+PyTorch based examples where
+
+1261
+00:58:24,920 --> 00:58:31,599
+you can create the bag of words
+
+1262
+00:58:27,760 --> 00:58:36,440
+model a continuous bag of words
+
+1263
+00:58:31,599 --> 00:58:39,640
+model um and
+
+1264
+00:58:36,440 --> 00:58:41,640
+a deep continuous bag of words
+
+1265
+00:58:39,640 --> 00:58:44,359
+model
+
+1266
+00:58:41,640 --> 00:58:46,039
+and all of these I believe are
+
+1267
+00:58:44,359 --> 00:58:48,760
+implemented in
+
+1268
+00:58:46,039 --> 00:58:51,960
+model.py and the most important thing is
+
+1269
+00:58:48,760 --> 00:58:54,960
+where you define the forward pass and
+
+1270
+00:58:51,960 --> 00:58:57,319
+maybe I can just give a simple example of
+
+1271
+00:58:54,960 --> 00:58:58,200
+this but here this is where you do the
+
+1272
+00:58:57,319 --> 00:59:01,839
+word
+
+1273
+00:58:58,200 --> 00:59:04,400
+embedding this is where you sum up all
+
+1274
+00:59:01,839 --> 00:59:08,119
+of the embeddings and add a
+
+1275
+00:59:04,400 --> 00:59:10,200
+bias um and then this is where you
+
+1276
+00:59:08,119 --> 00:59:13,960
+return the
+
+1277
+00:59:10,200 --> 00:59:13,960
+score and then oh
+
+1278
+00:59:14,799 --> 00:59:19,119
+sorry the continuous bag of words model
+
+1279
+00:59:17,520 --> 00:59:22,160
+sums up some
+
+1280
+00:59:19,119 --> 00:59:23,640
+embeddings or gets the embeddings
+
+1281
+00:59:22,160 --> 00:59:25,799
+sums up some
+
+1282
+00:59:23,640 --> 00:59:28,079
+embeddings
+
+1283
+00:59:25,799 --> 00:59:30,599
+gets the score here and then runs it
+
+1284
+00:59:28,079 --> 00:59:33,200
+through a linear or changes the view
+
+1285
+00:59:30,599 --> 00:59:35,119
+runs it through a linear layer and then
+
+1286
+00:59:33,200 --> 00:59:38,319
+the deep continuous bag of words model
+
+1287
+00:59:35,119 --> 00:59:41,160
+also adds a few layers of linear
+
+1288
+00:59:38,319 --> 00:59:43,119
+transformations so you should be
+
+1289
+00:59:41,160 --> 00:59:44,640
+able to see that these correspond pretty
+
+1290
+00:59:43,119 --> 00:59:47,440
+closely to the things that I had on the
+
+1291
+00:59:44,640 --> 00:59:49,280
+slides so um hopefully that's a good
+
+1292
+00:59:47,440 --> 00:59:51,839
+start if you're not very familiar with
+
+1293
+00:59:49,280 --> 00:59:51,839
+implementing
+
+1294
+00:59:53,119 --> 00:59:58,440
+models oh and yes the recitation will
+
+1295
+00:59:56,599 --> 00:59:59,799
+be about playing around with sentence
+
+1296
+00:59:58,440 --> 01:00:01,200
+piece and playing around with these so
+
+1297
+00:59:59,799 --> 01:00:02,839
+if you have a look at them and have any
+
+1298
+01:00:01,200 --> 01:00:05,000
+questions you're welcome to show up
+
+1299
+01:00:02,839 --> 01:00:09,880
+where I walk
+
+1300
+01:00:05,000 --> 01:00:09,880
+through them cool um any questions about these
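+
+For reference, here is a minimal sketch of what such continuous-bag-of-words forward passes tend to look like in PyTorch. It mirrors the description above but is not the actual model.py from the course repository:
+
+```python
+import torch
+import torch.nn as nn
+
+class CBoW(nn.Module):
+    """Continuous bag of words: embed, sum the embeddings, then score."""
+    def __init__(self, vocab_size, emb_size, num_classes):
+        super().__init__()
+        self.embedding = nn.Embedding(vocab_size, emb_size)
+        self.linear = nn.Linear(emb_size, num_classes)
+
+    def forward(self, words):           # words: LongTensor of word ids
+        emb = self.embedding(words)     # (num_words, emb_size)
+        h = emb.sum(dim=0)              # sum up the embeddings
+        return self.linear(h)           # one score per class
+
+class DeepCBoW(nn.Module):
+    """Deep CBOW: a few extra nonlinear layers before the scoring layer."""
+    def __init__(self, vocab_size, emb_size, hid_size, num_classes, nlayers=2):
+        super().__init__()
+        self.embedding = nn.Embedding(vocab_size, emb_size)
+        self.layers = nn.ModuleList(
+            [nn.Linear(emb_size if i == 0 else hid_size, hid_size)
+             for i in range(nlayers)])
+        self.output = nn.Linear(hid_size, num_classes)
+
+    def forward(self, words):
+        h = self.embedding(words).sum(dim=0)
+        for layer in self.layers:
+            h = torch.tanh(layer(h))
+        return self.output(h)
+
+model = CBoW(vocab_size=60000, emb_size=128, num_classes=5)
+scores = model(torch.tensor([42, 1337, 7]))   # three word ids -> 5 scores
+```
+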
+1301
+01:00:12,839 --> 01:00:19,720
+okay so a few more final important
+
+1302
+01:00:16,720 --> 01:00:21,720
+concepts um another concept that you
+
+1303
+01:00:19,720 --> 01:00:25,440
+should definitely be aware of is the
+
+1304
+01:00:21,720 --> 01:00:27,280
+Adam optimizer so there's lots of
+
+1305
+01:00:25,440 --> 01:00:30,559
+optimizers that you could be using but
+
+1306
+01:00:27,280 --> 01:00:32,200
+almost all research in NLP uses some
+
+1307
+01:00:30,559 --> 01:00:38,440
+variety of the Adam
+
+1308
+01:00:32,200 --> 01:00:40,839
+optimizer and the way this works
+
+1309
+01:00:38,440 --> 01:00:42,559
+is it
+
+1310
+01:00:40,839 --> 01:00:45,640
+optimizes
+
+1311
+01:00:42,559 --> 01:00:48,480
+the model considering
+
+1312
+01:00:45,640 --> 01:00:49,359
+the rolling average of the gradient and
+
+1313
+01:00:48,480 --> 01:00:53,160
+uh
+
+1314
+01:00:49,359 --> 01:00:55,920
+momentum and the way it works is here we
+
+1315
+01:00:53,160 --> 01:00:58,839
+have a gradient here we have
+
+1316
+01:00:55,920 --> 01:01:04,000
+momentum and what you can see is
+
+1317
+01:00:58,839 --> 01:01:06,680
+happening here is we add a little bit of
+
+1318
+01:01:04,000 --> 01:01:09,200
+the gradient in uh how much you add in
+
+1319
+01:01:06,680 --> 01:01:12,720
+is with respect to the size of this beta
+
+1320
+01:01:09,200 --> 01:01:16,000
+1 parameter and you add it into the
+
+1321
+01:01:12,720 --> 01:01:18,640
+momentum term so this momentum term
+
+1322
+01:01:16,000 --> 01:01:20,440
+gradually increases and decreases so in
+
+1323
+01:01:18,640 --> 01:01:23,440
+contrast to standard gradient descent
+
+1324
+01:01:20,440 --> 01:01:25,839
+which could be
+
+1325
+01:01:23,440 --> 01:01:28,440
+updating
+
+1326
+01:01:25,839 --> 01:01:31,440
+each parameter kind of like very
+
+1327
+01:01:28,440 --> 01:01:33,359
+differently on each time step this will
+
+1328
+01:01:31,440 --> 01:01:35,680
+make the momentum kind of transition + +1329 +01:01:33,359 --> 01:01:37,240 +more smoothly by taking the rolling + +1330 +01:01:35,680 --> 01:01:39,880 +average of the + +1331 +01:01:37,240 --> 01:01:43,400 +gradient and then the the second thing + +1332 +01:01:39,880 --> 01:01:47,640 +is um by taking the momentum this is the + +1333 +01:01:43,400 --> 01:01:51,000 +rolling average of the I guess gradient + +1334 +01:01:47,640 --> 01:01:54,440 +uh variance sorry I this should be + +1335 +01:01:51,000 --> 01:01:58,079 +variance and the reason why you need + +1336 +01:01:54,440 --> 01:02:01,319 +need to keep track of the variance is + +1337 +01:01:58,079 --> 01:02:03,319 +some uh some parameters will have very + +1338 +01:02:01,319 --> 01:02:06,559 +large variance in their gradients and + +1339 +01:02:03,319 --> 01:02:11,480 +might fluctuate very uh strongly and + +1340 +01:02:06,559 --> 01:02:13,039 +others might have a smaller uh chain + +1341 +01:02:11,480 --> 01:02:15,240 +variant in their gradients and not + +1342 +01:02:13,039 --> 01:02:18,240 +fluctuate very much but we want to make + +1343 +01:02:15,240 --> 01:02:20,200 +sure that we update the ones we still + +1344 +01:02:18,240 --> 01:02:22,240 +update the ones that have a very small + +1345 +01:02:20,200 --> 01:02:25,760 +uh change of their variance and the + +1346 +01:02:22,240 --> 01:02:27,440 +reason why is kind of let's say you have + +1347 +01:02:25,760 --> 01:02:30,440 +a + +1348 +01:02:27,440 --> 01:02:30,440 +multi-layer + +1349 +01:02:32,480 --> 01:02:38,720 +network + +1350 +01:02:34,480 --> 01:02:41,240 +um or actually sorry a better + +1351 +01:02:38,720 --> 01:02:44,319 +um a better example is like let's say we + +1352 +01:02:41,240 --> 01:02:47,559 +have a big word embedding Matrix and + +1353 +01:02:44,319 --> 01:02:53,359 +over here we have like really frequent + +1354 +01:02:47,559 --> 01:02:56,279 +words and then over here we have uh + +1355 +01:02:53,359 --> 01:02:59,319 +gradi + +1356 +01:02:56,279 --> 01:03:00,880 +no we have like less frequent words we + +1357 +01:02:59,319 --> 01:03:02,799 +want to make sure that all of these get + +1358 +01:03:00,880 --> 01:03:06,160 +updated appropriately all of these get + +1359 +01:03:02,799 --> 01:03:08,640 +like enough updates and so over here + +1360 +01:03:06,160 --> 01:03:10,760 +this one will have lots of updates and + +1361 +01:03:08,640 --> 01:03:13,680 +so uh kind of + +1362 +01:03:10,760 --> 01:03:16,599 +the amount that we + +1363 +01:03:13,680 --> 01:03:20,039 +update or the the amount that we update + +1364 +01:03:16,599 --> 01:03:21,799 +the uh this will be relatively large + +1365 +01:03:20,039 --> 01:03:23,119 +whereas over here this will not have + +1366 +01:03:21,799 --> 01:03:24,880 +very many updates we'll have lots of + +1367 +01:03:23,119 --> 01:03:26,480 +zero updates also + +1368 +01:03:24,880 --> 01:03:29,160 +and so the amount that we update this + +1369 +01:03:26,480 --> 01:03:32,520 +will be relatively small and so this + +1370 +01:03:29,160 --> 01:03:36,119 +kind of squared to gradient here will uh + +1371 +01:03:32,520 --> 01:03:38,400 +be smaller for the values over here and + +1372 +01:03:36,119 --> 01:03:41,359 +what that allows us to do is it allows + +1373 +01:03:38,400 --> 01:03:44,200 +us to maybe I can just go to the bottom + +1374 +01:03:41,359 --> 01:03:46,039 +we end up uh dividing by the square root + +1375 +01:03:44,200 --> 01:03:47,599 +of this and because we divide by the + +1376 +01:03:46,039 --> 01:03:51,000 +square root of 
this if this is really
+
+1377
+01:03:47,599 --> 01:03:55,680
+large like 50 and 70 and then this over
+
+1378
+01:03:51,000 --> 01:03:59,480
+here is like 1 or 0.5
+
+1379
+01:03:55,680 --> 01:04:01,920
+or something we will be upweighting the
+
+1380
+01:03:59,480 --> 01:04:03,920
+ones that have smaller squared
+
+1381
+01:04:01,920 --> 01:04:06,880
+gradients so it allows you to
+
+1382
+01:04:03,920 --> 01:04:08,760
+upweight the less common gradients more
+
+1383
+01:04:06,880 --> 01:04:10,440
+frequently and then there's also some
+
+1384
+01:04:08,760 --> 01:04:13,400
+terms for correcting bias early in
+
+1385
+01:04:10,440 --> 01:04:16,440
+training because these momentum and
+
+1386
+01:04:13,400 --> 01:04:19,559
+variance or momentum and squared gradient
+
+1387
+01:04:16,440 --> 01:04:23,119
+terms are not going to be like well
+
+1388
+01:04:19,559 --> 01:04:24,839
+calibrated yet so it prevents them from
+
+1389
+01:04:23,119 --> 01:04:28,880
+going haywire at the very beginning of
+
+1390
+01:04:24,839 --> 01:04:30,839
+training so the details of
+
+1391
+01:04:28,880 --> 01:04:33,640
+this again are not like super super
+
+1392
+01:04:30,839 --> 01:04:37,359
+important um another thing that I didn't
+
+1393
+01:04:33,640 --> 01:04:40,200
+write on the slides is now in
+
+1394
+01:04:37,359 --> 01:04:43,920
+Transformers it's also super common to
+
+1395
+01:04:40,200 --> 01:04:47,400
+have an overall learning rate schedule so
+
+1396
+01:04:43,920 --> 01:04:50,520
+even Adam has this overall
+
+1397
+01:04:47,400 --> 01:04:53,440
+learning rate parameter here and what
+
+1398
+01:04:50,520 --> 01:04:55,240
+we often do is we adjust this so we
+
+1399
+01:04:53,440 --> 01:04:57,839
+start it low
+
+1400
+01:04:55,240 --> 01:04:59,640
+we raise it up and then we have a decay
+
+1401
+01:04:57,839 --> 01:05:03,039
+at the end and exactly how much you
+
+1402
+01:04:59,640 --> 01:05:04,440
+do this kind of depends on you know
+
+1403
+01:05:03,039 --> 01:05:06,160
+how big your model is how much data
+
+1404
+01:05:04,440 --> 01:05:09,160
+you're training on eventually and the
+
+1405
+01:05:06,160 --> 01:05:12,440
+reason why we do this is Transformers
+
+1406
+01:05:09,160 --> 01:05:13,839
+are unfortunately super sensitive to
+
+1407
+01:05:12,440 --> 01:05:15,359
+having a high learning rate right at the
+
+1408
+01:05:13,839 --> 01:05:16,559
+very beginning so if you update them
+
+1409
+01:05:15,359 --> 01:05:17,920
+with a high learning rate right at the
+
+1410
+01:05:16,559 --> 01:05:22,920
+very beginning they go haywire and you
+
+1411
+01:05:17,920 --> 01:05:24,400
+get a really weird model but you
+
+1412
+01:05:22,920 --> 01:05:26,760
+want to raise it eventually so your
+
+1413
+01:05:24,400 --> 01:05:28,920
+model is learning appropriately and then
+
+1414
+01:05:26,760 --> 01:05:30,400
+in all stochastic gradient descent no
+
+1415
+01:05:28,920 --> 01:05:31,680
+matter whether you're using Adam or
+
+1416
+01:05:30,400 --> 01:05:33,400
+anything else it's a good idea to
+
+1417
+01:05:31,680 --> 01:05:36,200
+gradually decrease the learning rate at
+
+1418
+01:05:33,400 --> 01:05:38,119
+the end to prevent the model from
+
+1419
+01:05:36,200 --> 01:05:40,480
+continuing to fluctuate and getting it
+
+1420
+01:05:38,119 --> 01:05:42,760
+to a stable point that gives you good
+
+1421
+01:05:40,480 --> 01:05:45,559
+accuracy over a large part of the data so
+
+1422
+01:05:42,760 --> 01:05:47,480
+this is often included
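+
+A sketch of this go-to setup, Adam plus a warmup-then-decay schedule, in PyTorch; the constants (warmup steps, learning rate, betas) are illustrative, and real recipes vary with model and data size:
+
+```python
+import torch
+
+model = torch.nn.Linear(512, 512)   # stand-in for a real model
+optimizer = torch.optim.Adam(
+    model.parameters(),
+    lr=1e-3,                # the overall learning rate being scheduled
+    betas=(0.9, 0.999))     # rolling averages of gradient / squared gradient
+
+# Linear warmup followed by inverse-square-root decay, in the spirit of
+# common Transformer training recipes.
+warmup_steps = 4000
+def lr_scale(step):
+    step = max(step, 1)
+    return min(step / warmup_steps, (warmup_steps / step) ** 0.5)
+
+scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_scale)
+
+for step in range(100):                       # training loop skeleton
+    loss = model(torch.randn(8, 512)).pow(2).mean()
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()                          # Adam update
+    scheduler.step()                          # move along the schedule
+```
+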
if you look + +1423 +01:05:45,559 --> 01:05:51,000 +at any standard Transformer training + +1424 +01:05:47,480 --> 01:05:53,079 +recipe it will have that this so that's + +1425 +01:05:51,000 --> 01:05:54,799 +kind of the the go-to + +1426 +01:05:53,079 --> 01:05:58,960 +optimizer + +1427 +01:05:54,799 --> 01:06:01,039 +um are there any questions or + +1428 +01:05:58,960 --> 01:06:02,599 +discussion there's also tricky things + +1429 +01:06:01,039 --> 01:06:04,000 +like cyclic learning rates where you + +1430 +01:06:02,599 --> 01:06:06,599 +decrease the learning rate increase it + +1431 +01:06:04,000 --> 01:06:08,559 +and stuff like that but I won't go into + +1432 +01:06:06,599 --> 01:06:11,000 +that and don't actually use it that + +1433 +01:06:08,559 --> 01:06:12,760 +much second thing is visualization of + +1434 +01:06:11,000 --> 01:06:15,400 +embeddings so normally when we have word + +1435 +01:06:12,760 --> 01:06:19,760 +embeddings usually they're kind of large + +1436 +01:06:15,400 --> 01:06:21,559 +um and they can be like 512 or 1024 + +1437 +01:06:19,760 --> 01:06:25,079 +dimensions + +1438 +01:06:21,559 --> 01:06:28,720 +and so one thing that we can do is we + +1439 +01:06:25,079 --> 01:06:31,079 +can down weight them or sorry down uh + +1440 +01:06:28,720 --> 01:06:34,400 +like reduce the dimensions or perform + +1441 +01:06:31,079 --> 01:06:35,880 +dimensionality reduction and put them in + +1442 +01:06:34,400 --> 01:06:37,680 +like two or three dimensions which are + +1443 +01:06:35,880 --> 01:06:40,200 +easy for humans to + +1444 +01:06:37,680 --> 01:06:42,000 +visualize this is an example using + +1445 +01:06:40,200 --> 01:06:44,839 +principal component analysis which is a + +1446 +01:06:42,000 --> 01:06:48,279 +linear Dimension reduction technique and + +1447 +01:06:44,839 --> 01:06:50,680 +this is uh an example from 10 years ago + +1448 +01:06:48,279 --> 01:06:52,359 +now uh one of the first major word + +1449 +01:06:50,680 --> 01:06:55,240 +embedding papers where they demonstrated + +1450 +01:06:52,359 --> 01:06:57,720 +that if you do this sort of linear + +1451 +01:06:55,240 --> 01:06:59,440 +Dimension reduction uh you get actually + +1452 +01:06:57,720 --> 01:07:01,279 +some interesting things where you can + +1453 +01:06:59,440 --> 01:07:03,240 +draw a vector that's almost the same + +1454 +01:07:01,279 --> 01:07:06,400 +direction between like countries and + +1455 +01:07:03,240 --> 01:07:09,319 +their uh countries and their capitals + +1456 +01:07:06,400 --> 01:07:13,720 +for example so this is a good thing to + +1457 +01:07:09,319 --> 01:07:16,559 +do but actually PCA uh doesn't give + +1458 +01:07:13,720 --> 01:07:20,760 +you in some cases PCA doesn't give you + +1459 +01:07:16,559 --> 01:07:22,920 +super great uh visualizations sorry yeah + +1460 +01:07:20,760 --> 01:07:25,920 +well for like if it's + +1461 +01:07:22,920 --> 01:07:25,920 +like + +1462 +01:07:29,880 --> 01:07:35,039 +um for things like this I think you + +1463 +01:07:33,119 --> 01:07:37,359 +probably would still see vectors in the + +1464 +01:07:35,039 --> 01:07:38,760 +same direction but I don't think it like + +1465 +01:07:37,359 --> 01:07:40,920 +there's a reason why I'm introducing + +1466 +01:07:38,760 --> 01:07:44,279 +nonlinear projections next because the + +1467 +01:07:40,920 --> 01:07:46,799 +more standard way to do this is uh + +1468 +01:07:44,279 --> 01:07:50,640 +nonlinear projections in in particular a + +1469 +01:07:46,799 --> 01:07:54,880 +method called tisne and the way um they + 
+1470
+01:07:50,640 --> 01:07:56,880
+do this is they try to group
+
+1471
+01:07:54,880 --> 01:07:59,000
+things that are close together in high
+
+1472
+01:07:56,880 --> 01:08:01,240
+dimensional space so that they're also
+
+1473
+01:07:59,000 --> 01:08:04,440
+close together in low dimensional space
+
+1474
+01:08:01,240 --> 01:08:08,520
+but they remove the restriction that
+
+1475
+01:08:04,440 --> 01:08:10,799
+this is linear so this
+
+1476
+01:08:08,520 --> 01:08:15,480
+is an example of just grouping together
+
+1477
+01:08:10,799 --> 01:08:18,040
+some digits from the MNIST data
+
+1478
+01:08:15,480 --> 01:08:20,279
+set or sorry reducing the dimension of
+
+1479
+01:08:18,040 --> 01:08:23,640
+digits from the MNIST data
+
+1480
+01:08:20,279 --> 01:08:25,640
+set according to PCA and you can see it
+
+1481
+01:08:23,640 --> 01:08:28,000
+gives these kind of blobs that overlap
+
+1482
+01:08:25,640 --> 01:08:29,799
+with each other and stuff like this but
+
+1483
+01:08:28,000 --> 01:08:31,679
+if you do it with t-SNE this is
+
+1484
+01:08:29,799 --> 01:08:34,799
+completely unsupervised actually it's
+
+1485
+01:08:31,679 --> 01:08:37,080
+not training any model for labeling the
+
+1486
+01:08:34,799 --> 01:08:39,239
+labels are just used to draw the colors
+
+1487
+01:08:37,080 --> 01:08:42,520
+and you can see that it gets pretty
+
+1488
+01:08:39,239 --> 01:08:44,520
+coherent um clusters that correspond to
+
+1489
+01:08:42,520 --> 01:08:48,120
+like what the actual digits
+
+1490
+01:08:44,520 --> 01:08:50,120
+are um however one problem with
+
+1491
+01:08:48,120 --> 01:08:53,159
+t-SNE I still think it's better than
+
+1492
+01:08:50,120 --> 01:08:55,000
+PCA for a large number of
+
+1493
+01:08:53,159 --> 01:08:59,199
+applications
+
+1494
+01:08:55,000 --> 01:09:01,040
+but the settings of t-SNE matter and t-SNE has
+
+1495
+01:08:59,199 --> 01:09:02,920
+a few settings kind of the most
+
+1496
+01:09:01,040 --> 01:09:04,120
+important ones are the overall
+
+1497
+01:09:02,920 --> 01:09:06,560
+perplexity
+
+1498
+01:09:04,120 --> 01:09:09,040
+hyperparameter and the number of
+
+1499
+01:09:06,560 --> 01:09:12,319
+steps that you perform and there's a
+
+1500
+01:09:09,040 --> 01:09:14,920
+nice example of a paper or kind of
+
+1501
+01:09:12,319 --> 01:09:16,359
+like an online post that demonstrates
+
+1502
+01:09:14,920 --> 01:09:18,560
+how if you change these parameters you
+
+1503
+01:09:16,359 --> 01:09:22,279
+can get very different things so if this
+
+1504
+01:09:18,560 --> 01:09:24,080
+is the original data you run t-SNE and it
+
+1505
+01:09:22,279 --> 01:09:26,640
+gives you very different things based on
+
+1506
+01:09:24,080 --> 01:09:29,279
+the hyperparameters that you change um
+
+1507
+01:09:26,640 --> 01:09:32,880
+and here's another example uh you have
+
+1508
+01:09:29,279 --> 01:09:36,960
+two linear uh things like this and so
+
+1509
+01:09:32,880 --> 01:09:40,839
+PCA no matter how you ran PCA you would
+
+1510
+01:09:36,960 --> 01:09:44,080
+still get a linear output from this so
+
+1511
+01:09:40,839 --> 01:09:45,960
+normally you know it might change the
+
+1512
+01:09:44,080 --> 01:09:49,239
+order it might squash it a little bit or
+
+1513
+01:09:45,960 --> 01:09:51,239
+something like this but um if you run
+
+1514
+01:09:49,239 --> 01:09:53,400
+t-SNE it gives you crazy things it even
+
+1515
+01:09:51,239 --> 01:09:56,040
+gives you like DNA and other stuff like
+
+1516
+01:09:53,400 --> 01:09:58,040
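+
+For trying this yourself, here is a sketch using scikit-learn; the random matrix stands in for real word embeddings, and the perplexity and number of optimization steps are exactly the settings discussed above:
+
+```python
+import numpy as np
+from sklearn.decomposition import PCA
+from sklearn.manifold import TSNE
+
+# Stand-in for word embeddings: 1000 "words", 512 dimensions.
+embeddings = np.random.randn(1000, 512)
+
+# Linear dimensionality reduction with PCA.
+pca_2d = PCA(n_components=2).fit_transform(embeddings)
+
+# Nonlinear reduction with t-SNE; try several perplexity values,
+# since the resulting picture depends heavily on these settings.
+tsne_2d = TSNE(n_components=2, perplexity=30.0,
+               init="pca").fit_transform(embeddings)
+```
+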
+that so so um you do need to be a little + +1517 +01:09:56,040 --> 01:10:00,600 +bit careful that uh this is not + +1518 +01:09:58,040 --> 01:10:02,320 +necessarily going to tell you nice + +1519 +01:10:00,600 --> 01:10:04,400 +linear correlations like this so like + +1520 +01:10:02,320 --> 01:10:06,159 +let's say this correlation existed if + +1521 +01:10:04,400 --> 01:10:09,199 +you use tisy it might not necessarily + +1522 +01:10:06,159 --> 01:10:09,199 +come out to + +1523 +01:10:09,320 --> 01:10:14,880 +TIY + +1524 +01:10:11,800 --> 01:10:16,920 +cool yep uh that that's my final thing + +1525 +01:10:14,880 --> 01:10:18,520 +actually I talked said sequence models + +1526 +01:10:16,920 --> 01:10:19,679 +in the next class but it's in the class + +1527 +01:10:18,520 --> 01:10:21,440 +after this I'm going to be talking about + +1528 +01:10:19,679 --> 01:10:24,199 +language + +1529 +01:10:21,440 --> 01:10:27,159 +modeling uh cool any any questions + +1530 +01:10:24,199 --> 01:10:27,159 +or \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (2) Word Representation and Text Classification/transcript.vtt b/CMU Advanced NLP 2024 (2) Word Representation and Text Classification/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..04c67eb1b3cddb91c1854773830ec5626dc02198 --- /dev/null +++ b/CMU Advanced NLP 2024 (2) Word Representation and Text Classification/transcript.vtt @@ -0,0 +1,4591 @@ +WEBVTT + +00:00:03.879 --> 00:00:07.480 +cool um so this time I'm going to talk + +00:00:05.480 --> 00:00:08.880 +about word representation and text + +00:00:07.480 --> 00:00:11.480 +classifiers these are kind of the + +00:00:08.880 --> 00:00:14.080 +foundations that you need to know uh in + +00:00:11.480 --> 00:00:15.640 +order to move on to the more complex + +00:00:14.080 --> 00:00:17.920 +things that we'll be talking in future + +00:00:15.640 --> 00:00:19.640 +classes uh but actually the in + +00:00:17.920 --> 00:00:22.760 +particular the word representation part + +00:00:19.640 --> 00:00:25.439 +is pretty important it's a major uh + +00:00:22.760 --> 00:00:31.800 +thing that we need to do for all NLP + +00:00:25.439 --> 00:00:34.239 +models so uh let's go into it + +00:00:31.800 --> 00:00:38.200 +so last class I talked about the bag of + +00:00:34.239 --> 00:00:40.239 +words model um and just to review this + +00:00:38.200 --> 00:00:43.920 +was a model where basically we take each + +00:00:40.239 --> 00:00:45.520 +word we represent it as a one hot Vector + +00:00:43.920 --> 00:00:48.760 +uh like + +00:00:45.520 --> 00:00:51.120 +this and we add all of these vectors + +00:00:48.760 --> 00:00:53.160 +together we multiply the resulting + +00:00:51.120 --> 00:00:55.160 +frequency vector by some weights and we + +00:00:53.160 --> 00:00:57.239 +get a score out of this and we can use + +00:00:55.160 --> 00:00:58.559 +this score for binary classification or + +00:00:57.239 --> 00:01:00.239 +if we want to do multiclass + +00:00:58.559 --> 00:01:02.519 +classification we get you know multiple + +00:01:00.239 --> 00:01:05.720 +scores for each + +00:01:02.519 --> 00:01:08.040 +class and the features F were just based + +00:01:05.720 --> 00:01:08.920 +on our word identities and the weights + +00:01:08.040 --> 00:01:12.159 +were + +00:01:08.920 --> 00:01:14.680 +learned and um if we look at what's + +00:01:12.159 --> 00:01:17.520 +missing in bag of words + +00:01:14.680 --> 00:01:19.600 +models um we talked about handling of + +00:01:17.520 --> 00:01:23.280 +conjugated or 
compound + +00:01:19.600 --> 00:01:25.439 +words we talked about handling of word + +00:01:23.280 --> 00:01:27.880 +similarity and we talked about handling + +00:01:25.439 --> 00:01:30.240 +of combination features and handling of + +00:01:27.880 --> 00:01:33.280 +sentence structure and so all of these + +00:01:30.240 --> 00:01:35.000 +are are tricky problems uh we saw that + +00:01:33.280 --> 00:01:37.000 +you know creating a rule-based system to + +00:01:35.000 --> 00:01:39.000 +solve these problems is non-trivial and + +00:01:37.000 --> 00:01:41.399 +at the very least would take a lot of + +00:01:39.000 --> 00:01:44.079 +time and so now I want to talk about + +00:01:41.399 --> 00:01:47.119 +some solutions to the problems in this + +00:01:44.079 --> 00:01:49.280 +class so the first the solution to the + +00:01:47.119 --> 00:01:52.240 +first problem or a solution to the first + +00:01:49.280 --> 00:01:54.880 +problem is uh subword or character based + +00:01:52.240 --> 00:01:57.520 +models and that's what I'll talk about + +00:01:54.880 --> 00:02:00.719 +first handling of word similarity this + +00:01:57.520 --> 00:02:02.960 +can be handled uh using Word edings + +00:02:00.719 --> 00:02:05.079 +and the word embeddings uh will be + +00:02:02.960 --> 00:02:07.159 +another thing we'll talk about this time + +00:02:05.079 --> 00:02:08.879 +handling of combination features uh we + +00:02:07.159 --> 00:02:11.039 +can handle through neural networks which + +00:02:08.879 --> 00:02:14.040 +we'll also talk about this time and then + +00:02:11.039 --> 00:02:15.560 +handling of sentence structure uh the + +00:02:14.040 --> 00:02:17.720 +kind of standard way of handling this + +00:02:15.560 --> 00:02:20.120 +now is through sequence-based models and + +00:02:17.720 --> 00:02:24.879 +that will be uh starting in a few + +00:02:20.120 --> 00:02:28.080 +classes so uh let's jump into + +00:02:24.879 --> 00:02:30.000 +it so subword models uh as I mentioned + +00:02:28.080 --> 00:02:31.840 +this is a really really important part + +00:02:30.000 --> 00:02:33.360 +all of the models that we're building + +00:02:31.840 --> 00:02:35.480 +nowadays including you know + +00:02:33.360 --> 00:02:38.239 +state-of-the-art language models and and + +00:02:35.480 --> 00:02:42.200 +things like this and the basic idea + +00:02:38.239 --> 00:02:44.720 +behind this is that we want to split uh + +00:02:42.200 --> 00:02:48.040 +in particular split less common words up + +00:02:44.720 --> 00:02:50.200 +into multiple subboard tokens so to give + +00:02:48.040 --> 00:02:52.200 +an example of this uh if we have + +00:02:50.200 --> 00:02:55.040 +something like the companies are + +00:02:52.200 --> 00:02:57.000 +expanding uh it might split companies + +00:02:55.040 --> 00:03:02.120 +into compan + +00:02:57.000 --> 00:03:05.000 +e and expand in like this and there are + +00:03:02.120 --> 00:03:08.480 +a few benefits of this uh the first + +00:03:05.000 --> 00:03:10.760 +benefit is that this allows you to + +00:03:08.480 --> 00:03:13.360 +parameters between word varieties or + +00:03:10.760 --> 00:03:15.200 +compound words and the other one is to + +00:03:13.360 --> 00:03:17.400 +reduce parameter size and save compute + +00:03:15.200 --> 00:03:19.720 +and meming and both of these are kind of + +00:03:17.400 --> 00:03:23.239 +like equally important things that we + +00:03:19.720 --> 00:03:25.519 +need to be uh we need to be considering + +00:03:23.239 --> 00:03:26.440 +so does anyone know how many words there + +00:03:25.519 --> 
00:03:28.680 +are in + +00:03:26.440 --> 00:03:31.680 +English any + +00:03:28.680 --> 00:03:31.680 +ideas + +00:03:36.799 --> 00:03:43.400 +yeah two + +00:03:38.599 --> 00:03:45.560 +million pretty good um any other + +00:03:43.400 --> 00:03:47.159 +ideas + +00:03:45.560 --> 00:03:50.360 +yeah + +00:03:47.159 --> 00:03:53.599 +60,000 some models use 60,000 I I think + +00:03:50.360 --> 00:03:56.200 +60,000 is probably these subword models + +00:03:53.599 --> 00:03:58.079 +uh when you're talking about this so + +00:03:56.200 --> 00:03:59.319 +they can use sub models to take the 2 + +00:03:58.079 --> 00:04:03.480 +million which I think is a reasonable + +00:03:59.319 --> 00:04:07.400 +guess to 6 60,000 any other + +00:04:03.480 --> 00:04:08.840 +ideas 700,000 okay pretty good um so + +00:04:07.400 --> 00:04:11.799 +this was a per question it doesn't + +00:04:08.840 --> 00:04:14.760 +really have a good answer um but two 200 + +00:04:11.799 --> 00:04:17.479 +million's probably pretty good six uh + +00:04:14.760 --> 00:04:19.160 +700,000 is pretty good the reason why + +00:04:17.479 --> 00:04:21.360 +this is a trick question is because are + +00:04:19.160 --> 00:04:24.440 +company and companies different + +00:04:21.360 --> 00:04:26.840 +words uh maybe maybe not right because + +00:04:24.440 --> 00:04:30.120 +if we know the word company we can you + +00:04:26.840 --> 00:04:32.520 +know guess what the word companies means + +00:04:30.120 --> 00:04:35.720 +um what about automobile is that a + +00:04:32.520 --> 00:04:37.400 +different word well maybe if we know + +00:04:35.720 --> 00:04:39.400 +Auto and mobile we can kind of guess + +00:04:37.400 --> 00:04:41.160 +what automobile means but not really so + +00:04:39.400 --> 00:04:43.479 +maybe that's a different word there's + +00:04:41.160 --> 00:04:45.960 +all kinds of Shades of Gray there and + +00:04:43.479 --> 00:04:48.120 +also we have really frequent words that + +00:04:45.960 --> 00:04:50.360 +everybody can probably acknowledge our + +00:04:48.120 --> 00:04:52.320 +words like + +00:04:50.360 --> 00:04:55.639 +the and + +00:04:52.320 --> 00:04:58.520 +a and um maybe + +00:04:55.639 --> 00:05:00.680 +car and then we have words down here + +00:04:58.520 --> 00:05:02.320 +which are like Miss spellings or + +00:05:00.680 --> 00:05:04.160 +something like that misspellings of + +00:05:02.320 --> 00:05:06.520 +actual correct words or + +00:05:04.160 --> 00:05:09.199 +slay uh or other things like that and + +00:05:06.520 --> 00:05:12.520 +then it's questionable whether those are + +00:05:09.199 --> 00:05:17.199 +actual words or not so um there's a + +00:05:12.520 --> 00:05:19.520 +famous uh law called Zip's + +00:05:17.199 --> 00:05:21.280 +law um which probably a lot of people + +00:05:19.520 --> 00:05:23.360 +have heard of it's also the source of + +00:05:21.280 --> 00:05:26.919 +your zip + +00:05:23.360 --> 00:05:30.160 +file um which is using Zip's law to + +00:05:26.919 --> 00:05:32.400 +compress uh compress output by making + +00:05:30.160 --> 00:05:34.880 +the uh more frequent words have shorter + +00:05:32.400 --> 00:05:37.520 +bite strings and less frequent words + +00:05:34.880 --> 00:05:38.800 +have uh you know less frequent bite + +00:05:37.520 --> 00:05:43.120 +strings but basically like we're going + +00:05:38.800 --> 00:05:45.120 +to have an infinite number of words or + +00:05:43.120 --> 00:05:46.360 +at least strings that are separated by + +00:05:45.120 --> 00:05:49.280 +white space so we need to handle this + +00:05:46.360 --> 
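+
+A quick way to see Zipf's law on real data is to count words in any text file and check that rank times frequency stays roughly constant; corpus.txt below is a placeholder for whatever text you have on hand:
+
+```python
+from collections import Counter
+
+words = open("corpus.txt", encoding="utf-8").read().split()
+counts = Counter(words).most_common()
+
+# Under Zipf's law, frequency is roughly proportional to 1/rank,
+# so rank * frequency should stay in the same ballpark across rows.
+for rank, (word, freq) in enumerate(counts[:10], start=1):
+    print(f"{rank:2d}  {freq:8d}  {rank * freq:9d}  {word}")
+```
+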
00:05:53.199 +somehow and that's what subword units + +00:05:49.280 --> 00:05:54.560 +do so um 60,000 was a good guess for the + +00:05:53.199 --> 00:05:57.160 +number of subword units you might use in + +00:05:54.560 --> 00:06:00.759 +a model and so uh by using subw units we + +00:05:57.160 --> 00:06:04.840 +can limit to about that much + +00:06:00.759 --> 00:06:08.160 +so there's a couple of common uh ways to + +00:06:04.840 --> 00:06:10.440 +create these subword units and basically + +00:06:08.160 --> 00:06:14.560 +all of them rely on the fact that you + +00:06:10.440 --> 00:06:16.039 +want more common strings to become + +00:06:14.560 --> 00:06:19.599 +subword + +00:06:16.039 --> 00:06:22.199 +units um or actually sorry I realize + +00:06:19.599 --> 00:06:24.280 +maybe before doing that I could explain + +00:06:22.199 --> 00:06:26.360 +an alternative to creating subword units + +00:06:24.280 --> 00:06:29.639 +so the alternative to creating subword + +00:06:26.360 --> 00:06:33.560 +units is to treat every character or + +00:06:29.639 --> 00:06:36.919 +maybe every bite in a string as a single + +00:06:33.560 --> 00:06:38.560 +thing that you encode in forent so in + +00:06:36.919 --> 00:06:42.520 +other words instead of trying to model + +00:06:38.560 --> 00:06:47.919 +the companies are expanding we Model T h + +00:06:42.520 --> 00:06:50.199 +e space c o m uh etc etc can anyone + +00:06:47.919 --> 00:06:53.199 +think of any downsides of + +00:06:50.199 --> 00:06:53.199 +this + +00:06:57.039 --> 00:07:01.879 +yeah yeah the set of these will be very + +00:07:00.080 --> 00:07:05.000 +will be very small but that's not + +00:07:01.879 --> 00:07:05.000 +necessarily a problem + +00:07:08.560 --> 00:07:15.599 +right yeah um and any other + +00:07:12.599 --> 00:07:15.599 +ideas + +00:07:19.520 --> 00:07:24.360 +yeah yeah the resulting sequences will + +00:07:22.080 --> 00:07:25.520 +be very long um and when you say + +00:07:24.360 --> 00:07:27.160 +difficult to use it could be difficult + +00:07:25.520 --> 00:07:29.560 +to use for a couple of reasons there's + +00:07:27.160 --> 00:07:31.840 +mainly two reasons actually any any IDE + +00:07:29.560 --> 00:07:31.840 +about + +00:07:33.479 --> 00:07:37.800 +this any + +00:07:46.280 --> 00:07:50.599 +yeah yeah that's a little bit of a + +00:07:49.000 --> 00:07:52.319 +separate problem than the character + +00:07:50.599 --> 00:07:53.919 +based model so let me get back to that + +00:07:52.319 --> 00:07:56.400 +but uh let let's finish the discussion + +00:07:53.919 --> 00:07:58.360 +of the character based models so if it's + +00:07:56.400 --> 00:08:00.120 +really if it's really long maybe a + +00:07:58.360 --> 00:08:01.879 +simple thing like uh let's say you have + +00:08:00.120 --> 00:08:06.560 +a big neural network and it's processing + +00:08:01.879 --> 00:08:06.560 +a really long sequence any ideas what + +00:08:06.919 --> 00:08:10.879 +happens basically you run out of memory + +00:08:09.280 --> 00:08:13.440 +or it takes a really long time right so + +00:08:10.879 --> 00:08:16.840 +you have computational problems another + +00:08:13.440 --> 00:08:18.479 +reason why is um think of what a bag of + +00:08:16.840 --> 00:08:21.400 +words model would look like if it was a + +00:08:18.479 --> 00:08:21.400 +bag of characters + +00:08:21.800 --> 00:08:25.919 +model it wouldn't be very informative + +00:08:24.199 --> 00:08:27.599 +about whether like a sentence is + +00:08:25.919 --> 00:08:30.919 +positive sentiment or negative sentiment + +00:08:27.599 --> 
00:08:32.959 +right because instead of having uh go o + +00:08:30.919 --> 00:08:35.039 +you would have uh instead of having good + +00:08:32.959 --> 00:08:36.360 +you would have go o and that doesn't + +00:08:35.039 --> 00:08:38.560 +really directly tell you whether it's + +00:08:36.360 --> 00:08:41.719 +positive sentiment or not so those are + +00:08:38.560 --> 00:08:43.680 +basically the two problems um compute + +00:08:41.719 --> 00:08:45.320 +and lack of expressiveness in the + +00:08:43.680 --> 00:08:50.720 +underlying representations so you need + +00:08:45.320 --> 00:08:52.080 +to handle both of those yes so if we uh + +00:08:50.720 --> 00:08:54.480 +move from + +00:08:52.080 --> 00:08:56.440 +character better expressiveness and we + +00:08:54.480 --> 00:08:58.920 +assume that if we just get the bigger + +00:08:56.440 --> 00:09:00.120 +and bigger paragraphs we'll get even + +00:08:58.920 --> 00:09:02.760 +better + +00:09:00.120 --> 00:09:05.120 +yeah so a very good question I'll repeat + +00:09:02.760 --> 00:09:06.560 +it um and actually this also goes back + +00:09:05.120 --> 00:09:08.040 +to the other question you asked about + +00:09:06.560 --> 00:09:09.519 +words that look the same but are + +00:09:08.040 --> 00:09:12.160 +pronounced differently or have different + +00:09:09.519 --> 00:09:14.360 +meanings and so like let's say we just + +00:09:12.160 --> 00:09:15.920 +remembered this whole sentence right the + +00:09:14.360 --> 00:09:18.279 +companies are + +00:09:15.920 --> 00:09:21.600 +expanding um and that was like a single + +00:09:18.279 --> 00:09:22.680 +embedding and we somehow embedded it the + +00:09:21.600 --> 00:09:25.720 +problem would be we're never going to + +00:09:22.680 --> 00:09:27.120 +see that sentence again um or if we go + +00:09:25.720 --> 00:09:29.480 +to longer sentences we're never going to + +00:09:27.120 --> 00:09:31.839 +see the longer sentences again so it + +00:09:29.480 --> 00:09:34.320 +becomes too sparse so there's kind of a + +00:09:31.839 --> 00:09:37.240 +sweet spot between + +00:09:34.320 --> 00:09:40.279 +like long enough to be expressive and + +00:09:37.240 --> 00:09:42.480 +short enough to occur many times so that + +00:09:40.279 --> 00:09:43.959 +you can learn appropriately and that's + +00:09:42.480 --> 00:09:47.120 +kind of what subword models are aiming + +00:09:43.959 --> 00:09:48.360 +for and if you get longer subwords then + +00:09:47.120 --> 00:09:50.200 +you'll get things that are more + +00:09:48.360 --> 00:09:52.959 +expressive but more sparse in shorter + +00:09:50.200 --> 00:09:55.440 +subwords you'll get things that are like + +00:09:52.959 --> 00:09:57.279 +uh less expressive but less spice so you + +00:09:55.440 --> 00:09:59.120 +need to balance between them and then + +00:09:57.279 --> 00:10:00.600 +once we get into sequence modeling they + +00:09:59.120 --> 00:10:02.600 +start being able to model like which + +00:10:00.600 --> 00:10:04.120 +words are next to each other uh which + +00:10:02.600 --> 00:10:06.040 +tokens are next to each other and stuff + +00:10:04.120 --> 00:10:07.800 +like that so even if they are less + +00:10:06.040 --> 00:10:11.279 +expressive the combination between them + +00:10:07.800 --> 00:10:12.600 +can be expressive so um yeah that's kind + +00:10:11.279 --> 00:10:13.440 +of a preview of what we're going to be + +00:10:12.600 --> 00:10:17.320 +doing + +00:10:13.440 --> 00:10:19.279 +next okay so um let's assume that we + +00:10:17.320 --> 00:10:21.320 +want to have some subwords that are + 
+00:10:19.279 --> 00:10:23.000 +longer than characters but shorter than + +00:10:21.320 --> 00:10:26.240 +tokens how do we make these in a + +00:10:23.000 --> 00:10:28.680 +consistent way there's two major ways of + +00:10:26.240 --> 00:10:31.480 +doing this uh the first one is bite pair + +00:10:28.680 --> 00:10:32.839 +encoding and this is uh very very simple + +00:10:31.480 --> 00:10:35.839 +in fact it's so + +00:10:32.839 --> 00:10:35.839 +simple + +00:10:36.600 --> 00:10:40.839 +that we can implement + +00:10:41.839 --> 00:10:47.240 +it in this notebook here which you can + +00:10:44.600 --> 00:10:51.720 +click through to on the + +00:10:47.240 --> 00:10:55.440 +slides and it's uh + +00:10:51.720 --> 00:10:58.040 +about 10 lines of code um and so + +00:10:55.440 --> 00:11:01.040 +basically what B pair encoding + +00:10:58.040 --> 00:11:01.040 +does + +00:11:04.600 --> 00:11:09.560 +is that you start out with um all of the + +00:11:07.000 --> 00:11:14.360 +vocabulary that you want to process + +00:11:09.560 --> 00:11:17.560 +where each vocabulary item is split into + +00:11:14.360 --> 00:11:21.240 +uh the characters and an end of word + +00:11:17.560 --> 00:11:23.360 +symbol and you have a corresponding + +00:11:21.240 --> 00:11:27.519 +frequency of + +00:11:23.360 --> 00:11:31.120 +this you then uh get statistics about + +00:11:27.519 --> 00:11:33.279 +the most common pairs of tokens that + +00:11:31.120 --> 00:11:34.880 +occur next to each other and so here the + +00:11:33.279 --> 00:11:38.240 +most common pairs of tokens that occur + +00:11:34.880 --> 00:11:41.920 +next to each other are e s because it + +00:11:38.240 --> 00:11:46.560 +occurs nine times because it occurs in + +00:11:41.920 --> 00:11:48.279 +newest and wildest also s and t w + +00:11:46.560 --> 00:11:51.440 +because those occur there too and then + +00:11:48.279 --> 00:11:53.519 +you have we and other things like that + +00:11:51.440 --> 00:11:56.000 +so out of all the most frequent ones you + +00:11:53.519 --> 00:11:59.920 +just merge them together and that gives + +00:11:56.000 --> 00:12:02.720 +you uh new s new + +00:11:59.920 --> 00:12:05.200 +EST and wide + +00:12:02.720 --> 00:12:09.360 +EST and then you do the same thing this + +00:12:05.200 --> 00:12:12.519 +time now you get EST so now you get this + +00:12:09.360 --> 00:12:14.279 +uh suffix EST and that looks pretty + +00:12:12.519 --> 00:12:16.399 +reasonable for English right you know + +00:12:14.279 --> 00:12:19.040 +EST is a common suffix that we use it + +00:12:16.399 --> 00:12:22.399 +seems like it should be a single token + +00:12:19.040 --> 00:12:25.880 +and um so you just do this over and over + +00:12:22.399 --> 00:12:29.279 +again if you want a vocabulary of 60,000 + +00:12:25.880 --> 00:12:31.120 +for example you would do um 60,000 minus + +00:12:29.279 --> 00:12:33.079 +number of characters merge operations + +00:12:31.120 --> 00:12:37.160 +and eventually you would get a B of + +00:12:33.079 --> 00:12:41.920 +60,000 um and yeah very very simple + +00:12:37.160 --> 00:12:41.920 +method to do this um any questions about + +00:12:43.160 --> 00:12:46.160 +that + +00:12:57.839 --> 00:13:00.839 +yeah + +00:13:15.600 --> 00:13:20.959 +yeah so uh just to repeat the the + +00:13:18.040 --> 00:13:23.560 +comment uh this seems like a greedy + +00:13:20.959 --> 00:13:25.320 +version of Huffman encoding which is a + +00:13:23.560 --> 00:13:28.839 +you know similar to what you're using in + +00:13:25.320 --> 00:13:32.000 +your zip file a way to shorten things 
by + +00:13:28.839 --> 00:13:36.560 +getting longer uh more frequent things + +00:13:32.000 --> 00:13:39.120 +being inced as a single token um I think + +00:13:36.560 --> 00:13:40.760 +B pair encoding did originally start + +00:13:39.120 --> 00:13:43.720 +like that that's part of the reason why + +00:13:40.760 --> 00:13:45.760 +the encoding uh thing is here I think it + +00:13:43.720 --> 00:13:47.360 +originally started there I haven't read + +00:13:45.760 --> 00:13:49.360 +really deeply into this but I can talk + +00:13:47.360 --> 00:13:53.240 +more about how the next one corresponds + +00:13:49.360 --> 00:13:54.440 +to information Theory and Tuesday I'm + +00:13:53.240 --> 00:13:55.720 +going to talk even more about how + +00:13:54.440 --> 00:13:57.720 +language models correspond to + +00:13:55.720 --> 00:14:00.040 +information theories so we can uh we can + +00:13:57.720 --> 00:14:04.519 +discuss maybe in more detail + +00:14:00.040 --> 00:14:07.639 +to um so the the alternative option is + +00:14:04.519 --> 00:14:10.000 +to use unigram models and unigram models + +00:14:07.639 --> 00:14:12.240 +are the simplest type of language model + +00:14:10.000 --> 00:14:15.079 +I'm going to talk more in detail about + +00:14:12.240 --> 00:14:18.279 +them next time but basically uh the way + +00:14:15.079 --> 00:14:20.759 +it works is you create a model that + +00:14:18.279 --> 00:14:23.600 +generates all word uh words in the + +00:14:20.759 --> 00:14:26.199 +sequence independently sorry I thought I + +00:14:23.600 --> 00:14:26.199 +had a + +00:14:26.320 --> 00:14:31.800 +um I thought I had an equation but + +00:14:28.800 --> 00:14:31.800 +basically the + +00:14:32.240 --> 00:14:35.759 +equation looks + +00:14:38.079 --> 00:14:41.079 +like + +00:14:47.720 --> 00:14:52.120 +this so you say the probability of the + +00:14:50.360 --> 00:14:53.440 +sequence is the product of the + +00:14:52.120 --> 00:14:54.279 +probabilities of each of the words in + +00:14:53.440 --> 00:14:55.959 +the + +00:14:54.279 --> 00:15:00.079 +sequence + +00:14:55.959 --> 00:15:04.079 +and uh then you try to pick a vocabulary + +00:15:00.079 --> 00:15:06.839 +that maximizes the probability of the + +00:15:04.079 --> 00:15:09.320 +Corpus given a fixed vocabulary size so + +00:15:06.839 --> 00:15:10.320 +you try to say okay you get a vocabulary + +00:15:09.320 --> 00:15:14.440 +size of + +00:15:10.320 --> 00:15:16.920 +60,000 how do you um how do you pick the + +00:15:14.440 --> 00:15:19.680 +best 60,000 vocabulary to maximize the + +00:15:16.920 --> 00:15:22.440 +probability of the the Corpus and that + +00:15:19.680 --> 00:15:25.959 +will result in something very similar uh + +00:15:22.440 --> 00:15:27.920 +it will also try to give longer uh + +00:15:25.959 --> 00:15:29.880 +vocabulary uh sorry more common + +00:15:27.920 --> 00:15:32.240 +vocabulary long sequences because that + +00:15:29.880 --> 00:15:35.560 +allows you to to maximize this + +00:15:32.240 --> 00:15:36.959 +objective um the optimization for this + +00:15:35.560 --> 00:15:40.040 +is performed using something called the + +00:15:36.959 --> 00:15:44.440 +EM algorithm where basically you uh + +00:15:40.040 --> 00:15:48.560 +predict the uh the probability of each + +00:15:44.440 --> 00:15:51.600 +token showing up and uh then select the + +00:15:48.560 --> 00:15:53.279 +most common tokens and then trim off the + +00:15:51.600 --> 00:15:54.759 +ones that are less common and then just + +00:15:53.279 --> 00:15:58.120 +do this over and over again until you + 
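+
+Once the vocabulary and its unigram probabilities are fixed, the segmentation step described just below (picking the split of the input that maximizes the product of unigram probabilities) is a small Viterbi-style dynamic program. A sketch with a made-up toy vocabulary:
+
+```python
+import math
+
+# Toy unigram vocabulary with made-up probabilities.
+vocab = {"new": 0.05, "est": 0.04, "newest": 0.02,
+         "n": 0.01, "e": 0.01, "w": 0.01, "s": 0.01, "t": 0.01}
+
+def segment(text):
+    """Return the segmentation maximizing the sum of log probabilities."""
+    n = len(text)
+    best = [0.0] + [-math.inf] * n   # best log-prob of text[:i]
+    back = [0] * (n + 1)             # where the last token starts
+    for i in range(1, n + 1):
+        for j in range(max(0, i - 10), i):      # cap token length at 10
+            piece = text[j:i]
+            if piece in vocab:
+                score = best[j] + math.log(vocab[piece])
+                if score > best[i]:
+                    best[i], back[i] = score, j
+    tokens, i = [], n
+    while i > 0:
+        tokens.append(text[back[i]:i])
+        i = back[i]
+    return tokens[::-1]
+
+print(segment("newest"))   # ['newest'] here; with other probabilities
+                           # it could come out as ['new', 'est']
+```
+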
+00:15:54.759 --> 00:15:59.839 +drop down to the 60,000 token lat so the + +00:15:58.120 --> 00:16:02.040 +details for this are not important for + +00:15:59.839 --> 00:16:04.160 +most people in this class uh because + +00:16:02.040 --> 00:16:07.480 +you're going to just be using a toolkit + +00:16:04.160 --> 00:16:08.880 +that implements this for you um but if + +00:16:07.480 --> 00:16:10.759 +you're interested in this I'm happy to + +00:16:08.880 --> 00:16:14.199 +talk to you about it + +00:16:10.759 --> 00:16:14.199 +yeah is there + +00:16:14.680 --> 00:16:18.959 +problem Oh in unigram models there's a + +00:16:17.199 --> 00:16:20.959 +huge problem with assuming Independence + +00:16:18.959 --> 00:16:22.720 +in language models because then you + +00:16:20.959 --> 00:16:25.120 +could rearrange the order of words in + +00:16:22.720 --> 00:16:26.600 +sentences um that that's something we're + +00:16:25.120 --> 00:16:27.519 +going to talk about in language model + +00:16:26.600 --> 00:16:30.560 +next + +00:16:27.519 --> 00:16:32.839 +time but the the good thing about this + +00:16:30.560 --> 00:16:34.519 +is the EM algorithm requires dynamic + +00:16:32.839 --> 00:16:36.079 +programming in this case and you can't + +00:16:34.519 --> 00:16:37.800 +easily do dynamic programming if you + +00:16:36.079 --> 00:16:40.160 +don't make that + +00:16:37.800 --> 00:16:41.880 +assumptions um and then finally after + +00:16:40.160 --> 00:16:43.560 +you've picked your vocabulary and you've + +00:16:41.880 --> 00:16:45.720 +assigned a probability to each word in + +00:16:43.560 --> 00:16:47.800 +the vocabulary you then find a + +00:16:45.720 --> 00:16:49.639 +segmentation of the input that maximizes + +00:16:47.800 --> 00:16:52.600 +the unigram + +00:16:49.639 --> 00:16:54.880 +probabilities um so this is basically + +00:16:52.600 --> 00:16:56.519 +the idea of what's going on here um I'm + +00:16:54.880 --> 00:16:58.120 +not going to go into a lot of detail + +00:16:56.519 --> 00:17:00.560 +about this because most people are just + +00:16:58.120 --> 00:17:02.279 +going to be users of this algorithm so + +00:17:00.560 --> 00:17:06.240 +it's not super super + +00:17:02.279 --> 00:17:09.400 +important um the one important thing + +00:17:06.240 --> 00:17:11.240 +about this is that there's a library + +00:17:09.400 --> 00:17:15.520 +called sentence piece that's used very + +00:17:11.240 --> 00:17:19.199 +widely in order to build these um in + +00:17:15.520 --> 00:17:22.000 +order to build these subword units and + +00:17:19.199 --> 00:17:23.720 +uh basically what you do is you run the + +00:17:22.000 --> 00:17:27.600 +sentence piece + +00:17:23.720 --> 00:17:30.200 +train uh model or sorry uh program and + +00:17:27.600 --> 00:17:32.640 +that gives you uh you select your vocab + +00:17:30.200 --> 00:17:34.240 +size uh this also this character + +00:17:32.640 --> 00:17:36.120 +coverage is basically how well do you + +00:17:34.240 --> 00:17:39.760 +need to cover all of the characters in + +00:17:36.120 --> 00:17:41.840 +your vocabulary or in your input text um + +00:17:39.760 --> 00:17:45.240 +what model type do you use and then you + +00:17:41.840 --> 00:17:48.640 +run this uh sentence piece en code file + +00:17:45.240 --> 00:17:51.039 +uh to uh encode the output and split the + +00:17:48.640 --> 00:17:54.799 +output and there's also python bindings + +00:17:51.039 --> 00:17:56.240 +available for this and by the one thing + +00:17:54.799 --> 00:17:57.919 +that you should know is by default it + +00:17:56.240 
--> 00:18:00.600 +uses the unigram model but it also + +00:17:57.919 --> 00:18:01.960 +supports EP in my experience it doesn't + +00:18:00.600 --> 00:18:05.159 +make a huge difference about which one + +00:18:01.960 --> 00:18:07.640 +you use the bigger thing is how um how + +00:18:05.159 --> 00:18:10.159 +big is your vocabulary size and if your + +00:18:07.640 --> 00:18:11.880 +vocabulary size is smaller then things + +00:18:10.159 --> 00:18:13.760 +will be more efficient but less + +00:18:11.880 --> 00:18:17.480 +expressive if your vocabulary size is + +00:18:13.760 --> 00:18:21.280 +bigger things will be um will + +00:18:17.480 --> 00:18:23.240 +be more expressive but less efficient + +00:18:21.280 --> 00:18:25.360 +and A good rule of thumb is like + +00:18:23.240 --> 00:18:26.960 +something like 60,000 to 80,000 is + +00:18:25.360 --> 00:18:29.120 +pretty reasonable if you're only doing + +00:18:26.960 --> 00:18:31.320 +English if you're spreading out to + +00:18:29.120 --> 00:18:32.600 +things that do other languages um which + +00:18:31.320 --> 00:18:35.960 +I'll talk about in a second then you + +00:18:32.600 --> 00:18:38.720 +need a much bigger B regular + +00:18:35.960 --> 00:18:40.559 +say so there's two considerations here + +00:18:38.720 --> 00:18:42.440 +two important considerations when using + +00:18:40.559 --> 00:18:46.320 +these models uh the first is + +00:18:42.440 --> 00:18:48.760 +multilinguality as I said so when you're + +00:18:46.320 --> 00:18:50.760 +using um subword + +00:18:48.760 --> 00:18:54.710 +models they're hard to use + +00:18:50.760 --> 00:18:55.840 +multilingually because as I said before + +00:18:54.710 --> 00:18:59.799 +[Music] + +00:18:55.840 --> 00:19:03.799 +they give longer strings to more + +00:18:59.799 --> 00:19:06.520 +frequent strings basically so then + +00:19:03.799 --> 00:19:09.559 +imagine what happens if 50% of your + +00:19:06.520 --> 00:19:11.919 +Corpus is English another 30% of your + +00:19:09.559 --> 00:19:15.400 +Corpus is + +00:19:11.919 --> 00:19:17.200 +other languages written in Latin script + +00:19:15.400 --> 00:19:21.720 +10% is + +00:19:17.200 --> 00:19:25.480 +Chinese uh 5% is cerlic script languages + +00:19:21.720 --> 00:19:27.240 +four 4% is 3% is Japanese and then you + +00:19:25.480 --> 00:19:31.080 +have like + +00:19:27.240 --> 00:19:33.320 +0.01% written in like burmes or + +00:19:31.080 --> 00:19:35.520 +something like that suddenly burmes just + +00:19:33.320 --> 00:19:37.400 +gets chunked up really really tiny + +00:19:35.520 --> 00:19:38.360 +really long sequences and it doesn't + +00:19:37.400 --> 00:19:45.559 +work as + +00:19:38.360 --> 00:19:45.559 +well um so one way that people fix this + +00:19:45.919 --> 00:19:50.520 +um and actually there's a really nice uh + +00:19:48.760 --> 00:19:52.600 +blog post about this called exploring + +00:19:50.520 --> 00:19:53.760 +B's vocabulary which I referenced here + +00:19:52.600 --> 00:19:58.039 +if you're interested in learning more + +00:19:53.760 --> 00:20:02.960 +about that um but one way that people + +00:19:58.039 --> 00:20:05.240 +were around this is if your + +00:20:02.960 --> 00:20:07.960 +actual uh data + +00:20:05.240 --> 00:20:11.559 +distribution looks like this like + +00:20:07.960 --> 00:20:11.559 +English uh + +00:20:17.039 --> 00:20:23.159 +Ty we actually sorry I took out the + +00:20:19.280 --> 00:20:23.159 +Indian languages in my example + +00:20:24.960 --> 00:20:30.159 +apologies + +00:20:27.159 --> 00:20:30.159 +so + +00:20:30.400 --> 
00:20:35.919 +um what you do is you essentially create + +00:20:33.640 --> 00:20:40.000 +a different distribution that like + +00:20:35.919 --> 00:20:43.559 +downweights English a little bit and up + +00:20:40.000 --> 00:20:47.000 +weights up weights all of the other + +00:20:43.559 --> 00:20:49.480 +languages um so that you get more of + +00:20:47.000 --> 00:20:53.159 +other languages when creating so this is + +00:20:49.480 --> 00:20:53.159 +a common work around that you can do for + +00:20:54.200 --> 00:20:59.960 +this um the + +00:20:56.799 --> 00:21:03.000 +second problem with these is + +00:20:59.960 --> 00:21:08.000 +arbitrariness so as you saw in my + +00:21:03.000 --> 00:21:11.240 +example with bpe e s s and t and of + +00:21:08.000 --> 00:21:13.520 +board symbol all have the same probabil + +00:21:11.240 --> 00:21:16.960 +or have the same frequency right so if + +00:21:13.520 --> 00:21:21.520 +we get to that point do we segment es or + +00:21:16.960 --> 00:21:25.039 +do we seg uh EST or do we segment e + +00:21:21.520 --> 00:21:26.559 +s and so this is also a problem and it + +00:21:25.039 --> 00:21:29.000 +actually can affect your results + +00:21:26.559 --> 00:21:30.480 +especially if you like don't have a + +00:21:29.000 --> 00:21:31.760 +really strong vocabulary for the + +00:21:30.480 --> 00:21:33.279 +language you're working in or you're + +00:21:31.760 --> 00:21:37.200 +working in a new + +00:21:33.279 --> 00:21:40.159 +domain and so there's a few workarounds + +00:21:37.200 --> 00:21:41.520 +for this uh one workaround for this is + +00:21:40.159 --> 00:21:44.000 +uh called subword + +00:21:41.520 --> 00:21:46.279 +regularization and the way it works is + +00:21:44.000 --> 00:21:49.400 +instead + +00:21:46.279 --> 00:21:51.640 +of just having a single segmentation and + +00:21:49.400 --> 00:21:54.679 +getting the kind of + +00:21:51.640 --> 00:21:56.200 +maximally probable segmentation or the + +00:21:54.679 --> 00:21:58.480 +one the greedy one that you get out of + +00:21:56.200 --> 00:22:01.360 +BP instead you sample different + +00:21:58.480 --> 00:22:03.000 +segmentations in training time and use + +00:22:01.360 --> 00:22:05.720 +the different segmentations and that + +00:22:03.000 --> 00:22:09.200 +makes your model more robust to this + +00:22:05.720 --> 00:22:10.840 +kind of variation and that's also + +00:22:09.200 --> 00:22:15.679 +actually the reason why sentence piece + +00:22:10.840 --> 00:22:17.919 +was released was through this um subword + +00:22:15.679 --> 00:22:19.559 +regularization paper so that's also + +00:22:17.919 --> 00:22:22.720 +implemented in sentence piece if that's + +00:22:19.559 --> 00:22:22.720 +something you're interested in + +00:22:24.919 --> 00:22:32.520 +trying cool um are there any questions + +00:22:28.480 --> 00:22:32.520 +or discussions about this + +00:22:53.279 --> 00:22:56.279 +yeah + +00:22:56.960 --> 00:22:59.960 +already + +00:23:06.799 --> 00:23:11.080 +yeah so this is a good question um just + +00:23:08.960 --> 00:23:12.760 +to repeat the question it was like let's + +00:23:11.080 --> 00:23:16.080 +say we have a big + +00:23:12.760 --> 00:23:19.640 +multilingual um subword + +00:23:16.080 --> 00:23:23.440 +model and we want to add a new language + +00:23:19.640 --> 00:23:26.240 +in some way uh how can we reuse the + +00:23:23.440 --> 00:23:28.880 +existing model but add a new + +00:23:26.240 --> 00:23:31.080 +language it's a good question if you're + +00:23:28.880 --> 00:23:33.679 +only using it for subord + +00:23:31.080 --> 
+00:22:28.480 --> 00:22:32.520
+cool. um, are there any questions or discussions about this?
+
+00:22:53.279 --> 00:22:56.279
+yeah
+
+00:22:56.960 --> 00:22:59.960
+[inaudible student question]
+
+00:23:06.799 --> 00:23:11.080
+yeah, so this is a good question. um, just
+
+00:23:08.960 --> 00:23:12.760
+to repeat the question, it was: let's
+
+00:23:11.080 --> 00:23:16.080
+say we have a big
+
+00:23:12.760 --> 00:23:19.640
+multilingual subword
+
+00:23:16.080 --> 00:23:23.440
+model, and we want to add a new language
+
+00:23:19.640 --> 00:23:26.240
+in some way — how can we reuse the
+
+00:23:23.440 --> 00:23:28.880
+existing model but add a new
+
+00:23:26.240 --> 00:23:31.080
+language? it's a good question. if you're
+
+00:23:28.880 --> 00:23:33.679
+only using it for subword
+
+00:23:31.080 --> 00:23:36.320
+segmentation, um, one nice thing about
+
+00:23:33.679 --> 00:23:36.320
+the unigram
+
+00:23:36.400 --> 00:23:41.799
+model here is this is kind of a
+
+00:23:38.880 --> 00:23:43.679
+probabilistic model, so it's very easy to
+
+00:23:41.799 --> 00:23:46.360
+do the kind of standard things that we
+
+00:23:43.679 --> 00:23:48.240
+do with probabilistic models, which is:
+
+00:23:46.360 --> 00:23:50.559
+let's say we had an
+
+00:23:48.240 --> 00:23:53.919
+old
+
+00:23:50.559 --> 00:23:56.880
+vocabulary for
+
+00:23:53.919 --> 00:23:59.880
+this. um, we could just
+
+00:23:56.880 --> 00:23:59.880
+interpolate —
+
+00:24:07.159 --> 00:24:12.320
+um, we could interpolate, like
+P(x) = λ P_old(x) + (1 − λ) P_new(x), and
+
+00:24:09.559 --> 00:24:13.840
+just, you know, combine the
+
+00:24:12.320 --> 00:24:17.080
+probabilities of the two, and then use
+
+00:24:13.840 --> 00:24:19.520
+that combined probability in order to
+
+00:24:17.080 --> 00:24:21.320
+segment the new language. um, things like
+
+00:24:19.520 --> 00:24:24.159
+this have been done before, but I
+
+00:24:21.320 --> 00:24:26.159
+don't remember the exact references
+
+00:24:24.159 --> 00:24:30.440
+for them. but that's what I would do
+
+00:24:26.159 --> 00:24:31.960
+here. another interesting thing is —
+
+00:24:30.440 --> 00:24:35.399
+this might be getting a little ahead of
+
+00:24:31.960 --> 00:24:35.399
+myself, but there's
+
+00:24:48.559 --> 00:24:58.279
+a — there's a paper that talks about
+
+00:24:55.360 --> 00:25:00.159
+how you can take things that were trained
+
+00:24:58.279 --> 00:25:03.360
+with another
+
+00:25:00.159 --> 00:25:05.480
+vocabulary, and basically the idea is:
+
+00:25:03.360 --> 00:25:09.320
+you pre-train on whatever languages you
+
+00:25:05.480 --> 00:25:10.679
+have, and then you learn embeddings in
+
+00:25:09.320 --> 00:25:11.880
+the new language — you freeze the body of
+
+00:25:10.679 --> 00:25:14.360
+the model and learn embeddings in the
+
+00:25:11.880 --> 00:25:15.880
+new language. so that's another method
+
+00:25:14.360 --> 00:25:19.080
+that's used; it's from the paper on the
+
+00:25:15.880 --> 00:25:19.080
+cross-lingual transferability of monolingual
+
+00:25:21.840 --> 00:25:26.159
+representations. and I'll probably talk
+
+00:25:23.840 --> 00:25:28.480
+about that in the last class of this
+
+00:25:26.159 --> 00:25:30.720
+course, so you can remember that
+
+00:25:28.480 --> 00:25:33.720
+then. cool. any other
+
+00:25:30.720 --> 00:25:33.720
+questions?
+
+00:25:38.480 --> 00:25:42.640
+yeah — is bag of words a first step to
+
+00:25:41.039 --> 00:25:46.640
+process your data if you want to do
+
+00:25:42.640 --> 00:25:49.919
+generation? um, do you mean like
+
+00:25:46.640 --> 00:25:52.440
+a word-based model or a subword-based
+
+00:25:49.919 --> 00:25:52.440
+model?
+
+00:25:56.679 --> 00:26:00.480
+or, like, is
+
+00:26:02.360 --> 00:26:08.000
+this — so, the subword segmentation is the
+
+00:26:05.919 --> 00:26:10.640
+first step of creating just about any
+
+00:26:08.000 --> 00:26:13.080
+model nowadays. like, every model —
+
+00:26:10.640 --> 00:26:16.600
+every model uses this, and they usually use
+
+00:26:13.080 --> 00:26:21.520
+this to segment either characters or
+
+00:26:16.600 --> 00:26:23.559
+bytes. um, characters are like Unicode code
+
+00:26:21.520 --> 00:26:25.799
+points, so they actually correspond to an
+
+00:26:23.559 --> 00:26:28.279
+actual visual character; and then bytes —
+
+00:26:25.799 --> 00:26:31.120
+many Unicode characters are
+
+00:26:28.279 --> 00:26:35.000
+multiple bytes; like, a Chinese character is
+
+00:26:31.120 --> 00:26:37.159
+three bytes, if I remember correctly.
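A quick check of that claim in plain Python — each character below is one Unicode code point, but its UTF-8 encoding has a different number of bytes:

```python
for ch in ("a", "é", "猫"):                 # Latin, accented Latin, Chinese
    encoded = ch.encode("utf-8")
    print(ch, len(encoded), list(encoded))
# "a" is 1 byte, "é" is 2 bytes, and the Chinese character is 3 bytes,
# so a byte-level tokenizer sees longer sequences than a character-level one.
```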
+00:26:35.000 --> 00:26:38.640
+so, um, the byte-based segmentation is nice, because
+
+00:26:37.159 --> 00:26:41.240
+you don't even need to worry about Unicode;
+
+00:26:38.640 --> 00:26:43.880
+you can just — you can
+
+00:26:41.240 --> 00:26:45.640
+just segment the Pile, like, literally as
+
+00:26:43.880 --> 00:26:49.440
+is, and so a lot of people do it that way
+
+00:26:45.640 --> 00:26:53.279
+too. uh, Llama, as far as I know, is
+
+00:26:49.440 --> 00:26:55.720
+bytes; I believe GPT is also bytes. um, but
+
+00:26:53.279 --> 00:26:58.799
+previous to, like, three or four years
+
+00:26:55.720 --> 00:27:02.799
+ago, people used characters.
+
+00:26:58.799 --> 00:27:05.000
+cool. um, okay, so this is really, really
+
+00:27:02.799 --> 00:27:05.919
+important. it's not, like, super complex,
+
+00:27:05.000 --> 00:27:09.760
+and
+
+00:27:05.919 --> 00:27:13.039
+practically, you will just maybe
+
+00:27:09.760 --> 00:27:15.840
+train, or maybe just use, a tokenizer. um,
+
+00:27:13.039 --> 00:27:18.559
+but that's an important thing to
+
+00:27:15.840 --> 00:27:20.760
+be aware of. cool. uh, next I'd like to move on to
+
+00:27:18.559 --> 00:27:24.399
+continuous word embeddings.
+
+00:27:20.760 --> 00:27:26.720
+so, the basic idea is that previously we
+
+00:27:24.399 --> 00:27:28.240
+represented words with a sparse vector
+
+00:27:26.720 --> 00:27:30.120
+with a single one,
+
+00:27:28.240 --> 00:27:31.960
+also known as a one-hot vector. so it
+
+00:27:30.120 --> 00:27:35.720
+looked a little bit like
+
+00:27:31.960 --> 00:27:37.640
+this. and instead, what continuous word
+
+00:27:35.720 --> 00:27:39.640
+embeddings do is they look up a dense
+
+00:27:37.640 --> 00:27:42.320
+vector, and so you get a dense
+
+00:27:39.640 --> 00:27:45.760
+representation where the entire vector
+
+00:27:42.320 --> 00:27:45.760
+has continuous values in
+
+00:27:46.000 --> 00:27:51.919
+it. and I talked about a bag of words
+
+00:27:49.200 --> 00:27:54.320
+model, but we could also create a
+
+00:27:51.919 --> 00:27:58.360
+continuous bag of words model, and the
+
+00:27:54.320 --> 00:28:01.159
+way this works is: you look up the
+
+00:27:58.360 --> 00:28:03.720
+embeddings of
+
+00:28:01.159 --> 00:28:06.320
+each word; this gives you an embedding
+
+00:28:03.720 --> 00:28:08.440
+vector for the entire sequence; and then
+
+00:28:06.320 --> 00:28:15.120
+you multiply this by a weight
+
+00:28:08.440 --> 00:28:17.559
+matrix, where the
+
+00:28:15.120 --> 00:28:19.960
+rows of the weight matrix
+
+00:28:17.559 --> 00:28:22.919
+correspond to the size of this
+
+00:28:19.960 --> 00:28:24.760
+continuous embedding, and the columns of
+
+00:28:22.919 --> 00:28:28.320
+the weight matrix correspond to
+
+00:28:24.760 --> 00:28:30.919
+the overall
+
+00:28:28.320 --> 00:28:32.559
+number of labels that
+
+00:28:30.919 --> 00:28:36.919
+you would have here. and then that would
+
+00:28:32.559 --> 00:28:40.120
+give you the scores.
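To make that concrete, a minimal PyTorch sketch of this continuous bag-of-words scorer (the sizes and the toy word IDs below are made up for illustration):

```python
import torch
import torch.nn as nn

vocab_size, emb_size, num_labels = 10000, 64, 5

emb = nn.Embedding(vocab_size, emb_size)   # dense embedding lookup
W = nn.Linear(emb_size, num_labels)        # weight matrix over the summed embedding

word_ids = torch.tensor([42, 7, 1337])     # a toy input sequence
sent_vec = emb(word_ids).sum(dim=0)        # one embedding vector for the whole sequence
scores = W(sent_vec)                       # one score per label
print(scores.shape)                        # torch.Size([5])
```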
+00:28:36.919 --> 00:28:41.679
+and so basically, what this is saying is: each vector now,
+
+00:28:40.120 --> 00:28:43.440
+instead of having a single thing that
+
+00:28:41.679 --> 00:28:46.799
+represents which vocabulary item you're
+
+00:28:43.440 --> 00:28:48.679
+looking at — you would kind of hope
+
+00:28:46.799 --> 00:28:52.120
+that you would get vectors where words
+
+00:28:48.679 --> 00:28:54.919
+that are similar,
+
+00:28:52.120 --> 00:28:57.760
+by some notion of similarity — like
+
+00:28:54.919 --> 00:28:59.679
+syntax, semantics, whether they're in
+
+00:28:57.760 --> 00:29:03.120
+the same language or not — are close in
+
+00:28:59.679 --> 00:29:06.679
+the vector space, and each vector element
+
+00:29:03.120 --> 00:29:09.399
+is a feature. so, for example, each
+
+00:29:06.679 --> 00:29:11.519
+vector element corresponds to: is this an
+
+00:29:09.399 --> 00:29:14.960
+animate object, or is this a positive
+
+00:29:11.519 --> 00:29:17.399
+word, or other things like
+
+00:29:14.960 --> 00:29:19.399
+that. so, just to give an example here —
+
+00:29:17.399 --> 00:29:21.760
+this is totally made up, I just made it
+
+00:29:19.399 --> 00:29:24.360
+in Keynote, so it's not a natural vector
+
+00:29:21.760 --> 00:29:26.279
+space — but to illustrate the concept,
+
+00:29:24.360 --> 00:29:27.960
+I showed here: what if we had a
+
+00:29:26.279 --> 00:29:30.240
+two-dimensional vector
+
+00:29:27.960 --> 00:29:33.399
+space, where in the two-dimensional vector
+
+00:29:30.240 --> 00:29:36.240
+space, the x-axis here corresponds to
+
+00:29:33.399 --> 00:29:38.679
+whether it's animate or not, and the
+
+00:29:36.240 --> 00:29:41.480
+y-axis here corresponds to whether
+
+00:29:38.679 --> 00:29:44.080
+it's, like, positive sentiment or not. and
+
+00:29:41.480 --> 00:29:46.399
+so this is kind of like our ideal
+
+00:29:44.080 --> 00:29:49.799
+goal
+
+00:29:46.399 --> 00:29:52.279
+here. um, so why would we want to do this? yeah —
+
+00:29:49.799 --> 00:29:52.279
+sorry.
+
+00:29:56.320 --> 00:30:03.399
+[inaudible student question about the
+
+00:30:00.919 --> 00:30:06.399
+vector entries]
+
+00:30:03.399 --> 00:30:06.399
+yep.
+
+00:30:07.200 --> 00:30:12.519
+like — so, what would the four entries be
+
+00:30:09.880 --> 00:30:14.799
+here? the four entries here are learned,
+
+00:30:12.519 --> 00:30:17.039
+so they are, um — they're learned just
+
+00:30:14.799 --> 00:30:18.519
+together with the model, and I'm going
+
+00:30:17.039 --> 00:30:22.120
+to talk about exactly how we learn them
+
+00:30:18.519 --> 00:30:24.000
+soon. but the final goal is that,
+
+00:30:22.120 --> 00:30:25.399
+after learning has happened, they
+
+00:30:24.000 --> 00:30:26.799
+have these two properties: like,
+
+00:30:25.399 --> 00:30:28.600
+similar words are close together in the
+
+00:30:26.799 --> 00:30:30.080
+vector space —
+
+00:30:28.600 --> 00:30:32.640
+and
+
+00:30:30.080 --> 00:30:35.679
+um, that's, like, number one; that's the
+
+00:30:32.640 --> 00:30:37.600
+most important — and then number two is:
+
+00:30:35.679 --> 00:30:39.279
+ideally, these features would have
+
+00:30:37.600 --> 00:30:41.200
+some meaning — maybe human-
+
+00:30:39.279 --> 00:30:44.720
+interpretable meaning, maybe not human-
+
+00:30:41.200 --> 00:30:47.880
+interpretable meaning — but,
+
+00:30:44.720 --> 00:30:50.880
+yeah. so, um, one thing that I should
+
+00:30:47.880 --> 00:30:53.159
+mention is: I showed a contrast between
+
+00:30:50.880 --> 00:30:55.159
+the bag of words — the one-hot
+
+00:30:53.159 --> 00:30:57.000
+representations here — and the dense
+
+00:30:55.159 --> 00:31:00.880
+representations here, and I used this
+
+00:30:57.000 --> 00:31:03.880
+lookup operation for both of them.
+
+00:31:00.880 --> 00:31:07.399
+and this lookup
+
+00:31:03.880 --> 00:31:09.559
+operation actually can be viewed as
+
+00:31:07.399 --> 00:31:11.799
+grabbing a single vector from a big
+
+00:31:09.559 --> 00:31:14.919
+matrix of word
+
+00:31:11.799 --> 00:31:17.760
+embeddings. and
+
+00:31:14.919 --> 00:31:19.760
+so, the way it can work is, like, we have
+
+00:31:17.760 --> 00:31:22.919
+this big matrix, and then we look up word
+
+00:31:19.760 --> 00:31:25.919
+number two in a zero-indexed matrix, and it
+
+00:31:22.919 --> 00:31:27.799
+would just grab this row out of that matrix.
+
+00:31:25.919 --> 00:31:29.880
+and that's practically what most, like,
+
+00:31:27.799 --> 00:31:32.240
+deep learning libraries — or whatever
+
+00:31:29.880 --> 00:31:35.840
+library you use — are going to be
+
+00:31:32.240 --> 00:31:38.000
+doing. but another way you can view it
+
+00:31:35.840 --> 00:31:40.880
+is: you can view it as multiplying by a
+
+00:31:38.000 --> 00:31:43.880
+one-hot vector. and so you have
+
+00:31:40.880 --> 00:31:48.679
+exactly the same matrix, but
+
+00:31:43.880 --> 00:31:50.799
+you just multiply it by a vector [0, 1, 0, 0],
+
+00:31:48.679 --> 00:31:55.720
+and that gives you exactly the same
+
+00:31:50.799 --> 00:31:58.200
+thing. um, so the practical
+
+00:31:55.720 --> 00:31:59.720
+implementations of this tend to be
+
+00:31:58.200 --> 00:32:01.279
+the first one, because the first one's a
+
+00:31:59.720 --> 00:32:04.679
+lot faster to implement — you don't need
+
+00:32:01.279 --> 00:32:06.760
+to multiply this big matrix by a
+
+00:32:04.679 --> 00:32:11.000
+huge vector. but there
+
+00:32:06.760 --> 00:32:13.880
+are advantages to knowing the second one.
+
+00:32:11.000 --> 00:32:15.519
+uh, just to give an example: what if,
+
+00:32:13.880 --> 00:32:19.600
+for whatever reason, you came up with,
+
+00:32:15.519 --> 00:32:21.440
+like, a crazy model that predicts a
+
+00:32:19.600 --> 00:32:24.120
+probability distribution over words
+
+00:32:21.440 --> 00:32:25.720
+instead of just words? maybe it's a
+
+00:32:24.120 --> 00:32:27.679
+language model that has an idea of what
+
+00:32:25.720 --> 00:32:30.200
+the next word is going to look like,
+
+00:32:27.679 --> 00:32:32.159
+and maybe your model
+
+00:32:30.200 --> 00:32:35.279
+thinks the next word has a 50%
+
+00:32:32.159 --> 00:32:36.600
+probability of being 'cat', a 30%
+
+00:32:35.279 --> 00:32:42.279
+probability of being
+
+00:32:36.600 --> 00:32:44.960
+'dog', and — sorry — a
+
+00:32:42.279 --> 00:32:47.200
+20% probability of being
+
+00:32:44.960 --> 00:32:50.000
+'bird'. you can take this vector and
+
+00:32:47.200 --> 00:32:51.480
+multiply it by the matrix and get, like, a
+
+00:32:50.000 --> 00:32:53.639
+word embedding that's kind of a mix of
+
+00:32:51.480 --> 00:32:55.639
+all of those words, which might be
+
+00:32:53.639 --> 00:32:57.960
+interesting and let you do creative
+
+00:32:55.639 --> 00:33:02.120
+things. so, um, knowing that these two
+
+00:32:57.960 --> 00:33:05.360
+things are the same is kind
+
+00:33:02.120 --> 00:33:05.360
+of useful for that kind of
+
+00:33:05.919 --> 00:33:11.480
+thing.
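A small sketch of that equivalence (toy sizes; `E` is a hypothetical embedding matrix):

```python
import torch

E = torch.randn(4, 3)                    # 4 words, 3-dimensional embeddings

# view 1: index a row directly (what deep learning libraries actually do)
v1 = E[2]

# view 2: multiply a one-hot vector by the matrix
one_hot = torch.tensor([0.0, 0.0, 1.0, 0.0])
v2 = one_hot @ E
assert torch.allclose(v1, v2)

# the second view generalizes: a probability distribution over words
# gives a weighted mix of their embeddings
p = torch.tensor([0.5, 0.3, 0.2, 0.0])   # e.g. cat / dog / bird / ...
mixed = p @ E
```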
+00:33:09.120 --> 00:33:13.919
+um, any questions about this? I'm going to talk about how we train next, so
+
+00:33:11.480 --> 00:33:18.159
+maybe I can go into
+
+00:33:13.919 --> 00:33:23.159
+that. okay, cool. so how do we get the
+
+00:33:18.159 --> 00:33:25.840
+vectors — like the question asked? so, up
+
+00:33:23.159 --> 00:33:27.519
+until now, we trained a bag of words
+
+00:33:25.840 --> 00:33:29.080
+model, and the way we trained a bag of
+
+00:33:27.519 --> 00:33:31.159
+words model was using the structured
+
+00:33:29.080 --> 00:33:35.440
+perceptron algorithm, where, if the model
+
+00:33:31.159 --> 00:33:39.639
+got the answer wrong, we would either
+
+00:33:35.440 --> 00:33:42.799
+increment or decrement the weights
+
+00:33:39.639 --> 00:33:45.080
+based on whether the label
+
+00:33:42.799 --> 00:33:46.559
+was positive or negative, right? so I
+
+00:33:45.080 --> 00:33:48.919
+showed an example of this very simple
+
+00:33:46.559 --> 00:33:51.039
+algorithm — you don't even need to
+
+00:33:48.919 --> 00:33:52.480
+write any, like, numpy or anything like
+
+00:33:51.039 --> 00:33:55.919
+that to implement that
+
+00:33:52.480 --> 00:33:59.559
+algorithm. uh, so here it is: we
+
+00:33:55.919 --> 00:34:02.320
+have, like, `for x, y in data`, we extract
+
+00:33:59.559 --> 00:34:04.639
+the features, we run the classifier, we
+
+00:34:02.320 --> 00:34:07.440
+have the predicted y, and then we
+
+00:34:04.639 --> 00:34:09.480
+increment or decrement
+
+00:34:07.440 --> 00:34:12.679
+features. but how do we train more
+
+00:34:09.480 --> 00:34:15.599
+complex models? so I think most people
+
+00:34:12.679 --> 00:34:17.079
+here have taken a machine learning
+
+00:34:15.599 --> 00:34:19.159
+class of some kind, so this will be
+
+00:34:17.079 --> 00:34:21.079
+review for a lot of people, but
+
+00:34:19.159 --> 00:34:22.280
+basically, we do this by doing
+
+00:34:21.079 --> 00:34:24.839
+gradient
+
+00:34:22.280 --> 00:34:27.240
+descent. and in order to do so, we write
+
+00:34:24.839 --> 00:34:29.919
+down a loss function, calculate the
+
+00:34:27.240 --> 00:34:30.919
+derivatives of the loss function with
+
+00:34:29.919 --> 00:34:35.079
+respect to the
+
+00:34:30.919 --> 00:34:37.320
+parameters, and move the parameters in
+
+00:34:35.079 --> 00:34:40.839
+the direction that reduces the loss
+
+00:34:37.320 --> 00:34:42.720
+function. and so, specifically for this bag
+
+00:34:40.839 --> 00:34:45.560
+of words or continuous bag of words
+
+00:34:42.720 --> 00:34:48.240
+model, um, we want this loss function
+
+00:34:45.560 --> 00:34:50.839
+to be a loss function that gets lower as
+
+00:34:48.240 --> 00:34:52.240
+the model gets better. and I'm going to
+
+00:34:50.839 --> 00:34:54.000
+give two examples from binary
+
+00:34:52.240 --> 00:34:57.400
+classification; both of these are used in
+
+00:34:54.000 --> 00:34:58.839
+NLP models reasonably frequently.
+
+00:34:57.400 --> 00:35:01.440
+uh, there's a bunch of other loss
+
+00:34:58.839 --> 00:35:02.800
+functions, but these are kind of the two
+
+00:35:01.440 --> 00:35:05.480
+major
+
+00:35:02.800 --> 00:35:08.160
+ones. so, the first one, um, which is
+
+00:35:05.480 --> 00:35:10.160
+actually less frequent, is the hinge loss,
+
+00:35:08.160 --> 00:35:13.400
+and then the second one is taking a
+
+00:35:10.160 --> 00:35:15.800
+sigmoid and then doing negative log
+
+00:35:13.400 --> 00:35:19.760
+likelihood. so, the hinge loss: basically,
+
+00:35:15.800 --> 00:35:22.760
+what we do is we take the max of zero and
+
+00:35:19.760 --> 00:35:26.119
+the negative of the label times the score
+
+00:35:22.760 --> 00:35:29.200
+that is output by the model. and what this looks
+
+00:35:26.119 --> 00:35:33.480
+like is: we have a hinge loss where,
+
+00:35:29.200 --> 00:35:36.880
+if y is equal to one, the loss is zero as long as the score is
+
+00:35:33.480 --> 00:35:39.520
+greater than zero. so, basically, as long as
+
+00:35:36.880 --> 00:35:42.680
+we get the
+
+00:35:39.520 --> 00:35:45.079
+answer right, there's no loss, and as the
+
+00:35:42.680 --> 00:35:47.400
+answer gets
more wrong, the loss gets
+
+00:35:45.079 --> 00:35:49.880
+worse, like this. and then, similarly, if
+
+00:35:47.400 --> 00:35:53.160
+the label is negative: if we get a
+
+00:35:49.880 --> 00:35:54.839
+negative score, then we get zero loss,
+
+00:35:53.160 --> 00:35:55.800
+and the loss increases if we have a
+
+00:35:54.839 --> 00:35:58.800
+positive
+
+00:35:55.800 --> 00:36:00.800
+score. so, the sigmoid plus negative log
+
+00:35:58.800 --> 00:36:05.440
+likelihood: the way this works is, you
+
+00:36:00.800 --> 00:36:07.400
+multiply y times the score here, and then
+
+00:36:05.440 --> 00:36:09.960
+we have the sigmoid function, which is
+
+00:36:07.400 --> 00:36:14.079
+just kind of a nice function that looks
+
+00:36:09.960 --> 00:36:15.440
+like this, going between zero and one, centered
+
+00:36:14.079 --> 00:36:19.480
+around
+
+00:36:15.440 --> 00:36:21.240
+zero. and then we take the negative log
+
+00:36:19.480 --> 00:36:22.319
+of this sigmoid function — the negative
+
+00:36:21.240 --> 00:36:27.160
+log
+
+00:36:22.319 --> 00:36:28.520
+likelihood — and that gives us a loss that
+
+00:36:27.160 --> 00:36:30.440
+looks a little bit like this. so,
+
+00:36:28.520 --> 00:36:32.640
+basically, you can see that these look
+
+00:36:30.440 --> 00:36:36.040
+very similar, right? the difference being
+
+00:36:32.640 --> 00:36:37.760
+that the hinge loss is sharp, and we
+
+00:36:36.040 --> 00:36:41.119
+get exactly a zero loss if we get the
+
+00:36:37.760 --> 00:36:44.319
+answer right, and the sigmoid is smooth,
+
+00:36:41.119 --> 00:36:48.440
+and we never get a zero loss.
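The two losses side by side, as a minimal sketch using the max(0, −y·s) and −log σ(y·s) forms just described:

```python
import torch

def hinge_loss(score, y):        # y in {-1, +1}
    return torch.clamp(-y * score, min=0.0)

def sigmoid_nll(score, y):       # negative log likelihood of the sigmoid
    return -torch.log(torch.sigmoid(y * score))

s = torch.tensor([-2.0, 0.0, 2.0])
for y in (1.0, -1.0):
    print(y, hinge_loss(s, y), sigmoid_nll(s, y))
# the hinge loss is exactly zero once the answer is right;
# the sigmoid-NLL is smooth and never exactly zero.
```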
+00:36:44.319 --> 00:36:50.680
+um, so does anyone have an idea of
+
+00:36:48.440 --> 00:36:53.119
+the benefits and disadvantages of
+
+00:36:50.680 --> 00:36:55.680
+these? I kind of flashed one on the
+
+00:36:53.119 --> 00:36:57.599
+screen already,
+
+00:36:55.680 --> 00:36:59.400
+but —
+
+00:36:57.599 --> 00:37:01.359
+so, I flashed that on the screen, so I'll
+
+00:36:59.400 --> 00:37:03.680
+give this one, and then I can have a quiz
+
+00:37:01.359 --> 00:37:06.319
+about the sigmoid. but the hinge loss
+
+00:37:03.680 --> 00:37:07.720
+is more closely linked to accuracy, and
+
+00:37:06.319 --> 00:37:10.400
+the reason why it's more closely linked
+
+00:37:07.720 --> 00:37:13.640
+to accuracy is because, basically, we will
+
+00:37:10.400 --> 00:37:16.079
+get a zero loss if the model gets the
+
+00:37:13.640 --> 00:37:18.319
+answer right. so, when the model gets all
+
+00:37:16.079 --> 00:37:20.240
+of the answers right, we will just stop
+
+00:37:18.319 --> 00:37:22.760
+updating our model whatsoever, because we
+
+00:37:20.240 --> 00:37:25.440
+don't have any loss whatsoever,
+
+00:37:22.760 --> 00:37:27.720
+and the gradient of the loss is zero. um,
+
+00:37:25.440 --> 00:37:29.960
+what about the sigmoid plus negative log
+
+00:37:27.720 --> 00:37:33.160
+likelihood? there are kind of two
+
+00:37:29.960 --> 00:37:36.160
+major advantages of this. anyone want to
+
+00:37:33.160 --> 00:37:36.160
+review their machine learning
+
+00:37:38.240 --> 00:37:41.800
+class? sorry, what was
+
+00:37:43.800 --> 00:37:49.960
+that? for ROC? uh, yeah — maybe there's a
+
+00:37:48.200 --> 00:37:51.319
+more direct — I think I know what you're
+
+00:37:49.960 --> 00:37:54.560
+saying, but maybe there's a more direct
+
+00:37:51.319 --> 00:37:54.560
+way to say that. um —
+
+00:37:54.839 --> 00:38:00.760
+yeah. yeah, so the gradient is nonzero
+
+00:37:57.560 --> 00:38:04.240
+everywhere, and the gradient also kind
+of increases as your score gets worse. so
+
+00:38:04.240 --> 00:38:08.440
+that's one advantage: it makes
+
+00:38:05.839 --> 00:38:11.240
+it easier to optimize models. um, another
+
+00:38:08.440 --> 00:38:13.839
+one — linked to the ROC score — but maybe we
+
+00:38:11.240 --> 00:38:13.839
+could say it more
+
+00:38:16.119 --> 00:38:19.400
+directly. any
+
+00:38:20.040 --> 00:38:26.920
+ideas? okay. um, basically, the sigmoid can
+
+00:38:23.240 --> 00:38:30.160
+be interpreted as a probability. so, um,
+
+00:38:26.920 --> 00:38:32.839
+the sigmoid is between zero and one,
+
+00:38:30.160 --> 00:38:34.640
+and because it's between zero and one,
+
+00:38:32.839 --> 00:38:36.720
+we can say the sigmoid is a
+
+00:38:34.640 --> 00:38:38.640
+probability. um, and that can be useful
+
+00:38:36.720 --> 00:38:40.119
+for various things, like if we want a
+
+00:38:38.640 --> 00:38:41.960
+downstream model, or if we want a
+
+00:38:40.119 --> 00:38:45.480
+confidence prediction out of the model.
+
+00:38:41.960 --> 00:38:48.200
+so those are two advantages of using
+
+00:38:45.480 --> 00:38:49.920
+a sigmoid plus negative log likelihood; there's
+
+00:38:48.200 --> 00:38:53.160
+no probabilistic interpretation to
+
+00:38:49.920 --> 00:38:56.560
+something trained with the hinge loss,
+
+00:38:53.160 --> 00:38:59.200
+basically. cool. um, so the next thing that
+
+00:38:56.560 --> 00:39:01.240
+we do is we calculate derivatives —
+
+00:38:59.200 --> 00:39:04.040
+we calculate the derivative of the
+
+00:39:01.240 --> 00:39:05.920
+loss function with respect to the parameters. um, to
+
+00:39:04.040 --> 00:39:09.839
+give an example with the bag of words
+
+00:39:05.920 --> 00:39:13.480
+model and the hinge loss: um, the hinge
+
+00:39:09.839 --> 00:39:16.480
+loss, as I said, is the max of zero and
+
+00:39:13.480 --> 00:39:19.359
+negative y times the score. in the bag of words model,
+
+00:39:16.480 --> 00:39:22.640
+the score was the frequency of that
+
+00:39:19.359 --> 00:39:25.880
+vocabulary item in the input, multiplied
+
+00:39:22.640 --> 00:39:27.680
+by the weight here. and so this is
+
+00:39:25.880 --> 00:39:29.520
+a simple enough function that I can just do
+
+00:39:27.680 --> 00:39:34.440
+the derivative by hand, and if I do the
+
+00:39:29.520 --> 00:39:36.920
+derivative by hand, what comes out is: if negative y times
+
+00:39:34.440 --> 00:39:39.319
+this value is greater than zero — so, in
+
+00:39:36.920 --> 00:39:44.640
+other words, if this max picks that term
+
+00:39:39.319 --> 00:39:48.319
+instead of zero — then the derivative is negative y
+
+00:39:44.640 --> 00:39:52.359
+times the frequency, and otherwise it
+
+00:39:48.319 --> 00:39:52.359
+is zero; so the update moves the weight in the opposite
+
+00:39:55.400 --> 00:40:00.160
+direction of that gradient.
+
+00:39:56.920 --> 00:40:02.839
+then — optimizing with gradients: we do
+
+00:40:00.160 --> 00:40:06.200
+standard — in standard stochastic
+
+00:40:02.839 --> 00:40:07.839
+gradient descent, which is the most
+
+00:40:06.200 --> 00:40:10.920
+standard optimization algorithm for
+
+00:40:07.839 --> 00:40:14.440
+these models, we basically have a
+
+00:40:10.920 --> 00:40:17.440
+gradient — you take the gradient
+
+00:40:14.440 --> 00:40:20.040
+of the loss function with respect to the parameters,
+
+00:40:17.440 --> 00:40:22.480
+and we call it g_t. so here — um, sorry, I
+
+00:40:20.040 --> 00:40:25.599
+switched my terminology between w and
+
+00:40:22.480 --> 00:40:28.280
+theta, so this could be w, the previous
+
+00:40:25.599 --> 00:40:31.000
+value of w —
+
+00:40:28.280 --> 00:40:35.440
+um, and this is the gradient of the loss,
+
+00:40:31.000 --> 00:40:37.040
+and
then we take the previous value,
+
+00:40:35.440 --> 00:40:39.680
+and then we subtract out the learning
+
+00:40:37.040 --> 00:40:39.680
+rate times the
+
+00:40:40.680 --> 00:40:45.720
+gradient: θ_t ← θ_{t−1} − η g_t. and there are many, many
+
+00:40:43.200 --> 00:40:47.280
+other optimization options — I'll cover
+
+00:40:45.720 --> 00:40:50.960
+the most frequent one, called Adam, at the
+
+00:40:47.280 --> 00:40:54.319
+end of this lecture — but this
+
+00:40:50.960 --> 00:40:57.160
+is the basic way of optimizing the
+
+00:40:54.319 --> 00:41:00.599
+model. so,
+
+00:40:57.160 --> 00:41:03.359
+then, my question now is: what is this
+
+00:41:00.599 --> 00:41:07.000
+algorithm, with respect
+
+00:41:03.359 --> 00:41:10.119
+to that? this is an algorithm that
+
+00:41:07.000 --> 00:41:12.280
+has a loss function, it's
+
+00:41:10.119 --> 00:41:14.079
+calculating derivatives, and it's
+
+00:41:12.280 --> 00:41:17.240
+optimizing using stochastic
+
+00:41:14.079 --> 00:41:18.839
+gradient descent. so, does anyone have a
+
+00:41:17.240 --> 00:41:20.960
+guess about what the loss function is
+
+00:41:18.839 --> 00:41:23.520
+here, and maybe what the learning rate
+
+00:41:20.960 --> 00:41:23.520
+of stochastic
+
+00:41:24.319 --> 00:41:29.480
+gradient descent is? I kind of gave you a hint about
+
+00:41:26.599 --> 00:41:29.480
+the loss one,
+
+00:41:31.640 --> 00:41:37.839
+actually. and just to recap what this is
+
+00:41:34.440 --> 00:41:41.440
+doing here: if the predicted y is
+
+00:41:37.839 --> 00:41:44.560
+not equal to y, then it is moving the
+
+00:41:41.440 --> 00:41:48.240
+feature weights in the direction of y
+
+00:41:44.560 --> 00:41:48.240
+times the frequency
+
+00:41:52.599 --> 00:41:56.960
+vector.
+
+00:41:55.240 --> 00:41:59.079
+yeah —
+
+00:41:56.960 --> 00:42:01.640
+yeah, exactly. so, the loss function is the
+
+00:41:59.079 --> 00:42:05.800
+hinge loss, and the learning rate is one.
+
+00:42:01.640 --> 00:42:07.880
+um, and just to show how that, you know,
+
+00:42:05.800 --> 00:42:12.359
+corresponds: we have this if-statement
+
+00:42:07.880 --> 00:42:12.359
+here, and we have the increment of the
+
+00:42:12.960 --> 00:42:20.240
+features, and this is what the — um, what
+
+00:42:16.920 --> 00:42:21.599
+the — sorry, the derivative looked like.
+
+00:42:20.240 --> 00:42:24.240
+so, we have:
+
+00:42:21.599 --> 00:42:26.920
+if the model got the answer wrong for
+
+00:42:24.240 --> 00:42:29.520
+the label, then we increment or decrement;
+
+00:42:26.920 --> 00:42:31.599
+otherwise, we do nothing. so,
+
+00:42:29.520 --> 00:42:33.559
+basically, you can see that even this
+
+00:42:31.599 --> 00:42:35.200
+really simple algorithm that I, you know,
+
+00:42:33.559 --> 00:42:37.480
+implemented with a few lines of Python
+
+00:42:35.200 --> 00:42:38.839
+is essentially equivalent to this
+
+00:42:37.480 --> 00:42:40.760
+stochastic gradient descent that we're
+
+00:42:38.839 --> 00:42:44.559
+doing when we train
+
+00:42:40.760 --> 00:42:46.359
+models.
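A small sketch of that equivalence, on a toy binary classifier over frequency (bag-of-words) features: the perceptron update is exactly an SGD step on the hinge loss with learning rate 1 (ties at a score of exactly zero aside).

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)
freq = np.array([1.0, 0.0, 2.0, 0.0, 1.0])   # toy feature counts
y = -1.0                                      # true label in {-1, +1}

# perceptron-style update: only when the prediction is wrong
w_perceptron = w.copy()
if np.sign(w_perceptron @ freq) != y:
    w_perceptron += y * freq

# SGD on the hinge loss max(0, -y * (w . freq)) with learning rate 1
w_sgd = w.copy()
grad = -y * freq if -y * (w_sgd @ freq) > 0 else np.zeros_like(freq)
w_sgd -= 1.0 * grad

assert np.allclose(w_perceptron, w_sgd)
```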
+00:42:44.559 --> 00:42:48.359
+so, the good news about this is:
+
+00:42:46.359 --> 00:42:50.599
+you know, this is really simple, but
+
+00:42:48.359 --> 00:42:55.400
+it only really works for, like, a bag of
+
+00:42:50.599 --> 00:42:57.200
+words model or a simple feature-based
+
+00:42:55.400 --> 00:43:00.440
+model. uh, but it opens up a lot of new
+
+00:42:57.200 --> 00:43:01.599
+possibilities for how we can optimize
+
+00:43:00.440 --> 00:43:04.839
+models. and in particular, I mentioned
+
+00:43:01.599 --> 00:43:08.200
+that there was a problem with
+combination features last class: like,
+
+00:43:04.839 --> 00:43:11.200
+'don't hate' and 'don't love' are not just,
+
+00:43:08.200 --> 00:43:12.760
+you know, 'hate' plus 'don't' and 'love' plus
+
+00:43:11.200 --> 00:43:14.119
+'don't' — it's actually the combination of
+
+00:43:12.760 --> 00:43:17.680
+the two that is really
+
+00:43:14.119 --> 00:43:20.160
+important. and so, um, yeah, just to give an
+
+00:43:17.680 --> 00:43:23.440
+example: we have 'don't love' is maybe bad,
+
+00:43:20.160 --> 00:43:26.960
+'nothing I don't love' is very
+
+00:43:23.440 --> 00:43:30.960
+good. and so, in order
+
+00:43:26.960 --> 00:43:34.040
+to solve this problem, we turn to neural
+
+00:43:30.960 --> 00:43:37.160
+networks. and the way we do this is: we
+
+00:43:34.040 --> 00:43:39.119
+have a lookup of dense embeddings — sorry,
+
+00:43:37.160 --> 00:43:41.839
+I actually — I just realized my coloring
+
+00:43:39.119 --> 00:43:44.119
+is off: I was using red to indicate dense
+
+00:43:41.839 --> 00:43:46.480
+embeddings, so this should be maybe red
+
+00:43:44.119 --> 00:43:49.319
+instead of blue — but, um, we take these
+
+00:43:46.480 --> 00:43:51.200
+dense embeddings, and then we create
+
+00:43:49.319 --> 00:43:53.720
+some complicated function to extract
+
+00:43:51.200 --> 00:43:55.079
+combination features, um, and then use
+
+00:43:53.720 --> 00:43:57.359
+those to calculate
+
+00:43:55.079 --> 00:44:02.200
+scores.
+
+00:43:57.359 --> 00:44:04.480
+um, and so we calculate these combination
+
+00:44:02.200 --> 00:44:08.240
+features, and what we want to do is: we
+
+00:44:04.480 --> 00:44:12.880
+want to extract vectors from the input
+
+00:44:08.240 --> 00:44:12.880
+where each vector has features —
+
+00:44:15.839 --> 00:44:21.040
+um, sorry, this is in the wrong order, so
+
+00:44:18.240 --> 00:44:22.559
+I'll get back to this. um, so this —
+
+00:44:21.040 --> 00:44:25.319
+this was talking about the
+
+00:44:22.559 --> 00:44:27.200
+continuous bag of words features. so, the
+
+00:44:25.319 --> 00:44:30.960
+problem with the continuous bag of words
+
+00:44:27.200 --> 00:44:30.960
+features was: we were extracting
+
+00:44:31.359 --> 00:44:36.359
+features
+
+00:44:33.079 --> 00:44:36.359
+um, like
+
+00:44:36.839 --> 00:44:41.400
+this, but then we were directly using the —
+
+00:44:39.760 --> 00:44:43.359
+the dense features that we
+
+00:44:41.400 --> 00:44:45.559
+extracted to make predictions, without
+
+00:44:43.359 --> 00:44:48.839
+actually allowing for any interactions
+
+00:44:45.559 --> 00:44:51.839
+between the features. um, and
+
+00:44:48.839 --> 00:44:55.160
+so, with neural networks, the way we fix
+
+00:44:51.839 --> 00:44:57.079
+this is: we first extract these features —
+
+00:44:55.160 --> 00:44:59.440
+we take these features of each
+
+00:44:57.079 --> 00:45:04.000
+word embedding — and then we run them
+
+00:44:59.440 --> 00:45:07.240
+through, kind of, linear transforms and
+
+00:45:04.000 --> 00:45:09.880
+nonlinear — like, linear multiplications
+
+00:45:07.240 --> 00:45:10.880
+and then nonlinear transforms — to extract
+
+00:45:09.880 --> 00:45:13.920
+additional
+
+00:45:10.880 --> 00:45:15.839
+features, and finally run this through
+
+00:45:13.920 --> 00:45:18.640
+several layers and then use the
+
+00:45:15.839 --> 00:45:21.119
+resulting features to make our
+
+00:45:18.640 --> 00:45:23.200
+predictions. and when we do this, this
+
+00:45:21.119 --> 00:45:25.319
+allows us to do more interesting
+
+00:45:23.200 --> 00:45:28.319
+things. so, like, for example, we could
+
+00:45:25.319 --> 00:45:30.000
+learn
feature combinations: a node in the
+
+00:45:28.319 --> 00:45:32.599
+second layer might be 'feature one and
+
+00:45:30.000 --> 00:45:35.240
+feature five are active'. so that could be,
+
+00:45:32.599 --> 00:45:38.680
+like: feature one corresponds to negative
+
+00:45:35.240 --> 00:45:43.640
+sentiment words, like 'hate',
+
+00:45:38.680 --> 00:45:45.839
+'despise', um, and other things like that. so,
+
+00:45:43.640 --> 00:45:50.079
+for 'hate' and 'despise', feature one would
+
+00:45:45.839 --> 00:45:53.119
+have a high value, like 8.0 and then
+
+00:45:50.079 --> 00:45:55.480
+7.2. and then we also have negation words,
+
+00:45:53.119 --> 00:45:57.040
+like 'don't' or 'not' or something like that,
+
+00:45:55.480 --> 00:46:00.040
+and those would
+
+00:45:57.040 --> 00:46:00.040
+have —
+
+00:46:03.720 --> 00:46:08.640
+'don't' would have a high value for, like, feature
+
+00:46:11.880 --> 00:46:15.839
+five. and so these would be the word
+
+00:46:14.200 --> 00:46:18.040
+embeddings, where each word embedding
+
+00:46:15.839 --> 00:46:20.599
+corresponds to, you know, features of the
+
+00:46:18.040 --> 00:46:23.480
+words. and
+
+00:46:20.599 --> 00:46:25.480
+then, um, after that, we would extract
+
+00:46:23.480 --> 00:46:29.319
+feature combinations in this second
+
+00:46:25.480 --> 00:46:32.079
+layer that say: oh, we see at least one
+
+00:46:29.319 --> 00:46:33.760
+word where the first feature is active,
+
+00:46:32.079 --> 00:46:36.359
+and we see at least one word where the
+
+00:46:33.760 --> 00:46:37.920
+fifth feature is active. so now that
+
+00:46:36.359 --> 00:46:40.640
+allows us to capture the fact that we
+
+00:46:37.920 --> 00:46:42.319
+saw, like, 'don't hate' or 'don't despise' or
+
+00:46:40.640 --> 00:46:44.559
+'not hate' or 'not despise' or something
+
+00:46:42.319 --> 00:46:44.559
+like
+
+00:46:45.079 --> 00:46:51.760
+that. so this is the way — kind of, this
+
+00:46:49.680 --> 00:46:54.839
+is a deep continuous bag of words
+
+00:46:51.760 --> 00:46:56.839
+model. um, this actually was proposed in
+
+00:46:54.839 --> 00:46:58.119
+2015 — I don't think I have the
+
+00:46:56.839 --> 00:47:02.599
+reference on the slide, but I think it's
+
+00:46:58.119 --> 00:47:05.040
+in the notes on the website. and,
+
+00:47:02.599 --> 00:47:07.200
+actually, at that point in time, there
+
+00:47:05.040 --> 00:47:09.200
+were several interesting
+
+00:47:07.200 --> 00:47:11.960
+results that showed that even this, like,
+
+00:47:09.200 --> 00:47:13.960
+really simple model did really well
+
+00:47:11.960 --> 00:47:16.319
+at text classification and other simple
+
+00:47:13.960 --> 00:47:18.640
+tasks like that, because it was able to,
+
+00:47:16.319 --> 00:47:21.720
+you know, share features of the words and
+
+00:47:18.640 --> 00:47:23.800
+then extract combinations of the
+
+00:47:21.720 --> 00:47:28.200
+features.
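Here's a minimal sketch of such a deep continuous bag-of-words classifier (the layer sizes are made up; the point is just summed embeddings followed by a couple of nonlinear layers that can pick up feature combinations like negation-plus-sentiment):

```python
import torch
import torch.nn as nn

class DeepCBoW(nn.Module):
    def __init__(self, vocab_size, emb_size, hidden_size, num_labels):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_size)
        self.layers = nn.Sequential(
            nn.Linear(emb_size, hidden_size), nn.Tanh(),    # combination features
            nn.Linear(hidden_size, hidden_size), nn.Tanh(),
            nn.Linear(hidden_size, num_labels),             # label scores
        )

    def forward(self, word_ids):
        return self.layers(self.emb(word_ids).sum(dim=0))

model = DeepCBoW(vocab_size=10000, emb_size=64, hidden_size=64, num_labels=5)
print(model(torch.tensor([42, 7, 1337])))   # 5 label scores
```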
+00:47:49.359 --> 00:47:53.200 +starts become as complicated as this + +00:47:51.240 --> 00:47:57.440 +with multiple Matrix multiplications + +00:47:53.200 --> 00:48:00.520 +Andes and stuff like that so the way we + +00:47:57.440 --> 00:48:05.000 +do this just a very brief uh coverage of + +00:48:00.520 --> 00:48:06.200 +this uh for because um I think probably + +00:48:05.000 --> 00:48:08.400 +a lot of people have dealt with neural + +00:48:06.200 --> 00:48:10.200 +networks before um the original + +00:48:08.400 --> 00:48:12.880 +motivation was that we had neurons in + +00:48:10.200 --> 00:48:16.160 +the brain uh where + +00:48:12.880 --> 00:48:18.839 +the each of the neuron synapses took in + +00:48:16.160 --> 00:48:21.480 +an electrical signal and once they got + +00:48:18.839 --> 00:48:24.079 +enough electrical signal they would fire + +00:48:21.480 --> 00:48:25.960 +um but now the current conception of + +00:48:24.079 --> 00:48:28.160 +neural networks or deep learning models + +00:48:25.960 --> 00:48:30.440 +is basically computation + +00:48:28.160 --> 00:48:32.400 +graphs and the way a computation graph + +00:48:30.440 --> 00:48:34.760 +Works um and I'm especially going to + +00:48:32.400 --> 00:48:36.240 +talk about the way it works in natural + +00:48:34.760 --> 00:48:38.119 +language processing which might be a + +00:48:36.240 --> 00:48:42.319 +contrast to the way it works in computer + +00:48:38.119 --> 00:48:43.960 +vision is um we have an expression uh + +00:48:42.319 --> 00:48:46.480 +that looks like this and maybe maybe + +00:48:43.960 --> 00:48:47.640 +it's the expression X corresponding to + +00:48:46.480 --> 00:48:51.880 +uh a + +00:48:47.640 --> 00:48:53.400 +scal um and each node corresponds to + +00:48:51.880 --> 00:48:55.599 +something like a tensor a matrix a + +00:48:53.400 --> 00:48:57.599 +vector a scalar so scaler is uh kind + +00:48:55.599 --> 00:49:00.480 +kind of Zero Dimensional it's a single + +00:48:57.599 --> 00:49:01.720 +value one dimensional two dimensional or + +00:49:00.480 --> 00:49:04.200 +arbitrary + +00:49:01.720 --> 00:49:06.040 +dimensional um and then we also have + +00:49:04.200 --> 00:49:08.000 +nodes that correspond to the result of + +00:49:06.040 --> 00:49:11.480 +function applications so if we have X be + +00:49:08.000 --> 00:49:14.079 +a vector uh we take the vector transpose + +00:49:11.480 --> 00:49:18.160 +and so each Edge represents a function + +00:49:14.079 --> 00:49:20.559 +argument and also a data + +00:49:18.160 --> 00:49:23.960 +dependency and a node with an incoming + +00:49:20.559 --> 00:49:27.000 +Edge is a function of that Edge's tail + +00:49:23.960 --> 00:49:29.040 +node and importantly each node knows how + +00:49:27.000 --> 00:49:30.640 +to compute its value and the value of + +00:49:29.040 --> 00:49:32.640 +its derivative with respect to each + +00:49:30.640 --> 00:49:34.440 +argument times the derivative of an + +00:49:32.640 --> 00:49:37.920 +arbitrary + +00:49:34.440 --> 00:49:41.000 +input and functions could be basically + +00:49:37.920 --> 00:49:45.400 +arbitrary functions it can be unary Nary + +00:49:41.000 --> 00:49:49.440 +unary binary Nary often unary or binary + +00:49:45.400 --> 00:49:52.400 +and computation graphs are directed in + +00:49:49.440 --> 00:49:57.040 +cyclic and um one important thing to + +00:49:52.400 --> 00:50:00.640 +note is that you can um have multiple + +00:49:57.040 --> 00:50:02.559 +ways of expressing the same function so + +00:50:00.640 --> 00:50:04.839 +this is actually really 
important as you
+
+00:50:02.559 --> 00:50:06.920
+start implementing things, and the reason
+
+00:50:04.839 --> 00:50:09.359
+why is: the left graph and the right
+
+00:50:06.920 --> 00:50:12.960
+graph both express the same thing. the
+
+00:50:09.359 --> 00:50:18.640
+left graph expresses x-
+
+00:50:12.960 --> 00:50:22.559
+transpose times A times x, whereas
+
+00:50:18.640 --> 00:50:27.160
+this one has x and A, and then it puts them
+
+00:50:22.559 --> 00:50:28.760
+into a single node that is x-transpose A x.
+
+00:50:27.160 --> 00:50:30.319
+and so these express exactly the same
+
+00:50:28.760 --> 00:50:32.319
+thing, but the graph on the left is
+
+00:50:30.319 --> 00:50:33.760
+larger. and the reason why this is
+
+00:50:32.319 --> 00:50:38.920
+important is for the practical
+
+00:50:33.760 --> 00:50:40.359
+implementation of neural networks: um,
+
+00:50:38.920 --> 00:50:43.200
+the larger graphs are going to take more
+
+00:50:40.359 --> 00:50:46.799
+memory and are going to be slower, usually.
+
+00:50:43.200 --> 00:50:48.200
+and so, often, in a neural network
+
+00:50:46.799 --> 00:50:49.559
+library — like PyTorch, which we're going
+
+00:50:48.200 --> 00:50:52.160
+to look at in a
+
+00:50:49.559 --> 00:50:55.520
+second —
+
+00:50:52.160 --> 00:50:57.920
+um, you will have something — you will be
+
+00:50:55.520 --> 00:50:57.920
+able to
+
+00:50:58.680 --> 00:51:01.680
+do it
+
+00:51:03.079 --> 00:51:07.880
+like this, or you'll be able to do it
+
+00:51:18.760 --> 00:51:22.880
+like
+
+00:51:20.359 --> 00:51:24.839
+this. so, these are two different options:
+
+00:51:22.880 --> 00:51:26.920
+this one is using more operations, and
+
+00:51:24.839 --> 00:51:29.559
+this one is using fewer operations,
+
+00:51:26.920 --> 00:51:31.000
+and this one is going to be faster, because,
+
+00:51:29.559 --> 00:51:33.119
+basically, the implementation within
+
+00:51:31.000 --> 00:51:34.799
+PyTorch will have been optimized for you —
+
+00:51:33.119 --> 00:51:36.799
+it will only require one graph node
+
+00:51:34.799 --> 00:51:37.880
+instead of multiple graph nodes. and
+
+00:51:36.799 --> 00:51:39.799
+that's even more important when you
+
+00:51:37.880 --> 00:51:41.040
+start talking about, like, attention or
+
+00:51:39.799 --> 00:51:43.920
+something like that, which we're going to
+
+00:51:41.040 --> 00:51:46.079
+be covering very soon. um, attention — a
+
+00:51:43.920 --> 00:51:47.359
+very — multi-head attention, or something
+
+00:51:46.079 --> 00:51:49.839
+like that, is a very complicated
+
+00:51:47.359 --> 00:51:52.079
+operation, so you want to make sure that
+
+00:51:49.839 --> 00:51:54.359
+you're using the operators that are
+
+00:51:52.079 --> 00:51:57.359
+available to you to make this more
+
+00:51:54.359 --> 00:51:57.359
+efficient.
+
+00:51:57.440 --> 00:52:00.760
+um, and then, finally, we could, like, add
+
+00:51:59.280 --> 00:52:01.920
+all of these together — at the end we
+
+00:52:00.760 --> 00:52:04.000
+could add a
+
+00:52:01.920 --> 00:52:05.880
+constant — um, and then we get this
+
+00:52:04.000 --> 00:52:09.520
+expression here, which gives us kind of a
+
+00:52:05.880 --> 00:52:09.520
+polynomial expression.
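A sketch of those two options in PyTorch — the same value x-transpose A x built from several graph nodes, or from a single fused operation — and, adding the remaining terms, the polynomial expression just mentioned:

```python
import torch

x, b = torch.randn(3), torch.randn(3)
A, c = torch.randn(3, 3), torch.randn(())

# option 1: several graph nodes (matrix-vector product, then dot product)
xA = torch.matmul(x, A)                  # node 1
y1 = torch.dot(xA, x)                    # node 2

# option 2: one fused node computing the same thing
y2 = torch.einsum("i,ij,j->", x, A, x)
assert torch.allclose(y1, y2)

# adding the constant and linear terms gives the polynomial x^T A x + b.x + c
y = y2 + torch.dot(b, x) + c
```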
+00:52:09.680 --> 00:52:15.760
+um, also, another thing to note
+
+00:52:13.480 --> 00:52:17.599
+is: within a neural network computation
+
+00:52:15.760 --> 00:52:21.920
+graph, variable names are just labelings
+
+00:52:17.599 --> 00:52:25.359
+of nodes. and so, if you're using a
+
+00:52:21.920 --> 00:52:27.680
+computation graph like this, you might
+
+00:52:25.359 --> 00:52:29.240
+only be declaring one variable here, but
+
+00:52:27.680 --> 00:52:30.839
+actually there's a whole bunch of stuff
+
+00:52:29.240 --> 00:52:32.359
+going on behind the scenes, and all of
+
+00:52:30.839 --> 00:52:34.240
+that will take memory and computation
+
+00:52:32.359 --> 00:52:35.440
+time and stuff like that. so it's
+
+00:52:34.240 --> 00:52:37.119
+important to be aware of that if you
+
+00:52:35.440 --> 00:52:40.400
+want to make your implementations more
+
+00:52:37.119 --> 00:52:40.400
+efficient, among other
+
+00:52:41.119 --> 00:52:46.680
+things. so, we have several algorithms
+
+00:52:44.480 --> 00:52:49.079
+that go into implementing neural nets. um,
+
+00:52:46.680 --> 00:52:50.760
+the first one is graph construction;
+
+00:52:49.079 --> 00:52:53.480
+the second one is forward
+
+00:52:50.760 --> 00:52:54.839
+propagation. uh, and graph construction is
+
+00:52:53.480 --> 00:52:56.359
+basically constructing the graph,
+
+00:52:54.839 --> 00:52:58.680
+declaring all the variables, stuff
+
+00:52:56.359 --> 00:53:01.520
+like this. the second one is forward
+
+00:52:58.680 --> 00:53:03.880
+propagation, and, um, the way you do this
+
+00:53:01.520 --> 00:53:06.480
+is: in topological order, you compute
+
+00:53:03.880 --> 00:53:08.280
+the value of a node given its inputs. and
+
+00:53:06.480 --> 00:53:11.000
+so, basically, you start out with all of
+
+00:53:08.280 --> 00:53:12.680
+the nodes that you give as input, and
+
+00:53:11.000 --> 00:53:16.040
+then you find any node in the graph
+
+00:53:12.680 --> 00:53:17.799
+where all of its tail
+
+00:53:16.040 --> 00:53:20.280
+nodes — all of its children — have been
+
+00:53:17.799 --> 00:53:22.119
+calculated. so, in this case, that would be
+
+00:53:20.280 --> 00:53:24.640
+these two nodes. and then, in arbitrary
+
+00:53:22.119 --> 00:53:27.000
+order, or even in parallel, you calculate
+
+00:53:24.640 --> 00:53:28.280
+the value of all of the satisfied nodes
+
+00:53:27.000 --> 00:53:31.799
+until you get to the
+
+00:53:28.280 --> 00:53:34.280
+end. and then the remaining algorithms
+
+00:53:31.799 --> 00:53:36.200
+are back-propagation and parameter
+
+00:53:34.280 --> 00:53:38.240
+update. I already talked about parameter
+
+00:53:36.200 --> 00:53:40.799
+update, using stochastic gradient
+
+00:53:38.240 --> 00:53:42.760
+descent; but for back-propagation, we then
+
+00:53:40.799 --> 00:53:45.400
+process examples in reverse topological
+
+00:53:42.760 --> 00:53:47.640
+order and calculate derivatives of the
+
+00:53:45.400 --> 00:53:50.400
+parameters with respect to the final
+
+00:53:47.640 --> 00:53:52.319
+value. and so we start out with the very
+
+00:53:50.400 --> 00:53:54.200
+final value — usually this is your loss
+
+00:53:52.319 --> 00:53:56.200
+function — and then you just step
+
+00:53:54.200 --> 00:54:00.440
+backwards in reverse topological order to
+
+00:53:56.200 --> 00:54:04.160
+calculate the derivatives of all of these.
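Continuing the example above, a minimal sketch of forward and backward propagation with PyTorch autograd; the backward pass recovers the hand-derived gradient of the polynomial, (A + Aᵀ)x + b:

```python
import torch

A = torch.randn(3, 3)
b = torch.randn(3)
c = torch.randn(())
x = torch.randn(3, requires_grad=True)

# forward propagation: values computed in topological order
y = torch.einsum("i,ij,j->", x, A, x) + torch.dot(b, x) + c

# back-propagation: derivatives computed in reverse topological order
y.backward()

# check against the analytic gradient (A + A^T) x + b
assert torch.allclose(x.grad, (A + A.T) @ x + b, atol=1e-5)
```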
+00:54:00.440 --> 00:54:05.920
+so, um, this is pretty simple — I think a
+
+00:54:04.160 --> 00:54:08.040
+lot of people may have seen this already —
+
+00:54:05.920 --> 00:54:09.920
+but keeping this in mind as you're
+
+00:54:08.040 --> 00:54:12.480
+implementing NLP models, especially
+
+00:54:09.920 --> 00:54:14.240
+models that are really memory-intensive
+
+00:54:12.480 --> 00:54:16.559
+or things like that, is pretty important.
+
+00:54:14.240 --> 00:54:19.040
+because if you accidentally, like, for
+
+00:54:16.559 --> 00:54:21.799
+example, calculate the same thing twice,
+
+00:54:19.040 --> 00:54:23.559
+or accidentally create a graph that is
+
+00:54:21.799 --> 00:54:25.720
+manipulating very large tensors and
+
+00:54:23.559 --> 00:54:27.319
+creating very large intermediate states,
+
+00:54:25.720 --> 00:54:29.720
+that can kill your memory and cause
+
+00:54:27.319 --> 00:54:31.839
+big problems. so it's an important thing
+
+00:54:29.720 --> 00:54:31.839
+to
+
+00:54:34.359 --> 00:54:38.880
+be aware of. um, cool — any questions about
+
+00:54:39.040 --> 00:54:44.440
+this? okay, if not, I will go on to the
+
+00:54:41.680 --> 00:54:45.680
+next one. so: neural network frameworks.
+
+00:54:44.440 --> 00:54:48.920
+there are several neural network
+
+00:54:45.680 --> 00:54:52.880
+frameworks, but in NLP nowadays, I really
+
+00:54:48.920 --> 00:54:55.079
+only see two, and mostly only see one. um,
+
+00:54:52.880 --> 00:54:57.960
+so the one that almost everybody
+
+00:54:55.079 --> 00:55:01.240
+uses is PyTorch, um, and I would
+
+00:54:57.960 --> 00:55:04.559
+recommend using it. unless — you
+
+00:55:01.240 --> 00:55:07.480
+know, if you're a fan of, like, Rust, or, you
+
+00:55:04.559 --> 00:55:09.200
+know — esoteric — not esoteric, but, like,
+
+00:55:07.480 --> 00:55:11.960
+unusual programming languages — and you
+
+00:55:09.200 --> 00:55:14.720
+like beauty and things like this, another
+
+00:55:11.960 --> 00:55:15.799
+option might be JAX. uh, so I'll explain
+
+00:55:14.720 --> 00:55:18.440
+a little bit about the difference
+
+00:55:15.799 --> 00:55:19.960
+between them, and you can pick
+
+00:55:18.440 --> 00:55:23.559
+accordingly.
+
+00:55:19.960 --> 00:55:25.359
+um, first: both of these frameworks
+
+00:55:23.559 --> 00:55:26.839
+are developed by big companies, and they
+
+00:55:25.359 --> 00:55:28.520
+have a lot of engineering support behind
+
+00:55:26.839 --> 00:55:29.720
+them. that's kind of an important thing
+
+00:55:28.520 --> 00:55:31.280
+to think about when you're deciding
+
+00:55:29.720 --> 00:55:32.599
+which framework to use, because, you know,
+
+00:55:31.280 --> 00:55:36.000
+it'll be well
+
+00:55:32.599 --> 00:55:38.039
+supported. um, PyTorch is definitely most
+
+00:55:36.000 --> 00:55:40.400
+widely used in NLP, especially NLP
+
+00:55:38.039 --> 00:55:44.240
+research, um, and it's used in some NLP
+
+00:55:40.400 --> 00:55:47.359
+projects; JAX is used in some NLP
+
+00:55:44.240 --> 00:55:49.960
+projects. um, PyTorch favors dynamic
+
+00:55:47.359 --> 00:55:53.760
+execution. so, what dynamic execution
+
+00:55:49.960 --> 00:55:55.880
+means is: um, you basically create a
+
+00:55:53.760 --> 00:55:59.760
+computation graph and then execute
+
+00:55:55.880 --> 00:56:02.760
+it, every time you process an input. uh,
+
+00:55:59.760 --> 00:56:04.680
+in contrast, there's also the style where you define the
+
+00:56:02.760 --> 00:56:07.200
+computation graph first and then execute
+
+00:56:04.680 --> 00:56:09.280
+it over and over again — so, in other words,
+
+00:56:07.200 --> 00:56:10.680
+the graph construction step only happens
+
+00:56:09.280 --> 00:56:13.119
+once, kind of at the beginning of
+
+00:56:10.680 --> 00:56:16.799
+computation, and then you compile it
+
+00:56:13.119 --> 00:56:20.039
+afterwards. and, actually, PyTorch
+
+00:56:16.799 --> 00:56:23.359
+supports kind of defining and compiling,
+
+00:56:20.039 --> 00:56:27.480
+and JAX supports more dynamic things, but
+
+00:56:23.359 --> 00:56:30.160
+the way they were designed is kind
+
+00:56:27.480 --> 00:56:32.960
+of favoring dynamic execution, or
+
+00:56:30.160 --> 00:56:37.079
+favoring define-and-compile, respectively.
+
+00:56:32.960
--> 00:56:39.200
+and the difference between these two is:
+
+00:56:37.079 --> 00:56:41.760
+this one gives you more flexibility; this
+
+00:56:39.200 --> 00:56:45.440
+one gives you better optimization and more
+
+00:56:41.760 --> 00:56:49.760
+speed, if you want
+
+00:56:45.440 --> 00:56:52.400
+that. um, another thing about JAX is, um,
+
+00:56:49.760 --> 00:56:55.200
+it's kind of very close to numpy, in a
+
+00:56:52.400 --> 00:56:57.440
+way — like, it uses something
+
+00:56:55.200 --> 00:56:59.960
+that's kind of close to numpy; it's very
+
+00:56:57.440 --> 00:57:02.359
+heavily based on tensors. and so, because
+
+00:56:59.960 --> 00:57:04.640
+of this, you can kind of easily do some
+
+00:57:02.359 --> 00:57:06.640
+interesting things, like: okay, I want to
+
+00:57:04.640 --> 00:57:11.319
+take this tensor, and I want to split it
+
+00:57:06.640 --> 00:57:14.000
+over two GPUs. um, and this is good if
+
+00:57:11.319 --> 00:57:17.119
+you're training, like, a very large model,
+
+00:57:14.000 --> 00:57:20.920
+and you want to put, kind
+
+00:57:17.119 --> 00:57:20.920
+of, this part of the
+
+00:57:22.119 --> 00:57:26.520
+model — you want to put this part of
+
+00:57:24.119 --> 00:57:30.079
+the model on GPU 1, this on GPU 2, this on
+
+00:57:26.520 --> 00:57:31.599
+GPU 3, this on GPU 4. it's slightly simpler
+
+00:57:30.079 --> 00:57:34.400
+conceptually to do in JAX, but it's
+
+00:57:31.599 --> 00:57:37.160
+also possible to do in
+
+00:57:34.400 --> 00:57:39.119
+PyTorch. and PyTorch, by far, has the most
+
+00:57:37.160 --> 00:57:41.640
+vibrant ecosystem. so, like I said,
+
+00:57:39.119 --> 00:57:44.200
+PyTorch is a good default choice, but you
+
+00:57:41.640 --> 00:57:47.480
+can consider using JAX if you, uh, if you
+
+00:57:44.200 --> 00:57:47.480
+like new
+
+00:57:48.079 --> 00:57:55.480
+things. cool. um — yeah, actually, I already
+
+00:57:51.599 --> 00:57:58.079
+talked about that. so, in the interest of
+
+00:57:55.480 --> 00:58:02.119
+time, I may not go into these very deeply,
+
+00:57:58.079 --> 00:58:05.799
+but it's important to note that we have
+
+00:58:02.119 --> 00:58:05.799
+examples of all of
+
+00:58:06.920 --> 00:58:12.520
+the models that I talked about in the
+
+00:58:09.359 --> 00:58:16.720
+class today. these are created for
+
+00:58:12.520 --> 00:58:17.520
+simplicity, not for speed or efficiency
+
+00:58:16.720 --> 00:58:20.480
+of
+
+00:58:17.520 --> 00:58:24.920
+implementation. um, so these are, kind of,
+
+00:58:20.480 --> 00:58:27.760
+PyTorch-based examples, uh, where
+
+00:58:24.920 --> 00:58:31.599
+you can create the bag of words
+
+00:58:27.760 --> 00:58:36.440
+model, a continuous bag of words
+
+00:58:31.599 --> 00:58:39.640
+model, um, and
+
+00:58:36.440 --> 00:58:41.640
+a deep continuous bag of words
+
+00:58:39.640 --> 00:58:44.359
+model.
+
+00:58:41.640 --> 00:58:46.039
+and all of these, I believe, are
+
+00:58:44.359 --> 00:58:48.760
+implemented in
+
+00:58:46.039 --> 00:58:51.960
+model.py, and the most important thing is
+
+00:58:48.760 --> 00:58:54.960
+where you define the forward pass. and
+
+00:58:51.960 --> 00:58:57.319
+maybe I can just give a simple example
+
+00:58:54.960 --> 00:58:58.200
+of this: but here — this is where you do the
+
+00:58:57.319 --> 00:59:01.839
+word
+
+00:58:58.200 --> 00:59:04.400
+embedding; this is where you sum up all
+
+00:59:01.839 --> 00:59:08.119
+of the embeddings and add a
+
+00:59:04.400 --> 00:59:10.200
+bias; um, and then this is where you
+
+00:59:08.119 --> 00:59:13.960
+return the
+
+00:59:10.200
--> 00:59:13.960
+score. and then — oh,
+
+00:59:14.799 --> 00:59:19.119
+sorry — the continuous bag of words model
+
+00:59:17.520 --> 00:59:22.160
+sums up some
+
+00:59:19.119 --> 00:59:23.640
+embeddings... or: gets the embeddings,
+
+00:59:22.160 --> 00:59:25.799
+sums up the
+
+00:59:23.640 --> 00:59:28.079
+embeddings,
+
+00:59:25.799 --> 00:59:30.599
+uh, gets the score here, and then runs it
+
+00:59:28.079 --> 00:59:33.200
+through a linear — or, changes the view and
+
+00:59:30.599 --> 00:59:35.119
+runs it through a linear layer. and then
+
+00:59:33.200 --> 00:59:38.319
+the deep continuous bag of words model
+
+00:59:35.119 --> 00:59:41.160
+also adds a few layers of, like, linear
+
+00:59:38.319 --> 00:59:43.119
+transformations and tanhs. so, you should be
+
+00:59:41.160 --> 00:59:44.640
+able to see that these correspond pretty
+
+00:59:43.119 --> 00:59:47.440
+closely to the things that I had on the
+
+00:59:44.640 --> 00:59:49.280
+slides. so, um, hopefully that's a good
+
+00:59:47.440 --> 00:59:51.839
+start if you're not very familiar with
+
+00:59:49.280 --> 00:59:51.839
+implementing
+
+00:59:53.119 --> 00:59:58.440
+models. oh, and yes — the recitation will
+
+00:59:56.599 --> 00:59:59.799
+be about playing around with SentencePiece
+
+00:59:58.440 --> 01:00:01.200
+and playing around with these. so,
+
+00:59:59.799 --> 01:00:02.839
+if you look at them and have any
+
+01:00:01.200 --> 01:00:05.000
+questions, you're welcome to show up,
+
+01:00:02.839 --> 01:00:09.880
+where I'll walk
+
+01:00:05.000 --> 01:00:09.880
+through them. cool. um, any questions about
+
+01:00:12.839 --> 01:00:19.720
+these? okay. so, a few more final important
+
+01:00:16.720 --> 01:00:21.720
+concepts. um, another concept that you
+
+01:00:19.720 --> 01:00:25.440
+should definitely be aware of is the
+
+01:00:21.720 --> 01:00:27.280
+Adam optimizer. uh, so there's lots of
+
+01:00:25.440 --> 01:00:30.559
+optimizers that you could be using, but
+
+01:00:27.280 --> 01:00:32.200
+almost all research in NLP uses some
+
+01:00:30.559 --> 01:00:38.440
+variety of the Adam
+
+01:00:32.200 --> 01:00:40.839
+optimizer. and, um, the way this works
+
+01:00:38.440 --> 01:00:42.559
+is: it
+
+01:00:40.839 --> 01:00:45.640
+optimizes
+
+01:00:42.559 --> 01:00:48.480
+the — um, it optimizes the model considering
+
+01:00:45.640 --> 01:00:49.359
+the rolling average of the gradient, and,
+
+01:00:48.480 --> 01:00:53.160
+uh,
+
+01:00:49.359 --> 01:00:55.920
+momentum. and the way it works is: here we
+
+01:00:53.160 --> 01:00:58.839
+have a gradient; here we have
+
+01:00:55.920 --> 01:01:04.000
+momentum. and what you can see
+
+01:00:58.839 --> 01:01:06.680
+happening here is: we add a little bit of
+
+01:01:04.000 --> 01:01:09.200
+the gradient in — how much you add in
+
+01:01:06.680 --> 01:01:12.720
+is with respect to the size of this beta-
+
+01:01:09.200 --> 01:01:16.000
+1 parameter — and you add it into the
+
+01:01:12.720 --> 01:01:18.640
+momentum term. so this momentum term, like,
+
+01:01:16.000 --> 01:01:20.440
+gradually increases and decreases. so, in
+
+01:01:18.640 --> 01:01:23.440
+contrast to standard gradient descent,
+
+01:01:20.440 --> 01:01:25.839
+which could be
+
+01:01:23.440 --> 01:01:28.440
+updating
+
+01:01:25.839 --> 01:01:31.440
+each parameter, kind of, like, very
+
+01:01:28.440 --> 01:01:33.359
+differently on each time step, this will
+
+01:01:31.440 --> 01:01:35.680
+make the momentum kind of transition
+
+01:01:33.359 --> 01:01:37.240
+more smoothly, by taking the rolling
+
+01:01:35.680 --> 01:01:39.880
+average of the
+gradient. and then the second thing
+
+01:01:37.240 --> 01:01:43.400
+is: um, by taking the momentum — this is the
+
+01:01:39.880 --> 01:01:47.640
+rolling average of the, I guess, gradient
+
+01:01:43.400 --> 01:01:51.000
+— uh, variance; sorry, this should say
+
+01:01:47.640 --> 01:01:54.440
+variance. and the reason why you
+
+01:01:51.000 --> 01:01:58.079
+need to keep track of the variance is:
+
+01:01:54.440 --> 01:02:01.319
+some parameters will have very
+
+01:01:58.079 --> 01:02:03.319
+large variance in their gradients and
+
+01:02:01.319 --> 01:02:06.559
+might fluctuate very strongly, and
+
+01:02:03.319 --> 01:02:11.480
+others might have a smaller
+
+01:02:06.559 --> 01:02:13.039
+variance in their gradients and not
+
+01:02:11.480 --> 01:02:15.240
+fluctuate very much. but we want to make
+
+01:02:13.039 --> 01:02:18.240
+sure that we still
+
+01:02:15.240 --> 01:02:20.200
+update the ones that have a very small
+
+01:02:18.240 --> 01:02:22.240
+variance in their gradients. and the
+
+01:02:20.200 --> 01:02:25.760
+reason why is — kind of, let's say you have
+
+01:02:22.240 --> 01:02:27.440
+a
+
+01:02:25.760 --> 01:02:30.440
+multi-layer
+
+01:02:27.440 --> 01:02:30.440
+network —
+
+01:02:32.480 --> 01:02:38.720
+um — or, actually, sorry, a better
+
+01:02:34.480 --> 01:02:41.240
+um — a better example is: like, let's say we
+
+01:02:38.720 --> 01:02:44.319
+have a big word embedding matrix, and
+
+01:02:41.240 --> 01:02:47.559
+over here we have, like, really frequent
+
+01:02:44.319 --> 01:02:53.359
+words, and then over here we have, uh —
+
+01:02:47.559 --> 01:02:56.279
+
+01:02:53.359 --> 01:02:59.319
+no — we have, like, less frequent words. we
+
+01:02:56.279 --> 01:03:00.880
+want to make sure that all of these get
+
+01:02:59.319 --> 01:03:02.799
+updated appropriately — all of these get,
+
+01:03:00.880 --> 01:03:06.160
+like, enough updates. and so, over here,
+
+01:03:02.799 --> 01:03:08.640
+this one will have lots of updates, and
+
+01:03:06.160 --> 01:03:10.760
+so, kind of,
+
+01:03:08.640 --> 01:03:13.680
+the amount that we
+
+01:03:10.760 --> 01:03:16.599
+update — the amount that we update
+
+01:03:13.680 --> 01:03:20.039
+this — will be relatively large,
+
+01:03:16.599 --> 01:03:21.799
+whereas over here, this will not have
+
+01:03:20.039 --> 01:03:23.119
+very many updates — we'll have lots of
+
+01:03:21.799 --> 01:03:24.880
+zero updates also —
+
+01:03:23.119 --> 01:03:26.480
+and so the amount that we update this
+
+01:03:24.880 --> 01:03:29.160
+will be relatively small. and so this
+
+01:03:26.480 --> 01:03:32.520
+kind of squared gradient here will
+
+01:03:29.160 --> 01:03:36.119
+be smaller for the values over here. and
+
+01:03:32.520 --> 01:03:38.400
+what that allows us to do is — it allows
+
+01:03:36.119 --> 01:03:41.359
+us to — maybe I can just go to the bottom:
+
+01:03:38.400 --> 01:03:44.200
+we end up dividing by the square root
+
+01:03:41.359 --> 01:03:46.039
+of this, and because we divide by the
+
+01:03:44.200 --> 01:03:47.599
+square root of this, if this is really
+
+01:03:46.039 --> 01:03:51.000
+large — like 50 and 70 — and then this, over
+
+01:03:47.599 --> 01:03:55.680
+here, is, like, 1 and 0.5
+
+01:03:51.000 --> 01:03:59.480
+or something, we will be up-weighting the
+
+01:03:55.680 --> 01:04:01.920
+ones that have, like, smaller squared
+
+01:03:59.480 --> 01:04:03.920
+gradients. so it allows you to
+
+01:04:01.920 --> 01:04:06.880
+up-weight the less common gradients more
+
+01:04:03.920 --> 01:04:08.760
+frequently.
+
+01:04:06.880 --> 01:04:10.440
+and then there are also some
+
+01:04:08.760 --> 01:04:13.400
+terms for correcting bias early in
+
+01:04:10.440 --> 01:04:16.440
+training, because these momentum and —
+
+01:04:13.400 --> 01:04:19.559
+uh, variance, or momentum and squared-gradient —
+
+01:04:16.440 --> 01:04:23.119
+terms are not going to be, like, well
+
+01:04:19.559 --> 01:04:24.839
+calibrated yet; so it prevents them from
+
+01:04:23.119 --> 01:04:28.880
+going haywire at the very beginning of
+
+01:04:24.839 --> 01:04:30.839
+training. so — the details of
+
+01:04:28.880 --> 01:04:33.640
+this, again, are not, like, super
+important.
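As a reference, a minimal numpy sketch of the Adam update just described — a rolling average of the gradient (momentum), a rolling average of the squared gradient, the bias-correction terms for early steps, and a division by the square root of the squared-gradient term. The hyperparameter values below are the common defaults, not anything specific to this lecture:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # rolling average of the gradient
    v = beta2 * v + (1 - beta2) * grad ** 2     # rolling average of the squared gradient
    m_hat = m / (1 - beta1 ** t)                # bias correction early in training
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.zeros(3)
m, v = np.zeros(3), np.zeros(3)
for t in range(1, 4):                           # t starts at 1 for bias correction
    grad = np.random.default_rng(t).normal(size=3)
    theta, m, v = adam_step(theta, grad, m, v, t)
```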
+01:04:06.880 --> 01:04:10.440 +and then there's also some +01:04:08.760 --> 01:04:13.400 +terms for correcting bias early in +01:04:10.440 --> 01:04:16.440 +training because these momentum and uh +01:04:13.400 --> 01:04:19.559 +variance or momentum and squared gradient +01:04:16.440 --> 01:04:23.119 +terms are not going to be like well +01:04:19.559 --> 01:04:24.839 +calibrated yet so it prevents them from +01:04:23.119 --> 01:04:28.880 +going haywire at the very beginning of +01:04:24.839 --> 01:04:30.839 +training so the details of +01:04:28.880 --> 01:04:33.640 +this again are not like super super +01:04:30.839 --> 01:04:37.359 +important um another thing that I didn't +01:04:33.640 --> 01:04:40.200 +write on the slides is uh now in +01:04:37.359 --> 01:04:43.920 +Transformers it's also super common to +01:04:40.200 --> 01:04:47.400 +have an overall learning rate schedule so +01:04:43.920 --> 01:04:50.520 +even um even Adam has this uh alpha +01:04:47.400 --> 01:04:53.440 +learning rate parameter here and what +01:04:50.520 --> 01:04:55.240 +we often do is we adjust this so we +01:04:53.440 --> 01:04:57.839 +start low +01:04:55.240 --> 01:04:59.640 +we raise it up and then we have a decay +01:04:57.839 --> 01:05:03.039 +uh at the end and exactly how much you +01:04:59.640 --> 01:05:04.440 +do this kind of depends on um you know +01:05:03.039 --> 01:05:06.160 +how big your model is how much data +01:05:04.440 --> 01:05:09.160 +you're training on eventually and the +01:05:06.160 --> 01:05:12.440 +reason why we do this is transformers +01:05:09.160 --> 01:05:13.839 +are unfortunately super sensitive to +01:05:12.440 --> 01:05:15.359 +having a high learning rate right at the +01:05:13.839 --> 01:05:16.559 +very beginning so if you update them +01:05:15.359 --> 01:05:17.920 +with a high learning rate right at the +01:05:16.559 --> 01:05:22.920 +very beginning they go haywire and you +01:05:17.920 --> 01:05:24.400 +get a really weird model um but you +01:05:22.920 --> 01:05:26.760 +want to raise it eventually so your +01:05:24.400 --> 01:05:28.920 +model is learning appropriately and then +01:05:26.760 --> 01:05:30.400 +in all stochastic gradient descent no +01:05:28.920 --> 01:05:31.680 +matter whether you're using Adam or +01:05:30.400 --> 01:05:33.400 +anything else it's a good idea to +01:05:31.680 --> 01:05:36.200 +gradually decrease the learning rate at +01:05:33.400 --> 01:05:38.119 +the end to prevent the model from +01:05:36.200 --> 01:05:40.480 +continuing to fluctuate and getting it +01:05:38.119 --> 01:05:42.760 +to a stable point that gives you good +01:05:40.480 --> 01:05:45.559 +accuracy over a large part of the data so +01:05:42.760 --> 01:05:47.480 +this is often included like if you look +01:05:45.559 --> 01:05:51.000 +at any standard Transformer training +01:05:47.480 --> 01:05:53.079 +recipe it will have this so that's +01:05:51.000 --> 01:05:54.799 +kind of the go-to +01:05:53.079 --> 01:05:58.960 +optimizer +01:05:54.799 --> 01:06:01.039 +um are there any questions or +01:05:58.960 --> 01:06:02.599 +discussion there's also tricky things +01:06:01.039 --> 01:06:04.000 +like cyclic learning rates where you +01:06:02.599 --> 01:06:06.599 +decrease the learning rate increase it +01:06:04.000 --> 01:06:08.559 +and stuff like that but I won't go into +01:06:06.599 --> 01:06:11.000 +that and they're not actually used that much
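The warmup-then-decay shape described above is often implemented as the inverse-square-root schedule from the original Transformer paper; a small sketch, with illustrative constants:

    def transformer_lr(step, d_model=512, warmup=4000):
        # Scales Adam's base step size: ramp up linearly for `warmup` steps,
        # then decay with the inverse square root of the step number.
        step = max(step, 1)
        return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)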
+01:06:08.559 --> 01:06:12.760 +so the second thing is visualization of +01:06:11.000 --> 01:06:15.400 +embeddings so normally when we have word +01:06:12.760 --> 01:06:19.760 +embeddings usually they're kind of large +01:06:15.400 --> 01:06:21.559 +um and they can be like 512 or 1024 +01:06:19.760 --> 01:06:25.079 +dimensions +01:06:21.559 --> 01:06:28.720 +and so one thing that we can do is we +01:06:25.079 --> 01:06:31.079 +can downweight them or sorry uh +01:06:28.720 --> 01:06:34.400 +like reduce the dimensions or perform +01:06:31.079 --> 01:06:35.880 +dimensionality reduction and put them in +01:06:34.400 --> 01:06:37.680 +like two or three dimensions which are +01:06:35.880 --> 01:06:40.200 +easy for humans to +01:06:37.680 --> 01:06:42.000 +visualize this is an example using +01:06:40.200 --> 01:06:44.839 +principal component analysis which is a +01:06:42.000 --> 01:06:48.279 +linear dimension reduction technique and +01:06:44.839 --> 01:06:50.680 +this is uh an example from 10 years ago +01:06:48.279 --> 01:06:52.359 +now uh one of the first major word +01:06:50.680 --> 01:06:55.240 +embedding papers where they demonstrated +01:06:52.359 --> 01:06:57.720 +that if you do this sort of linear +01:06:55.240 --> 01:06:59.440 +dimension reduction uh you actually get +01:06:57.720 --> 01:07:01.279 +some interesting things where you can +01:06:59.440 --> 01:07:03.240 +draw a vector that's almost the same +01:07:01.279 --> 01:07:06.400 +direction between like countries and +01:07:03.240 --> 01:07:09.319 +their uh countries and their capitals +01:07:06.400 --> 01:07:13.720 +for example so this is a good thing to +01:07:09.319 --> 01:07:16.559 +do but actually PCA uh doesn't give +01:07:13.720 --> 01:07:20.760 +you in some cases PCA doesn't give you +01:07:16.559 --> 01:07:22.920 +super great uh visualizations sorry yeah +01:07:20.760 --> 01:07:25.920 +well for like if it's +01:07:22.920 --> 01:07:25.920 +like +01:07:29.880 --> 01:07:35.039 +um for things like this I think you +01:07:33.119 --> 01:07:37.359 +probably would still see vectors in the +01:07:35.039 --> 01:07:38.760 +same direction but I don't think it like +01:07:37.359 --> 01:07:40.920 +there's a reason why I'm introducing +01:07:38.760 --> 01:07:44.279 +nonlinear projections next because the +01:07:40.920 --> 01:07:46.799 +more standard way to do this is uh +01:07:44.279 --> 01:07:50.640 +nonlinear projections in in particular a +01:07:46.799 --> 01:07:54.880 +method called t-SNE and the way um they +01:07:50.640 --> 01:07:56.880 +do this is they try to group +01:07:54.880 --> 01:07:59.000 +things that are close together in high +01:07:56.880 --> 01:08:01.240 +dimensional space so that they're also +01:07:59.000 --> 01:08:04.440 +close together in low dimensional space +01:08:01.240 --> 01:08:08.520 +but they remove the restriction that +01:08:04.440 --> 01:08:10.799 +this is uh that this is linear so this +01:08:08.520 --> 01:08:15.480 +is an example of just grouping together +01:08:10.799 --> 01:08:18.040 +some digits uh from the MNIST data +01:08:15.480 --> 01:08:20.279 +set or sorry reducing the dimension of +01:08:18.040 --> 01:08:23.640 +digits from the MNIST data +01:08:20.279 --> 01:08:25.640 +set according to PCA and you can see it +01:08:23.640 --> 01:08:28.000 +gives these kind of blobs that overlap +01:08:25.640 --> 01:08:29.799 +with each other and stuff like this
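Both projections are a couple of lines with scikit-learn; a sketch on its small digits dataset (the perplexity value shown is simply the library default):

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE

    X, y = load_digits(return_X_y=True)      # 8x8 digit images, 64 dimensions

    pca_2d = PCA(n_components=2).fit_transform(X)                   # linear projection
    tsne_2d = TSNE(n_components=2, perplexity=30).fit_transform(X)  # nonlinear projection
    # y is only used to color the scatter plot; neither method sees the labels.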
+01:08:28.000 --> 01:08:31.679 +but if you do it with t-SNE and this is +01:08:29.799 --> 01:08:34.799 +completely unsupervised actually it's +not training any model on the labels the +01:08:31.679 --> 01:08:37.080 +labels are just used to draw the colors +01:08:34.799 --> 01:08:39.239 +and you can see that it gets pretty +01:08:37.080 --> 01:08:42.520 +coherent um clusters that correspond to +01:08:39.239 --> 01:08:44.520 +like what the actual digits +01:08:42.520 --> 01:08:48.120 +are um however uh one problem with +01:08:44.520 --> 01:08:50.120 +t-SNE I still think it's better than +01:08:48.120 --> 01:08:53.159 +PCA for a large number of uh +01:08:50.120 --> 01:08:55.000 +applications +01:08:53.159 --> 01:08:59.199 +but the settings of t-SNE matter and t-SNE has +01:08:55.000 --> 01:09:01.040 +a few settings kind of the most +01:08:59.199 --> 01:09:02.920 +important ones are the overall +01:09:01.040 --> 01:09:04.120 +perplexity +01:09:02.920 --> 01:09:06.560 +hyperparameter and uh the number of +01:09:04.120 --> 01:09:09.040 +steps that you perform and there's a +01:09:06.560 --> 01:09:12.319 +nice example uh of a paper or kind of +01:09:09.040 --> 01:09:14.920 +like online post uh that demonstrates +01:09:12.319 --> 01:09:16.359 +how if you change these parameters you +01:09:14.920 --> 01:09:18.560 +can get very different things so if this +01:09:16.359 --> 01:09:22.279 +is the original data you run t-SNE and it +01:09:18.560 --> 01:09:24.080 +gives you very different things based on +01:09:22.279 --> 01:09:26.640 +the hyperparameters that you change um +01:09:24.080 --> 01:09:29.279 +and here's another example uh you have +01:09:26.640 --> 01:09:32.880 +two linear uh things like this and so +01:09:29.279 --> 01:09:36.960 +PCA no matter how you ran PCA you would +01:09:32.880 --> 01:09:40.839 +still get a linear output from this so +01:09:36.960 --> 01:09:44.080 +normally uh you know it might change the +01:09:40.839 --> 01:09:45.960 +order it might squash it a little bit or +01:09:44.080 --> 01:09:49.239 +something like this but um if you run +01:09:45.960 --> 01:09:51.239 +t-SNE it gives you crazy things it even +01:09:49.239 --> 01:09:53.400 +gives you like DNA and other stuff like +01:09:51.239 --> 01:09:56.040 +that so so um you do need to be a little +01:09:53.400 --> 01:09:58.040 +bit careful that uh this is not +01:09:56.040 --> 01:10:00.600 +necessarily going to tell you nice +01:09:58.040 --> 01:10:02.320 +linear correlations like this so like +01:10:00.600 --> 01:10:04.400 +let's say this correlation existed if +01:10:02.320 --> 01:10:06.159 +you use t-SNE it might not necessarily +01:10:04.400 --> 01:10:09.199 +come out in the +01:10:06.159 --> 01:10:09.199 +t-SNE plot +01:10:09.320 --> 01:10:14.880 +cool yep uh that that's my final thing +01:10:11.800 --> 01:10:16.920 +actually I said sequence models are +01:10:14.880 --> 01:10:18.520 +in the next class but it's the class +01:10:16.920 --> 01:10:19.679 +after this I'm going to be talking about +01:10:18.520 --> 01:10:21.440 +language +01:10:19.679 --> 01:10:24.199 +modeling uh cool any any questions +01:10:21.440 --> 01:10:27.159 +or diff --git a/CMU Advanced NLP 2024 (20) Tool Use and Language Agents/CMU Advanced NLP 2024 (20) Tool Use and Language Agents.mp4 b/CMU Advanced NLP 2024 (20) Tool Use and Language Agents/CMU Advanced NLP 2024 (20) Tool Use and Language Agents.mp4 new file mode 100644 index
0000000000000000000000000000000000000000..617799ef5798c04634cc04700efd96a160dcf101 --- /dev/null +++ b/CMU Advanced NLP 2024 (20) Tool Use and Language Agents/CMU Advanced NLP 2024 (20) Tool Use and Language Agents.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:639a8f338d12c0f77948889dffe7bb1bbe4e9ad4cb6f4aa806babd15d679af88 +size 83218086 diff --git a/CMU Advanced NLP 2024 (20) Tool Use and Language Agents/metadata.json b/CMU Advanced NLP 2024 (20) Tool Use and Language Agents/metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..bafe77f07f0fc4718388a84cafa944e10170876c --- /dev/null +++ b/CMU Advanced NLP 2024 (20) Tool Use and Language Agents/metadata.json @@ -0,0 +1,4 @@ +{ + "url": "https://www.youtube.com/watch?v=d0QSnLjlgzc", + "title": "CMU Advanced NLP 2024 (20) Tool Use and Language Agents" +} \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (20) Tool Use and Language Agents/transcript.srt b/CMU Advanced NLP 2024 (20) Tool Use and Language Agents/transcript.srt new file mode 100644 index 0000000000000000000000000000000000000000..ae6512677d23367e77fcbd614d5dbd65b5f02831 --- /dev/null +++ b/CMU Advanced NLP 2024 (20) Tool Use and Language Agents/transcript.srt @@ -0,0 +1,6651 @@ +1 +00:00:04,359 --> 00:00:10,679 +cool so language models are pretty powerful for +2 +00:00:08,200 --> 00:00:13,440 +solving many tasks mostly text generation +3 +00:00:10,679 --> 00:00:15,400 +tasks as you probably have seen a lot of +4 +00:00:13,440 --> 00:00:18,279 +examples from +5 +00:00:15,400 --> 00:00:20,560 +ChatGPT um but however is our language model +6 +00:00:18,279 --> 00:00:23,840 +good enough for everything and my +7 +00:00:20,560 --> 00:00:26,400 +personal answer is no so if you look at +8 +00:00:23,840 --> 00:00:28,480 +some scenarios uh language models +9 +00:00:26,400 --> 00:00:31,080 +are actually not very good at so for +10 +00:00:28,480 --> 00:00:34,399 +example first when it is asked about +11 +00:00:31,080 --> 00:00:36,360 +complex reasoning uh for example if you +12 +00:00:34,399 --> 00:00:38,600 +want the language model to do math +13 +00:00:36,360 --> 00:00:40,600 +calculation it's probably not going +14 +00:00:38,600 --> 00:00:42,800 +to do it very efficiently or very +15 +00:00:40,600 --> 00:00:45,680 +accurately for example the common +16 +00:00:42,800 --> 00:00:49,199 +approach like here is like using a chain +17 +00:00:45,680 --> 00:00:52,079 +of thought to process it in very much +18 +00:00:49,199 --> 00:00:54,120 +detail um and it may not get the correct +19 +00:00:52,079 --> 00:00:56,320 +answer uh however if you have a +20 +00:00:54,120 --> 00:00:59,039 +calculator tool you can just directly +21 +00:00:56,320 --> 00:01:01,320 +input the expression into the calculator +22 +00:00:59,039 --> 00:01:02,879 +and get the final result +23 +00:01:01,320 --> 00:01:05,280 +that's a first scenario and there are +24 +00:01:02,879 --> 00:01:07,960 +other scenarios for example if you need +25 +00:01:05,280 --> 00:01:10,360 +to access real world information the +26 +00:01:07,960 --> 00:01:12,240 +language model itself may be +27 +00:01:10,360 --> 00:01:14,600 +fundamentally unable to answer that +28 +00:01:12,240 --> 00:01:16,560 +question for example if you ask what is +29 +00:01:14,600 --> 00:01:18,720 +the current time the current time is +30 +00:01:16,560 --> 00:01:21,079 +probably not in the model training data
+31 +00:01:18,720 --> 00:01:23,439 +and depending on the time they ask this +32 +00:01:21,079 --> 00:01:25,920 +question the answer to this question is +33 +00:01:23,439 --> 00:01:27,880 +probably different so the only way the +34 +00:01:25,920 --> 00:01:30,600 +language models can answer this is by +35 +00:01:27,880 --> 00:01:33,240 +using an external tool for example +36 +00:01:30,600 --> 00:01:35,479 +calling the API called get time to get +37 +00:01:33,240 --> 00:01:37,759 +the current time so that's the +38 +00:01:35,479 --> 00:01:40,600 +motivation of why people started to get +39 +00:01:37,759 --> 00:01:44,040 +interested in tools and start using them +40 +00:01:40,600 --> 00:01:46,840 +and there is a lot of interest in tool +41 +00:01:44,040 --> 00:01:49,799 +related work so you may have heard of Tool- +42 +00:01:46,840 --> 00:01:53,079 +former which is maybe one of the uh +43 +00:01:49,799 --> 00:01:55,680 +like first works that like explicitly uh +44 +00:01:53,079 --> 00:01:58,759 +proposed the term tool for language models +45 +00:01:55,680 --> 00:02:01,320 +and what they do is they uh propose five +46 +00:01:58,759 --> 00:02:04,320 +tools including like a calculator or +47 +00:02:01,320 --> 00:02:06,640 +Wiki search engine to provide external +48 +00:02:04,320 --> 00:02:09,679 +access to the language +49 +00:02:06,640 --> 00:02:12,520 +model but there are many works after +50 +00:02:09,679 --> 00:02:14,400 +this and you can see these works are all +51 +00:02:12,520 --> 00:02:17,200 +interested in like tool-augmented +52 +00:02:14,400 --> 00:02:20,080 +language models however if you look +53 +00:02:17,200 --> 00:02:22,000 +closer they all use +54 +00:02:20,080 --> 00:02:24,599 +different tools and evaluate on +55 +00:02:22,000 --> 00:02:27,800 +different data sets for example the +56 +00:02:24,599 --> 00:02:30,599 +Toolformer and ART uh papers they use +57 +00:02:27,800 --> 00:02:32,120 +software as tools such as a calculator +58 +00:02:30,599 --> 00:02:35,000 +or Wiki search +59 +00:02:32,120 --> 00:02:37,360 +engine and for these two works they use +60 +00:02:35,000 --> 00:02:39,440 +APIs as tools for example the APIs that +61 +00:02:37,360 --> 00:02:41,120 +you can scrape from the website like get +62 +00:02:39,440 --> 00:02:43,560 +weather or get +63 +00:02:41,120 --> 00:02:45,879 +time and there are also works that use +64 +00:02:43,560 --> 00:02:48,239 +neural models as tools for example they +65 +00:02:45,879 --> 00:02:52,080 +scraped like all the model names from +66 +00:02:48,239 --> 00:02:54,280 +Hugging Face Hub or other places and uh use +67 +00:02:52,080 --> 00:02:56,560 +like language models or like other kinds +68 +00:02:54,280 --> 00:03:00,000 +of models that are specialized for some +69 +00:02:56,560 --> 00:03:03,200 +task and use them as tools and lastly +70 +00:03:00,000 --> 00:03:05,239 +there are works that um use like mostly +71 +00:03:03,200 --> 00:03:07,959 +locally defined and expert-crafted +72 +00:03:05,239 --> 00:03:10,480 +functions as tools so with all these +73 +00:03:07,959 --> 00:03:12,959 +different varieties of tools uh a natural +74 +00:03:10,480 --> 00:03:16,440 +confusion at least for me is what are +75 +00:03:12,959 --> 00:03:19,920 +tools anyway are these all tools so to +76 +00:03:16,440 --> 00:03:21,840 +answer this question we had a survey but +77 +00:03:19,920 --> 00:03:24,239 +I'll briefly cover this in three +78 +00:03:21,840 --> 00:03:26,440 +dimensions
today one is the +basics of tools what is the definition +and what are the main functionalities of +tools and second uh what are scenarios +where you can use tools what tools do we +have what tasks can we apply these tools +to and also using what +methods and lastly I'll cover the +evaluation aspect what tasks and what +evaluation metrics we can use and also +um what's the empirical benefit or if +they have any empirical benefit at all +so I'll first dive into the tool +basics +so from the definition +uh we think tools are actually a program +that like language models can +leverage and um like call the program +to exert some functions uh however for a +uh for a program to be a +tool it needs to satisfy two +properties one is external if we refer +back to the definition of animal tool use +provided by this um like literature +they say like tools are external +employment of like environmental objects +so like similarly for language model tool +use these tools should also be external +to the employer which in this scenario +is the language model so in that sense the +program should be external to the language +model and the second property is +functional um so what we mean by +functional is this um program should be +a function that can be applied to other +objects in the environment and then +change the state of the environment or +like just yield an output so a simple +example is like this if in the +environment we have like a blank canvas +which is an object and we also have a +tool which is a brush here and the +function of this brush is to uh paint on +the canvas and change like yield a +result which is the +picture so combining these uh two +properties we give a clearer definition of +what is a tool which is a function +125 +00:05:35,639 --> 00:05:40,919 +interface to a computer
program that runs +126 +00:05:38,319 --> 00:05:43,479 +external to the language model and in +127 +00:05:40,919 --> 00:05:46,240 +short how the language model uses the tool +128 +00:05:43,479 --> 00:05:49,240 +is by uh generating the function calls and +129 +00:05:46,240 --> 00:05:53,160 +the input arguments to invoke that +130 +00:05:49,240 --> 00:05:57,639 +tool any questions so +131 +00:05:53,160 --> 00:05:59,639 +far cool so after knowing the definition +132 +00:05:57,639 --> 00:06:02,160 +um what are the main functionalities of +133 +00:05:59,639 --> 00:06:04,560 +tools we summarized uh three main +134 +00:06:02,160 --> 00:06:07,080 +functionalities +135 +00:06:04,560 --> 00:06:10,000 +one is uh perception +136 +00:06:07,080 --> 00:06:12,400 +tools a perception tool is used to collect +137 +00:06:10,000 --> 00:06:16,680 +data from the environment without +138 +00:06:12,400 --> 00:06:19,759 +changing its state so for example um like +139 +00:06:16,680 --> 00:06:21,639 +a search engine may be a perception tool +140 +00:06:19,759 --> 00:06:24,160 +you can just like search on the web and +141 +00:06:21,639 --> 00:06:25,800 +collect the pieces of data that are most +142 +00:06:24,160 --> 00:06:28,479 +relevant to your +143 +00:06:25,800 --> 00:06:30,720 +query um and also another example could +144 +00:06:28,479 --> 00:06:33,440 +be like get weather you can get that for +145 +00:06:30,720 --> 00:06:35,160 +example by calling the get weather API +146 +00:06:33,440 --> 00:06:38,919 +um on the +147 +00:06:35,160 --> 00:06:42,639 +web and the second uh functionality is +148 +00:06:38,919 --> 00:06:44,880 +action so action tools are used to exert +149 +00:06:42,639 --> 00:06:48,080 +actions in the environment and change the +150 +00:06:44,880 --> 00:06:50,039 +state of the environment so we +151 +00:06:48,080 --> 00:06:54,520 +can reuse the example that we've seen on +152 +00:06:50,039 --> 00:06:57,400 +the last uh slide so if +153 +00:06:54,520 --> 00:06:59,199 +you see the canvas as an object that +154 +00:06:57,400 --> 00:07:01,520 +belongs to the environment then the +155 +00:06:59,199 --> 00:07:05,000 +brush is just like painting on the canvas and +156 +00:07:01,520 --> 00:07:07,720 +changing the state of the +157 +00:07:05,000 --> 00:07:10,560 +environment a third uh category is +158 +00:07:07,720 --> 00:07:13,080 +computation tools this could mean simple +159 +00:07:10,560 --> 00:07:15,919 +like computing uh computation activities +160 +00:07:13,080 --> 00:07:18,120 +such as math calculation but here +161 +00:07:15,919 --> 00:07:21,440 +we mean like more general acts of +162 +00:07:18,120 --> 00:07:24,319 +computing which include like any other +163 +00:07:21,440 --> 00:07:27,879 +sort of computing instead of just like +164 +00:07:24,319 --> 00:07:30,879 +numerical calculation for example +165 +00:07:27,879 --> 00:07:33,280 +um uh a translator could also be like a +166 +00:07:30,879 --> 00:07:35,080 +computation tool because uh you need to +167 +00:07:33,280 --> 00:07:36,039 +like translate from this language to +168 +00:07:35,080 --> 00:07:38,840 +another +169 +00:07:36,039 --> 00:07:41,879 +language um another example yeah here's +170 +00:07:38,840 --> 00:07:46,039 +the like simple example the calculator example
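As a toy illustration of the three functionalities, with the bodies stubbed out (none of these function names come from the lecture):

    def get_weather(city):
        # Perception: collects data from the environment without changing it.
        return "sunny"                   # stub standing in for a real API call

    def send_email(to, body):
        # Action: changes the state of the environment.
        print(f"sent to {to}: {body}")   # stub standing in for a real mail client

    def calculate(expression):
        # Computation: pure computing over the input, e.g. "2 * (3 + 4)" -> 14.
        return eval(expression)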
+171 +00:07:41,879 --> 00:07:47,319 +however these different +categories or different +173 +00:07:47,319 --> 00:07:54,039 +functionalities are not like um disjoint +174 +00:07:51,360 --> 00:07:57,440 +so one tool may have one functionality or +175 +00:07:54,039 --> 00:07:59,919 +more functionalities so does anyone have +176 +00:07:57,440 --> 00:08:02,599 +any ideas for example of what +177 +00:07:59,919 --> 00:08:04,360 +tool can serve as for example a +178 +00:08:02,599 --> 00:08:06,639 +perception and computation tool at the +179 +00:08:04,360 --> 00:08:06,639 +same +180 +00:08:13,960 --> 00:08:20,879 +time yeah maybe to give an um answer uh +181 +00:08:18,520 --> 00:08:22,759 +the first one I mentioned the Wiki +182 +00:08:20,879 --> 00:08:25,440 +search tool it can be both a +183 +00:08:22,759 --> 00:08:27,080 +perception tool and a computation tool +184 +00:08:25,440 --> 00:08:29,159 +so we have explained that you can +185 +00:08:27,080 --> 00:08:31,199 +collect relevant documents from the web +186 +00:08:29,159 --> 00:08:33,959 +so that's like getting information from +187 +00:08:31,199 --> 00:08:37,640 +the web but it is also a computation tool +188 +00:08:33,959 --> 00:08:39,919 +if you think deeply about uh the process of +189 +00:08:37,640 --> 00:08:42,519 +how the search engine uh returns those +190 +00:08:39,919 --> 00:08:45,040 +docs back to you given the query it +191 +00:08:42,519 --> 00:08:48,200 +calculates the similarity scores of your +192 +00:08:45,040 --> 00:08:50,120 +query to many other documents and uh +193 +00:08:48,200 --> 00:08:52,959 +like ranks them by their scores and +194 +00:08:50,120 --> 00:08:58,240 +returns the top-ranked ones so that process +195 +00:08:52,959 --> 00:08:58,240 +also involves uh computing yeah +196 +00:09:01,160 --> 00:09:05,920 +yeah that's a great question that's +197 +00:09:02,720 --> 00:09:08,920 +actually what my next slide is about so what's +198 +00:09:05,920 --> 00:09:11,839 +the relationship between um tools and +199 +00:09:08,920 --> 00:09:14,760 +agents uh but maybe a brief answer to +200 +00:09:11,839 --> 00:09:16,680 +that is uh I think language models can +201 +00:09:14,760 --> 00:09:19,680 +use tools and not be like language model +202 +00:09:16,680 --> 00:09:22,040 +based agents but uh on the +203 +00:09:19,680 --> 00:09:24,760 +other hand tools are pretty important to +204 +00:09:22,040 --> 00:09:28,160 +help agents achieve like success on +205 +00:09:24,760 --> 00:09:30,519 +many tasks so here's a deep dive on the +206 +00:09:28,160 --> 00:09:35,120 +relationship between tools and +207 +00:09:30,519 --> 00:09:37,720 +agents um so again what's an agent +208 +00:09:35,120 --> 00:09:40,640 +anyway so here we have a pretty good +209 +00:09:37,720 --> 00:09:42,839 +definition of agent which is um anything +210 +00:09:40,640 --> 00:09:45,079 +that can be viewed as perceiving its +211 +00:09:42,839 --> 00:09:47,760 +environment through sensors and acting +212 +00:09:45,079 --> 00:09:50,440 +upon that environment through actuators +213 +00:09:47,760 --> 00:09:54,120 +so connecting back to the functionality +214 +00:09:50,440 --> 00:09:56,480 +of tools an agent can use um +215 +00:09:54,120 --> 00:09:59,680 +perception tools to perceive the world +216 +00:09:56,480 --> 00:10:03,200 +um uh to get information from it and +217 +00:09:59,680 --> 00:10:05,720 +also use action tools to um take action +218 +00:10:03,200 --> 00:10:08,040 +on the environment um but there's a +219 +00:10:05,720 --> 00:10:10,720 +caveat here
that for example a language +220 +00:10:08,040 --> 00:10:13,000 +model that only uses computation tools +221 +00:10:10,720 --> 00:10:14,920 +but does not use perception or action tools +222 +00:10:13,000 --> 00:10:19,000 +would actually not fall into the +223 +00:10:14,920 --> 00:10:19,000 +category of agents at least by this +224 +00:10:22,800 --> 00:10:29,240 +definition [inaudible audience question] +225 +00:10:37,560 --> 00:10:44,360 +yeah that's a good question I think um +226 +00:10:40,880 --> 00:10:47,920 +I think for now the like tool- +227 +00:10:44,360 --> 00:10:51,360 +use field is not very mature yet so +228 +00:10:47,920 --> 00:10:54,279 +mostly the data sets uh only support +229 +00:10:51,360 --> 00:10:56,320 +like multi-turn tool usage so like you can +230 +00:10:54,279 --> 00:10:58,200 +split a task into +231 +00:10:56,320 --> 00:11:00,639 +multiple steps but each step only uses +232 +00:10:58,200 --> 00:11:02,480 +one tool there's not necessarily a lot of +233 +00:11:00,639 --> 00:11:04,440 +like interaction between those +234 +00:11:02,480 --> 00:11:07,680 +two yeah but that could be an +235 +00:11:04,440 --> 00:11:07,680 +interesting direction to +236 +00:11:10,480 --> 00:11:18,040 +explore cool so that's the basics of um tools +237 +00:11:14,800 --> 00:11:19,839 +so we'll dive into like more detail like +238 +00:11:18,040 --> 00:11:21,120 +what scenarios and what tasks can we +239 +00:11:19,839 --> 00:11:24,560 +apply this +240 +00:11:21,120 --> 00:11:27,000 +in so we start with the like basic tool- +241 +00:11:24,560 --> 00:11:30,639 +using paradigm so how can like +242 +00:11:27,000 --> 00:11:34,079 +language models use a tool um so +243 +00:11:30,639 --> 00:11:36,480 +um in a nutshell it is basically a shift +244 +00:11:34,079 --> 00:11:39,760 +between text generation and tool +245 +00:11:36,480 --> 00:11:43,560 +execution modes so for example in the +246 +00:11:39,760 --> 00:11:45,959 +example on the right a user asks um +247 +00:11:43,560 --> 00:11:47,480 +how is the weather today so the language +248 +00:11:45,959 --> 00:11:51,000 +model starts with the standard text +249 +00:11:47,480 --> 00:11:53,120 +generation process and when +250 +00:11:51,000 --> 00:11:55,240 +the model feels like needing like extra +251 +00:11:53,120 --> 00:11:58,399 +help from tools it starts to generate +252 +00:11:55,240 --> 00:12:00,760 +tokens that form um the call expression +253 +00:11:58,399 --> 00:12:03,839 +to that tool for example the call uh +254 +00:12:00,760 --> 00:12:06,160 +check weather here and after this uh +255 +00:12:03,839 --> 00:12:08,720 +expression is completed this uh +256 +00:12:06,160 --> 00:12:11,399 +completed uh expression will trigger the +257 +00:12:08,720 --> 00:12:14,320 +remote um tool execution server which is +258 +00:12:11,399 --> 00:12:16,639 +the weather server here and then we'll +259 +00:12:14,320 --> 00:12:18,800 +shift to the tool execution mode where +260 +00:12:16,639 --> 00:12:21,880 +the server will execute the call and +261 +00:12:18,800 --> 00:12:24,240 +return the result which is sunny here and +262 +00:12:21,880 --> 00:12:28,000 +return that back to the language model then +263 +00:12:24,240 --> 00:12:29,880 +the language model replaces the API call by this +264 +00:12:28,000 --> 00:12:32,560 +returned execution output +265 +00:12:29,880 --> 00:12:34,760 +and continues like shifts back to the text +266 +00:12:32,560 --> 00:12:35,959 +generation mode and continues generating +the rest of the +267 +00:12:34,760 --> 00:12:38,720 +tokens and after finishing all this process +268 +00:12:35,959 --> 00:12:40,760 +we will return the final response and +269 +00:12:38,720 --> 00:12:43,519 +the sunny today answer to the original +270 +00:12:40,760 --> 00:12:47,600 +user
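A rough sketch of that shift between the two modes; here `generate` stands in for any language model decoding function, and the tool registry is a toy one:

    import re

    TOOLS = {"check_weather": lambda: "sunny"}     # toy tool registry

    def generate_with_tools(prompt, generate):
        text = generate(prompt)                    # text generation mode
        match = re.search(r"(\w+)\(\)", text)      # a completed call expression
        while match and match.group(1) in TOOLS:
            result = TOOLS[match.group(1)]()       # tool execution mode
            # Replace the call expression with the execution output,
            # then shift back to text generation mode.
            text = text[:match.start()] + result
            text += generate(prompt + text)
            match = re.search(r"(\w+)\(\)", text)
        return text                                # final response to the user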
+271 +00:12:47,600 --> 00:12:54,760 +and this process is pretty +like intuitive and the method for +272 +00:12:51,240 --> 00:12:56,920 +current methods to like teach models +273 +00:12:54,760 --> 00:13:00,000 +to use tools like this is also pretty +274 +00:12:56,920 --> 00:13:02,600 +straightforward so there are two uh +275 +00:13:00,000 --> 00:13:05,880 +categories of approaches one is uh +276 +00:13:02,600 --> 00:13:08,079 +inference-time prompting so basically you +277 +00:13:05,880 --> 00:13:10,360 +can provide for example natural language +278 +00:13:08,079 --> 00:13:13,440 +instructions and instruct the model to +279 +00:13:10,360 --> 00:13:15,680 +do this kind of process also in-context +280 +00:13:13,440 --> 00:13:18,199 +examples with like natural language +281 +00:13:15,680 --> 00:13:20,480 +input and also tool-involved solution +282 +00:13:18,199 --> 00:13:22,560 +outputs and also there are other works +283 +00:13:20,480 --> 00:13:25,440 +that provide like documentation of tools +284 +00:13:22,560 --> 00:13:27,279 +to help models interact with these tools and +285 +00:13:25,440 --> 00:13:30,240 +the second category is learning by +286 +00:13:27,279 --> 00:13:32,560 +training which is uh more involved in that you +287 +00:13:30,240 --> 00:13:36,480 +can just train on examples of natural +288 +00:13:32,560 --> 00:13:36,480 +language queries and tool-use solutions
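For the inference-time route, the prompt typically packs together an instruction, tool documentation, and an in-context example; a schematic, entirely made-up version:

    PROMPT = """You can call tools by writing name(arguments).
    Available tools:
      check_weather(city): returns the current weather in the city.

    Q: Should I bring an umbrella in Pittsburgh?
    A: The weather is check_weather("Pittsburgh"), so ...

    Q: {question}
    A:"""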
+289 +00:13:37,680 --> 00:13:44,720 +and using these methods many +290 +00:13:40,880 --> 00:13:47,800 +people have applied um tools um in many +291 +00:13:44,720 --> 00:13:50,880 +scenarios and here are the major five +292 +00:13:47,800 --> 00:13:54,040 +scenarios we have summarized the first +293 +00:13:50,880 --> 00:13:56,399 +is um knowledge access so that's aimed +294 +00:13:54,040 --> 00:13:58,480 +to solve like the limited knowledge +295 +00:13:56,399 --> 00:13:59,880 +models can memorize or store during the +296 +00:13:58,480 --> 00:14:01,680 +training time +297 +00:13:59,880 --> 00:14:05,880 +uh for example the current time is probably +298 +00:14:01,680 --> 00:14:09,079 +not in the training data and uh +299 +00:14:05,880 --> 00:14:11,600 +um there could be like many sources a +300 +00:14:09,079 --> 00:14:13,600 +model can access knowledge from one is +301 +00:14:11,600 --> 00:14:15,920 +structured um knowledge bases with +302 +00:14:13,600 --> 00:14:18,680 +knowledge graphs where people can use SQL +303 +00:14:15,920 --> 00:14:21,519 +executors or SPARQL executors to execute +304 +00:14:18,680 --> 00:14:24,720 +over the structured data to get the um +305 +00:14:21,519 --> 00:14:27,079 +final result and uh more generally on +306 +00:14:24,720 --> 00:14:28,839 +like free-form text people are using +307 +00:14:27,079 --> 00:14:31,600 +like search engines to search over the +308 +00:14:28,839 --> 00:14:35,160 +uh Internet and get the information +309 +00:14:31,600 --> 00:14:37,920 +they want and more generally maybe +310 +00:14:35,160 --> 00:14:40,680 +related to like retrieval-augmented generation if +311 +00:14:37,920 --> 00:14:43,199 +you have heard of it uh all like the +312 +00:14:40,680 --> 00:14:45,759 +retrieval models can be seen as +313 +00:14:43,199 --> 00:14:48,320 +knowledge-accessing tools +314 +00:14:45,759 --> 00:14:49,680 +here and the second category is +315 +00:14:48,320 --> 00:14:53,079 +computation +316 +00:14:49,680 --> 00:14:56,320 +activities so this also aims to +317 +00:14:53,079 --> 00:14:58,399 +solve the issue that for queries +318 +00:14:56,320 --> 00:15:00,000 +that require complex reasoning models +319 +00:14:58,399 --> 00:15:03,279 +are probably +320 +00:15:00,000 --> 00:15:05,199 +not good at it like they cannot solve +321 +00:15:03,279 --> 00:15:07,759 +the problem very efficiently or +322 +00:15:05,199 --> 00:15:10,480 +accurately so for example there are +323 +00:15:07,759 --> 00:15:13,320 +people that use a calculator to solve math +324 +00:15:10,480 --> 00:15:16,040 +problems or more generally for more +325 +00:15:13,320 --> 00:15:19,079 +complex operations use a Python interpreter +326 +00:15:16,040 --> 00:15:21,519 +by writing like more complex Python programs +327 +00:15:19,079 --> 00:15:24,440 +and lastly you can +328 +00:15:21,519 --> 00:15:27,720 +also leverage existing software like uh +329 +00:15:24,440 --> 00:15:30,000 +Google uh worksheets uh Google Sheets +330 +00:15:27,720 --> 00:15:33,759 +with their formulas and like take the +331 +00:15:30,000 --> 00:15:36,120 +actions from there and uh exert the +332 +00:15:33,759 --> 00:15:38,880 +actions and the third category is +333 +00:15:36,120 --> 00:15:41,319 +interacting with the world so what a +334 +00:15:38,880 --> 00:15:44,160 +language model can know is pretty +335 +00:15:41,319 --> 00:15:46,399 +limited to its training data and how do +336 +00:15:44,160 --> 00:15:49,880 +we leverage language models for +337 +00:15:46,399 --> 00:15:52,600 +example for navigating the web so a lot of +338 +00:15:49,880 --> 00:15:55,319 +this is uh like getting access to real world +339 +00:15:52,600 --> 00:15:57,680 +information so there are some cases for +340 +00:15:55,319 --> 00:16:00,440 +example calling get weather or get +341 +00:15:57,680 --> 00:16:02,399 +location to get the current uh +342 +00:16:00,440 --> 00:16:03,519 +information like the current time and +343 +00:16:02,399 --> 00:16:06,920 +your +344 +00:16:03,519 --> 00:16:11,120 +location so it can also manipulate your +345 +00:16:06,920 --> 00:16:14,000 +calendar um events to like help automate +346 +00:16:11,120 --> 00:16:15,800 +your work or it can also help like +347 +00:16:14,000 --> 00:16:19,199 +manage your +348 +00:16:15,800 --> 00:16:22,279 +email um the fourth category is non- +349 +00:16:19,199 --> 00:16:24,519 +textual modalities so this is targeting +350 +00:16:22,279 --> 00:16:28,199 +another limitation that language models have +351 +00:16:24,519 --> 00:16:30,759 +as they're like mainly processing text +352 +00:16:28,199 --> 00:16:33,920 +they can only take text input and +353 +00:16:30,759 --> 00:16:36,199 +generate text outputs but if we provide +354 +00:16:33,920 --> 00:16:38,800 +like tools that connect with other +355 +00:16:36,199 --> 00:16:42,120 +modalities we can enable language models +356 +00:16:38,800 --> 00:16:45,040 +to uh access data from other +357 +00:16:42,120 --> 00:16:48,720 +modalities or interact with them for +358 +00:16:45,040 --> 00:16:51,279 +example there is an API for example called +359 +00:16:48,720 --> 00:16:55,399 +Tad image where you can access images of
+360 +00:16:51,279 --> 00:16:59,199 +Tad or like uh delete them and also you +361 +00:16:55,399 --> 00:17:03,839 +can like listen to audios um by like the +362 +00:16:59,199 --> 00:17:07,880 +Spotify play music API but also uh +363 +00:17:03,839 --> 00:17:10,000 +like beyond um just viewing or simply +364 +00:17:07,880 --> 00:17:12,880 +manipulating the data you can +365 +00:17:10,000 --> 00:17:15,439 +also like use for example a visual QA +366 +00:17:12,880 --> 00:17:18,319 +tool which probably involves neural models +367 +00:17:15,439 --> 00:17:20,120 +to answer questions about this data in +368 +00:17:18,319 --> 00:17:23,839 +other +369 +00:17:20,120 --> 00:17:27,480 +modalities and lastly there's a special +370 +00:17:23,839 --> 00:17:29,520 +category of uh tools um +371 +00:17:27,480 --> 00:17:33,640 +based on neural models +372 +00:17:29,520 --> 00:17:37,760 +so for example you can load a QA model +373 +00:17:33,640 --> 00:17:41,880 +and uh assign QA tasks specifically to this +374 +00:17:37,760 --> 00:17:44,799 +model and also another um example is a +375 +00:17:41,880 --> 00:17:46,640 +translation uh model where you can load +376 +00:17:44,799 --> 00:17:49,679 +in a neural model as a plugin and have it do +377 +00:17:46,640 --> 00:17:52,760 +translation for you um and also like for +378 +00:17:49,679 --> 00:17:56,280 +the visual QA here it also loads like a VQA +379 +00:17:52,760 --> 00:17:57,840 +model as a plugin and um does the +380 +00:17:56,280 --> 00:18:01,520 +things you want +381 +00:17:57,840 --> 00:18:04,159 +here so yeah these are uh also again +382 +00:18:01,520 --> 00:18:06,120 +like tools that can fall into multiple +383 +00:18:04,159 --> 00:18:10,679 +categories for example visual QA can be +384 +00:18:06,120 --> 00:18:13,039 +both a special model tool and also like a non-textual +385 +00:18:10,679 --> 00:18:13,039 +modality +386 +00:18:15,480 --> 00:18:21,120 +tool um another limitation maybe not a +387 +00:18:18,919 --> 00:18:23,200 +limitation but just like a shared property +388 +00:18:21,120 --> 00:18:25,799 +of these tools is they're all designed +389 +00:18:23,200 --> 00:18:29,400 +by human experts before we even tackle +390 +00:18:25,799 --> 00:18:32,120 +the task so like with all these uh tools +391 +00:18:29,400 --> 00:18:34,720 +predefined when we want to tackle +392 +00:18:32,120 --> 00:18:37,720 +a task we can just adopt those tools and +393 +00:18:34,720 --> 00:18:39,559 +use them but there are also tasks that +394 +00:18:37,720 --> 00:18:43,480 +don't have these predesigned tools +395 +00:18:39,559 --> 00:18:45,480 +available so another question is can we +396 +00:18:43,480 --> 00:18:47,679 +automatically make tools for example +397 +00:18:45,480 --> 00:18:49,480 +using language models without like +398 +00:18:47,679 --> 00:18:52,559 +relying on human +399 +00:18:49,480 --> 00:18:55,480 +experts so in our previous work we +400 +00:18:52,559 --> 00:19:00,120 +explored this a bit on programmatic tasks and our +401 +00:18:55,480 --> 00:19:03,360 +answer is yes so to give a brief +402 +00:19:00,120 --> 00:19:06,840 +overview so for the standard way of +403 +00:19:03,360 --> 00:19:08,840 +solving programmatic tasks so maybe +404 +00:19:06,840 --> 00:19:11,480 +before that in a programmatic task +405 +00:19:08,840 --> 00:19:13,039 +like you are given a natural language problem +406 +00:19:11,480 --> 00:19:15,080 +and you ask the language models to +407 +00:19:13,039 --> 00:19:17,320 +generate a program
then you execute the +408 +00:19:15,080 --> 00:19:19,840 +result to get the final +409 +00:19:17,320 --> 00:19:22,640 +answer so the standard way of doing this is +410 +00:19:19,840 --> 00:19:25,120 +you have a stream of um test examples and +411 +00:19:22,640 --> 00:19:27,559 +you pass them to the language model +412 +00:19:25,120 --> 00:19:29,919 +and usually generate uh solutions for +413 +00:19:27,559 --> 00:19:33,000 +each of these problems but they usually +414 +00:19:29,919 --> 00:19:35,159 +look like this so um maybe much longer +415 +00:19:33,000 --> 00:19:38,840 +than this but it's usually a long +416 +00:19:35,159 --> 00:19:41,720 +program um one issue of this is that it +417 +00:19:38,840 --> 00:19:44,679 +may be prone to error for example here it +418 +00:19:41,720 --> 00:19:46,760 +just like mistypes one character +419 +00:19:44,679 --> 00:19:50,400 +which causes the entire program solution to +420 +00:19:46,760 --> 00:19:52,400 +be wrong so our motivation is what if we ask +421 +00:19:50,400 --> 00:19:57,039 +the language models to additionally +422 +00:19:52,400 --> 00:19:58,960 +generate a toolbox and now with uh for +423 +00:19:57,039 --> 00:20:01,720 +example you have like a calculate rate +424 +00:19:58,960 --> 00:20:04,559 +of change tool now the solution becomes very +425 +00:20:01,720 --> 00:20:07,280 +simple it's just like one tool-calling +426 +00:20:04,559 --> 00:20:09,960 +expression all you need to figure out is +427 +00:20:07,280 --> 00:20:12,480 +um the arguments to input into this +428 +00:20:09,960 --> 00:20:15,960 +function and that could alleviate the +429 +00:20:12,480 --> 00:20:18,600 +errors that we've seen but also it could +430 +00:20:15,960 --> 00:20:22,039 +benefit humans since it's easier for +431 +00:20:18,600 --> 00:20:22,039 +humans to verify the +432 +00:20:22,480 --> 00:20:29,080 +solution so how does our +433 +00:20:25,760 --> 00:20:31,640 +method work +434 +00:20:29,080 --> 00:20:31,640 +this might be a +435 +00:20:33,600 --> 00:20:39,360 +little small so in general here's the setup +436 +00:20:37,480 --> 00:20:42,280 +we have a stream of examples and we +437 +00:20:39,360 --> 00:20:45,600 +output solutions and the toolbox and to +438 +00:20:42,280 --> 00:20:49,400 +start we initiate an empty toolbox and prepare +439 +00:20:45,600 --> 00:20:52,679 +the examples and when we encounter a new +440 +00:20:49,400 --> 00:20:54,760 +um example with unseen functionality we +441 +00:20:52,679 --> 00:20:56,720 +ask the language model to first +442 +00:20:54,760 --> 00:20:58,600 +generate a reusable tool and then +443 +00:20:56,720 --> 00:21:01,840 +generate a solution using the tool +444 +00:20:58,600 --> 00:21:04,120 +that it just generated and because we're +445 +00:21:01,840 --> 00:21:06,799 +like operating purely at test time +446 +00:21:04,120 --> 00:21:09,280 +without training supervision +447 +00:21:06,799 --> 00:21:12,960 +to improve the quality of the pairs we do +448 +00:21:09,280 --> 00:21:15,840 +sampling and select the best and then +449 +00:21:12,960 --> 00:21:18,400 +we add the solution and also add +450 +00:21:15,840 --> 00:21:21,039 +the tool to our toolbox for future use +451 +00:21:18,400 --> 00:21:24,000 +so that aligns with the first create mode +452 +00:21:21,039 --> 00:21:26,640 +the model can create a new function um +453 +00:21:24,000 --> 00:21:30,400 +when it sees a new +454 +00:21:26,640 --> 00:21:34,840 +example and the second +455
+00:21:30,400 --> 00:21:36,679 +scenario is um using the import mode so + +457 +00:21:34,840 --> 00:21:39,559 +now suppose we have many functions in + +458 +00:21:36,679 --> 00:21:41,960 +our tool box and next time the when the + +459 +00:21:39,559 --> 00:21:46,679 +model sees an example that uses similar + +460 +00:21:41,960 --> 00:21:50,159 +functions without a setad of + +461 +00:21:46,679 --> 00:21:53,240 +um instead of regenerate a new tool we + +462 +00:21:50,159 --> 00:21:57,039 +can ask one model + +463 +00:21:53,240 --> 00:22:00,880 +to uh directly import the tool from the + +464 +00:21:57,039 --> 00:22:03,000 +tool box so it will like reuse the tool + +465 +00:22:00,880 --> 00:22:05,120 +which aligns with the second import mode + +466 +00:22:03,000 --> 00:22:06,400 +and similarly generate Solutions using + +467 +00:22:05,120 --> 00:22:09,080 +that + +468 +00:22:06,400 --> 00:22:12,480 +tool and lastly you also support a skip + +469 +00:22:09,080 --> 00:22:15,440 +mode where you can uh not generate the + +470 +00:22:12,480 --> 00:22:18,720 +tool because you think if you think the + +471 +00:22:15,440 --> 00:22:22,559 +um problem is too easy to use + +472 +00:22:18,720 --> 00:22:25,559 +to and the results are pretty good my + +473 +00:22:22,559 --> 00:22:28,720 +oping so compared to the Basel lines on + +474 +00:22:25,559 --> 00:22:31,039 +the two blocks on the top are actually + +475 +00:22:28,720 --> 00:22:34,360 +is we can our method can like improve + +476 +00:22:31,039 --> 00:22:36,799 +the accuracy a lot but also maintains a + +477 +00:22:34,360 --> 00:22:39,960 +reasonably small size of libraries as + +478 +00:22:36,799 --> 00:22:41,880 +you see here and also if you look at the + +479 +00:22:39,960 --> 00:22:44,760 +middle line which measures the number of + +480 +00:22:41,880 --> 00:22:47,440 +operations as a representative of how + +481 +00:22:44,760 --> 00:22:49,360 +complex his solution is our me the + +482 +00:22:47,440 --> 00:22:51,240 +solutions generated by our method is + +483 +00:22:49,360 --> 00:22:54,400 +much uh + +484 +00:22:51,240 --> 00:22:57,559 +simpler this is the results with Judy me + +485 +00:22:54,400 --> 00:22:59,760 +model and also related to simpler + +486 +00:22:57,559 --> 00:23:01,919 +Solutions of that method we did an + +487 +00:22:59,760 --> 00:23:04,440 +interesting human verification study + +488 +00:23:01,919 --> 00:23:07,200 +where we asked the humans uh first to + +489 +00:23:04,440 --> 00:23:10,320 +verify the correctness of the stion + +490 +00:23:07,200 --> 00:23:12,600 +whether it's correct or not and second + +491 +00:23:10,320 --> 00:23:16,440 +we ask we measure the time that humans + +492 +00:23:12,600 --> 00:23:18,440 +take to verify every single solution and + +493 +00:23:16,440 --> 00:23:21,360 +from here you can see that our methods + +494 +00:23:18,440 --> 00:23:23,960 +leat to 10% more accurate um + +495 +00:23:21,360 --> 00:23:26,679 +verification process but also make the + +496 +00:23:23,960 --> 00:23:30,400 +process 30 to 40% + +497 +00:23:26,679 --> 00:23:33,440 +faster so that's like overview of the + +498 +00:23:30,400 --> 00:23:35,679 +toot making methods so just to as event + +499 +00:23:33,440 --> 00:23:38,000 +scenarios for toot using even if we + +500 +00:23:35,679 --> 00:23:41,159 +don't have human use tools we can still + +501 +00:23:38,000 --> 00:23:44,320 +leverage L models to make tools and + +502 +00:23:41,159 --> 00:23:48,320 +finally benefit from those + +503 +00:23:44,320 --> 00:23:48,320 +tools any 
and the results are pretty good in my +472 +00:22:22,559 --> 00:22:28,720 +opinion so compared to the baselines in +473 +00:22:25,559 --> 00:22:31,039 +the two blocks on the top +474 +00:22:28,720 --> 00:22:34,360 +you can see our method can like improve +475 +00:22:31,039 --> 00:22:36,799 +the accuracy a lot but also maintains a +476 +00:22:34,360 --> 00:22:39,960 +reasonably small size of library as +477 +00:22:36,799 --> 00:22:41,880 +you see here and also if you look at the +478 +00:22:39,960 --> 00:22:44,760 +middle line which measures the number of +479 +00:22:41,880 --> 00:22:47,440 +operations as a representative of how +480 +00:22:44,760 --> 00:22:49,360 +complex the solution is the +481 +00:22:47,440 --> 00:22:51,240 +solutions generated by our method are +482 +00:22:49,360 --> 00:22:54,400 +much uh +483 +00:22:51,240 --> 00:22:57,559 +simpler these are the results with one of the +484 +00:22:54,400 --> 00:22:59,760 +models and also related to the simpler +485 +00:22:57,559 --> 00:23:01,919 +solutions of our method we did an +486 +00:22:59,760 --> 00:23:04,440 +interesting human verification study +487 +00:23:01,919 --> 00:23:07,200 +where we asked the humans uh first to +488 +00:23:04,440 --> 00:23:10,320 +verify the correctness of the solution +489 +00:23:07,200 --> 00:23:12,600 +whether it's correct or not and second +490 +00:23:10,320 --> 00:23:16,440 +we measured the time that humans +491 +00:23:12,600 --> 00:23:18,440 +take to verify every single solution and +492 +00:23:16,440 --> 00:23:21,360 +from here you can see that our methods +493 +00:23:18,440 --> 00:23:23,960 +lead to a 10% more accurate um +494 +00:23:21,360 --> 00:23:26,679 +verification process but also make the +495 +00:23:23,960 --> 00:23:30,400 +process 30 to 40% +496 +00:23:26,679 --> 00:23:33,440 +faster so that's like an overview of the +497 +00:23:30,400 --> 00:23:35,679 +tool-making methods so this extends the +498 +00:23:33,440 --> 00:23:38,000 +scenarios for tool using even if we +499 +00:23:35,679 --> 00:23:41,159 +don't have human-made tools we can still +500 +00:23:38,000 --> 00:23:44,320 +leverage language models to make tools and +501 +00:23:41,159 --> 00:23:48,320 +finally benefit from those +502 +00:23:44,320 --> 00:23:48,320 +tools any +503 +00:23:55,000 --> 00:24:02,480 +questions so far +504 +00:23:58,799 --> 00:24:05,039 +so yeah I'll go on to the next +505 +00:24:02,480 --> 00:24:08,600 +section which talks about um evaluation +506 +00:24:05,039 --> 00:24:11,880 +and empirical benefits of using +507 +00:24:08,600 --> 00:24:15,200 +tools so how do we currently evaluate tool +508 +00:24:11,880 --> 00:24:18,120 +use the current benchmarks are mainly in +509 +00:24:15,200 --> 00:24:19,679 +two categories one is reusing existing +510 +00:24:18,120 --> 00:24:22,720 +benchmarks where um they use tool- +511 +00:24:19,679 --> 00:24:25,200 +augmented language models as the +512 +00:24:22,720 --> 00:24:29,279 +alternative approach to solve the task +513 +00:24:25,200 --> 00:24:32,120 +um so for example these tasks usually +514 +00:24:29,279 --> 00:24:35,279 +involve reasoning um in +515 +00:24:32,120 --> 00:24:38,880 +the most simple text form like +516 +00:24:35,279 --> 00:24:42,679 +mathematical reasoning like the MATH data +517 +00:24:38,880 --> 00:24:45,039 +set or BIG-bench data set there are also +518 +00:24:42,679 --> 00:24:48,840 +um like a level up tasks that require +519 +00:24:45,039 --> 00:24:51,760 +structured data such as tables or +520 +00:24:48,840 --> 00:24:54,120 +knowledge bases and data sets for this +521 +00:24:51,760 --> 00:24:56,039 +category are like um there are data sets +522 +00:24:54,120 --> 00:24:58,679 +like WikiTable for more like +523 +00:24:56,039 --> 00:25:01,320 +complex structured +524 +00:24:58,679 --> 00:25:04,760 +tables and lastly there are also tasks that +525 +00:25:01,320 --> 00:25:07,760 +involve other modalities like visual QA +526 +00:25:04,760 --> 00:25:10,039 +problems where you can like generate or +527 +00:25:07,760 --> 00:25:13,120 +reuse existing program tools to execute +528 +00:25:10,039 --> 00:25:16,399 +over an image to answer some questions +529 +00:25:13,120 --> 00:25:19,360 +about it or do some editing on the +530 +00:25:16,399 --> 00:25:22,600 +image and another category of benchmarks +531 +00:25:19,360 --> 00:25:26,399 +are called like aggregated API +532 +00:25:22,600 --> 00:25:27,880 +benchmarks they mainly focus on the API +533 +00:25:26,399 --> 00:25:28,840 +category of tools and the like process of +534 +00:25:27,880 --> 00:25:31,480 +how they create these benchmarks is +535 +00:25:28,840 --> 00:25:34,880 +pretty +536 +00:25:31,480 --> 00:25:37,520 +much the same you find a website +537 +00:25:34,880 --> 00:25:41,080 +that provides a lot of like public APIs +538 +00:25:37,520 --> 00:25:44,320 +you scrape the APIs as well as like +539 +00:25:41,080 --> 00:25:47,200 +metadata about them and there are actually +540 +00:25:44,320 --> 00:25:50,559 +a lot of like data sets of this type uh +541 +00:25:47,200 --> 00:25:54,960 +however there are like two issues that +542 +00:25:50,559 --> 00:25:57,399 +we found when we analyzed these data +543 +00:25:54,960 --> 00:25:59,720 +sets one is um the naturalness issue so +544 +00:25:57,399 --> 00:26:02,279 +if you look deeper into the +545 +00:25:59,720 --> 00:26:07,480 +process of how they create examples using these +546 +00:26:02,279 --> 00:26:10,640 +APIs they usually just heuristically +547 +00:26:07,480 --> 00:26:13,880 +select like one or more APIs um and then +548 +00:26:10,640 --> 00:26:16,919 +ask for example GPT models to synthesize +549 +00:26:13,880 --> 00:26:19,200 +examples of using these APIs so the +550 +00:26:16,919 --> 00:26:19,200 +naturalness issues are
twofold one is +551 +00:26:19,200 --> 00:26:21,799 +the selected tools may not be used +552 +00:26:21,799 --> 00:26:23,480 +together in practice and the second is +553 +00:26:21,799 --> 00:26:25,960 +the examples that they created may not +554 +00:26:23,480 --> 00:26:30,000 +reflect the natural use case of these +555 +00:26:25,960 --> 00:26:33,640 +tools so um this is the first issue with +556 +00:26:30,000 --> 00:26:36,399 +existing benchmarks and the second +557 +00:26:33,640 --> 00:26:40,520 +really related to the first one is the +558 +00:26:36,399 --> 00:26:42,399 +executability of the tools so like based +559 +00:26:40,520 --> 00:26:45,679 +on our definition that tools are +560 +00:26:42,399 --> 00:26:48,039 +programs you probably think that tools +561 +00:26:45,679 --> 00:26:51,480 +can be executed because they're programs +562 +00:26:48,039 --> 00:26:53,880 +but actually no so if you look at this +563 +00:26:51,480 --> 00:26:57,760 +table actually in more than half the data +564 +00:26:53,880 --> 00:27:00,440 +sets uh their tools are not executable +565 +00:26:57,760 --> 00:27:02,360 +um this may be related or at least +566 +00:27:00,440 --> 00:27:04,799 +partially related to how they create the +567 +00:27:02,360 --> 00:27:06,799 +examples they just synthesize +568 +00:27:04,799 --> 00:27:09,360 +examples and sometimes the example +569 +00:27:06,799 --> 00:27:12,120 +outputs without actually executing +570 +00:27:09,360 --> 00:27:13,520 +the tools so to evaluate on these data +571 +00:27:12,120 --> 00:27:16,399 +sets you don't necessarily need to +572 +00:27:13,520 --> 00:27:18,520 +execute the tools they kind of like skip +573 +00:27:16,399 --> 00:27:21,440 +that step but there are also other +574 +00:27:18,520 --> 00:27:23,480 +reasons for example uh hosting these +575 +00:27:21,440 --> 00:27:25,880 +tools usually hundreds of thousands of +576 +00:27:23,480 --> 00:27:29,080 +APIs is pretty costly especially if +577 +00:27:25,880 --> 00:27:31,480 +they involve like large neural models +578 +00:27:29,080 --> 00:27:34,399 +and also some of those tools are not +579 +00:27:31,480 --> 00:27:36,320 +like stable for example the get weather +580 +00:27:34,399 --> 00:27:39,480 +get time tools they return different +581 +00:27:36,320 --> 00:27:41,159 +results at different times so it's very +582 +00:27:39,480 --> 00:27:43,799 +hard for people to create static +583 +00:27:41,159 --> 00:27:47,360 +benchmarks with a single reference text +584 +00:27:43,799 --> 00:27:47,360 +answer for this kind of +585 +00:27:48,120 --> 00:27:55,760 +problem and the evaluation metrics +586 +00:27:50,679 --> 00:27:57,760 +are um I think pretty like basic now one +587 +00:27:55,760 --> 00:28:00,640 +is the task completion rate where +588 +00:27:57,760 --> 00:28:03,000 +basically they just uh compare the +589 +00:28:00,640 --> 00:28:06,039 +model-generated response with the +590 +00:28:03,000 --> 00:28:09,440 +annotated reference response and check +591 +00:28:06,039 --> 00:28:12,200 +the overlap between them and the second is +592 +00:28:09,440 --> 00:28:15,159 +because in some of the works um the tools +593 +00:28:12,200 --> 00:28:16,840 +are not executable so they cannot like +594 +00:28:15,159 --> 00:28:18,960 +call the tool and get a final +595 +00:28:16,840 --> 00:28:21,960 +response so instead they just compare +596 +00:28:18,960 --> 00:28:25,600 +the tool selection or like the tool-calling +597 +00:28:21,960 --> 00:28:28,960 +expression match to see uh the +598 +00:28:25,600 --> 00:28:31,399 +correctness of the model generation
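A minimal version of that expression-match check, using Python's own parser (a sketch, not any benchmark's actual scorer):

    import ast

    def calls_match(predicted, reference):
        # Parse both call expressions and compare the tool name and
        # arguments structurally, ignoring whitespace differences.
        p = ast.parse(predicted, mode="eval").body
        r = ast.parse(reference, mode="eval").body
        return ast.dump(p) == ast.dump(r)

    calls_match('get_weather("NYC")', 'get_weather( "NYC" )')   # -> True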
+599 +00:28:28,960 --> 00:28:34,919 +and third especially for tool- +600 +00:28:31,399 --> 00:28:37,559 +making works um like our work they +601 +00:28:34,919 --> 00:28:39,919 +evaluate the tool usability to encourage +602 +00:28:37,559 --> 00:28:41,720 +the tools to have more like general +603 +00:28:39,919 --> 00:28:45,159 +functionalities as well as like +604 +00:28:41,720 --> 00:28:48,880 +encouraging efficiency in practical +605 +00:28:45,159 --> 00:28:52,559 +use however there may be several aspects +606 +00:28:48,880 --> 00:28:54,640 +missing um on these evaluation dimensions +607 +00:28:52,559 --> 00:28:56,360 +can anyone think of some dimensions that +608 +00:28:54,640 --> 00:29:00,039 +you think are important for tools but +609 +00:28:56,360 --> 00:29:00,039 +not covered here +610 +00:29:16,080 --> 00:29:22,840 +yeah yeah that's a good one +611 +00:29:18,600 --> 00:29:22,840 +okay any others +612 +00:29:29,919 --> 00:29:34,159 +maybe that question is a bit too hard +613 +00:29:32,000 --> 00:29:36,360 +but yeah I think the computation cost one is +614 +00:29:34,159 --> 00:29:38,039 +a good one and that's like related to +615 +00:29:36,360 --> 00:29:41,039 +the first point and second point that +616 +00:29:38,039 --> 00:29:43,720 +we have here so I'll go over each of them +617 +00:29:41,039 --> 00:29:47,760 +so one is the efficiency part it's +618 +00:29:43,720 --> 00:29:51,240 +basically comparing like um the +619 +00:29:47,760 --> 00:29:54,480 +computation cost that um you spend for +620 +00:29:51,240 --> 00:29:56,840 +language models to learn those tools uh +621 +00:29:54,480 --> 00:29:58,559 +like intuitively we know using tools can +622 +00:29:56,840 --> 00:30:01,679 +improve the performance +623 +00:29:58,559 --> 00:30:03,919 +uh but like on the other hand how +624 +00:30:01,679 --> 00:30:06,960 +much computation cost do you need to pay +625 +00:30:03,919 --> 00:30:09,799 +for example extra tokens in your prompt or +626 +00:30:06,960 --> 00:30:12,360 +uh extra training steps uh are these +627 +00:30:09,799 --> 00:30:13,519 +computation costs worthy of the improvement +628 +00:30:12,360 --> 00:30:16,080 +that the tool +629 +00:30:13,519 --> 00:30:19,880 +brought and the second is the quality of +630 +00:30:16,080 --> 00:30:21,519 +tools so instead of the task performance +631 +00:30:19,880 --> 00:30:24,080 +it's also pretty important to +632 +00:30:21,519 --> 00:30:26,159 +measure the tool performance itself how +633 +00:30:24,080 --> 00:30:28,039 +quickly can this tool return the response +634 +00:30:26,159 --> 00:30:31,000 +to you do you need to wait one second +635 +00:30:28,039 --> 00:30:33,720 +or worse 10 minutes um how +636 +00:30:31,000 --> 00:30:38,279 +computationally efficient are the tools how much +637 +00:30:33,720 --> 00:30:38,279 +like GPU usage do they cost +638 +00:30:38,440 --> 00:30:45,480 +mainly and the third is reliability of +639 +00:30:42,559 --> 00:30:47,919 +tools this uh mainly involves unstable +640 +00:30:45,480 --> 00:30:50,840 +tools that involve neural models or other +641 +00:30:47,919 --> 00:30:53,480 +randomized modules so for example if you +642 +00:30:50,840 --> 00:30:56,279 +remember the visual QA example it +643 +00:30:53,480 --> 00:30:58,679 +actually loads a VQA model and answers a +644 +00:30:56,279 --> 00:31:00,200 +question about
+629
+00:30:13,519 --> 00:30:19,880
+and the second is the quality of
+
+630
+00:30:16,080 --> 00:30:21,519
+tools so beyond the task performance
+
+631
+00:30:19,880 --> 00:30:24,080
+it's also pretty important to
+
+632
+00:30:21,519 --> 00:30:26,159
+measure the tool's performance itself how
+
+633
+00:30:24,080 --> 00:30:28,039
+quickly can this tool return a response
+
+634
+00:30:26,159 --> 00:30:31,000
+to you do you need to wait one second
+
+635
+00:30:28,039 --> 00:30:33,720
+or 10 minutes um how
+
+636
+00:30:31,000 --> 00:30:38,279
+computationally efficient are the tools how much
+
+637
+00:30:33,720 --> 00:30:38,279
+GPU do they cost memory-
+
+638
+00:30:38,440 --> 00:30:45,480
+wise and the third is the reliability of
+
+639
+00:30:42,559 --> 00:30:47,919
+tools this uh mainly involves unstable
+
+640
+00:30:45,480 --> 00:30:50,840
+tools that involve neural models or other
+
+641
+00:30:47,919 --> 00:30:53,480
+randomized modules so for example if you
+
+642
+00:30:50,840 --> 00:30:56,279
+remember the visual QA example it
+
+643
+00:30:53,480 --> 00:30:58,679
+actually loads a VQA model and answers a
+
+644
+00:30:56,279 --> 00:31:00,200
+question about the image but sometimes
+
+645
+00:30:58,679 --> 00:31:02,200
+since it's a neural model it can answer the
+
+646
+00:31:00,200 --> 00:31:05,039
+question correctly but sometimes
+
+647
+00:31:02,200 --> 00:31:07,240
+incorrectly so how are users made
+
+648
+00:31:05,039 --> 00:31:10,600
+aware of this uncertainty in the correctness
+
+649
+00:31:07,240 --> 00:31:13,880
+of this tool and if so what should we do
+
+650
+00:31:10,600 --> 00:31:16,519
+to manage
+
+651
+00:31:13,880 --> 00:31:19,120
+it and the fourth is reproducible
+
+652
+00:31:16,519 --> 00:31:22,039
+testing as we have shown with the issues
+
+653
+00:31:19,120 --> 00:31:23,840
+of the evaluation benchmarks if we
+
+654
+00:31:22,039 --> 00:31:26,320
+have a question asking what is the
+
+655
+00:31:23,840 --> 00:31:28,559
+current time or what is the current weather
+
+656
+00:31:26,320 --> 00:31:30,399
+having a static reference result is
+
+657
+00:31:28,559 --> 00:31:32,960
+probably not going to work not going to
+
+658
+00:31:30,399 --> 00:31:35,080
+make it a reliable benchmark so maybe
+
+659
+00:31:32,960 --> 00:31:37,840
+another approach is to have a reference
+
+660
+00:31:35,080 --> 00:31:41,480
+solution trajectory that for example
+
+661
+00:31:37,840 --> 00:31:43,639
+includes the get weather um tool-calling
+
+662
+00:31:41,480 --> 00:31:45,960
+expression and when the model generates
+
+663
+00:31:43,639 --> 00:31:48,480
+its solution we can run the two
+
+664
+00:31:45,960 --> 00:31:49,519
+solutions in parallel and check if their
+
+665
+00:31:48,480 --> 00:31:53,880
+responses
+
+666
+00:31:49,519 --> 00:31:56,519
+match and lastly there is the safe usage of tools
+
+667
+00:31:53,880 --> 00:31:58,919
+so for example a lot of tools are APIs
+
+668
+00:31:56,519 --> 00:32:03,080
+hosted by unknown parties
+
+669
+00:31:58,919 --> 00:32:05,880
+uh do you trust those hosts uh sometimes you
+
+670
+00:32:03,080 --> 00:32:08,639
+send your personal information to
+
+671
+00:32:05,880 --> 00:32:10,600
+the tools um are you confident
+
+672
+00:32:08,639 --> 00:32:12,480
+enough that your personal information
+
+673
+00:32:10,600 --> 00:32:15,159
+will be
+
+674
+00:32:12,480 --> 00:32:18,200
+protected these are all interesting
+
+675
+00:32:15,159 --> 00:32:21,720
+aspects that no one in the tool area
+
+676
+00:32:18,200 --> 00:32:24,919
+has really looked into yet
+
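One way to picture that reference-trajectory idea (a sketch of mine, not an implementation from the talk; `get_weather` stands in for any live, non-deterministic tool):

```python
# Hypothetical trajectory-based check: instead of comparing against a frozen
# answer, execute the reference call and the model's call at (roughly) the
# same time and compare their live results.
def trajectory_match(model_call: str, reference_call: str, tools: dict) -> bool:
    model_result = eval(model_call, {"__builtins__": {}}, tools)
    reference_result = eval(reference_call, {"__builtins__": {}}, tools)
    return model_result == reference_result

def get_weather(city: str) -> str:
    return {"Pittsburgh": "cloudy"}.get(city, "unknown")  # stub for a live API

print(trajectory_match('get_weather("Pittsburgh")',
                       'get_weather("Pittsburgh")',
                       {"get_weather": get_weather}))  # True
```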
+677
+00:32:21,720 --> 00:32:28,679
+and we specifically did a case study
+
+678
+00:32:24,919 --> 00:32:32,039
+on this where we compare
+
+679
+00:32:28,679 --> 00:32:33,600
+for each method the
+
+680
+00:32:32,039 --> 00:32:36,960
+performance improvement and the
+
+681
+00:32:33,600 --> 00:32:40,399
+computation cost that it incurs so we
+
+682
+00:32:36,960 --> 00:32:44,120
+did the analysis from two aspects one is
+
+683
+00:32:40,399 --> 00:32:47,760
+um what tasks benefit the most
+
+684
+00:32:44,120 --> 00:32:50,279
+from tools so for example this figure
+
+685
+00:32:47,760 --> 00:32:52,279
+is for a single method the Tool-
+
+686
+00:32:50,279 --> 00:32:55,000
+former method evaluated on
+
+687
+00:32:52,279 --> 00:32:58,080
+different datasets for different tasks
+
+688
+00:32:55,000 --> 00:33:00,480
+you can see from the top right
+
+689
+00:32:58,080 --> 00:33:03,880
+corner here the math datasets benefit a
+
+690
+00:33:00,480 --> 00:33:06,039
+lot they have a huge improvement um
+
+691
+00:33:03,880 --> 00:33:08,919
+but also here if you look at the
+
+692
+00:33:06,039 --> 00:33:11,200
+multilingual task it's actually not
+
+693
+00:33:08,919 --> 00:33:13,880
+zero it decreases the performance yet it
+
+694
+00:33:11,200 --> 00:33:16,240
+still uses a lot of computation so
+
+695
+00:33:13,880 --> 00:33:20,559
+for multilingual tasks we may
+
+696
+00:33:16,240 --> 00:33:23,799
+not actually get any help from using
+
+697
+00:33:20,559 --> 00:33:25,840
+tools sometimes and the other dimension is
+
+698
+00:33:23,799 --> 00:33:28,240
+what methods are efficient in using
+
+699
+00:33:25,840 --> 00:33:30,519
+tools even on the same dataset to
+
+700
+00:33:28,240 --> 00:33:32,639
+compare on the math and table datasets
+
+701
+00:33:30,519 --> 00:33:35,080
+there are three different
+
+702
+00:33:32,639 --> 00:33:35,960
+methods and there are methods that use a
+
+703
+00:33:35,080 --> 00:33:38,720
+lot of
+
+704
+00:33:35,960 --> 00:33:40,600
+computation um sometimes without
+
+705
+00:33:38,720 --> 00:33:44,519
+getting much improvement but there are
+
+706
+00:33:40,600 --> 00:33:47,080
+also methods that use much less
+
+707
+00:33:44,519 --> 00:33:48,760
+computation cost but achieve a similar
+
+708
+00:33:47,080 --> 00:33:52,240
+amount of
+
+709
+00:33:48,760 --> 00:33:55,080
+gain so yeah this kind of evaluation is
+
+710
+00:33:52,240 --> 00:33:56,880
+probably um very important to show the
+
+711
+00:33:55,080 --> 00:33:59,840
+whole picture or at least a more
+
+712
+00:33:56,880 --> 00:34:03,559
+comprehensive picture of the tool
+
+713
+00:33:59,840 --> 00:34:05,919
+methods so that's an overview of um the
+
+714
+00:34:03,559 --> 00:34:08,960
+things that I talked about today the
+
+715
+00:34:05,919 --> 00:34:11,000
+basics of tool-use scenarios and also the
+
+716
+00:34:08,960 --> 00:34:14,720
+evaluation and other
+
+717
+00:34:11,000 --> 00:34:14,720
+aspects any final
+
+718
+00:34:14,839 --> 00:34:19,879
+questions and tools and agents are
+
+719
+00:34:17,240 --> 00:34:22,200
+really closely related also so I think
+
+720
+00:34:19,879 --> 00:34:23,879
+we can also have some time uh to ask
+
+721
+00:34:22,200 --> 00:34:27,000
+questions at the end after we've covered
+
+722
+00:34:23,879 --> 00:34:29,079
+both but um you know while Frank
+
+723
+00:34:27,000 --> 00:34:31,320
+comes up if you have any questions for
+
+724
+00:34:29,079 --> 00:34:35,320
+Zora
+
+725
+00:34:31,320 --> 00:34:35,320
+so we'll set up the
+
+726
+00:34:50,720 --> 00:34:56,240
+screen hello everyone I'm Frank and
+
+727
+00:34:53,879 --> 00:34:58,359
+continuing the
+
+728
+00:34:56,240 --> 00:35:02,119
+topic about language models using
+
+729
+00:34:58,359 --> 00:35:05,440
+tools I'm going to talk about
+
+730
+00:35:02,119 --> 00:35:08,040
+uh language models as
+
+731
+00:35:05,440 --> 00:35:09,599
+agents these two are super closely
+
+732
+00:35:08,040 --> 00:35:13,200
+related
+
+733
+00:35:09,599 --> 00:35:13,200
+so as
+
+734
+00:35:13,920 --> 00:35:18,880
+sorry hang on one
+
+735
+00:35:17,320 --> 00:35:24,520
+second
+
+736
+00:35:18,880 --> 00:35:24,520
+they're not muted but it would be nice
+
+737
+00:35:28,599 --> 00:35:33,200
+so first what are agents I think Zora
+
+738
+00:35:30,680 --> 00:35:35,240
+covered this a little bit but basically
+
+739
+00:35:33,200 --> 00:35:37,280
+an agent is anything that can be viewed as you know
+
+740
+00:35:35,240 --> 00:35:39,800
+perceiving uh environments through
+
+741
+00:35:37,280 --> 00:35:42,880
+sensors and acting upon them through
+
+742
+00:35:39,800 --> 00:35:45,359
+actuators if you look at this uh plot
+
+743
+00:35:42,880 --> 00:35:48,280
+here you can see the agent will take
+
+744
+00:35:45,359 --> 00:35:50,800
+in observations from the environment and
+
+745
+00:35:48,280 --> 00:35:53,000
+perform actions on it and the agents
+
+746
+00:35:50,800 --> 00:35:55,640
+themselves will have some sort of
+
+747
+00:35:53,000 --> 00:35:58,839
+abilities or knowledge goals preferences
+
+748
+00:35:55,640 --> 00:36:01,839
+or uh prior knowledge
+
+749
+00:35:58,839 --> 00:36:05,280
+since uh the topic is about LM
+
+750
+00:36:01,839 --> 00:36:08,960
+agents we usually use the large language
+
+751
+00:36:05,280 --> 00:36:10,720
+models themselves as the agent
+
+752
+00:36:08,960 --> 00:36:14,119
+and all the
+
+753
+00:36:10,720 --> 00:36:17,319
+tools Zora covered can be used as
+
+754
+00:36:14,119 --> 00:36:19,119
+perceptors you know or actuators for
+
+755
+00:36:17,319 --> 00:36:22,640
+example if you play music that's kind of
+
+756
+00:36:19,119 --> 00:36:25,880
+an actuator and all these um abilities prior
+
+757
+00:36:22,640 --> 00:36:28,079
+knowledge or observations past experience
+
+758
+00:36:25,880 --> 00:36:31,079
+can be seen as um
+
+759
+00:36:28,079 --> 00:36:34,359
+data or training data for your language model
+
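That perceive-act framing fits in a few lines of code; here is a minimal sketch (mine, with made-up names like `Env` and `LMAgent`) of the loop that the rest of the lecture instantiates:

```python
# Skeleton of the agent/environment loop described above.
class Env:
    def observe(self) -> str: ...              # sensors: page text, screenshot, ...
    def step(self, action: str) -> None: ...   # actuators: click, type, API call

class LMAgent:
    def act(self, observation: str) -> str:
        # For an LM agent, this is a call to the language model with the
        # observation (plus goals and prior knowledge) packed into the prompt.
        ...

def run_episode(agent: LMAgent, env: Env, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        action = agent.act(env.observe())
        if action == "stop":
            break
        env.step(action)
```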
+760
+00:36:31,079 --> 00:36:37,560
+so to get started on language model
+
+761
+00:36:34,359 --> 00:36:41,079
+agents I'm going to cover four stages of
+
+762
+00:36:37,560 --> 00:36:43,960
+getting yourself a large language model agent first
+
+763
+00:36:41,079 --> 00:36:46,240
+I'll cover a bit of tasks and applications
+
+764
+00:36:43,960 --> 00:36:48,720
+second some training-free methods for
+
+765
+00:36:46,240 --> 00:36:51,160
+building agents that you can use with
+
+766
+00:36:48,720 --> 00:36:53,880
+API-based models then evaluation
+
+767
+00:36:51,160 --> 00:36:56,720
+environments and benchmarks which is a
+
+768
+00:36:53,880 --> 00:36:58,760
+super important topic in research and
+
+769
+00:36:56,720 --> 00:37:00,680
+finally I'm going to briefly cover some
+
+770
+00:36:58,760 --> 00:37:03,800
+of the training methods for improving
+
+771
+00:37:00,680 --> 00:37:06,480
+agents and since it's a very active
+
+772
+00:37:03,800 --> 00:37:09,440
+area some of these training methods
+
+773
+00:37:06,480 --> 00:37:12,520
+might not be the
+
+774
+00:37:09,440 --> 00:37:14,480
+best so first uh what are tasks and
+
+775
+00:37:12,520 --> 00:37:17,640
+applications for large language model
+
+776
+00:37:14,480 --> 00:37:19,480
+agents so to cover this
+
+777
+00:37:17,640 --> 00:37:22,040
+we need to answer why do we
+
+778
+00:37:19,480 --> 00:37:24,000
+want agents imagine if things could be
+
+779
+00:37:22,040 --> 00:37:26,960
+done by just talking that's what human
+
+780
+00:37:24,000 --> 00:37:29,960
+agents do you talk to some real estate
+
+781
+00:37:26,960 --> 00:37:32,839
+agent to buy a house for you so
+
+782
+00:37:29,960 --> 00:37:35,640
+nowadays when we are interacting with
+
+783
+00:37:32,839 --> 00:37:38,720
+computers traditionally you know you use
+
+784
+00:37:35,640 --> 00:37:41,160
+um graphical user interfaces or you write
+
+785
+00:37:38,720 --> 00:37:44,040
+code manually by hand using a
+
+786
+00:37:41,160 --> 00:37:45,920
+keyboard and mouse but what if you know
+
+787
+00:37:44,040 --> 00:37:49,560
+everything in the future could be just
+
+788
+00:37:45,920 --> 00:37:51,960
+done by talking to some Alexa or
+
+789
+00:37:49,560 --> 00:37:54,760
+Google Assistant it saves time it is
+
+790
+00:37:51,960 --> 00:37:57,119
+natural it is accessible and there's
+
+791
+00:37:54,760 --> 00:38:00,119
+no need to deal with any of those
+
+792
+00:37:57,119 --> 00:38:03,040
+programs' learning curves nowadays there
+
+793
+00:38:00,119 --> 00:38:05,480
+are some agents that help you do tasks
+
+794
+00:38:03,040 --> 00:38:08,560
+via um natural language interfaces to
+
+795
+00:38:05,480 --> 00:38:10,280
+computers for example there's Siri um
+
+796
+00:38:08,560 --> 00:38:12,200
+there's Google Assistant Alexa
+
+797
+00:38:10,280 --> 00:38:14,560
+you can set an alarm that's actually an
+
+798
+00:38:12,200 --> 00:38:17,520
+agent because it actually sets an alarm
+
+799
+00:38:14,560 --> 00:38:19,960
+for you and there are some natural
+
+800
+00:38:17,520 --> 00:38:22,359
+language programming tools I think a lot
+
+801
+00:38:19,960 --> 00:38:24,680
+of us are using GitHub
+
+802
+00:38:22,359 --> 00:38:27,160
+Copilot plugins to help write code
+
+803
+00:38:24,680 --> 00:38:29,400
+you can just say I want to sort my list
+
+804
+00:38:27,160 --> 00:38:33,240
+in descending order and it will generate
+
+805
+00:38:29,400 --> 00:38:35,960
+code that does it for you um and
+
+806
+00:38:33,240 --> 00:38:38,720
+also I think many people use Chat-
+
+807
+00:38:35,960 --> 00:38:41,760
+GPT in their daily life uh they once
+
+808
+00:38:38,720 --> 00:38:44,319
+rolled out the feature of being able
+
+809
+00:38:41,760 --> 00:38:46,440
+to include plugins into ChatGPT so
+
+810
+00:38:44,319 --> 00:38:49,440
+the tool integrations into the chatbot can
+
+811
+00:38:46,440 --> 00:38:52,960
+help you book reservations or you know uh
+
+812
+00:38:49,440 --> 00:38:57,599
+create Instacart orders and in
+
+813
+00:38:52,960 --> 00:39:01,319
+some other areas um where
+
+814
+00:38:57,599 --> 00:39:04,720
+agents could work such as robotics um
+
+815
+00:39:01,319 --> 00:39:07,520
+here in this example um the agent can
+
+816
+00:39:04,720 --> 00:39:09,839
+see the surroundings uh like in
+
+817
+00:39:07,520 --> 00:39:14,400
+Google Street View you have
+
+818
+00:39:09,839 --> 00:39:16,560
+those visual um surroundings and you ask
+
+819
+00:39:14,400 --> 00:39:18,319
+the agent to you know turn and follow
+
+820
+00:39:16,560 --> 00:39:20,440
+the track you give some natural
+
+821
+00:39:18,319 --> 00:39:21,920
+language instructions these tasks are
+
+822
+00:39:20,440 --> 00:39:24,040
+traditionally called natural language
+
+823
+00:39:21,920 --> 00:39:26,160
+navigation where given a natural
+
+824
+00:39:24,040 --> 00:39:28,119
+language prompt you are asked to go
+
+825
+00:39:26,160 --> 00:39:31,200
+through the streets
+
+826
+00:39:28,119 --> 00:39:34,319
+uh and there's also a data-
+
+827
+00:39:31,200 --> 00:39:38,200
+set called ALFWorld where you are
+
+828
+00:39:34,319 --> 00:39:40,760
+given a simulated environment here and
+
+829
+00:39:38,200 --> 00:39:42,760
+you're also given a textual description
+
+830
+00:39:40,760 --> 00:39:45,079
+like you are in the middle of a room you
+
+831
+00:39:42,760 --> 00:39:47,119
+look around you can see what's
+
+832
+00:39:45,079 --> 00:39:49,440
+surrounding you and there's a bed
+
+833
+00:39:47,119 --> 00:39:51,680
+there's a drawer and you ask
+
+834
+00:39:49,440 --> 00:39:55,400
+it you ask
+
+835
+00:39:51,680 --> 00:39:58,800
+it uh the task is to examine an alarm
+
+836
+00:39:55,400 --> 00:40:02,160
+clock with the desk uh with the desk lamp
+
+837
+00:39:58,800 --> 00:40:04,920
+and then ideally if this language
+
+838
+00:40:02,160 --> 00:40:07,240
+model agent can interact with this
+
+839
+00:40:04,920 --> 00:40:09,440
+environment it should predict oh I
+
+840
+00:40:07,240 --> 00:40:13,079
+should go to the desk one which is marked
+
+841
+00:40:09,440 --> 00:40:15,960
+here in red first and then you arrive
+
+842
+00:40:13,079 --> 00:40:18,800
+at a new location and
+
+843
+00:40:15,960 --> 00:40:21,119
+then you get a new observation so from this
+
+844
+00:40:18,800 --> 00:40:24,000
+kind of task you can already get a sense
+
+845
+00:40:21,119 --> 00:40:27,440
+of how these agents usually interact
+
+846
+00:40:24,000 --> 00:40:29,280
+with the surrounding uh
+
+847
+00:40:27,440 --> 00:40:32,440
+environment you have some sort of
+
+848
+00:40:29,280 --> 00:40:34,079
+observation you have some sort of action
+
+849
+00:40:32,440 --> 00:40:36,599
+to
+
+850
+00:40:34,079 --> 00:40:38,880
+perform and there are of course other
+
+851
+00:40:36,599 --> 00:40:42,839
+applications in games for example there
+
+852
+00:40:38,880 --> 00:40:45,160
+are many uh benchmarks or applications
+
+853
+00:40:42,839 --> 00:40:48,040
+for example this one's called MineDojo
+
+854
+00:40:45,160 --> 00:40:49,960
+it works on creating an agent that
+
+855
+00:40:48,040 --> 00:40:51,440
+can you know listen to your natural
+
+856
+00:40:49,960 --> 00:40:54,000
+language instructions and perform
+
+857
+00:40:51,440 --> 00:40:56,680
+Minecraft tasks for you there is also
+
+858
+00:40:54,000 --> 00:40:59,480
+recent work from DeepMind called
+
+859
+00:40:56,680 --> 00:41:01,720
+SIMA where uh you know given a natural
+
+860
+00:40:59,480 --> 00:41:04,040
+language um instruction like
+
+861
+00:41:01,720 --> 00:41:08,800
+shoot the asteroid it will just help you
+
+862
+00:41:04,040 --> 00:41:10,839
+shoot an asteroid in the game and also
+
+863
+00:41:08,800 --> 00:41:13,079
+uh LM agents can also be used in
+
+864
+00:41:10,839 --> 00:41:16,720
+software development uh recently there's
+
+865
+00:41:13,079 --> 00:41:20,720
+a startup that created Devin um a
+
+866
+00:41:16,720 --> 00:41:24,400
+so-called AI software engineer and
+
+867
+00:41:20,720 --> 00:41:27,359
+the illustration is this um you have this
+
+868
+00:41:24,400 --> 00:41:29,760
+code editor you have a terminal
+
+869
+00:41:27,359 --> 00:41:31,800
+sorry a code editor and a terminal
+
+870
+00:41:29,760 --> 00:41:33,520
+where you could execute commands and there's
+
+871
+00:41:31,800 --> 00:41:36,319
+also a web browser where you can search
+
+872
+00:41:33,520 --> 00:41:37,280
+for documentation ideally if everything
+
+873
+00:41:36,319 --> 00:41:39,359
+is
+
+874
+00:41:37,280 --> 00:41:42,160
+automated you can just give a
+
+875
+00:41:39,359 --> 00:41:45,400
+natural language instruction like
+
+876
+00:41:42,160 --> 00:41:47,720
+finish the 11-711 homework for me and it will
+
+877
+00:41:45,400 --> 00:41:50,359
+just uh the agent will just complete the
+
+878
+00:41:47,720 --> 00:41:52,800
+homework for you in this workspace so
+
+879
+00:41:50,359 --> 00:41:55,599
+in this scenario the observation will be
+
+880
+00:41:52,800 --> 00:41:58,440
+this string basically it contains
+
+881
+00:41:55,599 --> 00:42:00,800
+three parts the terminal the web browser
+
+882
+00:41:58,440 --> 00:42:03,599
+and the editor and the actions you take are
+
+883
+00:42:00,800 --> 00:42:05,880
+like you can issue commands to the
+
+884
+00:42:03,599 --> 00:42:07,760
+terminal or you can search in the web
+
+885
+00:42:05,880 --> 00:42:10,839
+browser or you can write code in the
+
+886
+00:42:07,760 --> 00:42:13,680
+code editor
+
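One way to picture that observation/action interface (a sketch under my own naming, not Devin's actual API) is as a small set of typed actions over the workspace:

```python
# Hypothetical action space for a coding-workspace agent like the one described.
from dataclasses import dataclass

@dataclass
class RunCommand:      # actuate the terminal
    command: str

@dataclass
class BrowseWeb:       # actuate the web browser
    url: str

@dataclass
class EditFile:        # actuate the code editor
    path: str
    new_content: str

Action = RunCommand | BrowseWeb | EditFile

# The observation handed back to the LM is one big string concatenating the
# terminal output, the rendered browser page, and the editor buffer.
def render_observation(terminal: str, browser: str, editor: str) -> str:
    return f"[TERMINAL]\n{terminal}\n[BROWSER]\n{browser}\n[EDITOR]\n{editor}"
```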
+887
+00:42:10,839 --> 00:42:17,599
+there's of course UI automation where you can browse the web and
+
+888
+00:42:13,680 --> 00:42:23,800
+play some songs or um you know zoom in
+
+889
+00:42:17,599 --> 00:42:27,960
+zoom out all these GUI navigation tasks
+
+890
+00:42:23,800 --> 00:42:30,680
+so now that we have a brief
+
+891
+00:42:27,960 --> 00:42:34,319
+overview of what tasks and applications
+
+892
+00:42:30,680 --> 00:42:38,119
+could be enabled uh say we have pretty
+
+893
+00:42:34,319 --> 00:42:41,160
+good LM agents now the topic becomes
+
+894
+00:42:38,119 --> 00:42:43,359
+uh what methods can we use to build
+
+895
+00:42:41,160 --> 00:42:44,599
+these agents at the
+
+896
+00:42:43,359 --> 00:42:50,720
+beginning I'm going to cover some
+
+897
+00:42:44,599 --> 00:42:52,760
+training-free methods so how to um you
+
+898
+00:42:50,720 --> 00:42:54,480
+know let an LM become an agent
+
+899
+00:42:52,760 --> 00:43:00,200
+basically that's the question we want to
+
+900
+00:42:54,480 --> 00:43:02,400
+answer so um usually an LM agent
+
+901
+00:43:00,200 --> 00:43:05,599
+as an agent it needs to
+
+902
+00:43:02,400 --> 00:43:08,520
+take in an observation of the
+
+903
+00:43:05,599 --> 00:43:10,319
+current environment usually
+
+904
+00:43:08,520 --> 00:43:12,480
+the observation can be you know of
+
+905
+00:43:10,319 --> 00:43:16,520
+multiple modalities you could have
+
+906
+00:43:12,480 --> 00:43:18,640
+text input as a you know as an
+
+907
+00:43:16,520 --> 00:43:22,079
+observation for example in this
+
+908
+00:43:18,640 --> 00:43:24,720
+example it's like
+
+909
+00:43:22,079 --> 00:43:26,720
+a description like the previous ALFWorld
+
+910
+00:43:24,720 --> 00:43:29,800
+example what's surrounding you I have a
+
+911
+00:43:26,720 --> 00:43:31,480
+desk I have a chair here and there and
+
+912
+00:43:29,800 --> 00:43:33,400
+if I want to grab the
+
+913
+00:43:31,480 --> 00:43:36,160
+chair what should I do next that's the
+
+914
+00:43:33,400 --> 00:43:39,000
+text input visual input of course if you
+
+915
+00:43:36,160 --> 00:43:40,880
+are building an LM agent for games of
+
+916
+00:43:39,000 --> 00:43:43,400
+course you are going to have a capture
+
+917
+00:43:40,880 --> 00:43:45,839
+or a current screenshot of your
+
+918
+00:43:43,400 --> 00:43:48,400
+character and the surrounding environment
+
+919
+00:43:45,839 --> 00:43:50,359
+sometimes we may also have audio input
+
+920
+00:43:48,400 --> 00:43:53,520
+like if you're playing games you want to
+
+921
+00:43:50,359 --> 00:43:56,119
+hear uh what's happening around you
+
+922
+00:43:53,520 --> 00:44:00,079
+or behind you that is not seen on
+
+923
+00:43:56,119 --> 00:44:03,079
+the screen and uh of course we
+
+924
+00:44:00,079 --> 00:44:05,880
+also have structured uh input
+
+925
+00:44:03,079 --> 00:44:09,119
+for example if you are
+
+926
+00:44:05,880 --> 00:44:12,240
+building say an agent for
+
+927
+00:44:09,119 --> 00:44:15,599
+websites or for other desktop
+
+928
+00:44:12,240 --> 00:44:18,640
+applications websites usually are
+
+929
+00:44:15,599 --> 00:44:21,599
+written in HTML code that represents the layout of
+
+930
+00:44:18,640 --> 00:44:24,520
+the UI so it might be a tree-like
+
+931
+00:44:21,599 --> 00:44:28,119
+structure that you can take
+
+932
+00:44:24,520 --> 00:44:32,319
+as input for the LM agent so this kind
+
+933
+00:44:28,119 --> 00:44:35,480
+of uh need for LM agents also uh means
+
+934
+00:44:32,319 --> 00:44:39,960
+that we need pretty good multimodal LMs for this to work
+
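As a toy picture of that structured input (my own example; real systems use richer accessibility trees), you can flatten a page's HTML into an indented tree with a standard parser such as BeautifulSoup:

```python
# Toy illustration: render HTML as an indented, accessibility-tree-like string.
from bs4 import BeautifulSoup

def to_tree(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    lines: list[str] = []
    def walk(tag, depth: int) -> None:
        text = tag.get_text(" ", strip=True)
        lines.append(("  " * depth + f"[{tag.name}] {text[:40]}").rstrip())
        for child in tag.find_all(recursive=False):  # direct children only
            walk(child, depth + 1)
    for top in soup.find_all(recursive=False):
        walk(top, 0)
    return "\n".join(lines)

print(to_tree('<form><label>Search</label><input name="q"><button>Go</button></form>'))
# [form] Search Go
#   [label] Search
#   [input]
#   [button] Go
```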
+935
+00:44:35,480 --> 00:44:43,800
+and uh I'm going to
+
+936
+00:44:39,960 --> 00:44:46,839
+cover how to let an LM become an agent
+
+937
+00:44:43,800 --> 00:44:50,040
+first you need to plan and reason
+
+938
+00:44:46,839 --> 00:44:52,839
+because in order to perform a complex
+
+939
+00:44:50,040 --> 00:44:55,760
+task that a human issues to it you
+
+940
+00:44:52,839 --> 00:44:57,839
+uh usually have to decompose
+
+941
+00:44:55,760 --> 00:45:01,440
+it into subtasks
+
+942
+00:44:57,839 --> 00:45:04,640
+um there are many many existing
+
+943
+00:45:01,440 --> 00:45:07,960
+methods that tackle this reasoning
+
+944
+00:45:04,640 --> 00:45:10,559
+problem and one of the most famous of which
+
+945
+00:45:07,960 --> 00:45:12,640
+is Chain of Thought reasoning which
+
+946
+00:45:10,559 --> 00:45:15,760
+uh famously just prompts the language
+
+947
+00:45:12,640 --> 00:45:18,920
+model with let's think step by step
+
+948
+00:45:15,760 --> 00:45:20,160
+um so basically the goal here is to let
+
+949
+00:45:18,920 --> 00:45:22,720
+the language model generate some
+
+950
+00:45:20,160 --> 00:45:25,040
+reasoning traces so that it has
+
+951
+00:45:22,720 --> 00:45:28,400
+roughly a good plan of how to perform
+
+952
+00:45:25,040 --> 00:45:31,160
+certain tasks
+
+953
+00:45:28,400 --> 00:45:34,000
+here is an example from ALFWorld where
+
+954
+00:45:31,160 --> 00:45:35,760
+you are given a textual description of
+
+955
+00:45:34,000 --> 00:45:37,160
+your surroundings you know you're in the
+
+956
+00:45:35,760 --> 00:45:39,760
+middle of
+
+957
+00:45:37,160 --> 00:45:42,599
+a room and what's around you and your
+
+958
+00:45:39,760 --> 00:45:46,400
+task is to put a pepper shaker in a
+
+959
+00:45:42,599 --> 00:45:50,119
+drawer and if you want to let an LM
+
+960
+00:45:46,400 --> 00:45:52,680
+reason about how to decompose this task into
+
+961
+00:45:50,119 --> 00:45:56,559
+subtasks you ask what should I do next
+
+962
+00:45:52,680 --> 00:45:59,000
+let's think step by step and the language model
+
+963
+00:45:56,559 --> 00:46:02,640
+if it's a good language model will say
+
+964
+00:45:59,000 --> 00:46:05,359
+first I need to find a pepper shaker and
+
+965
+00:46:02,640 --> 00:46:09,200
+a pepper shaker is more likely to
+
+966
+00:46:05,359 --> 00:46:12,640
+appear in cabinets countertops and
+
+967
+00:46:09,200 --> 00:46:16,520
+after I find pepper shaker one I need to
+
+968
+00:46:12,640 --> 00:46:19,240
+put it in the drawer so here you can
+
+969
+00:46:16,520 --> 00:46:21,079
+actually rely on large language models to
+
+970
+00:46:19,240 --> 00:46:25,119
+generate reasoning traces that basically
+
+971
+00:46:21,079 --> 00:46:27,760
+decompose this put pepper shaker
+
+972
+00:46:25,119 --> 00:46:29,960
+in drawer task into two subtasks first you
+
+973
+00:46:27,760 --> 00:46:34,760
+need to find it and then you have to
+
+974
+00:46:29,960 --> 00:46:36,559
+put it but it is not enough to just have
+
+975
+00:46:34,760 --> 00:46:38,760
+this kind of planning and reasoning to
+
+976
+00:46:36,559 --> 00:46:40,920
+generate these natural language plans
+
+977
+00:46:38,760 --> 00:46:42,040
+because you can't execute them uh if you
+
+978
+00:46:40,920 --> 00:46:45,440
+just have
+
+979
+00:46:42,040 --> 00:46:48,680
+a bunch of text uh an actual robot
+
+980
+00:46:45,440 --> 00:46:51,119
+will not be able to perform the task
+
+981
+00:46:48,680 --> 00:46:54,079
+and that's why we also need the language
+
+982
+00:46:51,119 --> 00:46:57,240
+model to have this tool-use ability so
+
+983
+00:46:54,079 --> 00:47:00,200
+not only should it generate um reasoning
+
+984
+00:46:57,240 --> 00:47:03,280
+traces it needs to also interact
+
+985
+00:47:00,200 --> 00:47:05,800
+with the environment here we focus on actions
+
+986
+00:47:03,280 --> 00:47:08,000
+so it needs to generate some action
+
+987
+00:47:05,800 --> 00:47:10,960
+calls execute the action calls in the
+
+988
+00:47:08,000 --> 00:47:13,119
+environment and then supposedly if your
+
+989
+00:47:10,960 --> 00:47:15,520
+action is meaningful enough it will
+
+990
+00:47:13,119 --> 00:47:17,599
+change the environment a little bit and
+
+991
+00:47:15,520 --> 00:47:20,559
+then supposedly you get a new
+
+992
+00:47:17,599 --> 00:47:22,400
+observation to put back into your prompt
+
+993
+00:47:20,559 --> 00:47:25,160
+take the same
+
+994
+00:47:22,400 --> 00:47:28,800
+example you ask it what should I do
+
+995
+00:47:25,160 --> 00:47:32,240
+let's think step by step and it
+
+996
+00:47:28,800 --> 00:47:34,680
+says oh first I need to find this pepper
+
+997
+00:47:32,240 --> 00:47:38,599
+shaker and it is more likely to appear
+
+998
+00:47:34,680 --> 00:47:42,400
+in cabinets then with proper
+
+999
+00:47:38,599 --> 00:47:45,880
+prompting you know the uh LM should be
+
+1000
+00:47:42,400 --> 00:47:48,200
+able to uh produce an action like go to
+
+1001
+00:47:45,880 --> 00:47:51,240
+cabinet one and then you actually
+
+1002
+00:47:48,200 --> 00:47:52,559
+execute this in the virtual environment
+
+1003
+00:47:51,240 --> 00:47:55,040
+and now your
+
+1004
+00:47:52,559 --> 00:47:56,960
+observation changes you go to cabinet one
+
+1005
+00:47:55,040 --> 00:48:00,720
+you actually see what's inside cabinet
+
+1006
+00:47:56,960 --> 00:48:02,960
+one and oh there's a vase
+
+1007
+00:48:00,720 --> 00:48:05,119
+which you know is not a pepper shaker so
+
+1008
+00:48:02,960 --> 00:48:08,359
+then you can do further reasoning and
+
+1009
+00:48:05,119 --> 00:48:11,559
+further action planning so this
+
+1010
+00:48:08,359 --> 00:48:15,040
+uh general framework is
+
+1011
+00:48:11,559 --> 00:48:16,960
+called ReAct and is quite uh popular for
+
+1012
+00:48:15,040 --> 00:48:18,839
+creating language model agents so
+
+1013
+00:48:16,960 --> 00:48:20,800
+basically to let a language model become
+
+1014
+00:48:18,839 --> 00:48:23,079
+an agent it needs to have planning and
+
+1015
+00:48:20,800 --> 00:48:25,319
+reasoning ability mostly
+
+1016
+00:48:23,079 --> 00:48:28,480
+achieved by Chain of Thought prompting and
+
+1017
+00:48:25,319 --> 00:48:30,599
+tool-use ability uh also you know
+
+1018
+00:48:28,480 --> 00:48:35,200
+achieved by prompting it to generate API calls
+
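The ReAct pattern just described is a short loop in code; a minimal sketch (mine, with a placeholder `llm` function standing in for any language model API):

```python
# Minimal ReAct-style loop: the model alternates Thought / Action, and each
# action's result is appended back into the prompt as an Observation.
def react_loop(task: str, llm, execute_action, max_turns: int = 10) -> str:
    prompt = f"Task: {task}\n"
    for _ in range(max_turns):
        step = llm(prompt + "Thought:")   # e.g. "I should check cabinet 1.\nAction: go to cabinet 1"
        prompt += "Thought:" + step + "\n"
        if "Action:" not in step:
            break                          # the model decided it is done
        action = step.split("Action:")[-1].strip()
        observation = execute_action(action)   # run it in the environment
        prompt += f"Observation: {observation}\n"
    return prompt
```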
+1019
+00:48:30,599 --> 00:48:39,319
+here's a real-world example um
+
+1020
+00:48:35,200 --> 00:48:42,599
+so you may ask how should I you
+
+1021
+00:48:39,319 --> 00:48:44,200
+know generate proper actions
+
+1022
+00:48:42,599 --> 00:48:47,520
+that can be
+
+1023
+00:48:44,200 --> 00:48:50,280
+executed um without training you can
+
+1024
+00:48:47,520 --> 00:48:53,520
+just do prompting so suppose you are given
+
+1025
+00:48:50,280 --> 00:48:57,040
+the following APIs the
+
+1026
+00:48:53,520 --> 00:48:59,720
+um text that is not in
+
+1027
+00:48:57,040 --> 00:49:01,480
+blue uh sorry the green text is the prompt
+
+1028
+00:48:59,720 --> 00:49:05,040
+given to the language model and the text
+
+1029
+00:49:01,480 --> 00:49:07,119
+in blue is generated by GPT-4 and you
+
+1030
+00:49:05,040 --> 00:49:10,599
+just ask it you provide it with four
+
+1031
+00:49:07,119 --> 00:49:13,799
+APIs get weather get location you know
+
+1032
+00:49:10,599 --> 00:49:16,319
+bus route or count characters and the
+
+1033
+00:49:13,799 --> 00:49:18,680
+question is you know is it okay to go
+
+1034
+00:49:16,319 --> 00:49:20,599
+hiking today to answer this question you
+
+1035
+00:49:18,680 --> 00:49:23,440
+know you can see the language model can
+
+1036
+00:49:20,599 --> 00:49:25,480
+actually reason out a good way of solving
+
+1037
+00:49:23,440 --> 00:49:30,079
+this task by first checking your
+
+1038
+00:49:25,480 --> 00:49:33,200
+location okay I'm in Seattle and then um
+
+1039
+00:49:30,079 --> 00:49:34,520
+the API call is you know it calls get
+
+1040
+00:49:33,200 --> 00:49:38,400
+weather
+
+1041
+00:49:34,520 --> 00:49:40,559
+Seattle and then it gets cloudy you
+
+1042
+00:49:38,400 --> 00:49:43,440
+know like a chance of rain and then
+
+1043
+00:49:40,559 --> 00:49:45,440
+based on this observed information it
+
+1044
+00:49:43,440 --> 00:49:48,119
+is not recommended to go hiking so this
+
+1045
+00:49:45,440 --> 00:49:51,799
+is kind of an actual example of how
+
+1046
+00:49:48,119 --> 00:49:54,400
+uh how you
+
+1047
+00:49:51,799 --> 00:49:57,640
+are going to enable an LM to generate
+
+1048
+00:49:54,400 --> 00:50:01,599
+executable actions you place examples or
+
+1049
+00:49:57,640 --> 00:50:04,000
+you give instructions of what these APIs
+
+1050
+00:50:01,599 --> 00:50:05,040
+look like in the prompt and you just
+
+1051
+00:50:04,000 --> 00:50:07,400
+continue
+
+1052
+00:50:05,040 --> 00:50:10,400
+the generation it also has a previous
+
+1053
+00:50:07,400 --> 00:50:14,160
+example as a few-shot example
+
+1054
+00:50:10,400 --> 00:50:14,160
+where you can ask new
+
+1055
+00:50:15,040 --> 00:50:20,400
+questions besides you know this
+
+1056
+00:50:17,960 --> 00:50:23,640
+kind of natural back and forth
+
+1057
+00:50:20,400 --> 00:50:26,359
+way of generating actions
+
+1058
+00:50:23,640 --> 00:50:27,559
+there's also oh actually here's a
+
+1059
+00:50:26,359 --> 00:50:31,119
+question
+
+1060
+00:50:27,559 --> 00:50:34,359
+what if there are a lot of APIs uh say now
+
+1061
+00:50:31,119 --> 00:50:37,720
+I have four APIs here for the language
+
+1062
+00:50:34,359 --> 00:50:41,440
+model to choose from but in reality I might
+
+1063
+00:50:37,720 --> 00:50:44,359
+have um a thousand possible actions to
+
+1064
+00:50:41,440 --> 00:50:47,760
+perform and if I just present them in
+
+1065
+00:50:44,359 --> 00:50:50,440
+this way like describe what each API
+
+1066
+00:50:47,760 --> 00:50:53,480
+takes as arguments and what it does
+
+1067
+00:50:50,440 --> 00:50:56,920
+they might you know
+
+1068
+00:50:53,480 --> 00:50:59,920
+exceed the context window
+
+1069
+00:50:56,920 --> 00:51:02,960
+or they might cost a lot of
+
+1070
+00:50:59,920 --> 00:51:06,359
+money because you know tokens are money so
+
+1071
+00:51:02,960 --> 00:51:09,400
+does anyone have suggestions if
+
+1072
+00:51:06,359 --> 00:51:14,440
+there are a lot of APIs how should I still
+
+1073
+00:51:09,400 --> 00:51:14,440
+let a uh language model generate API calls
+
+1074
+00:51:19,040 --> 00:51:24,160
+yeah so basically yeah that's a good
+
+1075
+00:51:21,480 --> 00:51:28,000
+answer so you have kind of an external
+
+1076
+00:51:24,160 --> 00:51:30,319
+memory where you can query based on your
+
+1077
+00:51:28,000 --> 00:51:33,480
+current context what are the most
+
+1078
+00:51:30,319 --> 00:51:35,720
+necessary required APIs from that
+
+1079
+00:51:33,480 --> 00:51:38,200
+external memory and provide the documentation
+
+1080
+00:51:35,720 --> 00:51:42,240
+for them yeah that's a good answer
+
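That suggestion, retrieving only the relevant API docs into the prompt, might look like this sketch (mine; `embed` is a placeholder for any text-embedding model):

```python
# Hypothetical retrieval over a large tool library: embed every API's
# documentation once, then pull only the top-k most relevant docs per query.
import numpy as np

def embed(text: str) -> np.ndarray:
    ...  # placeholder: call an embedding model here

def top_k_apis(query: str, api_docs: dict[str, str], k: int = 5) -> list[str]:
    q = embed(query)
    def score(doc: str) -> float:
        d = embed(doc)
        return float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))
    return sorted(api_docs, key=lambda name: score(api_docs[name]), reverse=True)[:k]

# Only these k documentations, not all 1,000, get pasted into the agent's prompt.
```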
and actually chat GP can generate a + +1098 +00:52:26,040 --> 00:52:30,839 +python code + +1099 +00:52:27,720 --> 00:52:33,520 +um that that's this task for you and + +1100 +00:52:30,839 --> 00:52:37,160 +then you probably have guessed it you + +1101 +00:52:33,520 --> 00:52:39,119 +can just execute then probably you won't + +1102 +00:52:37,160 --> 00:52:41,960 +actually need it because nowadays you + +1103 +00:52:39,119 --> 00:52:44,040 +know open ey or other jamni those + +1104 +00:52:41,960 --> 00:52:46,359 +documents have building like code + +1105 +00:52:44,040 --> 00:52:48,319 +interpreter functionality where they + +1106 +00:52:46,359 --> 00:52:50,359 +generate code but basically they + +1107 +00:52:48,319 --> 00:52:54,160 +generate a code based on your task + +1108 +00:52:50,359 --> 00:52:57,119 +instruction and execute them and this + +1109 +00:52:54,160 --> 00:53:00,440 +way you are still doing promply but and + +1110 +00:52:57,119 --> 00:53:04,960 +your reasoning or all of these Logics + +1111 +00:53:00,440 --> 00:53:08,520 +are more likely to be handled by this + +1112 +00:53:04,960 --> 00:53:12,599 +code so these are an overview previously + +1113 +00:53:08,520 --> 00:53:15,880 +we touched a bit about um how we going + +1114 +00:53:12,599 --> 00:53:18,559 +to prompt Lage models to perform task + +1115 +00:53:15,880 --> 00:53:20,559 +then we are going to touch a bit about + +1116 +00:53:18,559 --> 00:53:23,640 +evaluation environment and + +1117 +00:53:20,559 --> 00:53:27,040 +Benchmark this is a research oriented + +1118 +00:53:23,640 --> 00:53:29,720 +class we are definitely going to uh + +1119 +00:53:27,040 --> 00:53:32,480 +think about you know reproducible + +1120 +00:53:29,720 --> 00:53:35,119 +environments or like evaluations other + +1121 +00:53:32,480 --> 00:53:37,400 +than just products so evalation of + +1122 +00:53:35,119 --> 00:53:41,200 +langage model agents are actually quite + +1123 +00:53:37,400 --> 00:53:45,720 +hard existing there are several existing + +1124 +00:53:41,200 --> 00:53:48,599 +work on this area many of them contain + +1125 +00:53:45,720 --> 00:53:51,599 +simplified environments and basic tasks + +1126 +00:53:48,599 --> 00:53:54,079 +and if you are performing like basic + +1127 +00:53:51,599 --> 00:53:56,119 +test performance is saturating you have + +1128 +00:53:54,079 --> 00:53:58,920 +already seen the example I previously + +1129 +00:53:56,119 --> 00:54:01,599 +presented it to ask it to whether it is + +1130 +00:53:58,920 --> 00:54:04,720 +okay to go hiking today to check whether + +1131 +00:54:01,599 --> 00:54:06,640 +it is super easy for chat GPT to do that + +1132 +00:54:04,720 --> 00:54:10,200 +and even just to book a meeting through + +1133 +00:54:06,640 --> 00:54:11,880 +the Google API uh Google Calender API uh + +1134 +00:54:10,200 --> 00:54:14,319 +actually that code I verified is + +1135 +00:54:11,880 --> 00:54:17,079 +actually correct in the pre previous + +1136 +00:54:14,319 --> 00:54:19,599 +slide so you can see if it's simple task + +1137 +00:54:17,079 --> 00:54:22,480 +simple envirment the performance is + +1138 +00:54:19,599 --> 00:54:25,400 +saturating a 100% accuracy won't tell + +1139 +00:54:22,480 --> 00:54:27,079 +you any further progress in LM agent + +1140 +00:54:25,400 --> 00:54:30,520 +research + +1141 +00:54:27,079 --> 00:54:34,160 +so but still it's nice to know some of + +1142 +00:54:30,520 --> 00:54:36,839 +these existing um evaluation benchmarks + +1143 +00:54:34,160 --> 00:54:39,839 +the first Ty them are usually 
+1144
+00:54:36,839 --> 00:54:42,880
+with non-interactive environments uh for
+
+1145
+00:54:39,839 --> 00:54:47,400
+example the Mind2Web work um
+
+1146
+00:54:42,880 --> 00:54:49,640
+focused on uh dumping web pages and
+
+1147
+00:54:47,400 --> 00:54:53,200
+then what actions can be performed on
+
+1148
+00:54:49,640 --> 00:54:53,960
+them to get them transformed into other
+
+1149
+00:54:53,200 --> 00:54:57,200
+uh
+
+1150
+00:54:53,960 --> 00:54:59,880
+states and sometimes they are evaluated
+
+1151
+00:54:57,200 --> 00:55:01,599
+by checking action sequence
+
+1152
+00:54:59,880 --> 00:55:03,680
+accuracy sometimes they are just
+
+1153
+00:55:01,599 --> 00:55:06,040
+checking the stepwise or surface-form-
+
+1154
+00:55:03,680 --> 00:55:08,920
+only accuracy here is an example from
+
+1155
+00:55:06,040 --> 00:55:10,599
+Mind2Web so the task is to you know
+
+1156
+00:55:08,920 --> 00:55:13,480
+for one of the team
+
+1157
+00:55:10,599 --> 00:55:16,000
+leaders um follow the
+
+1158
+00:55:13,480 --> 00:55:19,760
+team leaders of one of the NHL teams from
+
+1159
+00:55:16,000 --> 00:55:19,760
+the Atlantic Division and
+
+1160
+00:55:19,799 --> 00:55:25,280
+supposedly this action sequence is the
+
+1161
+00:55:22,200 --> 00:55:27,920
+ground-truth action sequence you should
+
+1162
+00:55:25,280 --> 00:55:32,240
+follow you have this uh starting website
+
+1163
+00:55:27,920 --> 00:55:36,000
+probably the NHL one and you click on uh
+
+1164
+00:55:32,240 --> 00:55:38,520
+you hover over the link click on
+
+1165
+00:55:36,000 --> 00:55:42,480
+something and eventually you will arrive
+
+1166
+00:55:38,520 --> 00:55:45,280
+at the final desired state but um these
+
+1167
+00:55:42,480 --> 00:55:48,359
+previous uh benchmarks like Mind2Web
+
+1168
+00:55:45,280 --> 00:55:51,119
+evaluate based on the accuracy of how
+
+1169
+00:55:48,359 --> 00:55:54,079
+the predicted actions match the
+
+1170
+00:55:51,119 --> 00:55:57,119
+ground-truth ones and they only check
+
+1171
+00:55:54,079 --> 00:56:00,400
+you know if the click is correct or
+
+1172
+00:55:57,119 --> 00:56:02,359
+if the argument is correct does anyone
+
+1173
+00:56:00,400 --> 00:56:05,319
+have an idea why this might not be
+
+1174
+00:56:02,359 --> 00:56:08,520
+a desirable way of evaluating
+
+1175
+00:56:05,319 --> 00:56:08,520
+LM agents yeah go
+
+1176
+00:56:11,440 --> 00:56:17,200
+ahead different
+
+1177
+00:56:14,440 --> 00:56:19,720
+way yeah yeah that's a
+
+1178
+00:56:17,200 --> 00:56:22,839
+good answer and also possibly other
+
+1179
+00:56:19,720 --> 00:56:25,039
+reasons are so you first have to go
+
+1180
+00:56:22,839 --> 00:56:27,240
+through it in this order but you know
+
+1181
+00:56:25,039 --> 00:56:29,359
+sometimes these orders are not
+
+1182
+00:56:27,240 --> 00:56:32,799
+strictly dependent on each other so you
+
+1183
+00:56:29,359 --> 00:56:36,240
+can perform one task like say this task
+
+1184
+00:56:32,799 --> 00:56:38,760
+is composed of two subtasks you can do
+
+1185
+00:56:36,240 --> 00:56:40,400
+these two in parallel or in any order
+
+1186
+00:56:38,760 --> 00:56:43,359
+so if you are just given a ground-
+
+1187
+00:56:40,400 --> 00:56:45,280
+truth action sequence and you grade based
+
+1188
+00:56:43,359 --> 00:56:47,839
+on you know if the action is correct and
+
+1189
+00:56:45,280 --> 00:56:50,119
+if the argument is correct then you are
+
+1190
+00:56:47,839 --> 00:56:51,880
+going to miss out you know basically
+
+1191
+00:56:50,119 --> 00:56:54,359
+miss out a lot of cases where the
+
+1192
+00:56:51,880 --> 00:56:58,319
+agent actually does the task correctly but
+
+1193
+00:56:54,359 --> 00:57:00,640
+not via this exact sequence
+
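To make that failure mode concrete, here is a toy sketch (mine) of step-matching evaluation; a reordered but equally valid solution scores poorly:

```python
# Step-matching evaluation as used by action-sequence benchmarks (toy version).
def stepwise_accuracy(predicted: list[str], reference: list[str]) -> float:
    hits = sum(p == r for p, r in zip(predicted, reference))
    return hits / len(reference)

reference = ["click(team_page)", "click(roster)", "click(leaders)"]
reordered = ["click(roster)", "click(team_page)", "click(leaders)"]  # also reaches the goal
print(stepwise_accuracy(reordered, reference))  # ~0.33 despite a correct outcome
```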
+1194
+00:56:58,319 --> 00:57:03,079
+and there are other types so previously I was
+
+1195
+00:57:00,640 --> 00:57:05,200
+talking about you know these
+
+1196
+00:57:03,079 --> 00:57:07,359
+kinds of stateless
+
+1197
+00:57:05,200 --> 00:57:09,359
+non-interactive environments but then
+
+1198
+00:57:07,359 --> 00:57:11,079
+there are also some
+
+1199
+00:57:09,359 --> 00:57:14,160
+interactive environments already
+
+1200
+00:57:11,079 --> 00:57:16,839
+available but usually they are short-
+
+1201
+00:57:14,160 --> 00:57:19,839
+horizon for example there's WebShop and
+
+1202
+00:57:16,839 --> 00:57:23,119
+MiniWoB the Mini World of Bits here's an
+
+1203
+00:57:19,839 --> 00:57:27,039
+example of an actual interactive web
+
+1204
+00:57:23,119 --> 00:57:29,440
+environment for agents so it's
+
+1205
+00:57:27,039 --> 00:57:31,559
+simple web pages where you can click you
+
+1206
+00:57:29,440 --> 00:57:35,920
+can enter stuff so there are simple
+
+1207
+00:57:31,559 --> 00:57:40,319
+tasks for example in this
+
+1208
+00:57:35,920 --> 00:57:45,200
+example you're just asked to um submit I
+
+1209
+00:57:40,319 --> 00:57:47,119
+love 711 in this text box and submit so
+
+1210
+00:57:45,200 --> 00:57:50,280
+as you can imagine these tasks usually
+
+1211
+00:57:47,119 --> 00:57:53,440
+just take one or two or three
+
+1212
+00:57:50,280 --> 00:57:57,160
+actions to perform so that's why we call them
+
+1213
+00:57:53,440 --> 00:58:00,000
+short-horizon and the environment is
+
+1214
+00:57:57,160 --> 00:58:01,280
+pretty simple because it's just like the
+
+1215
+00:58:00,000 --> 00:58:03,640
+websites from
+
+1216
+00:58:01,280 --> 00:58:08,000
+the 1990s
+
+1217
+00:58:03,640 --> 00:58:11,680
+so and also there
+
+1218
+00:58:08,000 --> 00:58:14,480
+is um the WebShop benchmark which is a
+
+1219
+00:58:11,680 --> 00:58:16,680
+simplified version of Amazon basically
+
+1220
+00:58:14,480 --> 00:58:22,599
+Amazon as you know it
+
+1221
+00:58:16,680 --> 00:58:25,119
+was 20 years ago and there um you
+
+1222
+00:58:22,599 --> 00:58:27,680
+can search for a jacket your favorite
+
+1223
+00:58:25,119 --> 00:58:31,160
+jacket the question here is
+
+1224
+00:58:27,680 --> 00:58:34,520
+you know I'm looking for a red color
+
+1225
+00:58:31,160 --> 00:58:36,400
+women's warm jacket coat with a
+
+1226
+00:58:34,520 --> 00:58:39,319
+price lower than $70 you have a lot of
+
+1227
+00:58:36,400 --> 00:58:41,839
+instructions here the goal is to find
+
+1228
+00:58:39,319 --> 00:58:43,480
+the correct uh item that suits your
+
+1229
+00:58:41,839 --> 00:58:46,160
+instructions you still have to
+
+1230
+00:58:43,480 --> 00:58:49,599
+navigate through this um simplified
+
+1231
+00:58:46,160 --> 00:58:52,799
+version of Amazon but you know the same
+
+1232
+00:58:49,599 --> 00:58:56,160
+issue applies it is a
+
+1233
+00:58:52,799 --> 00:58:58,319
+pretty simple environment and it is uh
+
+1234
+00:58:56,160 --> 00:59:00,359
+short-horizon just to give you an
+
+1235
+00:58:58,319 --> 00:59:04,599
+example just to give you a
+
+1236
+00:59:00,359 --> 00:59:08,039
+feeling if you ask GPT-4 to perform on
+
+1237
+00:59:04,599 --> 00:59:09,880
+this kind of WebShop task with proper
+
+1238
+00:59:08,039 --> 00:59:15,960
+prompting it can already achieve
+
+1239
+00:59:09,880 --> 00:59:20,200
+basically half of the tasks so um it's getting
+
+1240
+00:59:15,960 --> 00:59:23,640
+there so that's why we think
+
+1241
+00:59:20,200 --> 00:59:26,240
+if you are interested in building agent
+
+1242
+00:59:23,640 --> 00:59:29,119
+benchmarks there are several key
+
+1243
+00:59:26,240 --> 00:59:31,760
+considerations so first you have to have
+
+1244
+00:59:29,119 --> 00:59:33,440
+an interactive environment because
+
+1245
+00:59:31,760 --> 00:59:36,280
+without an environment you know you
+
+1246
+00:59:33,440 --> 00:59:39,319
+are stuck with uh evaluation
+
+1247
+00:59:36,280 --> 00:59:41,160
+metrics that are just uh checking if
+
+1248
+00:59:39,319 --> 00:59:44,119
+the action is correct rather than the
+
+1249
+00:59:41,160 --> 00:59:46,119
+final execution results so uh and also
+
+1250
+00:59:44,119 --> 00:59:47,200
+you know you need to have kind of
+
+1251
+00:59:46,119 --> 00:59:49,520
+diverse
+
+1252
+00:59:47,200 --> 00:59:51,960
+functionality previously you were
+
+1253
+00:59:49,520 --> 00:59:53,559
+just focused on shopping and then you
+
+1254
+00:59:51,960 --> 00:59:56,440
+know if I want to cheat on this
+
+1255
+00:59:53,559 --> 00:59:59,760
+benchmark I would just overfit to this
+
+1256
+00:59:56,440 --> 01:00:02,240
+shopping functionality and then you need
+
+1257
+00:59:59,760 --> 01:00:05,400
+to have rich and realistic content to
+
+1258
+01:00:02,240 --> 01:00:08,680
+make it uh basically
+
+1259
+01:00:05,400 --> 01:00:10,599
+closer to modern websites so you can
+
+1260
+01:00:08,680 --> 01:00:13,720
+transfer your performance on these
+
+1261
+01:00:10,599 --> 01:00:15,680
+benchmarks better to real websites it
+
+1262
+01:00:13,720 --> 01:00:18,400
+has to be interactive and easily
+
+1263
+01:00:15,680 --> 01:00:22,240
+extendable and reproducible being reproducible
+
+1264
+01:00:18,400 --> 01:00:26,640
+is quite important for um creating a
+
+1265
+01:00:22,240 --> 01:00:29,200
+benchmark uh in the research community
+
+1266
+01:00:26,640 --> 01:00:32,119
+that's why we don't
+
+1267
+01:00:29,200 --> 01:00:35,359
+want to use live websites as an environment
+
+1268
+01:00:32,119 --> 01:00:37,440
+because they change so often you got 90%
+
+1269
+01:00:35,359 --> 01:00:40,240
+accuracy yesterday and today you will only
+
+1270
+01:00:37,440 --> 01:00:43,720
+get 20% accuracy because the websites
+
+1271
+01:00:40,240 --> 01:00:47,280
+became much much harder just
+
+1272
+01:00:43,720 --> 01:00:50,240
+overnight and for tasks we ideally want long-
+
+1273
+01:00:47,280 --> 01:00:53,480
+horizon tasks with enough difficulty and
+
+1274
+01:00:50,240 --> 01:00:56,160
+also ideally involving multiple websites because
+
+1275
+01:00:53,480 --> 01:00:58,799
+that's more realistic you are not going
+
+1276
+01:00:56,160 --> 01:01:01,760
+to be stuck all day browsing the Amazon
+
+1277
+01:00:58,799 --> 01:01:04,799
+website alone you sometimes go to you
+
+1278
+01:01:01,760 --> 01:01:07,559
+know other websites like Reddit to
+
+1279
+01:01:04,799 --> 01:01:11,079
+search for some reviews so it is nice to
+
+1280
+01:01:07,559 --> 01:01:13,839
+involve them and also for
+
+1281
+01:01:11,079 --> 01:01:17,160
+evaluation it's nice to have reliable
+
+1282
+01:01:13,839 --> 01:01:19,760
+metrics so that it encourages the final goal
+
+1283
+01:01:17,160 --> 01:01:22,240
+rather than partial satisfaction and
+
+1284
+01:01:19,760 --> 01:01:24,400
+also it encourages you know the agent to
+
+1285
+01:01:22,240 --> 01:01:26,400
+actually perform the task right instead
+
+1286
+01:01:24,400 --> 01:01:28,240
+of just following you know the provided
+
+1287
+01:01:26,400 --> 01:01:30,559
+ground-truth actions because you can
+
+1288
+01:01:28,240 --> 01:01:32,480
+have multiple ways of achieving the same
+
+1289
+01:01:30,559 --> 01:01:35,599
+correct uh final
+
+1290
+01:01:32,480 --> 01:01:38,920
+goal so here I present uh one of the
+
+1291
+01:01:35,599 --> 01:01:42,799
+latest works in this uh area called Web-
+
+1292
+01:01:38,920 --> 01:01:45,720
+Arena and in its environment we
+
+1293
+01:01:42,799 --> 01:01:48,799
+tried to satisfy the environment
+
+1294
+01:01:45,720 --> 01:01:51,359
+requirements we previously laid out which
+
+1295
+01:01:48,799 --> 01:01:53,640
+is a sandboxed internet it is an open-source
+
+1296
+01:01:51,359 --> 01:01:56,240
+and production-ready implementation of
+
+1297
+01:01:53,640 --> 01:01:58,160
+websites with data created
+
+1298
+01:01:56,240 --> 01:02:01,079
+from real-world websites we basically
+
+1299
+01:01:58,160 --> 01:02:04,559
+scrape Amazon say for example and put it
+
+1300
+01:02:01,079 --> 01:02:07,400
+in our fake Amazon website it is also
+
+1301
+01:02:04,559 --> 01:02:11,400
+easily distributable we use you know
+
+1302
+01:02:07,400 --> 01:02:13,640
+Docker to distribute them so since we want it
+
+1303
+01:02:11,400 --> 01:02:16,000
+to be reproducible we
+
+1304
+01:02:13,640 --> 01:02:18,599
+don't use live websites and that kind
+
+1305
+01:02:16,000 --> 01:02:21,359
+of means that uh the website selection
+
+1306
+01:02:18,599 --> 01:02:24,599
+is going to be limited no matter what
+
+1307
+01:02:21,359 --> 01:02:26,520
+however we try to make it you know as
+
+1308
+01:02:24,599 --> 01:02:28,760
+diverse as possible by including a
+
+1309
+01:02:26,520 --> 01:02:33,039
+shopping website a content management
+
+1310
+01:02:28,760 --> 01:02:36,880
+website as well as a Reddit-like forum
+
+1311
+01:02:33,039 --> 01:02:39,799
+plus a GitLab so it covers um
+
+1312
+01:02:36,880 --> 01:02:42,000
+social media some development and
+
+1313
+01:02:39,799 --> 01:02:44,880
+work-related things as well as content
+
+1314
+01:02:42,000 --> 01:02:47,520
+management and shopping we also have a
+
+1315
+01:02:44,880 --> 01:02:51,240
+Wikipedia and some other tools like we
+
+1316
+01:02:47,520 --> 01:02:53,760
+even have a map here in this
+
+1317
+01:02:51,240 --> 01:02:56,799
+benchmark called WebArena and then you
+
+1318
+01:02:53,760 --> 01:02:59,720
+need to collect realistic intents
+
+1319
+01:02:56,799 --> 01:03:01,599
+so of course a good way of collecting
+
+1320
+01:02:59,720 --> 01:03:05,920
+these is just checking our own browser
+
+1321
+01:03:01,599 --> 01:03:07,520
+history or checking others' then uh we
+
+1322
+01:03:05,920 --> 01:03:10,119
+categorize them into three
+
+1323
+01:03:07,520 --> 01:03:12,839
+different types first is information
+
+1324
+01:03:10,119 --> 01:03:15,279
+seeking a lot of our browser history
+
+1325
+01:03:12,839 --> 01:03:17,440
+or what we do on the internet is checking
+
+1326
+01:03:15,279 --> 01:03:21,079
+information for example when was the
+
+1327
+01:03:17,440 --> 01:03:25,160
+last time I bought shampoo to remind
+
+1328
+01:03:21,079 --> 01:03:28,599
+myself and then some other things are
+
+1329
+01:03:25,160 --> 01:03:31,279
+just navigation you know as a human how
+
+1330
+01:03:28,599 --> 01:03:34,079
+to get there for example I want to check
+
+1331
+01:03:31,279 --> 01:03:37,599
+out merge requests that are assigned to me
+
+1332
+01:03:34,079 --> 01:03:40,520
+as my new day of work starts and then
+
+1333
+01:03:37,599 --> 01:03:43,559
+there are some content and configuration
+
+1334
+01:03:40,520 --> 01:03:46,640
+operations so here's an example uh
+
+1335
+01:03:43,559 --> 01:03:49,119
+these types of tasks usually are
+
+1336
+01:03:46,640 --> 01:03:52,200
+going to modify the environment somewhat
+
+1337
+01:03:49,119 --> 01:03:54,839
+because as we previously
+
+1338
+01:03:52,200 --> 01:03:57,559
+discussed an agent not only can perceive
+
+1339
+01:03:54,839 --> 01:04:00,200
+or get information from the environment
+
+1340
+01:03:57,559 --> 01:04:01,799
+it sometimes has to do actuations back
+
+1341
+01:04:00,200 --> 01:04:04,760
+to the environment for example if I want
+
+1342
+01:04:01,799 --> 01:04:07,559
+to post a question is a car necessary in
+
+1343
+01:04:04,760 --> 01:04:10,039
+New York City in a subreddit where I'm
+
+1344
+01:04:07,559 --> 01:04:12,400
+most likely to get an answer here if
+
+1345
+01:04:10,039 --> 01:04:15,480
+you think about the you know the web-
+
+1346
+01:04:12,400 --> 01:04:18,000
+site as an environment you actually
+
+1347
+01:04:15,480 --> 01:04:18,880
+made a dent in the environment
+
+1348
+01:04:18,000 --> 01:04:23,880
+this
+
+1349
+01:04:18,880 --> 01:04:26,559
+way so here's an example task in this uh
+
+1350
+01:04:23,880 --> 01:04:29,640
+in this benchmark uh
+
+1351
+01:04:26,559 --> 01:04:32,520
+create a plan to visit Pittsburgh uh
+
+1352
+01:04:29,640 --> 01:04:36,160
+museums with minimal driving distance
+
+1353
+01:04:32,520 --> 01:04:41,279
+starting from Schenley Park log the order in
+
+1354
+01:04:36,160 --> 01:04:45,359
+my uh you know awesome Northeast US
+
+1355
+01:04:41,279 --> 01:04:48,960
+travel repository uh so here you can
+
+1356
+01:04:45,359 --> 01:04:52,319
+think of it as a pretty complex uh task it
+
+1357
+01:04:48,960 --> 01:04:56,839
+involves if you just walk through it at
+
+1358
+01:04:52,319 --> 01:04:59,520
+least 20 uh 20 page
+
+1359
+01:04:56,839 --> 01:05:02,440
+transitions and you also have to cover
+
+1360
+01:04:59,520 --> 01:05:04,400
+several websites so you first have to
+
+1361
+01:05:02,440 --> 01:05:06,000
+search for museums in Pittsburgh where probably
+
+1362
+01:05:04,400 --> 01:05:09,079
+you're going to use Google or you'll
+
+1363
+01:05:06,000 --> 01:05:11,760
+probably use um Wikipedia and then after
+
+1364
+01:05:09,079 --> 01:05:15,119
+you get that you are going to search for
+
+1365
+01:05:11,760 --> 01:05:17,680
+each art museum on the map software and
+
+1366
+01:05:15,119 --> 01:05:19,319
+finally you check you know the minimal
+
+1367
+01:05:17,680 --> 01:05:21,520
+driving distance so there's even
+
+1368
+01:05:19,319 --> 01:05:23,680
+some mathematical reasoning here you have
+
+1369
+01:05:21,520 --> 01:05:26,599
+to gather the driving distances and find
+
+1370
+01:05:23,680 --> 01:05:30,400
+the minimal one and record them in the
+
+1371
+01:05:26,599 --> 01:05:32,960
+repository and that involves GitLab
+
+1372
+01:05:30,400 --> 01:05:34,880
+operations and for these kinds of
+
+1373
+01:05:32,960 --> 01:05:37,559
+complex tasks we are definitely not
+
+1374
+01:05:34,880 --> 01:05:40,119
+going to rely on uh action-sequence-
+
+1375
+01:05:37,559 --> 01:05:42,279
+based evaluation so our goal is to
+
+1376
+01:05:40,119 --> 01:05:44,920
+directly validate the correctness of the
+
+1377
+01:05:42,279 --> 01:05:47,760
+execution for example when was the last
+
+1378
+01:05:44,920 --> 01:05:50,079
+time I bought shampoo the answer can
+
+1379
+01:05:47,760 --> 01:05:53,359
+be directly checked you
+
+1380
+01:05:50,079 --> 01:05:56,240
+know it's just some date
+
+1381
+01:05:53,359 --> 01:05:59,319
+because we know ahead of
+
+1382
+01:05:56,240 --> 01:06:01,720
+time what data we have in this benchmark
+
+1383
+01:05:59,319 --> 01:06:04,640
+so you have the correct answer to check
+
+1384
+01:06:01,720 --> 01:06:06,559
+against and others are more tricky
+
+1385
+01:06:04,640 --> 01:06:09,480
+for example the previous question of is
+
+1386
+01:06:06,559 --> 01:06:12,480
+a car necessary in New York City uh you
+
+1387
+01:06:09,480 --> 01:06:16,440
+actually have to check oh did it end up at
+
+1388
+01:06:12,480 --> 01:06:18,640
+the uh New York City subreddit in the page URL
+
+1389
+01:06:16,440 --> 01:06:20,440
+and then whether the content is a car
+
+1390
+01:06:18,640 --> 01:06:24,760
+necessary in New York City is actually
+
+1391
+01:06:20,440 --> 01:06:28,039
+inside the uh posted page or the HTML
+
+1392
+01:06:24,760 --> 01:06:30,480
+page so with these kinds of verifiers you
+
+1393
+01:06:28,039 --> 01:06:32,599
+can think of them as unit tests in software
+
+1394
+01:06:30,480 --> 01:06:35,799
+development we are only checking the
+
+1395
+01:06:32,599 --> 01:06:37,760
+output or the outcome the final state
+
+1396
+01:06:35,799 --> 01:06:40,240
+uh of the environment and checking if it is
+
+1397
+01:06:37,760 --> 01:06:43,119
+correct this alleviates the issue of
+
+1398
+01:06:40,240 --> 01:06:45,720
+you know relying on checking actions
+
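Those unit-test-style verifiers can be pictured like this (a sketch of mine in the spirit of the checks just described, not WebArena's actual code):

```python
# Hypothetical outcome-based verifier: inspect the final environment state
# instead of the action sequence that produced it.
def verify_reddit_post(final_url: str, final_page_html: str) -> bool:
    in_right_subreddit = "/r/nyc" in final_url            # assumed target subreddit
    has_question = "is a car necessary in New York City" in final_page_html
    return in_right_subreddit and has_question

# Any action sequence that ends on a page satisfying both checks passes,
# which is what lets agents take alternative-but-correct routes.
```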
1419
+01:07:41,480 --> 01:07:47,119
+in the instruction we let it
+
1420
+01:07:44,279 --> 01:07:48,839
+know it's an autonomous intelligent agent
+
1421
+01:07:47,119 --> 01:07:51,000
+and it can observe the following
+
1422
+01:07:48,839 --> 01:07:52,520
+information we describe a little bit of
+
1423
+01:07:51,000 --> 01:07:54,240
+what the observation space would look
+
1424
+01:07:52,520 --> 01:07:56,920
+like and it can do the following
+
1425
+01:07:54,240 --> 01:07:59,640
+actions what the action space is and
+
1426
+01:07:56,920 --> 01:08:01,640
+we provide it a few examples
+
1427
+01:07:59,640 --> 01:08:04,079
+like what the observation here would be
+
1428
+01:08:01,640 --> 01:08:07,079
+you can see the observation here is like
+
1429
+01:08:04,079 --> 01:08:09,520
+a tree-based tree-like structure
+
1430
+01:08:07,079 --> 01:08:11,839
+basically a filtered-down version of
+
1431
+01:08:09,520 --> 01:08:14,279
+the HTML representation of the web
+
1432
+01:08:11,839 --> 01:08:17,400
+page you have the URL of course you have
+
1433
+01:08:14,279 --> 01:08:19,719
+the objective or the task description and
+
1434
+01:08:17,400 --> 01:08:21,440
+an example output here you can see we are
+
1435
+01:08:19,719 --> 01:08:23,839
+using uh chain-of-thought based
+
1436
+01:08:21,440 --> 01:08:26,400
+reasoning uh thinking step by step about
+
1437
+01:08:23,839 --> 01:08:29,120
+what it should do and then acting because
+
1438
+01:08:26,400 --> 01:08:31,640
+the action space was provided earlier this
+
1439
+01:08:29,120 --> 01:08:34,199
+kind of action is something we
+
1440
+01:08:31,640 --> 01:08:37,520
+can execute in the environment you can
+
1441
+01:08:34,199 --> 01:08:39,880
+see WebArena is a very challenging task
+
1442
+01:08:37,520 --> 01:08:42,159
+um for humans when we ask humans to
+
1443
+01:08:39,880 --> 01:08:47,480
+perform these tasks they can achieve
+
1444
+01:08:42,159 --> 01:08:49,719
+about 78% accuracy within a time
+
1445
+01:08:47,480 --> 01:08:53,920
+limit of two minutes we give each human
+
1446
+01:08:49,719 --> 01:08:56,359
+two minutes but with GPT-3.5 with advanced
+
1447
+01:08:53,920 --> 01:08:58,719
+prompting techniques or even GPT-4
+
1448
+01:08:56,359 --> 01:09:01,960
+with advanced prompting techniques
+
1449
+01:08:58,719 --> 01:09:04,759
+we can only solve about 14% of the tasks
+
1450
+01:09:01,960 --> 01:09:07,400
+so chain of thought indeed helps but it
+
1451
+01:09:04,759 --> 01:09:09,199
+provides limited benefit and GPT-4
+
1452
+01:09:07,400 --> 01:09:11,880
+remains significantly behind human
+
1453
+01:09:09,199 --> 01:09:15,120
+performance and prompt engineering you
+
1454
+01:09:11,880 --> 01:09:18,239
+know sometimes exposes large language
+
1455
+01:09:15,120 --> 01:09:20,839
+models' sensitivity to subtle instruction
+
1456
+01:09:18,239 --> 01:09:22,600
+changes because prompt engineering
+
1457
+01:09:20,839 --> 01:09:26,199
+sometimes actually is
+
1458
+01:09:22,600 --> 01:09:28,359
+hard and here are some failure cases
+
1459
+01:09:26,199 --> 01:09:29,920
+for example sometimes the language
+
1460
+01:09:28,359 --> 01:09:32,040
+model just doesn't know what button to
+
1461
+01:09:29,920 --> 01:09:34,279
+click show me the customers who have
+
1462
+01:09:32,040 --> 01:09:37,279
+expressed dissatisfaction with
+
1463
+01:09:34,279 --> 01:09:39,319
+a zip jacket then the
+
1464
+01:09:37,279 --> 01:09:41,000
+correct one as a human you know you
+
1465
+01:09:39,319 --> 01:09:43,679 +should probably go to the catalog + +1466 +01:09:41,000 --> 01:09:45,440 +product page and check reviews or just + +1467 +01:09:43,679 --> 01:09:47,799 +go to the review sections and search for + +1468 +01:09:45,440 --> 01:09:50,239 +the jacket but you know sometimes the + +1469 +01:09:47,799 --> 01:09:52,400 +gp4 without these kind of Common Sense + +1470 +01:09:50,239 --> 01:09:55,960 +knowledge they would just go to + +1471 +01:09:52,400 --> 01:09:57,840 +customers sections so this one is + +1472 +01:09:55,960 --> 01:09:59,760 +basically means that the language model + +1473 +01:09:57,840 --> 01:10:02,120 +does not have good enough reasoning or + +1474 +01:09:59,760 --> 01:10:04,520 +planning ability without a basic + +1475 +01:10:02,120 --> 01:10:07,360 +information a basic knowledge sometimes + +1476 +01:10:04,520 --> 01:10:10,480 +it's just uh simply not being accurate + +1477 +01:10:07,360 --> 01:10:13,120 +for example you ask it to uh enter a due + +1478 +01:10:10,480 --> 01:10:15,239 +date it enters a wrong format and then + +1479 +01:10:13,120 --> 01:10:18,760 +you know if the website is not designed + +1480 +01:10:15,239 --> 01:10:21,480 +really well I would just stop at this + +1481 +01:10:18,760 --> 01:10:24,120 +point because it is a incorrect format + +1482 +01:10:21,480 --> 01:10:26,679 +ideally you should use this state peer + +1483 +01:10:24,120 --> 01:10:28,239 +State peer like vdet but sometimes the + +1484 +01:10:26,679 --> 01:10:31,600 +language model will just figure out to + +1485 +01:10:28,239 --> 01:10:35,560 +enter some text itself and sometimes + +1486 +01:10:31,600 --> 01:10:39,440 +it's trivial errors like in our study + +1487 +01:10:35,560 --> 01:10:42,840 +gb4 in with gb4 about 21% of examples + +1488 +01:10:39,440 --> 01:10:44,760 +fil due to repeated typing I we think + +1489 +01:10:42,840 --> 01:10:47,280 +this is probably related to + +1490 +01:10:44,760 --> 01:10:50,280 +hallucination effects in large common in + +1491 +01:10:47,280 --> 01:10:53,320 +large rank models they will just enter a + +1492 +01:10:50,280 --> 01:10:55,960 +bunch of you know DMV area DMV era DMV + +1493 +01:10:53,320 --> 01:10:59,239 +area and then these two Arrow + +1494 +01:10:55,960 --> 01:11:01,199 +sometimes they are not so trivial errors + +1495 +01:10:59,239 --> 01:11:04,880 +here's a very interesting one assign + +1496 +01:11:01,199 --> 01:11:08,360 +this issue to myself this is a gitlab um + +1497 +01:11:04,880 --> 01:11:11,600 +page and if you ask Lang uh like gp4 to + +1498 +01:11:08,360 --> 01:11:16,040 +perform this Tas for you it doesn't know + +1499 +01:11:11,600 --> 01:11:18,560 +myself it just enters myself as a stram + +1500 +01:11:16,040 --> 01:11:21,159 +to this assigning it actually needs to + +1501 +01:11:18,560 --> 01:11:24,719 +query what's you itself it probably + +1502 +01:11:21,159 --> 01:11:27,480 +needs to enter me or its own use + +1503 +01:11:24,719 --> 01:11:30,560 +username in this field so it's kind of + +1504 +01:11:27,480 --> 01:11:33,040 +interesting finally I'm going to touch a + +1505 +01:11:30,560 --> 01:11:35,440 +little bit of training methods for + +1506 +01:11:33,040 --> 01:11:38,080 +improving the agents previously we have + +1507 +01:11:35,440 --> 01:11:40,120 +what we have covered tests and + +1508 +01:11:38,080 --> 01:11:43,280 +applications I presented some of the + +1509 +01:11:40,120 --> 01:11:44,440 +prompting techniques and also uh one of + +1510 +01:11:43,280 --> 01:11:46,880 +the state-ofthe-art 
+ +1511 +01:11:44,440 --> 01:11:49,320 +uh uh one of the state-ofthe-art you + +1512 +01:11:46,880 --> 01:11:52,480 +know um like Benchmark so you have this + +1513 +01:11:49,320 --> 01:11:55,480 +playground you have this environment now + +1514 +01:11:52,480 --> 01:11:57,600 +but you are still not satisfied with uh + +1515 +01:11:55,480 --> 01:12:00,480 +the performance like even if you do all + +1516 +01:11:57,600 --> 01:12:01,639 +the like Chain of Thought prompting and + +1517 +01:12:00,480 --> 01:12:04,560 +all that + +1518 +01:12:01,639 --> 01:12:07,120 +stuff so the learning this topic for the + +1519 +01:12:04,560 --> 01:12:10,159 +learning of language model learn agents + +1520 +01:12:07,120 --> 01:12:12,239 +I'm going to cover mainly three uh types + +1521 +01:12:10,159 --> 01:12:15,080 +of learning first is Inc context + +1522 +01:12:12,239 --> 01:12:18,679 +learning some may argue oh this is just + +1523 +01:12:15,080 --> 01:12:21,760 +prompting yes I agree but you can still + +1524 +01:12:18,679 --> 01:12:24,920 +probably get most of it out of it by + +1525 +01:12:21,760 --> 01:12:26,920 +providing better um viewshot examples + +1526 +01:12:24,920 --> 01:12:29,440 +and of course there's supervised F + +1527 +01:12:26,920 --> 01:12:31,760 +tuning and learning from basically this + +1528 +01:12:29,440 --> 01:12:34,560 +is learning from experts supposedly if + +1529 +01:12:31,760 --> 01:12:36,800 +you have good ground choose trajectories + +1530 +01:12:34,560 --> 01:12:39,760 +of how a human would perform a task you + +1531 +01:12:36,800 --> 01:12:42,080 +probably can do use these data and + +1532 +01:12:39,760 --> 01:12:44,199 +finally if you thinking about agents + +1533 +01:12:42,080 --> 01:12:48,159 +that interact with um + +1534 +01:12:44,199 --> 01:12:50,600 +enironment um you know a quite popular + +1535 +01:12:48,159 --> 01:12:52,840 +technique for uh these type of tastics + +1536 +01:12:50,600 --> 01:12:54,480 +using reinforcement learning because if + +1537 +01:12:52,840 --> 01:12:57,440 +you have a good enough environment you + +1538 +01:12:54,480 --> 01:13:00,960 +probably can learn from the environment + +1539 +01:12:57,440 --> 01:13:03,800 +so first a little bit of uh background + +1540 +01:13:00,960 --> 01:13:06,440 +on in context learning so the language + +1541 +01:13:03,800 --> 01:13:08,480 +model they basically language model can + +1542 +01:13:06,440 --> 01:13:11,400 +perform a task by just conditioning on + +1543 +01:13:08,480 --> 01:13:14,199 +input output examples without optimizing + +1544 +01:13:11,400 --> 01:13:16,719 +other parameters sometimes it is because + +1545 +01:13:14,199 --> 01:13:19,520 +we don't have access to these parameters + +1546 +01:13:16,719 --> 01:13:22,520 +sometimes it is too costly to train but + +1547 +01:13:19,520 --> 01:13:25,719 +nonetheless this is a very popular way + +1548 +01:13:22,520 --> 01:13:28,080 +of doing uh learning + +1549 +01:13:25,719 --> 01:13:30,159 +just like previously shown example on + +1550 +01:13:28,080 --> 01:13:32,520 +where how we get the Benchmark + +1551 +01:13:30,159 --> 01:13:35,639 +performance on like Baseline performance + +1552 +01:13:32,520 --> 01:13:38,360 +on web Arena tasks uh we provide a + +1553 +01:13:35,639 --> 01:13:41,800 +little bit of example of what the + +1554 +01:13:38,360 --> 01:13:46,120 +observation would look like and what the + +1555 +01:13:41,800 --> 01:13:48,960 +actions should be given a task so just + +1556 +01:13:46,120 --> 01:13:52,719 +like this if you are doing in 
context + +1557 +01:13:48,960 --> 01:13:55,679 +Durning we can just provide some uh like + +1558 +01:13:52,719 --> 01:13:57,880 +user example user observation here like + +1559 +01:13:55,679 --> 01:14:00,760 +for example this observation represents + +1560 +01:13:57,880 --> 01:14:04,719 +a web page uh it is a trm down version + +1561 +01:14:00,760 --> 01:14:06,840 +of the HTML document tree and then the + +1562 +01:14:04,719 --> 01:14:09,159 +example assistant part is basically kind + +1563 +01:14:06,840 --> 01:14:12,560 +of the you can think of it as a output + +1564 +01:14:09,159 --> 01:14:14,920 +label where you you know you think step + +1565 +01:14:12,560 --> 01:14:17,600 +by step you you you kind of show it how + +1566 +01:14:14,920 --> 01:14:20,960 +to do this uh Chain of Thought reasoning + +1567 +01:14:17,600 --> 01:14:23,800 +and also you show it how to kind of uh + +1568 +01:14:20,960 --> 01:14:27,080 +what a what what format it will be like + +1569 +01:14:23,800 --> 01:14:30,239 +to issue the kind of uh stop for stop + +1570 +01:14:27,080 --> 01:14:33,719 +action usually with proper instruction + +1571 +01:14:30,239 --> 01:14:35,920 +tuning sorry usually with proper + +1572 +01:14:33,719 --> 01:14:39,040 +instruction instructions providing all + +1573 +01:14:35,920 --> 01:14:41,199 +the action space and all the format and + +1574 +01:14:39,040 --> 01:14:44,520 +a good several you know few shot + +1575 +01:14:41,199 --> 01:14:47,880 +examples of these kind of obervation and + +1576 +01:14:44,520 --> 01:14:50,520 +action um pairs you would at least the + +1577 +01:14:47,880 --> 01:14:52,880 +language model are good enough to figure + +1578 +01:14:50,520 --> 01:14:56,159 +out the format they will usually just + +1579 +01:14:52,880 --> 01:14:58,440 +generate um like these kind of of uh + +1580 +01:14:56,159 --> 01:15:01,480 +like stop action kind of this kind of + +1581 +01:14:58,440 --> 01:15:03,840 +action plus a parenthesis with arguments + +1582 +01:15:01,480 --> 01:15:06,159 +format so it is really in context + +1583 +01:15:03,840 --> 01:15:09,760 +learning sometimes is really good to + +1584 +01:15:06,159 --> 01:15:12,560 +tune the language model to your like + +1585 +01:15:09,760 --> 01:15:15,800 +specification of how this this + +1586 +01:15:12,560 --> 01:15:21,840 +format however there + +1587 +01:15:15,800 --> 01:15:24,199 +are a couple oh okay yeah so then we + +1588 +01:15:21,840 --> 01:15:25,920 +have like supervised fine tuning where + +1589 +01:15:24,199 --> 01:15:28,880 +you can collect L amount of expert + +1590 +01:15:25,920 --> 01:15:30,320 +directories from like human adapation + +1591 +01:15:28,880 --> 01:15:32,600 +like for example you have this task + +1592 +01:15:30,320 --> 01:15:35,400 +intent observation action observation + +1593 +01:15:32,600 --> 01:15:38,080 +action pairs but then finally you can + +1594 +01:15:35,400 --> 01:15:41,120 +find find elements with like the cross + +1595 +01:15:38,080 --> 01:15:44,880 +entropy loss like for example like a lot + +1596 +01:15:41,120 --> 01:15:47,520 +of existing work try to optimize Lang + +1597 +01:15:44,880 --> 01:15:50,560 +model agents by just collecting human + +1598 +01:15:47,520 --> 01:15:53,199 +annotations it is super uh it is going + +1599 +01:15:50,560 --> 01:15:55,199 +to work super well but it is St hungry + +1600 +01:15:53,199 --> 01:15:57,360 +and cannot learn much from fa + +1601 +01:15:55,199 --> 01:16:00,600 +trajectories for example if you have a + +1602 +01:15:57,360 --> 01:16:04,320 
+success in the failed trajectory uh you + +1603 +01:16:00,600 --> 01:16:06,440 +probably would not be able to um uh you + +1604 +01:16:04,320 --> 01:16:08,800 +probably will not be able you kind of + +1605 +01:16:06,440 --> 01:16:10,520 +wasted the failed the trory even if say + +1606 +01:16:08,800 --> 01:16:13,440 +only the last step is + +1607 +01:16:10,520 --> 01:16:16,440 +incorrect and you know there are several + +1608 +01:16:13,440 --> 01:16:18,679 +like data augmentation techniques where + +1609 +01:16:16,440 --> 01:16:22,480 +for example in this Minecraft playing + +1610 +01:16:18,679 --> 01:16:24,639 +example you can just let it uh do dat + +1611 +01:16:22,480 --> 01:16:25,679 +augmentation based on you know YouTube + +1612 +01:16:24,639 --> 01:16:28,480 +video + +1613 +01:16:25,679 --> 01:16:30,440 +or Wiki pedia or RIT + +1614 +01:16:28,480 --> 01:16:32,760 +threats and + +1615 +01:16:30,440 --> 01:16:35,000 +finally uh we have this uh like + +1616 +01:16:32,760 --> 01:16:38,199 +reinforcement learning based methods a + +1617 +01:16:35,000 --> 01:16:41,880 +lot of like uh ongoing research in this + +1618 +01:16:38,199 --> 01:16:44,080 +area but previously we have like R + +1619 +01:16:41,880 --> 01:16:47,120 +resarch uh research learning from Human + +1620 +01:16:44,080 --> 01:16:49,199 +feedback but then this time uh without + +1621 +01:16:47,120 --> 01:16:52,760 +human feedback we probably can just + +1622 +01:16:49,199 --> 01:16:55,800 +replace all these rewards with a real + +1623 +01:16:52,760 --> 01:16:58,120 +environment for example if you are do + +1624 +01:16:55,800 --> 01:17:01,239 +uh you have access to web Arena whether + +1625 +01:16:58,120 --> 01:17:03,600 +or not a task is successful can be + +1626 +01:17:01,239 --> 01:17:06,080 +automatically determined with that + +1627 +01:17:03,600 --> 01:17:09,440 +environment so it provides a natural + +1628 +01:17:06,080 --> 01:17:11,840 +feedback from the environment so you + +1629 +01:17:09,440 --> 01:17:14,520 +know with some I also listed some of the + +1630 +01:17:11,840 --> 01:17:16,840 +reference here if you are interested you + +1631 +01:17:14,520 --> 01:17:19,679 +can go to the course website or check + +1632 +01:17:16,840 --> 01:17:21,800 +the slides for some of these ongoing + +1633 +01:17:19,679 --> 01:17:25,040 +research but I'm going not going to + +1634 +01:17:21,800 --> 01:17:27,159 +cover much in details here due to the + +1635 +01:17:25,040 --> 01:17:32,199 +the time constraint but these are + +1636 +01:17:27,159 --> 01:17:35,600 +generally the methods for so I + +1637 +01:17:32,199 --> 01:17:40,000 +think yeah if you have any questions + +1638 +01:17:35,600 --> 01:17:42,800 +about anything tools and agents uh feel + +1639 +01:17:40,000 --> 01:17:45,199 +free to ask yeah thanks a lot we have + +1640 +01:17:42,800 --> 01:17:48,880 +time for maybe one or one or two quick + +1641 +01:17:45,199 --> 01:17:52,719 +questions um I'd like to prasis with um + +1642 +01:17:48,880 --> 01:17:54,840 +saying that uh Frank gave a really good + +1643 +01:17:52,719 --> 01:17:57,120 +example of web Arena and I think web + +1644 +01:17:54,840 --> 01:17:59,440 +arena is a good example of some of the + +1645 +01:17:57,120 --> 01:18:01,320 +challenges in the whole agent space uh + +1646 +01:17:59,440 --> 01:18:03,000 +not just like web agents but also code + +1647 +01:18:01,320 --> 01:18:05,600 +generation agents + +1648 +01:18:03,000 --> 01:18:08,159 +robots um you know embodied environments + +1649 +01:18:05,600 --> 
01:18:10,560 +and stuff which is that there's a lot of + +1650 +01:18:08,159 --> 01:18:13,000 +really simple ones that are that were + +1651 +01:18:10,560 --> 01:18:15,080 +interesting a few years ago but are kind + +1652 +01:18:13,000 --> 01:18:16,400 +of like solved now or close enough to + +1653 +01:18:15,080 --> 01:18:18,520 +solve that they don't test the really + +1654 +01:18:16,400 --> 01:18:20,239 +hard things like planning you know + +1655 +01:18:18,520 --> 01:18:22,360 +ability to handle diverse environments + +1656 +01:18:20,239 --> 01:18:25,199 +and stuff like that and so weing is just + +1657 +01:18:22,360 --> 01:18:27,040 +one example and then even if you're + +1658 +01:18:25,199 --> 01:18:28,600 +interested in other varieties of things + +1659 +01:18:27,040 --> 01:18:32,480 +you're going to be facing the same props + +1660 +01:18:28,600 --> 01:18:34,400 +of evaluation so um evaluation and model + +1661 +01:18:32,480 --> 01:18:36,280 +cool um any any things people would like + +1662 +01:18:34,400 --> 01:18:38,520 +to ask we can also take things up front + +1663 +01:18:36,280 --> 01:18:38,520 +if \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (20) Tool Use and Language Agents/transcript.vtt b/CMU Advanced NLP 2024 (20) Tool Use and Language Agents/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..2fa537e4e24a71b0db811e1e6dfaa5e6815ef14a --- /dev/null +++ b/CMU Advanced NLP 2024 (20) Tool Use and Language Agents/transcript.vtt @@ -0,0 +1,4990 @@ +WEBVTT + +00:00:04.359 --> 00:00:10.679 +cool so w models are pretty powerful for + +00:00:08.200 --> 00:00:13.440 +solving many tasks mostly tax generation + +00:00:10.679 --> 00:00:15.400 +tasks as you probably have seen a lot of + +00:00:13.440 --> 00:00:18.279 +examples from + +00:00:15.400 --> 00:00:20.560 +CH4 um but however our language model is + +00:00:18.279 --> 00:00:23.840 +good enough for everything and my + +00:00:20.560 --> 00:00:26.400 +personal answer is no so if you look at + +00:00:23.840 --> 00:00:28.480 +some scenarios that uh langage models + +00:00:26.400 --> 00:00:31.080 +are actually not very good at it so for + +00:00:28.480 --> 00:00:34.399 +example first when some is asked about + +00:00:31.080 --> 00:00:36.360 +complex reasoning uh for example if you + +00:00:34.399 --> 00:00:38.600 +want the language model to do math + +00:00:36.360 --> 00:00:40.600 +calculation it's not probably not going + +00:00:38.600 --> 00:00:42.800 +to do it very efficiently or very + +00:00:40.600 --> 00:00:45.680 +accurately for example the common + +00:00:42.800 --> 00:00:49.199 +approach like here is like using a Chain + +00:00:45.680 --> 00:00:52.079 +of Thought like to Pro process in very + +00:00:49.199 --> 00:00:54.120 +detail um and may not get the correct + +00:00:52.079 --> 00:00:56.320 +answer uh however if you have a + +00:00:54.120 --> 00:00:59.039 +calculator tool you can just directly + +00:00:56.320 --> 00:01:01.320 +input the expression into the calculator + +00:00:59.039 --> 00:01:02.879 +and get the founder result + +00:01:01.320 --> 00:01:05.280 +that's a first scenario and there are + +00:01:02.879 --> 00:01:07.960 +other scenarios for example if you need + +00:01:05.280 --> 00:01:10.360 +to access real world information that + +00:01:07.960 --> 00:01:12.240 +language model itself may not be or + +00:01:10.360 --> 00:01:14.600 +fundamentally unable to answer that + +00:01:12.240 --> 00:01:16.560 +question for example if you ask what is + +00:01:14.600 --> 
00:01:18.720 +the current time the current time is + +00:01:16.560 --> 00:01:21.079 +probably not in the model training data + +00:01:18.720 --> 00:01:23.439 +and depending on the time they ask this + +00:01:21.079 --> 00:01:25.920 +question the answer to this question is + +00:01:23.439 --> 00:01:27.880 +probably different so the only way the + +00:01:25.920 --> 00:01:30.600 +language models can answer this is by + +00:01:27.880 --> 00:01:33.240 +using an external tool for example + +00:01:30.600 --> 00:01:35.479 +calling the API called get time to get + +00:01:33.240 --> 00:01:37.759 +the current time so that's the + +00:01:35.479 --> 00:01:40.600 +motivation of why people started to get + +00:01:37.759 --> 00:01:44.040 +interested in tools and start using them + +00:01:40.600 --> 00:01:46.840 +and there are a lot of interest in tool + +00:01:44.040 --> 00:01:49.799 +related work so you may have heard of to + +00:01:46.840 --> 00:01:53.079 +former which is maybe one of the most uh + +00:01:49.799 --> 00:01:55.680 +like first works that like explicitly uh + +00:01:53.079 --> 00:01:58.759 +propos the term tool to language models + +00:01:55.680 --> 00:02:01.320 +and what they do is they uh propose five + +00:01:58.759 --> 00:02:04.320 +tools including like a calculator or + +00:02:01.320 --> 00:02:06.640 +Wiki search engine to provide external + +00:02:04.320 --> 00:02:09.679 +access to the language + +00:02:06.640 --> 00:02:12.520 +model but there are many Works after + +00:02:09.679 --> 00:02:14.400 +this and you can see this works are all + +00:02:12.520 --> 00:02:17.200 +interested in like two augmented + +00:02:14.400 --> 00:02:20.080 +language models however if you look + +00:02:17.200 --> 00:02:22.000 +closer they all evaluate they all use in + +00:02:20.080 --> 00:02:24.599 +different tools and evaluate on + +00:02:22.000 --> 00:02:27.800 +different data sets for example this + +00:02:24.599 --> 00:02:30.599 +tool formare and art uh paper they use + +00:02:27.800 --> 00:02:32.120 +software as as tools such as calculator + +00:02:30.599 --> 00:02:35.000 +or Wiki search + +00:02:32.120 --> 00:02:37.360 +engine and for this to Works they use + +00:02:35.000 --> 00:02:39.440 +apis as tools for example the apis that + +00:02:37.360 --> 00:02:41.120 +you can scrap from the website like get + +00:02:39.440 --> 00:02:43.560 +weather or get + +00:02:41.120 --> 00:02:45.879 +time and there are also works that use + +00:02:43.560 --> 00:02:48.239 +neurom models as tools for example they + +00:02:45.879 --> 00:02:52.080 +scraped like all the model names from + +00:02:48.239 --> 00:02:54.280 +hugging F Hub or other places and uh use + +00:02:52.080 --> 00:02:56.560 +like langage model or like other kinds + +00:02:54.280 --> 00:03:00.000 +of models that are specialized for some + +00:02:56.560 --> 00:03:03.200 +task and use them as tools and lastly + +00:03:00.000 --> 00:03:05.239 +there are works that um use like mostly + +00:03:03.200 --> 00:03:07.959 +locally defined and expert crafted + +00:03:05.239 --> 00:03:10.480 +functions tools so with all this + +00:03:07.959 --> 00:03:12.959 +different virus of tools uh a natural + +00:03:10.480 --> 00:03:16.440 +confusion at least for me is what are + +00:03:12.959 --> 00:03:19.920 +tools anyway are this all tools so to + +00:03:16.440 --> 00:03:21.840 +answer this question we had a survey but + +00:03:19.920 --> 00:03:24.239 +I'll briefly cover this in three + +00:03:21.840 --> 00:03:26.440 +dimensions today one is what is the + +00:03:24.239 --> 
00:03:28.599
+basics of tools what is the definition
+
+00:03:26.440 --> 00:03:31.640
+and what are the main functionalities of
+
+00:03:28.599 --> 00:03:33.519
+tools and second uh what are the scenarios
+
+00:03:31.640 --> 00:03:36.400
+where you can use tools what tools do we
+
+00:03:33.519 --> 00:03:38.200
+have what tasks can we apply these tools
+
+00:03:36.400 --> 00:03:41.080
+to and also using what
+
+00:03:38.200 --> 00:03:44.040
+methods and lastly I'll cover the
+
+00:03:41.080 --> 00:03:46.799
+evaluation aspect what tasks and what
+
+00:03:44.040 --> 00:03:48.799
+evaluation metrics we can use and also
+
+00:03:46.799 --> 00:03:51.879
+um what's the empirical benefit or whether
+
+00:03:48.799 --> 00:03:56.200
+tools have any empirical benefit at all
+
+00:03:51.879 --> 00:03:56.200
+so I'll first dive into the tool
+
+00:03:57.079 --> 00:04:03.120
+basics so for the definition
+
+00:04:00.360 --> 00:04:05.159
+uh we think a tool is actually a program
+
+00:04:03.120 --> 00:04:09.079
+that language models can
+
+00:04:05.159 --> 00:04:12.879
+leverage um they can call the program
+
+00:04:09.079 --> 00:04:15.599
+to exert some function uh however for a
+
+00:04:12.879 --> 00:04:18.720
+uh for a program to be a
+
+00:04:15.599 --> 00:04:22.639
+tool it needs to satisfy two
+
+00:04:18.720 --> 00:04:25.320
+properties one is being external if we
+
+00:04:22.639 --> 00:04:28.919
+refer back to the definition of animal
+
+00:04:25.320 --> 00:04:32.280
+tool use provided by this um literature
+
+00:04:28.919 --> 00:04:35.639
+they say tools are the external
+
+00:04:32.280 --> 00:04:38.120
+employment of environmental objects
+
+00:04:35.639 --> 00:04:41.199
+so similarly for language models to use
+
+00:04:38.120 --> 00:04:43.360
+tools these tools should also be external
+
+00:04:41.199 --> 00:04:46.880
+to the employer which in this scenario
+
+00:04:43.360 --> 00:04:49.240
+is the language model so in that sense
+
+00:04:46.880 --> 00:04:52.360
+the program should be external to the
+
+00:04:49.240 --> 00:04:55.759
+language model and the second property is
+
+00:04:52.360 --> 00:04:59.120
+being functional um what we mean by
+
+00:04:55.759 --> 00:05:01.199
+functional is this program should be
+
+00:04:59.120 --> 00:05:03.199
+a function that can be applied to other
+
+00:05:01.199 --> 00:05:05.440
+objects in the environment and then
+
+00:05:03.199 --> 00:05:08.360
+change the state of the environment or
+
+00:05:05.440 --> 00:05:10.720
+just yield an output so a simple
+
+00:05:08.360 --> 00:05:13.479
+example is like this if in the
+
+00:05:10.720 --> 00:05:15.840
+environment we have a blank canvas
+
+00:05:13.479 --> 00:05:18.680
+which is an object and we also have a
+
+00:05:15.840 --> 00:05:21.800
+tool which is a brush here and the
+
+00:05:18.680 --> 00:05:24.680
+function of this brush is to uh paint on
+
+00:05:21.800 --> 00:05:27.120
+the canvas change its state and yield a
+
+00:05:24.680 --> 00:05:30.199
+result which is the
+
+00:05:27.120 --> 00:05:33.000
+picture so combining these uh two
+
+00:05:30.199 --> 00:05:35.639
+properties we give a definition of
+
+00:05:33.000 --> 00:05:38.319
+what a tool is which is a function
+
+00:05:35.639 --> 00:05:40.919
+interface to a computer program that runs
+
+00:05:38.319 --> 00:05:43.479
+externally to the language model and in
+
+00:05:40.919 --> 00:05:46.240
+short how the language model uses the tool
+
+00:05:43.479 --> 00:05:49.240
+is by uh generating the function call
+
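+As a minimal illustration of this definition, here is what a tool
+and its call could look like in Python. The tool name get_weather
+and the call syntax are assumptions made for this sketch, not a
+standard API.
+
+def get_weather(city: str) -> str:
+    # Runs external to the language model, e.g. it wraps an HTTP
+    # request to a weather service (omitted here).
+    ...
+
+TOOLS = {"get_weather": get_weather}
+
+# The model never executes anything itself: it just emits the call
+# expression and arguments as text, and a runtime dispatches it.
+generated_call = 'get_weather("Pittsburgh")'
+
+Both properties show up here: the function is external (the model
+only produces the string), and it is functional (executing it
+yields an output the model can condition on).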
+00:05:46.240 --> 00:05:53.160
+and the input arguments to invoke that
+
+00:05:49.240 --> 00:05:57.639
+tool any questions so
+
+00:05:53.160 --> 00:05:59.639
+far cool so after knowing the definition
+
+00:05:57.639 --> 00:06:02.160
+um what are the main functionalities of
+
+00:05:59.639 --> 00:06:04.560
+tools uh for the main
+
+00:06:02.160 --> 00:06:07.080
+functionalities we summarize three main
+
+00:06:04.560 --> 00:06:10.000
+functionalities one is uh perception
+
+00:06:07.080 --> 00:06:12.400
+tools a perception tool is used to collect
+
+00:06:10.000 --> 00:06:16.680
+data from the environment without
+
+00:06:12.400 --> 00:06:19.759
+changing its state so for example um
+
+00:06:16.680 --> 00:06:21.639
+a search engine may be a perception tool
+
+00:06:19.759 --> 00:06:24.160
+you can just search on the web and
+
+00:06:21.639 --> 00:06:25.800
+collect the pieces of data that are most
+
+00:06:24.160 --> 00:06:28.479
+relevant to your
+
+00:06:25.800 --> 00:06:30.720
+query um and also another example could
+
+00:06:28.479 --> 00:06:33.440
+be like get weather you can get the
+
+00:06:30.720 --> 00:06:35.160
+weather for example by calling a get
+
+00:06:33.440 --> 00:06:38.919
+weather API um on the
+
+00:06:35.160 --> 00:06:42.639
+web and the second uh functionality is
+
+00:06:38.919 --> 00:06:44.880
+action so action tools are used to exert
+
+00:06:42.639 --> 00:06:48.080
+actions in the environment and change the
+
+00:06:44.880 --> 00:06:50.039
+state of the environment so here we
+
+00:06:48.080 --> 00:06:54.520
+can reuse the example that we've seen on
+
+00:06:50.039 --> 00:06:57.400
+the last uh slide um if
+
+00:06:54.520 --> 00:06:59.199
+you see the canvas as an object that
+
+00:06:57.400 --> 00:07:01.520
+belongs to the environment then the
+
+00:06:59.199 --> 00:07:05.000
+brush is painting the canvas and
+
+00:07:01.520 --> 00:07:07.720
+changing the state of the
+
+00:07:05.000 --> 00:07:10.560
+environment a third uh category is
+
+00:07:07.720 --> 00:07:13.080
+computation tools this may mean simple
+
+00:07:10.560 --> 00:07:15.919
+uh computation activities
+
+00:07:13.080 --> 00:07:18.120
+such as uh math calculation but here
+
+00:07:15.919 --> 00:07:21.440
+we mean more general acts of
+
+00:07:18.120 --> 00:07:24.319
+computing which include any other
+
+00:07:21.440 --> 00:07:27.879
+source of computing instead of just
+
+00:07:24.319 --> 00:07:30.879
+numerical calculation for example
+
+00:07:27.879 --> 00:07:33.280
+um uh a translator could also be a
+
+00:07:30.879 --> 00:07:35.080
+computation tool because uh you need to
+
+00:07:33.280 --> 00:07:36.039
+translate from this language to
+
+00:07:35.080 --> 00:07:38.840
+another
+
+00:07:36.039 --> 00:07:41.879
+language um another example yeah here's
+
+00:07:38.840 --> 00:07:46.039
+the simple example the calculator
+
+00:07:41.879 --> 00:07:47.319
+example however these
+
+00:07:46.039 --> 00:07:51.360
+different categories or different
+
+00:07:47.319 --> 00:07:54.039
+functionalities are not um disjoint
+
+00:07:51.360 --> 00:07:57.440
+so one tool may have one functionality or
+
+00:07:54.039 --> 00:07:59.919
+more functionalities so does anyone have
+
+00:07:57.440 --> 00:08:02.599
+any ideas for example do you know what
+
+00:07:59.919 --> 00:08:04.360
+tool can serve as for example both a
+
+00:08:02.599 --> 00:08:06.639
+perception and computation tool at the
+
+00:08:04.360 --> 00:08:06.639
+same
+
+00:08:13.960 --> 00:08:20.879 +time yeah maybe to give a um answer uh + +00:08:18.520 --> 00:08:22.759 +the first one I mentioned the wiki + +00:08:20.879 --> 00:08:25.440 +Search tool it can be both a + +00:08:22.759 --> 00:08:27.080 +interception tool and a computation tool + +00:08:25.440 --> 00:08:29.159 +so we have explained that you can + +00:08:27.080 --> 00:08:31.199 +collect relevant documents from the web + +00:08:29.159 --> 00:08:33.959 +so that's like getting information from + +00:08:31.199 --> 00:08:37.640 +the web but also as a competition tool + +00:08:33.959 --> 00:08:39.919 +if you think deeply in uh the process of + +00:08:37.640 --> 00:08:42.519 +how the search engine uh returns those + +00:08:39.919 --> 00:08:45.040 +socks back to you given the query it + +00:08:42.519 --> 00:08:48.200 +calculat the similarity scores of your + +00:08:45.040 --> 00:08:50.120 +query to many other documents and uh + +00:08:48.200 --> 00:08:52.959 +like rank them by their scores and + +00:08:50.120 --> 00:08:58.240 +return to top PR lines so that process + +00:08:52.959 --> 00:08:58.240 +also involves uh Computing yeah + +00:09:01.160 --> 00:09:05.920 +yeah that's a great question that's + +00:09:02.720 --> 00:09:08.920 +actually my next CL is about so what's + +00:09:05.920 --> 00:09:11.839 +the relationship between um tools and + +00:09:08.920 --> 00:09:14.760 +agents uh but maybe a brief answer to + +00:09:11.839 --> 00:09:16.680 +that is uh I think language models can + +00:09:14.760 --> 00:09:19.680 +use tools and not like language model + +00:09:16.680 --> 00:09:22.040 +based agents but uh tools are on the + +00:09:19.680 --> 00:09:24.760 +other hand tools are pretty important to + +00:09:22.040 --> 00:09:28.160 +help agents to achieve like success on + +00:09:24.760 --> 00:09:30.519 +many tasks so here's a de diab on the + +00:09:28.160 --> 00:09:35.120 +relationship between tools + +00:09:30.519 --> 00:09:37.720 +agents um so again what's that agents + +00:09:35.120 --> 00:09:40.640 +anyway so here we have a pretty good + +00:09:37.720 --> 00:09:42.839 +definition of agent which is um anything + +00:09:40.640 --> 00:09:45.079 +that can be viewed as perceiving its + +00:09:42.839 --> 00:09:47.760 +environment through sensors and acting + +00:09:45.079 --> 00:09:50.440 +upon that environment through actuators + +00:09:47.760 --> 00:09:54.120 +so connecting back to the functionality + +00:09:50.440 --> 00:09:56.480 +of tools an agent can use um P + +00:09:54.120 --> 00:09:59.680 +perception tools to perceive the world + +00:09:56.480 --> 00:10:03.200 +as um uh to get information from it and + +00:09:59.680 --> 00:10:05.720 +also use action tools to um take action + +00:10:03.200 --> 00:10:08.040 +on the environment um but there's a + +00:10:05.720 --> 00:10:10.720 +caveat here that for example language + +00:10:08.040 --> 00:10:13.000 +model that only uses computation tools + +00:10:10.720 --> 00:10:14.920 +but do not use perception action tools + +00:10:13.000 --> 00:10:19.000 +or actually do not fall into the + +00:10:14.920 --> 00:10:19.000 +category of Agents Alle by this + +00:10:22.800 --> 00:10:29.240 +definition example it's acep and + +00:10:37.560 --> 00:10:44.360 +yeah that's a good question I think um + +00:10:40.880 --> 00:10:47.920 +for I think for now the like to + +00:10:44.360 --> 00:10:51.360 +using build is not very mature yet so + +00:10:47.920 --> 00:10:54.279 +mostly the data sets uh only support + +00:10:51.360 --> 00:10:56.320 +like multi-term to usage so like 
you can
+split a task into multiple tasks that is
+
+00:10:56.320 --> 00:11:00.639
+multiple steps but each step only uses
+
+00:10:58.200 --> 00:11:02.480
+one tool there's not necessarily a lot of
+
+00:11:00.639 --> 00:11:04.440
+interaction between those
+
+00:11:02.480 --> 00:11:07.680
+two yeah but that could be an
+
+00:11:04.440 --> 00:11:07.680
+interesting direction to
+
+00:11:10.480 --> 00:11:18.040
+explore cool so that's the basics of um
+
+00:11:14.800 --> 00:11:19.839
+tools so we'll dive into more detail on
+
+00:11:18.040 --> 00:11:21.120
+what scenarios and what tasks we can
+
+00:11:19.839 --> 00:11:24.560
+apply this
+
+00:11:21.120 --> 00:11:27.000
+in so we start with the basic tool-
+
+00:11:24.560 --> 00:11:30.639
+using paradigm so how can
+
+00:11:27.000 --> 00:11:34.079
+language models use a tool um so first
+
+00:11:30.639 --> 00:11:36.480
+um in a nutshell it is basically a shift
+
+00:11:34.079 --> 00:11:39.760
+between text generation and tool
+
+00:11:36.480 --> 00:11:43.560
+execution modes so for example in the
+
+00:11:39.760 --> 00:11:45.959
+example on the right a user asks um
+
+00:11:43.560 --> 00:11:47.480
+how is the weather today so the language
+
+00:11:45.959 --> 00:11:51.000
+model starts with the standard text
+
+00:11:47.480 --> 00:11:53.120
+generation process by decoding and when
+
+00:11:51.000 --> 00:11:55.240
+the model feels like it needs extra
+
+00:11:53.120 --> 00:11:58.399
+help from tools it starts to generate
+
+00:11:55.240 --> 00:12:00.760
+tokens that form um the call expression
+
+00:11:58.399 --> 00:12:03.839
+to that tool for example the call uh
+
+00:12:00.760 --> 00:12:06.160
+check weather here and after this uh
+
+00:12:03.839 --> 00:12:08.720
+expression is completed this uh
+
+00:12:06.160 --> 00:12:11.399
+completed uh expression will trigger the
+
+00:12:08.720 --> 00:12:14.320
+remote um tool execution server which is
+
+00:12:11.399 --> 00:12:16.639
+the weather server here and then we'll
+
+00:12:14.320 --> 00:12:18.800
+shift to the tool execution mode where
+
+00:12:16.639 --> 00:12:21.880
+the server will execute the call and
+
+00:12:18.800 --> 00:12:24.240
+return the result which is sunny here and
+
+00:12:21.880 --> 00:12:28.000
+send that back to the language model the
+
+00:12:24.240 --> 00:12:29.880
+language model replaces the API call by
+
+00:12:28.000 --> 00:12:32.560
+this returned execution output
+
+00:12:29.880 --> 00:12:34.760
+and shifts back to the text
+
+00:12:32.560 --> 00:12:35.959
+generation mode and continues generating
+
+00:12:34.760 --> 00:12:38.720
+the rest of the
+
+00:12:35.959 --> 00:12:40.760
+tokens and after finishing this process
+
+00:12:38.720 --> 00:12:43.519
+we will return the final response
+
+00:12:40.760 --> 00:12:47.600
+it's sunny today to the original
+
+00:12:43.519 --> 00:12:51.240
+user and this process is pretty
+
+00:12:47.600 --> 00:12:54.760
+intuitive and the
+
+00:12:51.240 --> 00:12:56.920
+current methods to teach a model
+
+00:12:54.760 --> 00:13:00.000
+to use tools like this are also pretty
+
+00:12:56.920 --> 00:13:02.600
+straightforward so there are two uh
+
+00:13:00.000 --> 00:13:05.880
+categories of approaches one is uh
+
+00:13:02.600 --> 00:13:08.079
+inference-time prompting so basically you
+
+00:13:05.880 --> 00:13:10.360
+can provide for example natural language
+
+00:13:08.079 --> 00:13:13.440
+instructions describing this process
+
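+The mode shift described above is easy to picture in code. Here is
+a minimal sketch, assuming a single tool and a very simple call
+syntax; real systems use more robust parsing and stop conditions.
+
+import re
+
+def get_weather(city: str) -> str:
+    return "sunny"  # stand-in for a real weather-server call
+
+TOOLS = {"get_weather": get_weather}
+CALL = re.compile(r'(\w+)\("([^"]*)"\)')  # e.g. get_weather("NYC")
+
+def generate_with_tools(prompt: str, lm_generate) -> str:
+    # Text generation mode: the model decodes until it emits a
+    # complete tool-call expression (or finishes its answer).
+    text = lm_generate(prompt)
+    match = CALL.search(text)
+    if match and match.group(1) in TOOLS:
+        # Tool execution mode: run the call on the external server.
+        result = TOOLS[match.group(1)](match.group(2))
+        # Splice the result in place of the call, then resume
+        # text generation conditioned on it.
+        text = text[:match.start()] + result
+        return generate_with_tools(prompt + text, lm_generate)
+    return text
+
+Here lm_generate stands in for whatever decoding function the
+language model exposes; the point is only the alternation between
+generating tokens and executing the completed call expression.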
+00:13:10.360 --> 00:13:15.680
+you can also give in-context
+
+00:13:13.440 --> 00:13:18.199
+examples with natural language
+
+00:13:15.680 --> 00:13:20.480
+input and the tool-involved solution
+
+00:13:18.199 --> 00:13:22.560
+outputs and also there are other works
+
+00:13:20.480 --> 00:13:25.440
+that provide documentation of tools
+
+00:13:22.560 --> 00:13:27.279
+to help models understand the tool and
+
+00:13:25.440 --> 00:13:30.240
+the second category is learning by
+
+00:13:27.279 --> 00:13:32.560
+training which is uh where you
+
+00:13:30.240 --> 00:13:36.480
+can just train on examples of natural
+
+00:13:32.560 --> 00:13:36.480
+language queries and tool-use
+
+00:13:37.680 --> 00:13:44.720
+solutions and using these methods many
+
+00:13:40.880 --> 00:13:47.800
+people have applied um tools um in many
+
+00:13:44.720 --> 00:13:50.880
+scenarios and here are the major five
+
+00:13:47.800 --> 00:13:54.040
+scenarios we have summarized the first
+
+00:13:50.880 --> 00:13:56.399
+is um knowledge access so that's aimed
+
+00:13:54.040 --> 00:13:58.480
+to solve the limited knowledge that
+
+00:13:56.399 --> 00:13:59.880
+models can memorize or store during
+
+00:13:58.480 --> 00:14:01.680
+training time
+
+00:13:59.880 --> 00:14:05.880
+uh for example current time is probably
+
+00:14:01.680 --> 00:14:09.079
+not involved in the training data and uh
+
+00:14:05.880 --> 00:14:11.600
+um there could be many sources a
+
+00:14:09.079 --> 00:14:13.600
+model can access knowledge from one is
+
+00:14:11.600 --> 00:14:15.920
+structured um knowledge bases with
+
+00:14:13.600 --> 00:14:18.680
+knowledge graphs where people can use a
+
+00:14:15.920 --> 00:14:21.519
+SQL executor or SPARQL executor to execute
+
+00:14:18.680 --> 00:14:24.720
+over the structured data to get the um
+
+00:14:21.519 --> 00:14:27.079
+final result and uh more generally on
+
+00:14:24.720 --> 00:14:28.839
+free-form text people are using
+
+00:14:27.079 --> 00:14:31.600
+search engines to search over
+
+00:14:28.839 --> 00:14:35.160
+the uh Internet and get the information
+
+00:14:31.600 --> 00:14:37.920
+they want and more generally maybe
+
+00:14:35.160 --> 00:14:40.680
+related to retrieval-augmented generation
+
+00:14:37.920 --> 00:14:43.199
+if you have heard of it uh all the
+
+00:14:40.680 --> 00:14:45.759
+retrieval models can be seen as
+
+00:14:43.199 --> 00:14:48.320
+knowledge-accessing tools
+
+00:14:45.759 --> 00:14:49.680
+here and the second category is
+
+00:14:48.320 --> 00:14:53.079
+computation
+
+00:14:49.680 --> 00:14:56.320
+activities so this also aims to
+
+00:14:53.079 --> 00:14:58.399
+solve the issue that for queries
+
+00:14:56.320 --> 00:15:00.000
+that require complex reasoning models
+
+00:14:58.399 --> 00:15:03.279
+are probably
+
+00:15:00.000 --> 00:15:05.199
+not good at it they cannot solve
+
+00:15:03.279 --> 00:15:07.759
+the problem very efficiently or
+
+00:15:05.199 --> 00:15:10.480
+accurately so for example there are
+
+00:15:07.759 --> 00:15:13.320
+people that use a calculator to solve math
+
+00:15:10.480 --> 00:15:16.040
+problems or more generally for more
+
+00:15:13.320 --> 00:15:19.079
+complex operations use a Python interpreter
+
+00:15:16.040 --> 00:15:21.519
+by writing more complex Python programs
+
+00:15:19.079 --> 00:15:24.440
+and lastly you can
+
+00:15:21.519 --> 00:15:27.720
+also leverage existing software like uh
+
+00:15:24.440 --> 00:15:30.000
+Google uh Worksheet uh Google 
sheet + +00:15:27.720 --> 00:15:33.759 +where like frally and like take the + +00:15:30.000 --> 00:15:36.120 +actions from there and uh exert the + +00:15:33.759 --> 00:15:38.880 +actions and the third category is + +00:15:36.120 --> 00:15:41.319 +interacting with the world so what + +00:15:38.880 --> 00:15:44.160 +language model can think is pretty + +00:15:41.319 --> 00:15:46.399 +Limited in this training data and how do + +00:15:44.160 --> 00:15:49.880 +we leverage language models to for + +00:15:46.399 --> 00:15:52.600 +example navigating the web so a lot of + +00:15:49.880 --> 00:15:55.319 +uh we're like getting access to re World + +00:15:52.600 --> 00:15:57.680 +information so there are some cases for + +00:15:55.319 --> 00:16:00.440 +example getting get weather or get + +00:15:57.680 --> 00:16:02.399 +location to get the current uh + +00:16:00.440 --> 00:16:03.519 +information in the current time when + +00:16:02.399 --> 00:16:06.920 +your + +00:16:03.519 --> 00:16:11.120 +location so it can also manipulate your + +00:16:06.920 --> 00:16:14.000 +calendar um events to like help automize + +00:16:11.120 --> 00:16:15.800 +your work or so it can also help like + +00:16:14.000 --> 00:16:19.199 +manage your + +00:16:15.800 --> 00:16:22.279 +email um the fourth category is non- + +00:16:19.199 --> 00:16:24.519 +taxure modalities so this is targeting + +00:16:22.279 --> 00:16:28.199 +another lotion that language models have + +00:16:24.519 --> 00:16:30.759 +as they're like mainly processing Tas + +00:16:28.199 --> 00:16:33.920 +they can only take tax input and + +00:16:30.759 --> 00:16:36.199 +generate tax outputs but if we provide + +00:16:33.920 --> 00:16:38.800 +like tools that connect with other + +00:16:36.199 --> 00:16:42.120 +modalities we can enable language models + +00:16:38.800 --> 00:16:45.040 +to uh access other modal data from other + +00:16:42.120 --> 00:16:48.720 +modalities or interact with them for + +00:16:45.040 --> 00:16:51.279 +example there are API for example called + +00:16:48.720 --> 00:16:55.399 +Tad image where you can access images of + +00:16:51.279 --> 00:16:59.199 +Tad or like uh delete them and also you + +00:16:55.399 --> 00:17:03.839 +can like listen to audios um by like the + +00:16:59.199 --> 00:17:07.880 +Spotify play music at API but also uh + +00:17:03.839 --> 00:17:10.000 +like Beyond um just viewing where simply + +00:17:07.880 --> 00:17:12.880 +when you're mutilating the data you can + +00:17:10.000 --> 00:17:15.439 +also like use for example a visual QA + +00:17:12.880 --> 00:17:18.319 +tool which probably involves your other + +00:17:15.439 --> 00:17:20.120 +to answer questions about this data and + +00:17:18.319 --> 00:17:23.839 +other + +00:17:20.120 --> 00:17:27.480 +modalities and lastly there's a special + +00:17:23.839 --> 00:17:29.520 +category called uh that use tools um + +00:17:27.480 --> 00:17:33.640 +based on your model + +00:17:29.520 --> 00:17:37.760 +so for example you can load a QA model + +00:17:33.640 --> 00:17:41.880 +and uh assign QA T specifically to this + +00:17:37.760 --> 00:17:44.799 +model and also another um example is the + +00:17:41.880 --> 00:17:46.640 +translation uh model where you can load + +00:17:44.799 --> 00:17:49.679 +in your model and plug in PH and do + +00:17:46.640 --> 00:17:52.760 +translation for you um and also like for + +00:17:49.679 --> 00:17:56.280 +the visual QA here it also LS like a vqa + +00:17:52.760 --> 00:17:57.840 +model from plugin Shades and um do the + +00:17:56.280 --> 00:18:01.520 
+things you want
+
+00:17:57.840 --> 00:18:04.159
+here so yeah these are uh also again
+
+00:18:01.520 --> 00:18:06.120
+like tools can fall into multiple
+
+00:18:04.159 --> 00:18:10.679
+categories for example visual QA can be
+
+00:18:06.120 --> 00:18:13.039
+both a special tool and also a non-textual
+
+00:18:10.679 --> 00:18:13.039
+modality
+
+00:18:15.480 --> 00:18:21.120
+tool um another limitation maybe not a
+
+00:18:18.919 --> 00:18:23.200
+limitation but just a shared property
+
+00:18:21.120 --> 00:18:25.799
+of these tools is they're all designed
+
+00:18:23.200 --> 00:18:29.400
+by human experts before we even tackle
+
+00:18:25.799 --> 00:18:32.120
+the task so with all these uh tools
+
+00:18:29.400 --> 00:18:34.720
+predefined when we want to tackle
+
+00:18:32.120 --> 00:18:37.720
+a task we can just adopt those tools and
+
+00:18:34.720 --> 00:18:39.559
+use them but there are also tasks that
+
+00:18:37.720 --> 00:18:43.480
+don't have these predesigned tools
+
+00:18:39.559 --> 00:18:45.480
+available so another question is can we
+
+00:18:43.480 --> 00:18:47.679
+automatically make tools for example
+
+00:18:45.480 --> 00:18:49.480
+using language models without
+
+00:18:47.679 --> 00:18:52.559
+relying on human
+
+00:18:49.480 --> 00:18:55.480
+experts so in our previous work we
+
+00:18:52.559 --> 00:19:00.120
+explored this a bit on programmatic tasks
+
+00:18:55.480 --> 00:19:03.360
+and our answer is yes so to give a brief
+
+00:19:00.120 --> 00:19:06.840
+overview so for the standard way of
+
+00:19:03.360 --> 00:19:08.840
+solving programmatic tasks so maybe
+
+00:19:06.840 --> 00:19:11.480
+before that in a programmatic task
+
+00:19:08.840 --> 00:19:13.039
+you're given a natural language problem
+
+00:19:11.480 --> 00:19:15.080
+and you ask the language models to
+
+00:19:13.039 --> 00:19:17.320
+generate a program then you execute it
+
+00:19:15.080 --> 00:19:19.840
+to get the final
+
+00:19:17.320 --> 00:19:22.640
+answer so in the standard way of doing this
+
+00:19:19.840 --> 00:19:25.120
+you have a stream of um text examples and
+
+00:19:22.640 --> 00:19:27.559
+you pass them to the code language model
+
+00:19:25.120 --> 00:19:29.919
+which usually generates uh solutions for
+
+00:19:27.559 --> 00:19:33.000
+each of these problems but they usually
+
+00:19:29.919 --> 00:19:35.159
+look like this so um maybe much longer
+
+00:19:33.000 --> 00:19:38.840
+than this but it's usually a long
+
+00:19:35.159 --> 00:19:41.720
+program um one issue of this is that it
+
+00:19:38.840 --> 00:19:44.679
+may be prone to error for example here it
+
+00:19:41.720 --> 00:19:46.760
+just mistypes one character
+
+00:19:44.679 --> 00:19:50.400
+which causes the entire program solution
+
+00:19:46.760 --> 00:19:52.400
+to be wrong our motivation is what if we
+
+00:19:50.400 --> 00:19:57.039
+ask the language models to additionally
+
+00:19:52.400 --> 00:19:58.960
+generate a toolbox and now with uh for
+
+00:19:57.039 --> 00:20:01.720
+example a calculate rate
+
+00:19:58.960 --> 00:20:04.559
+of change tool the solution becomes very
+
+00:20:01.720 --> 00:20:07.280
+simple it's just one tool-calling
+
+00:20:04.559 --> 00:20:09.960
+expression all you need to figure out is
+
+00:20:07.280 --> 00:20:12.480
+um the arguments to input into this
+
+00:20:09.960 --> 00:20:15.960
+function and that could alleviate the
+
+00:20:12.480 --> 00:20:18.600
+errors that we've seen but it could
+
+00:20:15.960 --> 00:20:22.039
+also benefit humans since it's easier
+for humans to verify such a solution
+
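+To illustrate the idea, here is what a model-made tool and its
+one-line solution might look like. This is a hypothetical example
+in the spirit of the approach, not the actual generated code.
+
+# A reusable tool the model writes once and stores in the toolbox:
+def calculate_rate_of_change(x1: float, y1: float,
+                             x2: float, y2: float) -> float:
+    # Slope between two points; raises ZeroDivisionError if x1 == x2.
+    return (y2 - y1) / (x2 - x1)
+
+# A later problem is then solved by a single call expression, so
+# only the arguments can go wrong, not the arithmetic logic:
+answer = calculate_rate_of_change(1.0, 2.0, 3.0, 8.0)  # 3.0
+
+Because the long error-prone program is replaced by one call, a
+typo inside the shared logic can only happen once, when the tool
+is first written and vetted.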
+00:20:22.480 --> 00:20:29.080
+so how does our
+
+00:20:25.760 --> 00:20:31.640
+method work
+
+00:20:29.080 --> 00:20:31.640
+this might be a
+
+00:20:33.600 --> 00:20:39.360
+little small so in general here's the setup
+
+00:20:37.480 --> 00:20:42.280
+we have a stream of examples and we
+
+00:20:39.360 --> 00:20:45.600
+output solutions and a toolbox and to
+
+00:20:42.280 --> 00:20:49.400
+start we initiate uh an empty toolbox
+
+00:20:45.600 --> 00:20:52.679
+and prepare the examples and when we
+
+00:20:49.400 --> 00:20:54.760
+encounter a new example with unseen
+
+00:20:52.679 --> 00:20:56.720
+functionality we ask the language model
+
+00:20:54.760 --> 00:20:58.600
+to first generate a reusable tool and then
+
+00:20:56.720 --> 00:21:01.840
+generate a solution using the tool
+
+00:20:58.600 --> 00:21:04.120
+that it just generated and because we're
+
+00:21:01.840 --> 00:21:06.799
+operating purely at test time
+
+00:21:04.120 --> 00:21:09.280
+without training supervision to
+
+00:21:06.799 --> 00:21:12.960
+improve the quality of the pairs we do
+
+00:21:09.280 --> 00:21:15.840
+sampling and select the best and then
+
+00:21:12.960 --> 00:21:18.400
+we add the solution and also add
+
+00:21:15.840 --> 00:21:21.039
+the tool to our toolbox for future use
+
+00:21:18.400 --> 00:21:24.000
+so that aligns with the first create mode
+
+00:21:21.039 --> 00:21:26.640
+the model can create a new function um
+
+00:21:24.000 --> 00:21:30.400
+when it sees an
+
+00:21:26.640 --> 00:21:34.840
+example and the second
+
+00:21:30.400 --> 00:21:36.679
+scenario is um using the import mode so
+
+00:21:34.840 --> 00:21:39.559
+now suppose we have many functions in
+
+00:21:36.679 --> 00:21:41.960
+our toolbox and next time when the
+
+00:21:39.559 --> 00:21:46.679
+model sees an example that uses similar
+
+00:21:41.960 --> 00:21:50.159
+functionality
+
+00:21:46.679 --> 00:21:53.240
+um instead of regenerating a new tool we
+
+00:21:50.159 --> 00:21:57.039
+can ask the model
+
+00:21:53.240 --> 00:22:00.880
+to uh directly import the tool from the
+
+00:21:57.039 --> 00:22:03.000
+toolbox so it will reuse the tool
+
+00:22:00.880 --> 00:22:05.120
+which aligns with the second import mode
+
+00:22:03.000 --> 00:22:06.400
+and similarly generate solutions using
+
+00:22:05.120 --> 00:22:09.080
+that
+
+00:22:06.400 --> 00:22:12.480
+tool and lastly we also support a skip
+
+00:22:09.080 --> 00:22:15.440
+mode where you can uh not generate the
+
+00:22:12.480 --> 00:22:18.720
+tool if you think the
+
+00:22:15.440 --> 00:22:22.559
+um problem is too easy to need a
+
+00:22:18.720 --> 00:22:25.559
+tool and the results are pretty good in my
+
+00:22:22.559 --> 00:22:28.720
+opinion so compared to the baselines
+
+00:22:25.559 --> 00:22:31.039
+the two blocks on the top
+
+00:22:28.720 --> 00:22:34.360
+our method can improve
+
+00:22:31.039 --> 00:22:36.799
+the accuracy a lot but also maintains a
+
+00:22:34.360 --> 00:22:39.960
+reasonably small size of libraries as
+
+00:22:36.799 --> 00:22:41.880
+you see here and also if you look at the
+
+00:22:39.960 --> 00:22:44.760
+middle line which measures the number of
+
+00:22:41.880 --> 00:22:47.440
+operations as a representative of how
+
+00:22:44.760 --> 00:22:49.360
+complex the solution is the
+
+00:22:47.440 --> 00:22:51.240
+solutions generated by our method are
+
+00:22:49.360 --> 00:22:54.400 +much uh + +00:22:51.240 --> 00:22:57.559 +simpler this is the results with Judy me + +00:22:54.400 --> 00:22:59.760 +model and also related to simpler + +00:22:57.559 --> 00:23:01.919 +Solutions of that method we did an + +00:22:59.760 --> 00:23:04.440 +interesting human verification study + +00:23:01.919 --> 00:23:07.200 +where we asked the humans uh first to + +00:23:04.440 --> 00:23:10.320 +verify the correctness of the stion + +00:23:07.200 --> 00:23:12.600 +whether it's correct or not and second + +00:23:10.320 --> 00:23:16.440 +we ask we measure the time that humans + +00:23:12.600 --> 00:23:18.440 +take to verify every single solution and + +00:23:16.440 --> 00:23:21.360 +from here you can see that our methods + +00:23:18.440 --> 00:23:23.960 +leat to 10% more accurate um + +00:23:21.360 --> 00:23:26.679 +verification process but also make the + +00:23:23.960 --> 00:23:30.400 +process 30 to 40% + +00:23:26.679 --> 00:23:33.440 +faster so that's like overview of the + +00:23:30.400 --> 00:23:35.679 +toot making methods so just to as event + +00:23:33.440 --> 00:23:38.000 +scenarios for toot using even if we + +00:23:35.679 --> 00:23:41.159 +don't have human use tools we can still + +00:23:38.000 --> 00:23:44.320 +leverage L models to make tools and + +00:23:41.159 --> 00:23:48.320 +finally benefit from those + +00:23:44.320 --> 00:23:48.320 +tools any questions so + +00:23:55.000 --> 00:24:02.480 +far so yeah it I'll go on to the next + +00:23:58.799 --> 00:24:05.039 +section which talks about um evaluation + +00:24:02.480 --> 00:24:08.600 +and empirical benefits of using + +00:24:05.039 --> 00:24:11.880 +tools so how do we currently evaluate to + +00:24:08.600 --> 00:24:15.200 +use the current benchmarks are Mally in + +00:24:11.880 --> 00:24:18.120 +two category one is reusing existing + +00:24:15.200 --> 00:24:19.679 +benchmarks that um they use two + +00:24:18.120 --> 00:24:22.720 +augmented language model as the + +00:24:19.679 --> 00:24:25.200 +alternative approach to solop the task + +00:24:22.720 --> 00:24:29.279 +um so for example this task usually + +00:24:25.200 --> 00:24:32.120 +involve reasoning um simple form like + +00:24:29.279 --> 00:24:35.279 +the most simple text form are like + +00:24:32.120 --> 00:24:38.880 +mathematical reasoning like math data + +00:24:35.279 --> 00:24:42.679 +set or big bench data set there are also + +00:24:38.880 --> 00:24:45.039 +um like a level up as tasks that require + +00:24:42.679 --> 00:24:48.840 +structured data such as table or + +00:24:45.039 --> 00:24:51.760 +knowledge Bo and data sets for this + +00:24:48.840 --> 00:24:54.120 +category are like um there are data sets + +00:24:51.760 --> 00:24:56.039 +for like we key table for more like + +00:24:54.120 --> 00:24:58.679 +complex structur + +00:24:56.039 --> 00:25:01.320 +tables and lastly there are also that + +00:24:58.679 --> 00:25:04.760 +involve other modalities like visual QA + +00:25:01.320 --> 00:25:07.760 +problems where you can like generate or + +00:25:04.760 --> 00:25:10.039 +re use existing program tools to execute + +00:25:07.760 --> 00:25:13.120 +over an image to answer some questions + +00:25:10.039 --> 00:25:16.399 +about it or do some editing on the + +00:25:13.120 --> 00:25:19.360 +image and another category of benchmarks + +00:25:16.399 --> 00:25:22.600 +are called like aggregated API + +00:25:19.360 --> 00:25:26.399 +benchmarks they mainly focus on the API + +00:25:22.600 --> 00:25:27.880 +category of tools and the like process + 
+00:25:26.399 --> 00:25:28.840 +how they create this benchmarks are + +00:25:27.880 --> 00:25:31.480 +pretty + +00:25:28.840 --> 00:25:34.880 +much the same that you find a website + +00:25:31.480 --> 00:25:37.520 +that provide a lot of like public apis + +00:25:34.880 --> 00:25:41.080 +you scrip the API as well as in like + +00:25:37.520 --> 00:25:44.320 +metadata about it and there are actually + +00:25:41.080 --> 00:25:47.200 +a lot of like data sets in this type uh + +00:25:44.320 --> 00:25:50.559 +however there are like two issues that + +00:25:47.200 --> 00:25:54.960 +we find when we analyze this data + +00:25:50.559 --> 00:25:57.399 +set one is um the naturalness issue so + +00:25:54.960 --> 00:25:59.720 +if you look deeper deeper into the + +00:25:57.399 --> 00:26:02.279 +process of how they examples using this + +00:25:59.720 --> 00:26:07.480 +apis they usually just theistically + +00:26:02.279 --> 00:26:10.640 +select like one or more apis um and then + +00:26:07.480 --> 00:26:13.880 +ask for example GPT models to synthesize + +00:26:10.640 --> 00:26:16.919 +examples of using this apis so the + +00:26:13.880 --> 00:26:19.200 +naturalness issue are two fold one is + +00:26:16.919 --> 00:26:21.799 +the selected tools may not be used + +00:26:19.200 --> 00:26:23.480 +together in practice and the second is + +00:26:21.799 --> 00:26:25.960 +the example that they created may not + +00:26:23.480 --> 00:26:30.000 +reflect the natural use case of this + +00:26:25.960 --> 00:26:33.640 +examples so um this the issue the first + +00:26:30.000 --> 00:26:36.399 +issue is existing benchmarks and second + +00:26:33.640 --> 00:26:40.520 +really relatedly to the first one is + +00:26:36.399 --> 00:26:42.399 +executability of the tools so like based + +00:26:40.520 --> 00:26:45.679 +on our definition that tools are + +00:26:42.399 --> 00:26:48.039 +programs you probably think that tools + +00:26:45.679 --> 00:26:51.480 +can be executed because they programs + +00:26:48.039 --> 00:26:53.880 +but actually not so if you look at this + +00:26:51.480 --> 00:26:57.760 +table actually more than half the data + +00:26:53.880 --> 00:27:00.440 +sets uh their tools are not executable + +00:26:57.760 --> 00:27:02.360 +um this may be related or at least + +00:27:00.440 --> 00:27:04.799 +partially related to how they create the + +00:27:02.360 --> 00:27:06.799 +examples but they just synthesize + +00:27:04.799 --> 00:27:09.360 +examples and sometimes the example + +00:27:06.799 --> 00:27:12.120 +outputs so without actually executing + +00:27:09.360 --> 00:27:13.520 +the tools so to evaluate on this data + +00:27:12.120 --> 00:27:16.399 +State you don't necessarily need to + +00:27:13.520 --> 00:27:18.520 +execute the tools they kind of like skip + +00:27:16.399 --> 00:27:21.440 +the step but there are also other + +00:27:18.520 --> 00:27:23.480 +reasons for example uh hosting this + +00:27:21.440 --> 00:27:25.880 +tools usually hundreds of thousands of + +00:27:23.480 --> 00:27:29.080 +apis are pretty costly especially if + +00:27:25.880 --> 00:27:31.480 +they involve like large Neo models + +00:27:29.080 --> 00:27:34.399 +and also some of those tools are not + +00:27:31.480 --> 00:27:36.320 +like stable for example the get weather + +00:27:34.399 --> 00:27:39.480 +get time tool they return different + +00:27:36.320 --> 00:27:41.159 +results at different times so it's very + +00:27:39.480 --> 00:27:43.799 +hard for people to create static + +00:27:41.159 --> 00:27:47.360 +benchmarks with a single reference tax + 
+00:27:43.799 --> 00:27:47.360 +answer for this kind of + +00:27:48.120 --> 00:27:55.760 +problems and the evaluation benchmarks + +00:27:50.679 --> 00:27:57.760 +are um I think pretty like basic now one + +00:27:55.760 --> 00:28:00.640 +is the task completion rate where + +00:27:57.760 --> 00:28:03.000 +basically just they just uh compare the + +00:28:00.640 --> 00:28:06.039 +model generated response with the + +00:28:03.000 --> 00:28:09.440 +annotated reference response and check + +00:28:06.039 --> 00:28:12.200 +the overlap of them and the second is + +00:28:09.440 --> 00:28:15.159 +because some of the works um the tools + +00:28:12.200 --> 00:28:16.840 +are not executable so they cannot like + +00:28:15.159 --> 00:28:18.960 +ask the the tool and get a final + +00:28:16.840 --> 00:28:21.960 +response so instead they just compare + +00:28:18.960 --> 00:28:25.600 +the two selection or like the to color + +00:28:21.960 --> 00:28:28.960 +expression match to see uh the + +00:28:25.600 --> 00:28:31.399 +correctness of the mod generation + +00:28:28.960 --> 00:28:34.919 +and third for especially for like to + +00:28:31.399 --> 00:28:37.559 +making Works um there like our works + +00:28:34.919 --> 00:28:39.919 +evaluate the to usability to encourage + +00:28:37.559 --> 00:28:41.720 +the students to have more like General + +00:28:39.919 --> 00:28:45.159 +functionalities as well as like + +00:28:41.720 --> 00:28:48.880 +encouraging the efficiency in Practical + +00:28:45.159 --> 00:28:52.559 +you however there may be several aspects + +00:28:48.880 --> 00:28:54.640 +missing um on this evalution Dimension + +00:28:52.559 --> 00:28:56.360 +can anyone think of some Dimensions that + +00:28:54.640 --> 00:29:00.039 +you think are important for tools but + +00:28:56.360 --> 00:29:00.039 +not covered here + +00:29:16.080 --> 00:29:22.840 +yeah yeah that's a good + +00:29:18.600 --> 00:29:22.840 +okay any other + +00:29:29.919 --> 00:29:34.159 +maybe that question is a bit too hard + +00:29:32.000 --> 00:29:36.360 +but yeah I think the Contrition c one is + +00:29:34.159 --> 00:29:38.039 +a good one and that's like related to + +00:29:36.360 --> 00:29:41.039 +the first point and second point that + +00:29:38.039 --> 00:29:43.720 +was it here so I'll go over each of them + +00:29:41.039 --> 00:29:47.760 +so one is the efficiency part it's + +00:29:43.720 --> 00:29:51.240 +basically comparing like um the + +00:29:47.760 --> 00:29:54.480 +computation cost that um you spend for + +00:29:51.240 --> 00:29:56.840 +langage models to learn those tools uh + +00:29:54.480 --> 00:29:58.559 +like intuitively we know using tools to + +00:29:56.840 --> 00:30:01.679 +improve the performance + +00:29:58.559 --> 00:30:03.919 +uh but like in on the other hand how + +00:30:01.679 --> 00:30:06.960 +much computation cost do you need to pay + +00:30:03.919 --> 00:30:09.799 +for example extra tokens in your pump or + +00:30:06.960 --> 00:30:12.360 +uh extra training steps uh are this + +00:30:09.799 --> 00:30:13.519 +competion cost worthy of the Improvement + +00:30:12.360 --> 00:30:16.080 +that the tool + +00:30:13.519 --> 00:30:19.880 +brought and the second is the quality of + +00:30:16.080 --> 00:30:21.519 +tools so instead of the task performance + +00:30:19.880 --> 00:30:24.080 +there Al it's also pretty important to + +00:30:21.519 --> 00:30:26.159 +measure the tool performance itself how + +00:30:24.080 --> 00:30:28.039 +quick can this tool return the response + +00:30:26.159 --> 00:30:31.000 +to you do you need to 
wait one second + +00:30:28.039 --> 00:30:33.720 +and wor 10 minutes where um how + +00:30:31.000 --> 00:30:38.279 +computation efficient of tools how much + +00:30:33.720 --> 00:30:38.279 +like GPU Sayes to cause where + +00:30:38.440 --> 00:30:45.480 +mely and the third is reliability of + +00:30:42.559 --> 00:30:47.919 +tools this uh maely involves unstable + +00:30:45.480 --> 00:30:50.840 +tools that involves neur models or other + +00:30:47.919 --> 00:30:53.480 +randomized modules so for example if you + +00:30:50.840 --> 00:30:56.279 +remember the visual QA example it + +00:30:53.480 --> 00:30:58.679 +actually loads a vqa model and answer a + +00:30:56.279 --> 00:31:00.200 +question about the image but some times + +00:30:58.679 --> 00:31:02.200 +it's in your model you can answer the + +00:31:00.200 --> 00:31:05.039 +cor question correctly but sometimes + +00:31:02.200 --> 00:31:07.240 +incorrectly so how do you are users + +00:31:05.039 --> 00:31:10.600 +aware of this certainty of correctness + +00:31:07.240 --> 00:31:13.880 +of this food and if so what should we do + +00:31:10.600 --> 00:31:16.519 +to like like manage + +00:31:13.880 --> 00:31:19.120 +it and the fourth is reproducible + +00:31:16.519 --> 00:31:22.039 +testing as we have shown with the issues + +00:31:19.120 --> 00:31:23.840 +of the evaluation Benchmark if we ask we + +00:31:22.039 --> 00:31:26.320 +have a question asking what is the + +00:31:23.840 --> 00:31:28.559 +current time what is the current weather + +00:31:26.320 --> 00:31:30.399 +having a static reference result is + +00:31:28.559 --> 00:31:32.960 +probably not going to work not going to + +00:31:30.399 --> 00:31:35.080 +make it a reliable Benchmark so maybe + +00:31:32.960 --> 00:31:37.840 +another approach is to have a reference + +00:31:35.080 --> 00:31:41.480 +solution trajectory that for example + +00:31:37.840 --> 00:31:43.639 +includes get weather um this calling + +00:31:41.480 --> 00:31:45.960 +expression and when the model generates + +00:31:43.639 --> 00:31:48.480 +it's solution they can like run the two + +00:31:45.960 --> 00:31:49.519 +solutions in parallel and check if their + +00:31:48.480 --> 00:31:53.880 +response + +00:31:49.519 --> 00:31:56.519 +is and lastly is to save usage of tools + +00:31:53.880 --> 00:31:58.919 +so for example a lot of tools are apis + +00:31:56.519 --> 00:32:03.080 +hosted by unknown + +00:31:58.919 --> 00:32:05.880 +uh do you trust his to uh sometimes you + +00:32:03.080 --> 00:32:08.639 +like send your personal information to + +00:32:05.880 --> 00:32:10.600 +the tools um are you like confident + +00:32:08.639 --> 00:32:12.480 +enough that your personal expiration + +00:32:10.600 --> 00:32:15.159 +will be + +00:32:12.480 --> 00:32:18.200 +protected this are all like interesting + +00:32:15.159 --> 00:32:21.720 +aspects but like no one the tool area + +00:32:18.200 --> 00:32:24.919 +has really looked into + +00:32:21.720 --> 00:32:28.679 +yet and we specifically did an C study + +00:32:24.919 --> 00:32:32.039 +on the side that we compare + +00:32:28.679 --> 00:32:33.600 +for each method We compare the + +00:32:32.039 --> 00:32:36.960 +performance Improvement and the + +00:32:33.600 --> 00:32:40.399 +computation cost that they have so we + +00:32:36.960 --> 00:32:44.120 +did analysis from two aspects one is + +00:32:40.399 --> 00:32:47.760 +the um like what path benefit the most + +00:32:44.120 --> 00:32:50.279 +from tools so for example on this figure + +00:32:47.760 --> 00:32:52.279 +this is the like single 
method the two + +00:32:50.279 --> 00:32:55.000 +form method so they evaluate on + +00:32:52.279 --> 00:32:58.080 +different data sets for different tasks + +00:32:55.000 --> 00:33:00.480 +you can see from the like top right + +00:32:58.080 --> 00:33:03.880 +corner here the mass data sets benefit a + +00:33:00.480 --> 00:33:06.039 +lot they have like a huge Improvement um + +00:33:03.880 --> 00:33:08.919 +but also here if you look at the + +00:33:06.039 --> 00:33:11.200 +multilingual task it actually it's not + +00:33:08.919 --> 00:33:13.880 +zero it decreases the performance but it + +00:33:11.200 --> 00:33:16.240 +still use a lot of competition TS so + +00:33:13.880 --> 00:33:20.559 +there's like multilingual that we may + +00:33:16.240 --> 00:33:23.799 +not use have it's like help that we use + +00:33:20.559 --> 00:33:25.840 +to sometimes and the other dimension is + +00:33:23.799 --> 00:33:28.240 +what methods are efficient in using + +00:33:25.840 --> 00:33:30.519 +tools even on the same data that to + +00:33:28.240 --> 00:33:32.639 +compare on the MTH and table data sets + +00:33:30.519 --> 00:33:35.080 +there those three different making + +00:33:32.639 --> 00:33:35.960 +methods and there are methods that use a + +00:33:35.080 --> 00:33:38.720 +w the + +00:33:35.960 --> 00:33:40.600 +computation um and sometimes without + +00:33:38.720 --> 00:33:44.519 +getting much improvement but there are + +00:33:40.600 --> 00:33:47.080 +also methods that use like much fewer + +00:33:44.519 --> 00:33:48.760 +computation cause but which is similar + +00:33:47.080 --> 00:33:52.240 +am of + +00:33:48.760 --> 00:33:55.080 +gin so yeah this kind of evaluation is + +00:33:52.240 --> 00:33:56.880 +probably um very important to show the + +00:33:55.080 --> 00:33:59.840 +whole picture where at least the more + +00:33:56.880 --> 00:34:03.559 +comprehensive picture of the treat + +00:33:59.840 --> 00:34:05.919 +meod so that's an overview of um the + +00:34:03.559 --> 00:34:08.960 +things that IAL talked about today the + +00:34:05.919 --> 00:34:11.000 +basics of tool scenarios and also the + +00:34:08.960 --> 00:34:14.720 +evaluation other + +00:34:11.000 --> 00:34:14.720 +aspects any final + +00:34:14.839 --> 00:34:19.879 +questions and tools and agents are + +00:34:17.240 --> 00:34:22.200 +really closely related also so I think + +00:34:19.879 --> 00:34:23.879 +we can also have some time uh to ask + +00:34:22.200 --> 00:34:27.000 +questions at the end after we covered + +00:34:23.879 --> 00:34:29.079 +both but um you know while while Frank + +00:34:27.000 --> 00:34:31.320 +comes up if you have any questions for + +00:34:29.079 --> 00:34:35.320 +Zora + +00:34:31.320 --> 00:34:35.320 +so we'll we'll set up the + +00:34:50.720 --> 00:34:56.240 +screen hello everyone I'm Frank and + +00:34:53.879 --> 00:34:58.359 +continueing the topic continuing the + +00:34:56.240 --> 00:35:02.119 +topic about the language model using + +00:34:58.359 --> 00:35:05.440 +tools I'm going to talk about + +00:35:02.119 --> 00:35:08.040 +uh going to talk about langage Model S + +00:35:05.440 --> 00:35:09.599 +agents these two are super closely + +00:35:08.040 --> 00:35:13.200 +related + +00:35:09.599 --> 00:35:13.200 +so as + +00:35:13.920 --> 00:35:18.880 +sorry hang on one + +00:35:17.320 --> 00:35:24.520 +second + +00:35:18.880 --> 00:35:24.520 +not they not muted but it would be nice + +00:35:28.599 --> 00:35:33.200 +so first what are agents I think Zora + +00:35:30.680 --> 00:35:35.240 +covered this a little bit but 
basically + +00:35:33.200 --> 00:35:37.280 +anything that can be viewed as you know + +00:35:35.240 --> 00:35:39.800 +perceiving uh environments through + +00:35:37.280 --> 00:35:42.880 +sensors and acting upon them through + +00:35:39.800 --> 00:35:45.359 +actuators if you look at this uh plot + +00:35:42.880 --> 00:35:48.280 +here you can see you the agent will take + +00:35:45.359 --> 00:35:50.800 +in observations from the environment and + +00:35:48.280 --> 00:35:53.000 +perform actions on it and the agent + +00:35:50.800 --> 00:35:55.640 +themselves will have some sort of + +00:35:53.000 --> 00:35:58.839 +abilities or knowledge goals preference + +00:35:55.640 --> 00:36:01.839 +or any preference uh prior knowledge + +00:35:58.839 --> 00:36:05.280 +since it's uh the topic is about LM + +00:36:01.839 --> 00:36:08.960 +agents now we usually use l the large NE + +00:36:05.280 --> 00:36:10.720 +models themselves as the agent itself + +00:36:08.960 --> 00:36:14.119 +and all the + +00:36:10.720 --> 00:36:17.319 +tools are covered about can be used as + +00:36:14.119 --> 00:36:19.119 +perceptors you know or actuators for + +00:36:17.319 --> 00:36:22.640 +example if you play music that's kind of + +00:36:19.119 --> 00:36:25.880 +an actuator and all these um aties prior + +00:36:22.640 --> 00:36:28.079 +knowledge or observations P experience + +00:36:25.880 --> 00:36:31.079 +can be seen as um + +00:36:28.079 --> 00:36:34.359 +data or training data for your langage + +00:36:31.079 --> 00:36:37.560 +model so to get started on langage model + +00:36:34.359 --> 00:36:41.079 +agents I'm going to cover four stages of + +00:36:37.560 --> 00:36:43.960 +get yourself a large L model agent first + +00:36:41.079 --> 00:36:46.240 +to cover a bit of tasks and applications + +00:36:43.960 --> 00:36:48.720 +second some trainingfree methods for + +00:36:46.240 --> 00:36:51.160 +building agents so that you can use with + +00:36:48.720 --> 00:36:53.880 +API based models and evaluation + +00:36:51.160 --> 00:36:56.720 +environment and Benchmark which is a + +00:36:53.880 --> 00:36:58.760 +super important topic in research and + +00:36:56.720 --> 00:37:00.680 +finally I'm going to briefly cover some + +00:36:58.760 --> 00:37:03.800 +of the training methods for improving + +00:37:00.680 --> 00:37:06.480 +agents and since it's a pretty ongoing + +00:37:03.800 --> 00:37:09.440 +area some of these training methods + +00:37:06.480 --> 00:37:12.520 +might not be the + +00:37:09.440 --> 00:37:14.480 +best so first uh what are task and + +00:37:12.520 --> 00:37:17.640 +applications for large langage model + +00:37:14.480 --> 00:37:19.480 +agents so to answer this question to + +00:37:17.640 --> 00:37:22.040 +cover this we need to answer why do we + +00:37:19.480 --> 00:37:24.000 +want agents imagine if things can be + +00:37:22.040 --> 00:37:26.960 +done by just talking that's what human + +00:37:24.000 --> 00:37:29.960 +agents do you talk to some real estate + +00:37:26.960 --> 00:37:32.839 +agent toy buy and house for you so + +00:37:29.960 --> 00:37:35.640 +nowadays when we are interacting with + +00:37:32.839 --> 00:37:38.720 +computers traditionally you know you use + +00:37:35.640 --> 00:37:41.160 +um graphical user interface or you write + +00:37:38.720 --> 00:37:44.040 +code manually with your hand using + +00:37:41.160 --> 00:37:45.920 +keyboard and mouse but what if you know + +00:37:44.040 --> 00:37:49.560 +everything in the future can be just + +00:37:45.920 --> 00:37:51.960 +done we are talking to some Alexa 
or + +00:37:49.560 --> 00:37:54.760 +Google Assistant it's safe time it is + +00:37:51.960 --> 00:37:57.119 +natural it is accessible and it there's + +00:37:54.760 --> 00:38:00.119 +no need to browse or any of those + +00:37:57.119 --> 00:38:03.040 +programs learning curve nowadays there + +00:38:00.119 --> 00:38:05.480 +are some agents that help you do task we + +00:38:03.040 --> 00:38:08.560 +are the um like Natural Energy interface + +00:38:05.480 --> 00:38:10.280 +computers for example there are Siri um + +00:38:08.560 --> 00:38:12.200 +there are like Google Assistant Alexa + +00:38:10.280 --> 00:38:14.560 +you can set an alarm that's actually an + +00:38:12.200 --> 00:38:17.520 +agent because it actually set an alarm + +00:38:14.560 --> 00:38:19.960 +for you and there are some natural + +00:38:17.520 --> 00:38:22.359 +language programming tools I think a lot + +00:38:19.960 --> 00:38:24.680 +not of a lot of us are using like GitHub + +00:38:22.359 --> 00:38:27.160 +co-pilot plugins to help you write code + +00:38:24.680 --> 00:38:29.400 +you can just say I want to sort my list + +00:38:27.160 --> 00:38:33.240 +in descending order it will generate + +00:38:29.400 --> 00:38:35.960 +code that does it for you um and + +00:38:33.240 --> 00:38:38.720 +also I think many people also use chat + +00:38:35.960 --> 00:38:41.760 +GPT in their daily life uh they once + +00:38:38.720 --> 00:38:44.319 +throw out the feature of having ability + +00:38:41.760 --> 00:38:46.440 +to include plugins into the chat GP so + +00:38:44.319 --> 00:38:49.440 +the tool Integrations into chat box can + +00:38:46.440 --> 00:38:52.960 +help you book PRS or you know uh buy + +00:38:49.440 --> 00:38:57.599 +inst create insta card orders and in + +00:38:52.960 --> 00:39:01.319 +some other areas um where + +00:38:57.599 --> 00:39:04.720 +agent could work which is in robotics um + +00:39:01.319 --> 00:39:07.520 +here in this example um the agent can + +00:39:04.720 --> 00:39:09.839 +see the surroundings uh as it in the + +00:39:07.520 --> 00:39:14.400 +like those Google Street View you have + +00:39:09.839 --> 00:39:16.560 +those Vision um surroundings and you ask + +00:39:14.400 --> 00:39:18.319 +the agent to you know turn and go with + +00:39:16.560 --> 00:39:20.440 +the FL track but you give some natural + +00:39:18.319 --> 00:39:21.920 +language instructions these task are + +00:39:20.440 --> 00:39:24.040 +traditional called natural language + +00:39:21.920 --> 00:39:26.160 +navigation which is given the natural + +00:39:24.040 --> 00:39:28.119 +language prompt you you ask to go + +00:39:26.160 --> 00:39:31.200 +through a streets + +00:39:28.119 --> 00:39:34.319 +uh and there's also some data set a data + +00:39:31.200 --> 00:39:38.200 +set called Alward where you can you are + +00:39:34.319 --> 00:39:40.760 +given a simulated environment here and + +00:39:38.200 --> 00:39:42.760 +you're also given a textual description + +00:39:40.760 --> 00:39:45.079 +like you are in the middle of room you + +00:39:42.760 --> 00:39:47.119 +look around you can see what's + +00:39:45.079 --> 00:39:49.440 +surrounding you and there's a bed + +00:39:47.119 --> 00:39:51.680 +there's drawer and you ask + +00:39:49.440 --> 00:39:55.400 +it you ask + +00:39:51.680 --> 00:39:58.800 +that uh the task is to examine an alarm + +00:39:55.400 --> 00:40:02.160 +clock with the desk uh with the desk SL + +00:39:58.800 --> 00:40:04.920 +and then ideally the if this langage + +00:40:02.160 --> 00:40:07.240 +model agent can't interact with this + 
+00:40:04.920 --> 00:40:09.440 +environment it should predict oh I + +00:40:07.240 --> 00:40:13.079 +should go to the Dex one which is my + +00:40:09.440 --> 00:40:15.960 +here in red fs and then you arrive and + +00:40:13.079 --> 00:40:18.800 +new you arrived at a new location and + +00:40:15.960 --> 00:40:21.119 +then you get a new observation so this + +00:40:18.800 --> 00:40:24.000 +kind of task you can already get a sense + +00:40:21.119 --> 00:40:27.440 +of how this agent is usually interacting + +00:40:24.000 --> 00:40:29.280 +with your surrounding um surrounding uh + +00:40:27.440 --> 00:40:32.440 +environment you have some sort of + +00:40:29.280 --> 00:40:34.079 +observation you have some sort of action + +00:40:32.440 --> 00:40:36.599 +to + +00:40:34.079 --> 00:40:38.880 +perform and there are of course other + +00:40:36.599 --> 00:40:42.839 +applications in games for example there + +00:40:38.880 --> 00:40:45.160 +are many uh benchmarks or applications + +00:40:42.839 --> 00:40:48.040 +for example this one's called Mind Dojo + +00:40:45.160 --> 00:40:49.960 +it works on like creating a agent that + +00:40:48.040 --> 00:40:51.440 +can you know listen to your in natural + +00:40:49.960 --> 00:40:54.000 +language instructions and perform + +00:40:51.440 --> 00:40:56.680 +Minecraft tasks for you there are also + +00:40:54.000 --> 00:40:59.480 +recent work in from Deep Mind called + +00:40:56.680 --> 00:41:01.720 +Sema that uh you know given a natural + +00:40:59.480 --> 00:41:04.040 +language um instruction program that + +00:41:01.720 --> 00:41:08.800 +shoot asteroid it will just help you + +00:41:04.040 --> 00:41:10.839 +shoot an asteroid in the game and also + +00:41:08.800 --> 00:41:13.079 +uh LM agents can be also used in + +00:41:10.839 --> 00:41:16.720 +software development uh recently there's + +00:41:13.079 --> 00:41:20.720 +a startup called Devon that created a um + +00:41:16.720 --> 00:41:24.400 +AI so-called AI software engineer that + +00:41:20.720 --> 00:41:27.359 +the alation is this um you have this + +00:41:24.400 --> 00:41:29.760 +code editor you have an terminal that + +00:41:27.359 --> 00:41:31.800 +could sorry code editor and a terminal + +00:41:29.760 --> 00:41:33.520 +that you could execute command and there + +00:41:31.800 --> 00:41:36.319 +also a web browser where you can search + +00:41:33.520 --> 00:41:37.280 +for documentations ideally if everything + +00:41:36.319 --> 00:41:39.359 +is + +00:41:37.280 --> 00:41:42.160 +automated this you can just give a + +00:41:39.359 --> 00:41:45.400 +natural instruction on create like + +00:41:42.160 --> 00:41:47.720 +finish 711 homework for me and it will + +00:41:45.400 --> 00:41:50.359 +just uh the agent will just compete the + +00:41:47.720 --> 00:41:52.800 +homework for you in these workspace so + +00:41:50.359 --> 00:41:55.599 +in this scenario the observation will be + +00:41:52.800 --> 00:41:58.440 +this string basically it contains four + +00:41:55.599 --> 00:42:00.800 +three like terminal web Ed web browser + +00:41:58.440 --> 00:42:03.599 +and editor and the action you take is + +00:42:00.800 --> 00:42:05.880 +just like you can issue commands to the + +00:42:03.599 --> 00:42:07.760 +terminal or you can search in the web + +00:42:05.880 --> 00:42:10.839 +browser or you can write code in the + +00:42:07.760 --> 00:42:13.680 +code editor there's of course UI + +00:42:10.839 --> 00:42:17.599 +automation where you can browse web and + +00:42:13.680 --> 00:42:23.800 +like P some songs or um you know zooming + 
+00:42:17.599 --> 00:42:27.960 +zoom out all these GOI navigation I are + +00:42:23.800 --> 00:42:30.680 +so so so now that we have a brief + +00:42:27.960 --> 00:42:34.319 +overview of what task and applications + +00:42:30.680 --> 00:42:38.119 +could be enabled uh say we have a pretty + +00:42:34.319 --> 00:42:41.160 +good LM agents so now the topic becomes + +00:42:38.119 --> 00:42:43.359 +uh can we have some methods to build + +00:42:41.160 --> 00:42:44.599 +these agents now I'm going to at the + +00:42:43.359 --> 00:42:50.720 +beginning I'm going to cover some + +00:42:44.599 --> 00:42:52.760 +trainingfree methods so how to um you + +00:42:50.720 --> 00:42:54.480 +know how to let an LM become agent + +00:42:52.760 --> 00:43:00.200 +basically that's the question we want to + +00:42:54.480 --> 00:43:02.400 +answer so um the usually an an Al agent + +00:43:00.200 --> 00:43:05.599 +needs to take as an agent it needs to + +00:43:02.400 --> 00:43:08.520 +take into take in an observation of the + +00:43:05.599 --> 00:43:10.319 +current environment usually the envir + +00:43:08.520 --> 00:43:12.480 +the observation can be you know of + +00:43:10.319 --> 00:43:16.520 +multiple modalities you could have a + +00:43:12.480 --> 00:43:18.640 +text input as a you know as a + +00:43:16.520 --> 00:43:22.079 +observation for example in this + +00:43:18.640 --> 00:43:24.720 +example you know I'm just GNA it's like + +00:43:22.079 --> 00:43:26.720 +a description of the previous afterward + +00:43:24.720 --> 00:43:29.800 +example what's surrounding you I have a + +00:43:26.720 --> 00:43:31.480 +desk I have a chair here and there and + +00:43:29.800 --> 00:43:33.400 +what should I if I want to grab the + +00:43:31.480 --> 00:43:36.160 +chair what should I do next that's the + +00:43:33.400 --> 00:43:39.000 +text input visual input of course if you + +00:43:36.160 --> 00:43:40.880 +are building an Alm agent for games of + +00:43:39.000 --> 00:43:43.400 +course you are going to have a capture + +00:43:40.880 --> 00:43:45.839 +or a current screenshot of your + +00:43:43.400 --> 00:43:48.400 +character and surrounding environment + +00:43:45.839 --> 00:43:50.359 +sometimes we may also have audio input + +00:43:48.400 --> 00:43:53.520 +like if you're playing games you want to + +00:43:50.359 --> 00:43:56.119 +hear uh what's happening surrounding you + +00:43:53.520 --> 00:44:00.079 +or behind you that is not seeing on the + +00:43:56.119 --> 00:44:03.079 +uh uh on the screen and uh of course we + +00:44:00.079 --> 00:44:05.880 +also have structured uh input take uh + +00:44:03.079 --> 00:44:09.119 +for example if you are + +00:44:05.880 --> 00:44:12.240 +building say U like an agent for + +00:44:09.119 --> 00:44:15.599 +websites or for other like desktop + +00:44:12.240 --> 00:44:18.640 +applications the website usually are + +00:44:15.599 --> 00:44:21.599 +written in HTML code reference layout of + +00:44:18.640 --> 00:44:24.520 +its UI so it might be like a tree like + +00:44:21.599 --> 00:44:28.119 +structure as you that you can take into + +00:44:24.520 --> 00:44:32.319 +as input for LM agent so this also kind + +00:44:28.119 --> 00:44:35.480 +of uh the need for LM agent also uh says + +00:44:32.319 --> 00:44:39.960 +that we need pretty good multimodel LMS + +00:44:35.480 --> 00:44:43.800 +for this to work and uh I'm going to + +00:44:39.960 --> 00:44:46.839 +cover how to let our become agent by + +00:44:43.800 --> 00:44:50.040 +first you need to plan and reason + +00:44:46.839 --> 00:44:52.839 +because in 
order to perform a complex + +00:44:50.040 --> 00:44:55.760 +task that the human issue to them you + +00:44:52.839 --> 00:44:57.839 +have to uh usually have to decompose + +00:44:55.760 --> 00:45:01.440 +them into subtance + +00:44:57.839 --> 00:45:04.640 +um the there are many many existing + +00:45:01.440 --> 00:45:07.960 +methods in I that tackle this reasoning + +00:45:04.640 --> 00:45:10.559 +problem and one the most famous of which + +00:45:07.960 --> 00:45:12.640 +is the Chain of Thought reasoning which + +00:45:10.559 --> 00:45:15.760 +uh famously is just prompt the language + +00:45:12.640 --> 00:45:18.920 +model to that is Sy step by step here's + +00:45:15.760 --> 00:45:20.160 +um so basically the goal here is to let + +00:45:18.920 --> 00:45:22.720 +the langage model generate some + +00:45:20.160 --> 00:45:25.040 +reasoning traces so that it has a + +00:45:22.720 --> 00:45:28.400 +roughly a good plan of how to perform + +00:45:25.040 --> 00:45:31.160 +cion tasks + +00:45:28.400 --> 00:45:34.000 +here is an example from alord which is + +00:45:31.160 --> 00:45:35.760 +you are given a texal description of + +00:45:34.000 --> 00:45:37.160 +your surroundings you know you're in the + +00:45:35.760 --> 00:45:39.760 +middle of + +00:45:37.160 --> 00:45:42.599 +room and what's surrounding you and your + +00:45:39.760 --> 00:45:46.400 +task is to put some paper Sher on a + +00:45:42.599 --> 00:45:50.119 +drawer and if you want to let an L to + +00:45:46.400 --> 00:45:52.680 +reason to how to perform this task into + +00:45:50.119 --> 00:45:56.559 +subtasks you ask what should I do next + +00:45:52.680 --> 00:45:59.000 +let's things step by step and Lang model + +00:45:56.559 --> 00:46:02.640 +if it's good langage model it will say + +00:45:59.000 --> 00:46:05.359 +first I need to find a pepper shaker and + +00:46:02.640 --> 00:46:09.200 +they a pepper sh Shaker more likely to + +00:46:05.359 --> 00:46:12.640 +be appear in cabinets counter chops and + +00:46:09.200 --> 00:46:16.520 +after I find pepper shaker one I need to + +00:46:12.640 --> 00:46:19.240 +put it and say your so here you can + +00:46:16.520 --> 00:46:21.079 +actually rely on large Lang models to + +00:46:19.240 --> 00:46:25.119 +generate reasoning traces basically + +00:46:21.079 --> 00:46:27.760 +decompose this find uh put pepper shaker + +00:46:25.119 --> 00:46:29.960 +on drawer T into two s STS first you + +00:46:27.760 --> 00:46:34.760 +need to find them and then you have to + +00:46:29.960 --> 00:46:36.559 +put it but it is not enough to just have + +00:46:34.760 --> 00:46:38.760 +this kind of planning and reasoning to + +00:46:36.559 --> 00:46:40.920 +generate some of these natural langues + +00:46:38.760 --> 00:46:42.040 +because you can't execute them uh if you + +00:46:40.920 --> 00:46:45.440 +just have + +00:46:42.040 --> 00:46:48.680 +this bunch of text uh an actual robot + +00:46:45.440 --> 00:46:51.119 +will not be able to perform this testt + +00:46:48.680 --> 00:46:54.079 +and that's why we also need the language + +00:46:51.119 --> 00:46:57.240 +model to have this tool use ability so + +00:46:54.079 --> 00:47:00.200 +not only it should generate um reasing + +00:46:57.240 --> 00:47:03.280 +traces it can it needs to also interact + +00:47:00.200 --> 00:47:05.800 +with environment here we focus on action + +00:47:03.280 --> 00:47:08.000 +here so it need to generate some action + +00:47:05.800 --> 00:47:10.960 +cause asks you the action cause in + +00:47:08.000 --> 00:47:13.119 +environment and then 
supposedly if your + +00:47:10.960 --> 00:47:15.520 +action is Meaningful enough it will + +00:47:13.119 --> 00:47:17.599 +change the environment a little bit and + +00:47:15.520 --> 00:47:20.559 +then supposedly you should have new + +00:47:17.599 --> 00:47:22.400 +observation to put it back in your PRT + +00:47:20.559 --> 00:47:25.160 +do the same + +00:47:22.400 --> 00:47:28.800 +example you you ask it what should I do + +00:47:25.160 --> 00:47:32.240 +Lessing step by step you find you you + +00:47:28.800 --> 00:47:34.680 +say oh first I need to find this pepper + +00:47:32.240 --> 00:47:38.599 +shaker and it is more likely to appear + +00:47:34.680 --> 00:47:42.400 +in cabinets then the like with proper + +00:47:38.599 --> 00:47:45.880 +prompting you know the uh Al should be + +00:47:42.400 --> 00:47:48.200 +able to uh produce an action like go to + +00:47:45.880 --> 00:47:51.240 +Cabinet one and then you actually + +00:47:48.200 --> 00:47:52.559 +execute this in this virtual environment + +00:47:51.240 --> 00:47:55.040 +and now your + +00:47:52.559 --> 00:47:56.960 +observation so you you go to Cabinet one + +00:47:55.040 --> 00:48:00.720 +you actually see what's inside cabinet + +00:47:56.960 --> 00:48:02.960 +one and oh having a one there's a was + +00:48:00.720 --> 00:48:05.119 +which you know is not pepper spray so + +00:48:02.960 --> 00:48:08.359 +then you can do further reasoning and + +00:48:05.119 --> 00:48:11.559 +further action planning so this is how + +00:48:08.359 --> 00:48:15.040 +uh so this this General framework is + +00:48:11.559 --> 00:48:16.960 +called react and is quite uh popular in + +00:48:15.040 --> 00:48:18.839 +creating language model agents so + +00:48:16.960 --> 00:48:20.800 +basically to let a language model become + +00:48:18.839 --> 00:48:23.079 +an agent it needs to have planning and + +00:48:20.800 --> 00:48:25.319 +reasoning ability some of those mostly + +00:48:23.079 --> 00:48:28.480 +achieved by Chain of La prompting and + +00:48:25.319 --> 00:48:30.599 +Tool usability by uh also you know + +00:48:28.480 --> 00:48:35.200 +providing prompting to let it generate + +00:48:30.599 --> 00:48:39.319 +API cost here's a real world example um + +00:48:35.200 --> 00:48:42.599 +so H to you may ask how should I you + +00:48:39.319 --> 00:48:44.200 +know generate actions proper actions + +00:48:42.599 --> 00:48:47.520 +that can be + +00:48:44.200 --> 00:48:50.280 +executed um without training you can + +00:48:47.520 --> 00:48:53.520 +just do prompting so you are given + +00:48:50.280 --> 00:48:57.040 +supposed you have the following apis the + +00:48:53.520 --> 00:48:59.720 +um the text that are not in um not in + +00:48:57.040 --> 00:49:01.480 +Blue uh sorry green are the prompts + +00:48:59.720 --> 00:49:05.040 +given to the langage model and the texts + +00:49:01.480 --> 00:49:07.119 +in blue are generated by gp4 and you + +00:49:05.040 --> 00:49:10.599 +just ask them you provide them with four + +00:49:07.119 --> 00:49:13.799 +apis get weather get location you know + +00:49:10.599 --> 00:49:16.319 +bus rout or count characters and the + +00:49:13.799 --> 00:49:18.680 +question is you know is it okay to go + +00:49:16.319 --> 00:49:20.599 +hiking today to answer this question you + +00:49:18.680 --> 00:49:23.440 +know you can see the language model can + +00:49:20.599 --> 00:49:25.480 +actually reason a good way of solving + +00:49:23.440 --> 00:49:30.079 +this task by first checking your + +00:49:25.480 --> 00:49:33.200 +location okay I'm in Seattle and then 
um + +00:49:30.079 --> 00:49:34.520 +so it the API call is you know it calls + +00:49:33.200 --> 00:49:38.400 +weather + +00:49:34.520 --> 00:49:40.559 +Seattle and then cloudy equals to you + +00:49:38.400 --> 00:49:43.440 +know like it's chance of ring and then + +00:49:40.559 --> 00:49:45.440 +based on this observation information it + +00:49:43.440 --> 00:49:48.119 +is not recommended to go hiking so this + +00:49:45.440 --> 00:49:51.799 +is kind of a actual example of how + +00:49:48.119 --> 00:49:54.400 +you're going to call this a uh how you + +00:49:51.799 --> 00:49:57.640 +are going to enable a to generate + +00:49:54.400 --> 00:50:01.599 +executable actions you PL examples or + +00:49:57.640 --> 00:50:04.000 +you give instructions of what these apis + +00:50:01.599 --> 00:50:05.040 +looks like in the prompt and if you just + +00:50:04.000 --> 00:50:07.400 +continue + +00:50:05.040 --> 00:50:10.400 +generation it also have a previous + +00:50:07.400 --> 00:50:14.160 +example as a like a three shot example + +00:50:10.400 --> 00:50:14.160 +where you can ask new + +00:50:15.040 --> 00:50:20.400 +questions there's besides you know this + +00:50:17.960 --> 00:50:23.640 +kind of natural back and force + +00:50:20.400 --> 00:50:26.359 +generating code sorry generating actions + +00:50:23.640 --> 00:50:27.559 +there's also oh actually that's a + +00:50:26.359 --> 00:50:31.119 +question + +00:50:27.559 --> 00:50:34.359 +what if there's a lot of apis uh say now + +00:50:31.119 --> 00:50:37.720 +I have four apis here for the language + +00:50:34.359 --> 00:50:41.440 +model to choose but in reality I might + +00:50:37.720 --> 00:50:44.359 +have um a thousand possible actions to + +00:50:41.440 --> 00:50:47.760 +perform and if I just present them in + +00:50:44.359 --> 00:50:50.440 +this way like describe what each API + +00:50:47.760 --> 00:50:53.480 +takes in as an argument what they do + +00:50:50.440 --> 00:50:56.920 +they might extend uh they might you know + +00:50:53.480 --> 00:50:59.920 +like exceed the context that + +00:50:56.920 --> 00:51:02.960 +so I don't or they might cost a lot of + +00:50:59.920 --> 00:51:06.359 +money because you know token is money so + +00:51:02.960 --> 00:51:09.400 +do you have anyone else suggestions if + +00:51:06.359 --> 00:51:14.440 +there's a lot of apis how should I still + +00:51:09.400 --> 00:51:14.440 +let a uh Lang model generate apis + +00:51:19.040 --> 00:51:24.160 +PA yeah so basically yeah that's a good + +00:51:21.480 --> 00:51:28.000 +answer so you have kind of a external + +00:51:24.160 --> 00:51:30.319 +memory where you can query based on your + +00:51:28.000 --> 00:51:33.480 +current context what are the most + +00:51:30.319 --> 00:51:35.720 +necessary required apis from that + +00:51:33.480 --> 00:51:38.200 +external and provide the documentations + +00:51:35.720 --> 00:51:42.240 +for them yeah that's a good + +00:51:38.200 --> 00:51:44.119 +answer besides these we also have + +00:51:42.240 --> 00:51:46.960 +another scenario where we can actually + +00:51:44.119 --> 00:51:49.520 +just generate code to perform certain + +00:51:46.960 --> 00:51:51.880 +tasks this way we are combining + +00:51:49.520 --> 00:51:55.119 +reasoning ability planning ability plus + +00:51:51.880 --> 00:51:56.839 +this action calling ability o one + +00:51:55.119 --> 00:51:59.200 +previously we have to generate natur + +00:51:56.839 --> 00:52:02.880 +language traces and transform natural + +00:51:59.200 --> 00:52:05.400 +language traces into actions so this is + 
+00:52:02.880 --> 00:52:08.000 +how you would solve this test so + +00:52:05.400 --> 00:52:11.160 +assuming you know you just ask chat gbd + +00:52:08.000 --> 00:52:13.640 +assuming you can use Python you you have + +00:52:11.160 --> 00:52:16.640 +some common apis install you have done + +00:52:13.640 --> 00:52:18.880 +the authentication steps ask answers the + +00:52:16.640 --> 00:52:22.200 +following questions like set up a + +00:52:18.880 --> 00:52:26.040 +meeting with someone tomorrow at 10:00 + +00:52:22.200 --> 00:52:27.720 +a.m. and actually chat GP can generate a + +00:52:26.040 --> 00:52:30.839 +python code + +00:52:27.720 --> 00:52:33.520 +um that that's this task for you and + +00:52:30.839 --> 00:52:37.160 +then you probably have guessed it you + +00:52:33.520 --> 00:52:39.119 +can just execute then probably you won't + +00:52:37.160 --> 00:52:41.960 +actually need it because nowadays you + +00:52:39.119 --> 00:52:44.040 +know open ey or other jamni those + +00:52:41.960 --> 00:52:46.359 +documents have building like code + +00:52:44.040 --> 00:52:48.319 +interpreter functionality where they + +00:52:46.359 --> 00:52:50.359 +generate code but basically they + +00:52:48.319 --> 00:52:54.160 +generate a code based on your task + +00:52:50.359 --> 00:52:57.119 +instruction and execute them and this + +00:52:54.160 --> 00:53:00.440 +way you are still doing promply but and + +00:52:57.119 --> 00:53:04.960 +your reasoning or all of these Logics + +00:53:00.440 --> 00:53:08.520 +are more likely to be handled by this + +00:53:04.960 --> 00:53:12.599 +code so these are an overview previously + +00:53:08.520 --> 00:53:15.880 +we touched a bit about um how we going + +00:53:12.599 --> 00:53:18.559 +to prompt Lage models to perform task + +00:53:15.880 --> 00:53:20.559 +then we are going to touch a bit about + +00:53:18.559 --> 00:53:23.640 +evaluation environment and + +00:53:20.559 --> 00:53:27.040 +Benchmark this is a research oriented + +00:53:23.640 --> 00:53:29.720 +class we are definitely going to uh + +00:53:27.040 --> 00:53:32.480 +think about you know reproducible + +00:53:29.720 --> 00:53:35.119 +environments or like evaluations other + +00:53:32.480 --> 00:53:37.400 +than just products so evalation of + +00:53:35.119 --> 00:53:41.200 +langage model agents are actually quite + +00:53:37.400 --> 00:53:45.720 +hard existing there are several existing + +00:53:41.200 --> 00:53:48.599 +work on this area many of them contain + +00:53:45.720 --> 00:53:51.599 +simplified environments and basic tasks + +00:53:48.599 --> 00:53:54.079 +and if you are performing like basic + +00:53:51.599 --> 00:53:56.119 +test performance is saturating you have + +00:53:54.079 --> 00:53:58.920 +already seen the example I previously + +00:53:56.119 --> 00:54:01.599 +presented it to ask it to whether it is + +00:53:58.920 --> 00:54:04.720 +okay to go hiking today to check whether + +00:54:01.599 --> 00:54:06.640 +it is super easy for chat GPT to do that + +00:54:04.720 --> 00:54:10.200 +and even just to book a meeting through + +00:54:06.640 --> 00:54:11.880 +the Google API uh Google Calender API uh + +00:54:10.200 --> 00:54:14.319 +actually that code I verified is + +00:54:11.880 --> 00:54:17.079 +actually correct in the pre previous + +00:54:14.319 --> 00:54:19.599 +slide so you can see if it's simple task + +00:54:17.079 --> 00:54:22.480 +simple envirment the performance is + +00:54:19.599 --> 00:54:25.400 +saturating a 100% accuracy won't tell + +00:54:22.480 --> 00:54:27.079 +you any further progress 
in LM agent + +00:54:25.400 --> 00:54:30.520 +research + +00:54:27.079 --> 00:54:34.160 +so but still it's nice to know some of + +00:54:30.520 --> 00:54:36.839 +these existing um evaluation benchmarks + +00:54:34.160 --> 00:54:39.839 +the first Ty them are usually stateless + +00:54:36.839 --> 00:54:42.880 +with non-interactive environment uh for + +00:54:39.839 --> 00:54:47.400 +example does mind to what work um + +00:54:42.880 --> 00:54:49.640 +focused on like uh dumping web pages and + +00:54:47.400 --> 00:54:53.200 +then what actions can be performed on + +00:54:49.640 --> 00:54:53.960 +them to get them transformed into other + +00:54:53.200 --> 00:54:57.200 +uh + +00:54:53.960 --> 00:54:59.880 +stages and sometimes they are valuated + +00:54:57.200 --> 00:55:01.599 +by checking action sequence + +00:54:59.880 --> 00:55:03.680 +accuracy sometimes they are just + +00:55:01.599 --> 00:55:06.040 +checking the stepwise or Surface form + +00:55:03.680 --> 00:55:08.920 +only accuracy here is an example from + +00:55:06.040 --> 00:55:10.599 +mind to web so the task is to you know + +00:55:08.920 --> 00:55:13.480 +for one of the team + +00:55:10.599 --> 00:55:16.000 +leaders um of the one of the follow the + +00:55:13.480 --> 00:55:19.760 +team leaders of one of the NL teams from + +00:55:16.000 --> 00:55:19.760 +Atlantic division and + +00:55:19.799 --> 00:55:25.280 +supposedly this action sequence is the + +00:55:22.200 --> 00:55:27.920 +ground shoes action sequence you should + +00:55:25.280 --> 00:55:32.240 +have you have this uh starting website + +00:55:27.920 --> 00:55:36.000 +of pro possibly NHL and you click on uh + +00:55:32.240 --> 00:55:38.520 +you hover the link click on some click + +00:55:36.000 --> 00:55:42.480 +on something eventually you will arrive + +00:55:38.520 --> 00:55:45.280 +at the final designed state but um these + +00:55:42.480 --> 00:55:48.359 +previous uh Benchmark like M2 web they + +00:55:45.280 --> 00:55:51.119 +evaluate based on the accuracy of how + +00:55:48.359 --> 00:55:54.079 +these actions predicted matches the + +00:55:51.119 --> 00:55:57.119 +ground choose one and they only check + +00:55:54.079 --> 00:56:00.400 +you know click if cck if is correct or + +00:55:57.119 --> 00:56:02.359 +if the if the argument is correct anyone + +00:56:00.400 --> 00:56:05.319 +have idea why this might not be + +00:56:02.359 --> 00:56:08.520 +desirable a desirable way of evaluating + +00:56:05.319 --> 00:56:08.520 +El agents yeah go + +00:56:11.440 --> 00:56:17.200 +ahead different + +00:56:14.440 --> 00:56:19.720 +way yeah yeah that's a good that's a + +00:56:17.200 --> 00:56:22.839 +good answer and also possibly other + +00:56:19.720 --> 00:56:25.039 +reasons are so you first have to go + +00:56:22.839 --> 00:56:27.240 +through in this order but you know you + +00:56:25.039 --> 00:56:29.359 +can sometimes these orders are not + +00:56:27.240 --> 00:56:32.799 +strictly dependent on each other so you + +00:56:29.359 --> 00:56:36.240 +can perform one task like say this task + +00:56:32.799 --> 00:56:38.760 +is composed of two subtask you can do + +00:56:36.240 --> 00:56:40.400 +these two in parallel or in any order + +00:56:38.760 --> 00:56:43.359 +then if you are just given a ground + +00:56:40.400 --> 00:56:45.280 +choose action sequence and you WR based + +00:56:43.359 --> 00:56:47.839 +on you know if the action is correct and + +00:56:45.280 --> 00:56:50.119 +if the argument is correct then you are + +00:56:47.839 --> 00:56:51.880 +going to miss out you know 
basically + +00:56:50.119 --> 00:56:54.359 +miss out a lot of opportunity where the + +00:56:51.880 --> 00:56:58.319 +agent actually does it correctly but you + +00:56:54.359 --> 00:57:00.640 +know it didn't cover in this task and + +00:56:58.319 --> 00:57:03.079 +there are other T So previous I was + +00:57:00.640 --> 00:57:05.200 +thinking you know I was previously I was + +00:57:03.079 --> 00:57:07.359 +talking about these kind of stateless + +00:57:05.200 --> 00:57:09.359 +non interacting environment but then + +00:57:07.359 --> 00:57:11.079 +there are also some environment + +00:57:09.359 --> 00:57:14.160 +interactive environment here already + +00:57:11.079 --> 00:57:16.839 +available but usually they are short + +00:57:14.160 --> 00:57:19.839 +Horizon for example there's webshop and + +00:57:16.839 --> 00:57:23.119 +many water with bits here here's an + +00:57:19.839 --> 00:57:27.039 +example of an actual interactive web + +00:57:23.119 --> 00:57:29.440 +environment for agents so it's + +00:57:27.039 --> 00:57:31.559 +simple web pages where you can click you + +00:57:29.440 --> 00:57:35.920 +can enter stuff so there are simple + +00:57:31.559 --> 00:57:40.319 +tasks for example let's say in this this + +00:57:35.920 --> 00:57:45.200 +example I just asked you to um submit I + +00:57:40.319 --> 00:57:47.119 +love 711 in this text box and submit so + +00:57:45.200 --> 00:57:50.280 +as you can imagine these tasks usually + +00:57:47.119 --> 00:57:53.440 +just take one or two or three steps + +00:57:50.280 --> 00:57:57.160 +actions to perform so that's why we call + +00:57:53.440 --> 00:58:00.000 +short Horizon and they the enironment is + +00:57:57.160 --> 00:58:01.280 +pretty simple because it's just like the + +00:58:00.000 --> 00:58:03.640 +websites from + +00:58:01.280 --> 00:58:08.000 +1990s + +00:58:03.640 --> 00:58:11.680 +so and also there are uh and also there + +00:58:08.000 --> 00:58:14.480 +are um like webshop Benchmark which is a + +00:58:11.680 --> 00:58:16.680 +simplified version of Amazon basically + +00:58:14.480 --> 00:58:22.599 +Amazon is you know it + +00:58:16.680 --> 00:58:25.119 +is 20 years ago and there is um like you + +00:58:22.599 --> 00:58:27.680 +can search for jacket your favorite + +00:58:25.119 --> 00:58:31.160 +jacket the the question here is still + +00:58:27.680 --> 00:58:34.520 +you know I'm looking for extra red color + +00:58:31.160 --> 00:58:36.400 +womon some like warm jacket coat and + +00:58:34.520 --> 00:58:39.319 +price lower than $70 you have a lot of + +00:58:36.400 --> 00:58:41.839 +instructions here the goal is to find + +00:58:39.319 --> 00:58:43.480 +the correct uh item that suits your + +00:58:41.839 --> 00:58:46.160 +instruction me you still have to + +00:58:43.480 --> 00:58:49.599 +navigate through this um simplified + +00:58:46.160 --> 00:58:52.799 +version of Amazon but you know the same + +00:58:49.599 --> 00:58:56.160 +rule the same issue applies it is a + +00:58:52.799 --> 00:58:58.319 +pretty simple environment and it is uh + +00:58:56.160 --> 00:59:00.359 +like short Horizon just to give you a + +00:58:58.319 --> 00:59:04.599 +example I just just give you like a + +00:59:00.359 --> 00:59:08.039 +feeling like she asked gp4 to perform on + +00:59:04.599 --> 00:59:09.880 +these kind of u w shop with proper up + +00:59:08.039 --> 00:59:15.960 +tuning it can already achieve my + +00:59:09.880 --> 00:59:20.200 +basically fa of task so um it's getting + +00:59:15.960 --> 00:59:23.640 +there so that's why we think there are + 
+00:59:20.200 --> 00:59:26.240 +if you are interested in building agent + +00:59:23.640 --> 00:59:29.119 +benchmarks there are several key con + +00:59:26.240 --> 00:59:31.760 +considerations so first you have to have + +00:59:29.119 --> 00:59:33.440 +a interactive environment because + +00:59:31.760 --> 00:59:36.280 +without environment you know you can + +00:59:33.440 --> 00:59:39.319 +just you are stuck with uh evaluation + +00:59:36.280 --> 00:59:41.160 +metrics that are just B uh checking if + +00:59:39.319 --> 00:59:44.119 +the action is correct rather than the + +00:59:41.160 --> 00:59:46.119 +final execution results so uh and also + +00:59:44.119 --> 00:59:47.200 +you know you you need to have kind of + +00:59:46.119 --> 00:59:49.520 +diverse + +00:59:47.200 --> 00:59:51.960 +functionality previously like you are + +00:59:49.520 --> 00:59:53.559 +just focused on shopping and then you + +00:59:51.960 --> 00:59:56.440 +know if I want to cheat on this + +00:59:53.559 --> 00:59:59.760 +Benchmark I would just overfit to this + +00:59:56.440 --> 01:00:02.240 +shopping functionality and then you need + +00:59:59.760 --> 01:00:05.400 +to have Rich and realistic content to + +01:00:02.240 --> 01:00:08.680 +make it adaptive uh to like basically + +01:00:05.400 --> 01:00:10.599 +closer to Modern websites so you can + +01:00:08.680 --> 01:00:13.720 +transfer your performance in these + +01:00:10.599 --> 01:00:15.680 +benchmarks better to a real website it + +01:00:13.720 --> 01:00:18.400 +has to be interactive and easy + +01:00:15.680 --> 01:00:22.240 +extendable and reproducible reproducible + +01:00:18.400 --> 01:00:26.640 +is quite important for um creating a + +01:00:22.240 --> 01:00:29.200 +benchmark uh in research community + +01:00:26.640 --> 01:00:32.119 +that's why we don't use uh we we don't + +01:00:29.200 --> 01:00:35.359 +want use live websites as an environment + +01:00:32.119 --> 01:00:37.440 +because they change so often you got 90% + +01:00:35.359 --> 01:00:40.240 +accuracy yesterday today you will only + +01:00:37.440 --> 01:00:43.720 +get 20% accuracy because the websites + +01:00:40.240 --> 01:00:47.280 +become much much hotter just because of + +01:00:43.720 --> 01:00:50.240 +today and tasks we ideally want long + +01:00:47.280 --> 01:00:53.480 +Horizon tasks with enough difficulty and + +01:00:50.240 --> 01:00:56.160 +also IDE involves M website because + +01:00:53.480 --> 01:00:58.799 +that's more realistic you are not going + +01:00:56.160 --> 01:01:01.760 +to stuck your whole day browsing Amazon + +01:00:58.799 --> 01:01:04.799 +website alone you sometimes go to you + +01:01:01.760 --> 01:01:07.559 +know other websites like Reddit to + +01:01:04.799 --> 01:01:11.079 +search for some reviews so it is nice to + +01:01:07.559 --> 01:01:13.839 +involve them and also for + +01:01:11.079 --> 01:01:17.160 +evaluation it's nice to have reliable + +01:01:13.839 --> 01:01:19.760 +metrics so that it encourages final goal + +01:01:17.160 --> 01:01:22.240 +rather than partial satisfaction and + +01:01:19.760 --> 01:01:24.400 +also it encourages you know the agent to + +01:01:22.240 --> 01:01:26.400 +actually perform the task right instead + +01:01:24.400 --> 01:01:28.240 +of just following you know the provided + +01:01:26.400 --> 01:01:30.559 +BR choose action because you you can + +01:01:28.240 --> 01:01:32.480 +have multiple ways of achieving the same + +01:01:30.559 --> 01:01:35.599 +correct uh final + +01:01:32.480 --> 01:01:38.920 +goal so here present uh one of the + 
+01:01:35.599 --> 01:01:42.799 +latest work in this uh area called Web + +01:01:38.920 --> 01:01:45.720 +Arina and it's environment to S it we + +01:01:42.799 --> 01:01:48.799 +try to satisfy this environment uh + +01:01:45.720 --> 01:01:51.359 +requirement we previously put out which + +01:01:48.799 --> 01:01:53.640 +is a setbox internet it is open source + +01:01:51.359 --> 01:01:56.240 +and production ready implementation of + +01:01:53.640 --> 01:01:58.160 +websites and the data of popular created + +01:01:56.240 --> 01:02:01.079 +from Real World websites we basically + +01:01:58.160 --> 01:02:04.559 +scrap Amazon say for example and put it + +01:02:01.079 --> 01:02:07.400 +in our fake Amazon website it is also + +01:02:04.559 --> 01:02:11.400 +easily distributable we use you know + +01:02:07.400 --> 01:02:13.640 +Dockers to use them so since we want it + +01:02:11.400 --> 01:02:16.000 +to reproducible to be reproducible we + +01:02:13.640 --> 01:02:18.599 +don't use right websites and that kind + +01:02:16.000 --> 01:02:21.359 +of means that uh the website selection + +01:02:18.599 --> 01:02:24.599 +is going to be limited no matter what + +01:02:21.359 --> 01:02:26.520 +however we try to make it you know as + +01:02:24.599 --> 01:02:28.760 +diverse as possible by including a + +01:02:26.520 --> 01:02:33.039 +shopping website a cent management + +01:02:28.760 --> 01:02:36.880 +website as well as a Reddit like forum + +01:02:33.039 --> 01:02:39.799 +and plus a g l so it covers um like + +01:02:36.880 --> 01:02:42.000 +social media some of the development of + +01:02:39.799 --> 01:02:44.880 +work related things as well as content + +01:02:42.000 --> 01:02:47.520 +management and shopping we also have a + +01:02:44.880 --> 01:02:51.240 +Wikipedia and some other toolbox like we + +01:02:47.520 --> 01:02:53.760 +even have a map here in our in this + +01:02:51.240 --> 01:02:56.799 +Benchmark called Web so and then you + +01:02:53.760 --> 01:02:59.720 +need to collect realistic intents + +01:02:56.799 --> 01:03:01.599 +so of course a good way of collecting + +01:02:59.720 --> 01:03:05.920 +these is just checking our own browser + +01:03:01.599 --> 01:03:07.520 +history or checking others then uh we + +01:03:05.920 --> 01:03:10.119 +then categorize them into three + +01:03:07.520 --> 01:03:12.839 +different types first is information + +01:03:10.119 --> 01:03:15.279 +seeking a lot of our browser histories + +01:03:12.839 --> 01:03:17.440 +or like what we do on internet is check + +01:03:15.279 --> 01:03:21.079 +information for example when was the + +01:03:17.440 --> 01:03:25.160 +last time I bought shampoo so to remind + +01:03:21.079 --> 01:03:28.599 +myself and then a lot some other things + +01:03:25.160 --> 01:03:31.279 +just navigation you know as a human how + +01:03:28.599 --> 01:03:34.079 +to get there for example I want to check + +01:03:31.279 --> 01:03:37.599 +out merge requests that are assed to me + +01:03:34.079 --> 01:03:40.520 +as my new day out work started and then + +01:03:37.599 --> 01:03:43.559 +there's some content and configuration + +01:03:40.520 --> 01:03:46.640 +operations so here's an example like uh + +01:03:43.559 --> 01:03:49.119 +these type of task are usually you are + +01:03:46.640 --> 01:03:52.200 +going to modify the environment somewhat + +01:03:49.119 --> 01:03:54.839 +because previously as we previously + +01:03:52.200 --> 01:03:57.559 +discussed a agent not only can perceive + +01:03:54.839 --> 01:04:00.200 +or get information from the environment + 
+01:03:57.559 --> 01:04:01.799 +you sometimes have to do actuations back + +01:04:00.200 --> 01:04:04.760 +to the environment for example if I want + +01:04:01.799 --> 01:04:07.559 +to post a question is a car necessary in + +01:04:04.760 --> 01:04:10.039 +New York City in a subreddit where I'm + +01:04:07.559 --> 01:04:12.400 +most likely to get an answer here you if + +01:04:10.039 --> 01:04:15.480 +you think about the U you know the we + +01:04:12.400 --> 01:04:18.000 +size as an environment you actually make + +01:04:15.480 --> 01:04:18.880 +a dent made a dent to the environment + +01:04:18.000 --> 01:04:23.880 +this + +01:04:18.880 --> 01:04:26.559 +way so here's an example task in this uh + +01:04:23.880 --> 01:04:29.640 +depos uh in this benchmark Mark uh + +01:04:26.559 --> 01:04:32.520 +create a plan to visit Pittsburgh uh + +01:04:29.640 --> 01:04:36.160 +museums with minimal driving distance + +01:04:32.520 --> 01:04:41.279 +starting from shy Park log the order in + +01:04:36.160 --> 01:04:45.359 +my uh you know awesome Northeast US + +01:04:41.279 --> 01:04:48.960 +Travel repository uh so here you can + +01:04:45.359 --> 01:04:52.319 +think of it as pretty complex uh task it + +01:04:48.960 --> 01:04:56.839 +involves if you just B market it will at + +01:04:52.319 --> 01:04:59.520 +least be 20 pages of 20 times of page + +01:04:56.839 --> 01:05:02.440 +transitions some you also have to cover + +01:04:59.520 --> 01:05:04.400 +several websites so you first have to + +01:05:02.440 --> 01:05:06.000 +search for museums and P where probably + +01:05:04.400 --> 01:05:09.079 +you're going to use Google or you + +01:05:06.000 --> 01:05:11.760 +probably use um Wikipedia and then after + +01:05:09.079 --> 01:05:15.119 +you get that you are going to search for + +01:05:11.760 --> 01:05:17.680 +each Art Museum on a map software and + +01:05:15.119 --> 01:05:19.319 +finally you check you know minimal + +01:05:17.680 --> 01:05:21.520 +driving distance so there are some even + +01:05:19.319 --> 01:05:23.680 +some mathematic reasoning here you have + +01:05:21.520 --> 01:05:26.599 +to gather the driving distance and find + +01:05:23.680 --> 01:05:30.400 +the minimal one and Rec them in the + +01:05:26.599 --> 01:05:32.960 +repository and then that involves gitl + +01:05:30.400 --> 01:05:34.880 +operations and then for these kind of + +01:05:32.960 --> 01:05:37.559 +complex tasks we are definitely not + +01:05:34.880 --> 01:05:40.119 +going to rely on uh like action sequence + +01:05:37.559 --> 01:05:42.279 +based evaluation so our goal is to + +01:05:40.119 --> 01:05:44.920 +directly validate the correctness of the + +01:05:42.279 --> 01:05:47.760 +execution for example when was the last + +01:05:44.920 --> 01:05:50.079 +time I bought shampoo the answer is can + +01:05:47.760 --> 01:05:53.359 +be directly answered by you + +01:05:50.079 --> 01:05:56.240 +know if it's just like this some date + +01:05:53.359 --> 01:05:59.319 +because the data in the we know ahead of + +01:05:56.240 --> 01:06:01.720 +time what data we have in this Benchmark + +01:05:59.319 --> 01:06:04.640 +so you have the correct answer to check + +01:06:01.720 --> 01:06:06.559 +and for others if they are more tricky + +01:06:04.640 --> 01:06:09.480 +for example the previous question of is + +01:06:06.559 --> 01:06:12.480 +a car necessary in New York City uh you + +01:06:09.480 --> 01:06:16.440 +actually have to check oh I got in at + +01:06:12.480 --> 01:06:18.640 +New York uh New York City in page URL + +01:06:16.440 --> 
01:06:20.440 +and then if the content is a car + +01:06:18.640 --> 01:06:24.760 +necessary in New York City actually + +01:06:20.440 --> 01:06:28.039 +inside the uh document page or the HTML + +01:06:24.760 --> 01:06:30.480 +page so with these kind of verifiers you + +01:06:28.039 --> 01:06:32.599 +can think of it as unit test in software + +01:06:30.480 --> 01:06:35.799 +development we are only checking the + +01:06:32.599 --> 01:06:37.760 +output or the outcome of the final State + +01:06:35.799 --> 01:06:40.240 +uh of the environment and check if it is + +01:06:37.760 --> 01:06:43.119 +correct then it alleviates the issue of + +01:06:40.240 --> 01:06:45.720 +you know relying on checking + +01:06:43.119 --> 01:06:49.200 +actions the obervation and action space + +01:06:45.720 --> 01:06:51.559 +for these kind of task can be multimodel + +01:06:49.200 --> 01:06:54.680 +you can use screenshot of course of the + +01:06:51.559 --> 01:06:58.440 +website or you can use a Tex space like + +01:06:54.680 --> 01:07:01.799 +directly you just using raw HTML source + +01:06:58.440 --> 01:07:04.839 +code or you could use a slightly more + +01:07:01.799 --> 01:07:07.200 +structured version called accessib tree + +01:07:04.839 --> 01:07:10.000 +where it is a tree based structure to + +01:07:07.200 --> 01:07:13.000 +represent the website structure and for + +01:07:10.000 --> 01:07:15.920 +the action space you we can use keyboard + +01:07:13.000 --> 01:07:20.559 +to import or Mouse to click hover and + +01:07:15.920 --> 01:07:24.200 +scroll and use browser to go back + +01:07:20.559 --> 01:07:26.119 +so we can perform previously we disc + +01:07:24.200 --> 01:07:28.400 +discussed a bit about using + +01:07:26.119 --> 01:07:30.720 +prompting few shot in context learning + +01:07:28.400 --> 01:07:34.200 +to provide a bit of General guidance and + +01:07:30.720 --> 01:07:37.119 +two examples so just to let you know + +01:07:34.200 --> 01:07:38.880 +about the performance about GP how gp4 + +01:07:37.119 --> 01:07:41.480 +would perform in this kind of complex + +01:07:38.880 --> 01:07:44.279 +task we just provide the gp4 with this + +01:07:41.480 --> 01:07:47.119 +kind of instruction we we are we let it + +01:07:44.279 --> 01:07:48.839 +know it's an atomous intelligent engine + +01:07:47.119 --> 01:07:51.000 +and you can observe the following + +01:07:48.839 --> 01:07:52.520 +information we describe a little bit of + +01:07:51.000 --> 01:07:54.240 +what the observation space would look + +01:07:52.520 --> 01:07:56.920 +like and you can do the following + +01:07:54.240 --> 01:07:59.640 +actions what the actions space are and + +01:07:56.920 --> 01:08:01.640 +provide them a little bit of examples + +01:07:59.640 --> 01:08:04.079 +like what the observation here would be + +01:08:01.640 --> 01:08:07.079 +you can see the observation here is like + +01:08:04.079 --> 01:08:09.520 +a tree based a tree tree like structure + +01:08:07.079 --> 01:08:11.839 +of basically a filtered down version of + +01:08:09.520 --> 01:08:14.279 +the HTML a representation of the web + +01:08:11.839 --> 01:08:17.400 +page you have URL of course you have the + +01:08:14.279 --> 01:08:19.719 +objective or the task description and an + +01:08:17.400 --> 01:08:21.440 +example output here you can see we are + +01:08:19.719 --> 01:08:23.839 +using uh Chain of Thought based + +01:08:21.440 --> 01:08:26.400 +reasoning uh seeing step by step what + +01:08:23.839 --> 01:08:29.120 +you should do and act here because + +01:08:26.400 --> 01:08:31.640 
+01:06:45,720 --> 01:06:51,559
+the observation and action space for
+these kinds of tasks can be multimodal
+
+01:06:49,200 --> 01:06:54,680
+you can use screenshots of course of the
+
+01:06:51,559 --> 01:06:58,440
+website or you can use a text space like
+
+01:06:54,680 --> 01:07:01,799
+directly just using the raw HTML source
+
+01:06:58,440 --> 01:07:04,839
+code or you could use a slightly more
+
+01:07:01,799 --> 01:07:07,200
+structured version called the accessibility tree
+
+01:07:04,839 --> 01:07:10,000
+which is a tree-based structure to
+
+01:07:07,200 --> 01:07:13,000
+represent the website structure and for
+
+01:07:10,000 --> 01:07:15,920
+the action space we can use the keyboard
+
+01:07:13,000 --> 01:07:20,559
+to type or the mouse to click hover and
+
+01:07:15,920 --> 01:07:24,200
+scroll and use the browser to go back
+
+01:07:20,559 --> 01:07:26,119
+previously we
+
+01:07:24,200 --> 01:07:28,400
+discussed a bit about using
+
+01:07:26,119 --> 01:07:30,720
+prompting few-shot in-context learning
+
+01:07:28,400 --> 01:07:34,200
+to provide a bit of general guidance and
+
+01:07:30,720 --> 01:07:37,119
+a few examples so just to let you know
+
+01:07:34,200 --> 01:07:38,880
+about the performance how GPT-4
+
+01:07:37,119 --> 01:07:41,480
+would perform on this kind of complex
+
+01:07:38,880 --> 01:07:44,279
+task we just provide GPT-4 with this
+
+01:07:41,480 --> 01:07:47,119
+kind of instruction we let it
+
+01:07:44,279 --> 01:07:48,839
+know it's an autonomous intelligent agent
+
+01:07:47,119 --> 01:07:51,000
+and you can observe the following
+
+01:07:48,839 --> 01:07:52,520
+information we describe a little bit of
+
+01:07:51,000 --> 01:07:54,240
+what the observation space would look
+
+01:07:52,520 --> 01:07:56,920
+like and you can do the following
+
+01:07:54,240 --> 01:07:59,640
+actions what the action space is and
+
+01:07:56,920 --> 01:08:01,640
+provide it a little bit of examples
+
+01:07:59,640 --> 01:08:04,079
+like what the observation here would be
+
+01:08:01,640 --> 01:08:07,079
+you can see the observation here is like
+
+01:08:04,079 --> 01:08:09,520
+a tree-based a tree-like structure
+
+01:08:07,079 --> 01:08:11,839
+of basically a filtered-down version of
+
+01:08:09,520 --> 01:08:14,279
+the HTML a representation of the web
+
+01:08:11,839 --> 01:08:17,400
+page you have the URL of course you have the
+
+01:08:14,279 --> 01:08:19,719
+objective or the task description and an
+
+01:08:17,400 --> 01:08:21,440
+example output here you can see we are
+
+01:08:19,719 --> 01:08:23,839
+using uh chain-of-thought-based
+
+01:08:21,440 --> 01:08:26,400
+reasoning uh saying step by step what
+
+01:08:23,839 --> 01:08:29,120
+you should do and act here because
+
+01:08:26,400 --> 01:08:31,640
+the action space was previously provided this
+
+01:08:29,120 --> 01:08:34,199
+kind of action is something we
+
+01:08:31,640 --> 01:08:37,520
+can execute in the environment you can
+
+01:08:34,199 --> 01:08:39,880
+see WebArena is a very challenging task
+
+01:08:37,520 --> 01:08:42,159
+um for humans we asked humans to perform
+
+01:08:39,880 --> 01:08:47,480
+these tasks and they can easily achieve
+
+01:08:42,159 --> 01:08:49,719
+about 78% accuracy within like a time
+
+01:08:47,480 --> 01:08:53,920
+limit of two minutes we give humans 2
+
+01:08:49,719 --> 01:08:56,359
+minutes but with GPT-3.5 with advanced
+
+01:08:53,920 --> 01:08:58,719
+prompting techniques or even like GPT-4
+
+01:08:56,359 --> 01:09:01,960
+with advanced prompting techniques
+
+01:08:58,719 --> 01:09:04,759
+we can only solve about 14% of the tasks
+
+01:09:01,960 --> 01:09:07,400
+so chain of thought indeed helps but it
+
+01:09:04,759 --> 01:09:09,199
+provides limited benefits and GPT-4
+
+01:09:07,400 --> 01:09:11,880
+remains significantly behind human
+
+01:09:09,199 --> 01:09:15,120
+performance and prompt engineering you
+
+01:09:11,880 --> 01:09:18,239
+know sometimes emphasizes large language models'
+
+01:09:15,120 --> 01:09:20,839
+sensitivity to subtle instruction
+
+01:09:18,239 --> 01:09:22,600
+changes because prompt engineering
+
+01:09:20,839 --> 01:09:26,199
+sometimes actually is
+
+01:09:22,600 --> 01:09:28,359
+hard and here are some failure cases
+
+01:09:26,199 --> 01:09:29,920
+for example sometimes the language
+
+01:09:28,359 --> 01:09:32,040
+model just doesn't know what button to
+
+01:09:29,920 --> 01:09:34,279
+click show me the customers who have
+
+01:09:32,040 --> 01:09:37,279
+expressed dissatisfaction
+
+01:09:34,279 --> 01:09:39,319
+with a zip jacket then the
+
+01:09:37,279 --> 01:09:41,000
+correct one as a human you know you
+
+01:09:39,319 --> 01:09:43,679
+should probably go to the catalog
+
+01:09:41,000 --> 01:09:45,440
+product page and check reviews or just
+
+01:09:43,679 --> 01:09:47,799
+go to the review sections and search for
+
+01:09:45,440 --> 01:09:50,239
+the jacket but you know sometimes
+
+01:09:47,799 --> 01:09:52,400
+GPT-4 without this kind of common-sense
+
+01:09:50,239 --> 01:09:55,960
+knowledge would just go to the
+
+01:09:52,400 --> 01:09:57,840
+customers sections so this one
+
+01:09:55,960 --> 01:09:59,760
+basically means that the language model
+
+01:09:57,840 --> 01:10:02,120
+does not have good enough reasoning or
+
+01:09:59,760 --> 01:10:04,520
+planning ability without this basic
+
+01:10:02,120 --> 01:10:07,360
+information this basic knowledge sometimes
+
+01:10:04,520 --> 01:10:10,480
+it's just uh simply not being accurate
+
+01:10:07,360 --> 01:10:13,120
+for example you ask it to uh enter a due
+
+01:10:10,480 --> 01:10:15,239
+date it enters a wrong format and then
+
+01:10:13,120 --> 01:10:18,760
+you know if the website is not designed
+
+01:10:15,239 --> 01:10:21,480
+really well it would just stop at this
+
+01:10:18,760 --> 01:10:24,120
+point because it is an incorrect format
+
+01:10:21,480 --> 01:10:26,679
+ideally you should use the date picker
+
+01:10:24,120 --> 01:10:28,239
+the date-picker-like widget but sometimes the
+
+01:10:26,679 --> 01:10:31,600
+language model will just decide to
+
+01:10:28,239 --> 01:10:35,560
+enter some text itself and sometimes
+
+01:10:31,600 --> 01:10:39,440
+it's trivial errors like in our study
+
+01:10:35,560 --> 01:10:42,840
+with GPT-4 about 21% of examples
+
+01:10:39,440 --> 01:10:44,760
+fail due to repeated typing we think
+
+01:10:42,840 --> 01:10:47,280
+this is probably related to
+
+01:10:44,760 --> 01:10:50,280
+hallucination effects common in
+
+01:10:47,280 --> 01:10:53,320
+large language models they will just enter a
+
+01:10:50,280 --> 01:10:55,960
+bunch of you know DMV area DMV area DMV
+
+01:10:53,320 --> 01:10:59,239
+area over and over and then these errors
+
+01:10:55,960 --> 01:11:01,199
+sometimes they are not so trivial errors
+
+01:10:59,239 --> 01:11:04,880
+here's a very interesting one assign
+
+01:11:01,199 --> 01:11:08,360
+this issue to myself this is a GitLab um
+
+01:11:04,880 --> 01:11:11,600
+page and if you ask uh like GPT-4 to
+
+01:11:08,360 --> 01:11:16,040
+perform this task for you it doesn't know
+
+01:11:11,600 --> 01:11:18,560
+myself it just enters myself as a string
+
+01:11:16,040 --> 01:11:21,159
+into the assignee field it actually needs to
+
+01:11:18,560 --> 01:11:24,719
+query who it itself is it probably
+
+01:11:21,159 --> 01:11:27,480
+needs to enter me or its own
+
+01:11:24,719 --> 01:11:30,560
+username in this field so it's kind of
+
+01:11:27,480 --> 01:11:33,040
+interesting finally I'm going to touch a
+
+01:11:30,560 --> 01:11:35,440
+little bit on training methods for
+
+01:11:33,040 --> 01:11:38,080
+improving the agents previously we have
+
+01:11:35,440 --> 01:11:40,120
+covered tasks and
+
+01:11:38,080 --> 01:11:43,280
+applications I presented some of the
+
+01:11:40,120 --> 01:11:44,440
+prompting techniques and also uh one of
+
+01:11:43,280 --> 01:11:46,880
+the state-of-the-art
+
+01:11:44,440 --> 01:11:49,320
+uh uh one of the state-of-the-art you
+
+01:11:46,880 --> 01:11:52,480
+know um like benchmarks so you have this
+
+01:11:49,320 --> 01:11:55,480
+playground you have this environment now
+
+01:11:52,480 --> 01:11:57,600
+but you are still not satisfied with uh
+
+01:11:55,480 --> 01:12:00,480
+the performance like even if you do all
+
+01:11:57,600 --> 01:12:01,639
+the like chain-of-thought prompting and
+
+01:12:00,480 --> 01:12:04,560
+all that
+
+01:12:01,639 --> 01:12:07,120
+stuff so for this topic the
+
+01:12:04,560 --> 01:12:10,159
+learning of language model agents
+
+01:12:07,120 --> 01:12:12,239
+I'm going to cover mainly three uh types
+
+01:12:10,159 --> 01:12:15,080
+of learning first is in-context
+
+01:12:12,239 --> 01:12:18,679
+learning some may argue oh this is just
+
+01:12:15,080 --> 01:12:21,760
+prompting yes I agree but you can still
+
+01:12:18,679 --> 01:12:24,920
+probably get the most out of it by
+
+01:12:21,760 --> 01:12:26,920
+providing better um few-shot examples
+
+01:12:24,920 --> 01:12:29,440
+and of course there's supervised fine-
+
+01:12:26,920 --> 01:12:31,760
+tuning and basically this
+
+01:12:29,440 --> 01:12:34,560
+is learning from experts supposedly if
+
+01:12:31,760 --> 01:12:36,800
+you have good ground-truth trajectories
+
+01:12:34,560 --> 01:12:39,760
+of how a human would perform a task you
+
+01:12:36,800 --> 01:12:42,080
+probably can use this data and
+
+01:12:39,760 --> 01:12:44,199
+finally if you're thinking about agents
+
+01:12:42,080 --> 01:12:48,159
+that interact with an um
+
+01:12:44,199 --> 01:12:50,600
+environment um you know a quite popular
+
+01:12:48,159 --> 01:12:52,840
+technique for uh these types of tasks is
+
+01:12:50,600 --> 01:12:54,480
+using reinforcement learning because if
+
+01:12:52,840 --> 01:12:57,440
+you have a good enough environment you
+
+01:12:54,480 --> 01:13:00,960
+probably can learn from the environment
+
+01:12:57,440 --> 01:13:03,800
+so first a little bit of uh background
+
+01:13:00,960 --> 01:13:06,440
+on in-context learning so the language
+
+01:13:03,800 --> 01:13:08,480
+model basically the language model can
+
+01:13:06,440 --> 01:13:11,400
+perform a task by just conditioning on
+
+01:13:08,480 --> 01:13:14,199
+input-output examples without optimizing
+
+01:13:11,400 --> 01:13:16,719
+any parameters sometimes it is because
+
+01:13:14,199 --> 01:13:19,520
+we don't have access to these parameters
+
+01:13:16,719 --> 01:13:22,520
+sometimes it is too costly to train but
+
+01:13:19,520 --> 01:13:25,719
+nonetheless this is a very popular way
+
+01:13:22,520 --> 01:13:28,080
+of doing uh learning
+
+01:13:25,719 --> 01:13:30,159
+just like the previously shown example on
+
+01:13:28,080 --> 01:13:32,520
+how we get the benchmark
+
+01:13:30,159 --> 01:13:35,639
+performance the like baseline performance
+
+01:13:32,520 --> 01:13:38,360
+on WebArena tasks uh we provide a
+
+01:13:35,639 --> 01:13:41,800
+little bit of an example of what the
+
+01:13:38,360 --> 01:13:46,120
+observation would look like and what the
+
+01:13:41,800 --> 01:13:48,960
+actions should be given a task so just
+
+01:13:46,120 --> 01:13:52,719
+like this if you are doing in-context
+
+01:13:48,960 --> 01:13:55,679
+learning we can just provide some uh like
+
+01:13:52,719 --> 01:13:57,880
+example user observation here like
+
+01:13:55,679 --> 01:14:00,760
+for example this observation represents
+
+01:13:57,880 --> 01:14:04,719
+a web page uh it is a trimmed-down version
+
+01:14:00,760 --> 01:14:06,840
+of the HTML document tree and then the
+
+01:14:04,719 --> 01:14:09,159
+example assistant part is basically kind
+
+01:14:06,840 --> 01:14:12,560
+of the you can think of it as an output
+
+01:14:09,159 --> 01:14:14,920
+label where you you know you think step
+
+01:14:12,560 --> 01:14:17,600
+by step you you kind of show it how
+
+01:14:14,920 --> 01:14:20,960
+to do this uh chain-of-thought reasoning
+
+01:14:17,600 --> 01:14:23,800
+and also you show it kind of uh
+
+01:14:20,960 --> 01:14:27,080
+what the format will be like
+
+01:14:23,800 --> 01:14:30,239
+to issue the kind of uh stop the stop
+
+01:14:27,080 --> 01:14:33,719
+action usually with proper
+
+01:14:30,239 --> 01:14:35,920
+instructions usually with proper
+
+01:14:33,719 --> 01:14:39,040
+instructions providing all
+
+01:14:35,920 --> 01:14:41,199
+the action space and all the format and
+
+01:14:39,040 --> 01:14:44,520
+a good several you know few-shot
+
+01:14:41,199 --> 01:14:47,880
+examples of these kinds of observation and
+
+01:14:44,520 --> 01:14:50,520
+action um pairs at least the
+
+01:14:47,880 --> 01:14:52,880
+language models are good enough to figure
+
+01:14:50,520 --> 01:14:56,159
+out the format they will usually just
+
+01:14:52,880 --> 01:14:58,440
+generate um like these kinds of uh
+
+01:14:56,159 --> 01:15:01,480
+like stop actions this kind of
+
+01:14:58,440 --> 01:15:03,840
+action plus parentheses with arguments
+
+01:15:01,480 --> 01:15:06,159
+format so in-context
+
+01:15:03,840 --> 01:15:09,760
+learning sometimes is really good for
+
+01:15:06,159 --> 01:15:12,560
+tuning the language model to your like
+
+01:15:09,760 --> 01:15:15,800
+specification of this format
+
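+As a concrete illustration of this few-shot setup, here is a rough sketch of how such a prompt could be assembled as chat messages. The observation text, action names, and helper function are invented stand-ins; they only mirror the structure described above (an instruction, an example observation, an example chain-of-thought action):
+
+```python
+# Hypothetical few-shot prompt for a web agent; the exemplar contents are
+# abbreviated stand-ins for a real accessibility-tree observation.
+SYSTEM = (
+    "You are an autonomous intelligent agent. You will be given the "
+    "accessibility tree of a web page, the page URL, and an objective. "
+    "Reason step by step, then issue one action like click(id) or stop(answer)."
+)
+
+FEW_SHOT = [
+    {"role": "user", "content": (
+        "OBSERVATION: [1744] link 'HP CB782A#ABA 360 Inkjet Fax Machine' ...\n"
+        "URL: http://onestopmarket.com/...\n"
+        "OBJECTIVE: What is the price of the HP Inkjet Fax Machine?")},
+    {"role": "assistant", "content": (
+        "Let's think step by step. The price is listed next to the product. "
+        "In summary, the next action I will perform is stop($279.49).")},
+]
+
+def build_messages(observation: str, url: str, objective: str) -> list:
+    user = f"OBSERVATION: {observation}\nURL: {url}\nOBJECTIVE: {objective}"
+    return [{"role": "system", "content": SYSTEM}, *FEW_SHOT,
+            {"role": "user", "content": user}]
+```
+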
+01:15:12,560 --> 01:15:21,840
+however there are a couple oh okay
+yeah so then we
+
+01:15:21,840 --> 01:15:25,920
+have like supervised fine-tuning where
+
+01:15:24,199 --> 01:15:28,880
+you can collect large amounts of expert
+
+01:15:25,920 --> 01:15:30,320
+trajectories from like human annotation
+
+01:15:28,880 --> 01:15:32,600
+like for example you have this task
+
+01:15:30,320 --> 01:15:35,400
+intent observation action observation
+
+01:15:32,600 --> 01:15:38,080
+action pairs and then finally you can
+
+01:15:35,400 --> 01:15:41,120
+fine-tune the agent with like the cross-
+
+01:15:38,080 --> 01:15:44,880
+entropy loss like for example a lot
+
+01:15:41,120 --> 01:15:47,520
+of existing work tries to optimize language
+
+01:15:44,880 --> 01:15:50,560
+model agents by just collecting human
+
+01:15:47,520 --> 01:15:53,199
+annotations it is going
+
+01:15:50,560 --> 01:15:55,199
+to work super well but it is data-hungry
+
+01:15:53,199 --> 01:15:57,360
+and cannot learn much from failed
+
+01:15:55,199 --> 01:16:00,600
+trajectories for example if you have a
+
+01:15:57,360 --> 01:16:04,320
+failed trajectory uh you
+
+01:16:00,600 --> 01:16:06,440
+probably would not be able to um uh you
+
+01:16:04,320 --> 01:16:08,800
+will not be able to use it you kind of
+
+01:16:06,440 --> 01:16:10,520
+wasted the failed trajectory even if say
+
+01:16:08,800 --> 01:16:13,440
+only the last step is
+
+01:16:10,520 --> 01:16:16,440
+incorrect and you know there are several
+
+01:16:13,440 --> 01:16:18,679
+like data augmentation techniques where
+
+01:16:16,440 --> 01:16:22,480
+for example in this Minecraft-playing
+
+01:16:18,679 --> 01:16:24,639
+example you can just let it uh do data
+
+01:16:22,480 --> 01:16:25,679
+augmentation based on you know YouTube
+
+01:16:24,639 --> 01:16:28,480
+videos
+
+01:16:25,679 --> 01:16:30,440
+or Wikipedia or Reddit
+
+01:16:28,480 --> 01:16:32,760
+threads and
+
+01:16:30,440 --> 01:16:35,000
+finally uh we have these uh like
+
+01:16:32,760 --> 01:16:38,199
+reinforcement-learning-based methods a
+
+01:16:35,000 --> 01:16:41,880
+lot of like uh ongoing research in this
+
+01:16:38,199 --> 01:16:44,080
+area previously we had like
+
+01:16:41,880 --> 01:16:47,120
+research on reinforcement learning from human
+
+01:16:44,080 --> 01:16:49,199
+feedback but then this time uh without
+
+01:16:47,120 --> 01:16:52,760
+human feedback we probably can just
+
+01:16:49,199 --> 01:16:55,800
+replace all these rewards with a real
+
+01:16:52,760 --> 01:16:58,120
+environment for example if you
+
+01:16:55,800 --> 01:17:01,239
+uh have access to WebArena whether
+
+01:16:58,120 --> 01:17:03,600
+or not a task is successful can be
+
+01:17:01,239 --> 01:17:06,080
+automatically determined with that
+
+01:17:03,600 --> 01:17:09,440
+environment so it provides natural
+
+01:17:06,080 --> 01:17:11,840
+feedback from the environment so you
+
+01:17:09,440 --> 01:17:14,520
+know I also listed some of the
+
+01:17:11,840 --> 01:17:16,840
+references here if you are interested you
+
+01:17:14,520 --> 01:17:19,679
+can go to the course website or check
+
+01:17:16,840 --> 01:17:21,800
+the slides for some of this ongoing
+
+01:17:19,679 --> 01:17:25,040
+research but I'm not going to
+
+01:17:21,800 --> 01:17:27,159
+cover much in detail here due to
+
+01:17:25,040 --> 01:17:32,199
+the time constraint but these are
+
+01:17:27,159 --> 01:17:35,600
+generally the methods for learning agents
+
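+To recap the supervised route in code: below is a minimal sketch of behavior cloning on expert trajectories, where each (observation, action) pair becomes a next-token prediction problem under cross-entropy loss. This is a generic PyTorch/Transformers-style outline with assumed shapes and API, not any specific paper's recipe; the reinforcement-learning alternative would replace this loss with an update weighted by the environment's automatic success signal:
+
+```python
+import torch
+
+def sft_step(model, tokenizer, observation: str, expert_action: str, optimizer):
+    """One behavior-cloning step on a single (observation, action) pair."""
+    prompt_ids = tokenizer(observation, return_tensors="pt").input_ids
+    action_ids = tokenizer(expert_action, return_tensors="pt").input_ids
+    input_ids = torch.cat([prompt_ids, action_ids], dim=1)
+    labels = input_ids.clone()
+    labels[:, : prompt_ids.shape[1]] = -100  # score only the action tokens
+    loss = model(input_ids, labels=labels).loss  # cross-entropy inside
+    loss.backward()
+    optimizer.step()
+    optimizer.zero_grad()
+    return loss.item()
+```
+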
+01:17:32,199 --> 01:17:40,000
+so I think yeah if you have any questions
+
+01:17:35,600 --> 01:17:42,800
+about anything tools and agents uh feel
+
+01:17:40,000 --> 01:17:45,199
+free to ask yeah thanks a lot we have
+
+01:17:42,800 --> 01:17:48,880
+time for maybe one or one or two quick
+
+01:17:45,199 --> 01:17:52,719
+questions um I'd like to preface this by
+
+01:17:48,880 --> 01:17:54,840
+saying that uh Frank gave a really good
+
+01:17:52,719 --> 01:17:57,120
+example of WebArena and I think Web-
+
+01:17:54,840 --> 01:17:59,440
+Arena is a good example of some of the
+
+01:17:57,120 --> 01:18:01,320
+challenges in the whole agent space uh
+
+01:17:59,440 --> 01:18:03,000
+not just like web agents but also code
+
+01:18:01,320 --> 01:18:05,600
+generation agents
+
+01:18:03,000 --> 01:18:08,159
+robots um you know embodied environments
+
+01:18:05,600 --> 01:18:10,560
+and stuff which is that there's a lot of
+
+01:18:08,159 --> 01:18:13,000
+really simple ones that were
+
+01:18:10,560 --> 01:18:15,080
+interesting a few years ago but are kind
+
+01:18:13,000 --> 01:18:16,400
+of like solved now or close enough to
+
+01:18:15,080 --> 01:18:18,520
+solved that they don't test the really
+
+01:18:16,400 --> 01:18:20,239
+hard things like planning you know
+
+01:18:18,520 --> 01:18:22,360
+ability to handle diverse environments
+
+01:18:20,239 --> 01:18:25,199
+and stuff like that and so WebArena is just
+
+01:18:22,360 --> 01:18:27,040
+one example and then even if you're
+
+01:18:25,199 --> 01:18:28,600
+interested in other varieties of things
+
+01:18:27,040 --> 01:18:32,480
+you're going to be facing the same problems
+
+01:18:28,600 --> 01:18:34,400
+of evaluation so um evaluation and modeling
+
+01:18:32,480 --> 01:18:36,280
+cool um any any things people would like
+
+01:18:34,400 --> 01:18:38,520
+to ask we can also take things up front
+
+01:18:36,280 --> 01:18:38,520
+if
diff --git a/CMU Advanced NLP 2024 (21) Complex Reasoning/CMU Advanced NLP 2024 (21) Complex Reasoning.mp4 b/CMU Advanced NLP 2024 (21) Complex Reasoning/CMU Advanced NLP 2024 (21) Complex Reasoning.mp4
new file mode 100644
index 0000000000000000000000000000000000000000..93aa1495cc3f57711f2796f9d0fb1ae154044840
--- /dev/null
+++ b/CMU Advanced NLP 2024 (21) Complex Reasoning/CMU Advanced NLP 2024 (21) Complex Reasoning.mp4
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:10a2dabeb41186cd432caae205d3e22b8ad34e91253d174abfdadaa82ea581f2
+size 56293331
diff --git a/CMU Advanced NLP 2024 (21) Complex Reasoning/metadata.json b/CMU Advanced NLP 2024 (21) Complex Reasoning/metadata.json
new file mode 100644
index 0000000000000000000000000000000000000000..7d691c811eedb6591732e30b638804c47de46e6d
--- /dev/null
+++ b/CMU Advanced NLP 2024 (21) Complex Reasoning/metadata.json
@@ -0,0 +1,4 @@
+{
+  "url": "https://www.youtube.com/watch?v=mPd2hFmzjWE",
+  "title": "CMU Advanced NLP 2024 (21) Complex Reasoning"
+}
\ No newline at end of file
diff --git a/CMU Advanced NLP 2024 (21) Complex Reasoning/transcript.srt b/CMU Advanced NLP 2024 (21) Complex Reasoning/transcript.srt
new file mode 100644
index 0000000000000000000000000000000000000000..c26ba9da6dd4b233224ae8a6602de8afa643b632
--- /dev/null
+++ b/CMU Advanced NLP 2024 (21) Complex Reasoning/transcript.srt
@@ -0,0 +1,5007 @@
+1
+00:00:00,280 --> 00:00:05,120
+so I'd like to go ahead with uh complex
+
+2
+00:00:02,399 --> 00:00:08,719
+reasoning and we've talked a little bit
+
+3
+00:00:05,120 --> 00:00:10,719
+about uh reasoning in language models uh
+
+4
+00:00:08,719 --> 00:00:12,160
+up until now and so I'm going to be
+
+5
+00:00:10,719 --> 00:00:15,280
+talking about stuff that we didn't talk
+
+6
+00:00:12,160 --> 00:00:17,240
+about yet um this might be a little bit
+
+7
+00:00:15,280 --> 00:00:19,199
+short because of that because I'm not
+
+8
+00:00:17,240 --> 00:00:20,640
+talking about like programs because we
+
+9
+00:00:19,199 --> 00:00:22,080
+talked about that in the code generation
+
+10
+00:00:20,640 --> 00:00:24,199
+class and we already talked a little bit
+
+11
+00:00:22,080 --> 00:00:26,320
+about some of the basics here but um you
+
+12
+00:00:24,199 --> 00:00:30,119
+know if we have time at the end I'd be
+
+13
+00:00:26,320 --> 00:00:30,840
+happy to discuss free-form also so what
+
+14
+00:00:30,119 --> 00:00:34,320
+is
+
+15
+00:00:30,840 --> 00:00:35,920
+reasoning um the basic idea is using
+
+16
+00:00:34,320 --> 00:00:37,680
+evidence and logic to arrive at
+
+17
+00:00:35,920 --> 00:00:40,200
+conclusions and make
+
+18
+00:00:37,680 --> 00:00:43,760
+judgments
+
+19
+00:00:40,200 --> 00:00:48,039
+and what it is in language models is a
+
+20
+00:00:43,760 --> 00:00:49,399
+little bit um you know less clear uh but
+
+21
+00:00:48,039 --> 00:00:52,680
+if we talk about it kind of like from
+
+22
+00:00:49,399 --> 00:00:56,280
+the philosophical standpoint um there
+
+23
+00:00:52,680 --> 00:00:58,399
+are two varieties of this one is formal
+
+24
+00:00:56,280 --> 00:01:01,680
+uh reasoning and formal reasoning is
+
+25
+00:00:58,399 --> 00:01:04,239
+mostly based on strict truth values so
+
+26
+00:01:01,680 --> 00:01:05,920
+it's kind of like um you can definitely
+
+27
+00:01:04,239 --> 00:01:08,360
+say this is true you can definitely say
+
+28
+00:01:05,920 --> 00:01:11,680
+this is not true
+
+29
+00:01:08,360 --> 00:01:13,799
+and in real life there's very little
+
+30
+00:01:11,680 --> 00:01:15,759
+actual formal reasoning outside of like
+
+31
+00:01:13,799 --> 00:01:17,960
+for example mathematics or maybe you
+
+32
+00:01:15,759 --> 00:01:20,240
+know algorithms computer science and
+
+33
+00:01:17,960 --> 00:01:21,759
+other things like that um and then
+
+34
+00:01:20,240 --> 00:01:23,240
+separately from that we have informal
+
+35
+00:01:21,759 --> 00:01:27,040
+reasoning based on experience and
+
+36
+00:01:23,240 --> 00:01:30,439
+intuition and actually um this
+
+37
+00:01:27,040 --> 00:01:32,360
+was uh rather elusive uh until
+
+38
+00:01:30,439 --> 00:01:33,720
+large language models you know people
+
+39
+00:01:32,360 --> 00:01:35,560
+were working on it but it was really
+
+40
+00:01:33,720 --> 00:01:38,119
+hard and this is like one of the big
+
+41
+00:01:35,560 --> 00:01:41,479
+breakthroughs I think of the past few
+
+42
+00:01:38,119 --> 00:01:46,799
+years um I should note that this uh
+
+43
+00:01:41,479 --> 00:01:48,520
+paper here uh Huang and Chang is a kind of
+
+44
+00:01:46,799 --> 00:01:50,119
+review survey paper of reasoning in
+
+45
+00:01:48,520 --> 00:01:51,520
+large language models it's in the
+
+46
+00:01:50,119 --> 00:01:54,719
+references so if you're interested you
+
+47
+00:01:51,520 --> 00:01:57,600
+can take a look at that too um but
+
+48
+00:01:54,719 --> 00:02:00,840
+there's three kinds of reasoning uh
+
+49
+00:01:57,600 --> 00:02:03,280
+there's many kinds of reasoning but
+
+50
+00:01:59,200 --> 00:02:03,280
+there's three kinds of reasoning in
+
+51
+00:02:00,840 --> 00:02:06,240
+particular that I'd like to talk about
+
+52
+00:02:03,280 --> 00:02:08,840
+um from the point of view of today and
+
+53
+00:02:06,240 --> 00:02:10,360
+the first one is uh deductive reasoning
+
+54
+00:02:08,840 --> 00:02:13,080
+and deductive reasoning is basically
+
+55
+00:02:10,360 --> 00:02:16,040
+using logic to go from a premise to a
+
+56
+00:02:13,080 --> 00:02:18,440
+conclusion and this is largely what
+
+57
+00:02:16,040 --> 00:02:19,879
+people not entirely but largely what
+
+58
+00:02:18,440 --> 00:02:22,400
+people talk about when they think about
+
+59
+00:02:19,879 --> 00:02:25,879
+formal reasoning and so basically you
+
+60
+00:02:22,400 --> 00:02:28,640
+have several premises um like all
+
+61
+00:02:25,879 --> 00:02:32,120
+mammals have kidneys and all whales are
+
+62
+00:02:28,640 --> 00:02:35,239
+mammals and then from this uh you can go
+
+63
+00:02:32,120 --> 00:02:35,239
+to all whales have
+
+64
+00:02:35,440 --> 00:02:40,640
+kidneys then separately there's
+
+65
+00:02:38,000 --> 00:02:44,040
+inductive reasoning and inductive
+
+66
+00:02:40,640 --> 00:02:46,040
+reasoning is um from an
+
+67
+00:02:44,040 --> 00:02:48,480
+observation uh predicting a likely
+
+68
+00:02:46,040 --> 00:02:50,080
+conclusion or predicting a likely kind of
+
+69
+00:02:48,480 --> 00:02:53,640
+generalized
+
+70
+00:02:50,080 --> 00:02:55,360
+conclusion um so this is one example uh
+
+71
+00:02:53,640 --> 00:02:56,920
+when we see a creature with wings it is
+
+72
+00:02:55,360 --> 00:02:58,599
+usually a bird we see a creature with
+
+73
+00:02:56,920 --> 00:03:00,400
+wings the creature is likely to be a
+
+74
+00:02:58,599 --> 00:03:02,879
+bird so this is kind of
+
+75
+00:03:00,400 --> 00:03:05,319
+like a soft version of deduction another
+
+76
+00:03:02,879 --> 00:03:07,440
+common thing is like every single
+
+77
+00:03:05,319 --> 00:03:10,760
+creature I have seen with wings is a
+
+78
+00:03:07,440 --> 00:03:12,480
+bird and then you can kind of um induce
+
+79
+00:03:10,760 --> 00:03:16,799
+that all
+
+80
+00:03:12,480 --> 00:03:19,159
+uh like all uh creatures with wings are
+
+81
+00:03:16,799 --> 00:03:21,120
+birds but that might not be true it's
+
+82
+00:03:19,159 --> 00:03:23,879
+not necessarily logically entailed but
+
+83
+00:03:21,120 --> 00:03:27,560
+you make that kind
+
+84
+00:03:23,879 --> 00:03:31,000
+of logical conclusion uh without it
+
+85
+00:03:27,560 --> 00:03:32,840
+being formally uh correct or verified
+
+86
+00:03:31,000 --> 00:03:34,720
+and then the final one is abductive
+
+87
+00:03:32,840 --> 00:03:38,000
+reasoning and so this is from an
+
+88
+00:03:34,720 --> 00:03:40,760
+observation we predict the most likely
+
+89
+00:03:38,000 --> 00:03:42,760
+explanation and so for example if we
+
+90
+00:03:40,760 --> 00:03:44,480
+have something like the car cannot start
+
+91
+00:03:42,760 --> 00:03:48,319
+and there is a puddle of liquid under
+
+92
+00:03:44,480 --> 00:03:50,200
+the engine um then we might have a
+
+93
+00:03:48,319 --> 00:03:53,360
+likely explanation that the car has a
+
+94
+00:03:50,200 --> 00:03:55,280
+leak in the radiator so we're going from
+
+95
+00:03:53,360 --> 00:03:58,760
+kind of uh the
+
+96
+00:03:55,280 --> 00:04:00,879
+car you know these these things and then
+
+97
+00:03:58,760 --> 00:04:02,280
+we try to predict the reason why this
+
+98
+00:04:00,879 --> 00:04:05,040
+happens so we're trying to predict like
+
+99
+00:04:02,280 --> 00:04:07,360
+reverse causal links
+
+100
+00:04:05,040 --> 00:04:08,480
+essentially um there's other types of
+
+101
+00:04:07,360 --> 00:04:10,400
+reasoning that I'm not going to talk
+
+102
+00:04:08,480 --> 00:04:12,159
+about as much like analogical reasoning
+
+103
+00:04:10,400 --> 00:04:14,079
+and and things like this but uh these
+
+104
+00:04:12,159 --> 00:04:15,440
+are the three main ones I want to talk
+
+105
+00:04:14,079 --> 00:04:17,720
+about
+
+106
+00:04:15,440 --> 00:04:22,040
+today uh one thing I should point out is
+
+107
+00:04:17,720 --> 00:04:24,400
+like even in philosophy or you know
+
+108
+00:04:22,040 --> 00:04:26,240
+like even when you read descriptions
+
+109
+00:04:24,400 --> 00:04:29,280
+about these various types of reasoning
+
+110
+00:04:26,240 --> 00:04:31,880
+the types are a little bit vague so um
+
+111
+00:04:29,280 --> 00:04:35,280
+take these as like
+
+112
+00:04:31,880 --> 00:04:37,240
+general you know general directions
+
+113
+00:04:35,280 --> 00:04:39,400
+and not strict rules because like which
+
+114
+00:04:37,240 --> 00:04:42,120
+falls under which category also can
+
+115
+00:04:39,400 --> 00:04:44,880
+be a little bit uh you know unclear uh
+
+116
+00:04:42,120 --> 00:04:44,880
+according to various
+
+117
+00:04:45,479 --> 00:04:53,440
+definitions cool um so first before
+
+118
+00:04:49,840 --> 00:04:55,720
+getting into formal reasoning methods
+
+119
+00:04:53,440 --> 00:04:57,759
+or before getting into the bulk of the
+
+120
+00:04:55,720 --> 00:05:00,000
+talk which is going to be about LLMs I
+
+121
+00:04:57,759 --> 00:05:02,479
+want to talk about some pre-LLM reasoning
+
+122
+00:05:00,000 --> 00:05:03,720
+methods and the first one is kind of
+
+123
+00:05:02,479 --> 00:05:05,160
+like formal reasoning within
+
+124
+00:05:03,720 --> 00:05:07,320
+computational
+
+125
+00:05:05,160 --> 00:05:09,840
+semantics and this has been around for a
+
+126
+00:05:07,320 --> 00:05:12,479
+really long time um it's also kind of
+
+127
+00:05:09,840 --> 00:05:15,000
+what powered the things that worked over
+
+128
+00:05:12,479 --> 00:05:21,039
+knowledge bases and other things like
+
+129
+00:05:15,000 --> 00:05:23,639
+this um and the way it works is it does
+
+130
+00:05:21,039 --> 00:05:27,600
+derivational um
+
+131
+00:05:23,639 --> 00:05:31,800
+reasoning by uh sorry I can't read that
+
+132
+00:05:27,600 --> 00:05:34,720
+in the back um by starting out with
+
+133
+00:05:31,800 --> 00:05:36,080
+certain premises and getting to um
+
+134
+00:05:34,720 --> 00:05:40,000
+getting to final
+
+135
+00:05:36,080 --> 00:05:43,039
+conclusions so there's ways that you can
+
+136
+00:05:40,000 --> 00:05:44,060
+write this I think you might have
+
+137
+00:05:43,039 --> 00:05:47,080
+seen
+
+138
+00:05:44,060 --> 00:05:50,479
+um
+
+139
+00:05:47,080 --> 00:05:54,240
+um you might have seen
+
+140
+00:05:50,479 --> 00:05:58,319
+uh this in uh another like math class or
+
+141
+00:05:54,240 --> 00:06:02,440
+something but uh we have symbols like
+
+142
+00:05:58,319 --> 00:06:02,440
+"for all" and um
+
+143
+00:06:03,039 --> 00:06:08,280
+"exists" let's
+
+144
+00:06:04,960 --> 00:06:10,960
+see yeah we have things like "for all" and
+
+145
+00:06:08,280 --> 00:06:13,319
+"exists" and like for all
+
+146
+00:06:10,960 --> 00:06:16,240
+x
+
+147
+00:06:13,319 --> 00:06:20,479
+die of x means
+
+148
+00:06:16,240 --> 00:06:23,919
+everything has died and this
+
+149
+00:06:20,479 --> 00:06:27,360
+uh implies that Mia and Zed have
+
+150
+00:06:23,919 --> 00:06:30,440
+died um
+
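+Written out cleanly, the example being read off the slide is a universally quantified formula and the instantiations it licenses (standard first-order-logic notation; Mia and Zed are the constants from the textbook example):
+
+```latex
+\forall x.\, \mathrm{die}(x)
+\;\models\;
+\mathrm{die}(\textsc{mia}) \land \mathrm{die}(\textsc{zed})
+```
+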
+151
+00:06:27,360 --> 00:06:32,240
+so yeah this is actually maybe I'll
+
+152
+00:06:30,440 --> 00:06:33,280
+not I'll not go through this one and let
+
+153
+00:06:33,280 --> 00:06:37,639
+me go through um go to this one so it
+
+154
+00:06:40,639 --> 00:06:45,080
+would be something like uh for
+
+155
+00:06:42,960 --> 00:06:47,480
+all
+
+156
+00:06:45,080 --> 00:06:50,669
+x um
+
+157
+00:06:47,480 --> 00:06:50,669
+uh
+
+158
+00:06:52,039 --> 00:07:00,400
+mammal of x
+
+159
+00:06:56,759 --> 00:07:03,520
+implies has
+
+160
+00:07:00,400 --> 00:07:07,560
+x kidney or something like
+
+161
+00:07:03,520 --> 00:07:09,280
+that and then you would have other rules
+
+162
+00:07:07,560 --> 00:07:11,879
+and you can go through uh through
+
+163
+00:07:09,280 --> 00:07:14,440
+derivations and and other things like
+
+164
+00:07:11,879 --> 00:07:16,120
+this
+
+165
+00:07:14,440 --> 00:07:19,280
+um
+
+166
+00:07:16,120 --> 00:07:21,560
+my favorite reference for this is this
+
+167
+00:07:19,280 --> 00:07:24,599
+Blackburn and Bos book right here it's
+
+168
+00:07:21,560 --> 00:07:26,400
+really well written um and it has like
+
+169
+00:07:24,599 --> 00:07:28,039
+lots of good examples it also explains
+
+170
+00:07:26,400 --> 00:07:30,440
+how you go through derivations and other
+
+171
+00:07:28,039 --> 00:07:34,360
+stuff like that
+
+172
+00:07:30,440 --> 00:07:35,759
+um and actually neural networks can do
+
+173
+00:07:34,360 --> 00:07:37,039
+this variety of reasoning through chain
+
+174
+00:07:35,759 --> 00:07:38,599
+of thought and other things I'm going to
+
+175
+00:07:37,039 --> 00:07:40,120
+talk about today but it's a very rough
+
+176
+00:07:38,599 --> 00:07:43,960
+approximation and it doesn't work
+
+177
+00:07:40,120 --> 00:07:47,199
+particularly well for saying like all
+
+178
+00:07:43,960 --> 00:07:51,240
+you know all people
+
+179
+00:07:47,199 --> 00:07:53,599
+uh like things that apply to
+
+180
+00:07:51,240 --> 00:07:57,240
+all people or things that apply to sets
+
+181
+00:07:53,599 --> 00:08:00,039
+or other things like this so within
+
+182
+00:07:57,240 --> 00:08:02,879
+Prolog you could
+
+183
+00:08:00,039 --> 00:08:06,520
+take a knowledge base and ask the
+
+184
+00:08:02,879 --> 00:08:11,960
+knowledge base like do
+
+185
+00:08:06,520 --> 00:08:12,800
+all people who work at CMU as professors
+
+186
+00:08:11,960 --> 00:08:15,840
+have a
+
+187
+00:08:12,800 --> 00:08:18,080
+PhD and you could like actually examine
+
+188
+00:08:15,840 --> 00:08:20,639
+that based on the knowledge base uh
+
+189
+00:08:18,080 --> 00:08:23,520
+whereas even if you had
+
+190
+00:08:20,639 --> 00:08:25,800
+a language model that had access to
+
+191
+00:08:23,520 --> 00:08:27,280
+everybody's CVs it wouldn't necessarily
+
+192
+00:08:25,800 --> 00:08:28,599
+be able to answer that question and it
+
+193
+00:08:27,280 --> 00:08:31,440
+especially wouldn't be able to answer
+
+194
+00:08:28,599 --> 00:08:31,440
+that question if you were
+
+195
+00:08:32,320 --> 00:08:37,760
+um it wouldn't be able to answer that
+
+196
+00:08:34,640 --> 00:08:42,880
+question if there were like multiple
+
+197
+00:08:37,760 --> 00:08:46,480
+steps so did all people who are working
+
+198
+00:08:42,880 --> 00:08:50,959
+at CMU get their PhD after
+
+199
+00:08:46,480 --> 00:08:52,959
+1990 or something like that um so and
+
+200
+00:08:50,959 --> 00:08:54,680
+the answer to that is obviously no but
+
+201
+00:08:52,959 --> 00:08:56,519
+uh this would be able to find the
+
+202
+00:08:54,680 --> 00:08:58,120
+counter-evidence to that whereas LLMs
+
+203
+00:08:56,519 --> 00:09:00,000
+would not be guaranteed to be able to do
+that
+
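+A toy version of that kind of knowledge-base query is easy to write down. The sketch below is Python rather than Prolog, and all the facts are invented, but it shows the key point: a universally quantified query is checked exhaustively, and a single counterexample settles it definitively, which a language model reading the same CVs is not guaranteed to do:
+
+```python
+# Toy knowledge base; the people, roles, and PhD years are made up.
+FACTS = [
+    {"name": "A", "role": "professor", "employer": "CMU", "phd_year": 1988},
+    {"name": "B", "role": "professor", "employer": "CMU", "phd_year": 2005},
+    {"name": "C", "role": "professor", "employer": "MIT", "phd_year": 1975},
+]
+
+def forall(domain, predicate):
+    """Check a universal claim; return (True, None) or (False, counterexample)."""
+    for item in domain:
+        if not predicate(item):
+            return False, item
+    return True, None
+
+cmu_profs = [p for p in FACTS if p["employer"] == "CMU" and p["role"] == "professor"]
+ok, witness = forall(cmu_profs, lambda p: p["phd_year"] > 1990)
+print(ok, witness)  # False, person "A" is the counter-evidence
+```
+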
+204
+00:09:00,000 --> 00:09:02,800
+so I I think this is really uh like a
+
+205
+00:09:02,800 --> 00:09:06,760
+nice thing to know but there's a couple
+
+206
+00:09:04,279 --> 00:09:09,600
+problems with it the first thing is this
+
+207
+00:09:06,760 --> 00:09:12,519
+really only traffics in like strictly
+
+208
+00:09:09,600 --> 00:09:17,880
+true or strictly false statements um and
+
+209
+00:09:12,519 --> 00:09:20,560
+that's a really big issue um so like if
+
+210
+00:09:17,880 --> 00:09:22,959
+anything's soft uh this sort
+
+211
+00:09:20,560 --> 00:09:24,320
+of formal reasoning starts breaking down
+
+212
+00:09:22,959 --> 00:09:25,880
+the second thing which actually is a
+
+213
+00:09:24,320 --> 00:09:28,959
+really big problem is once you start
+
+214
+00:09:25,880 --> 00:09:30,600
+dealing with more complex things you
+
+215
+00:09:28,959 --> 00:09:32,560
+don't realize it but there's always like
+
+216
+00:09:30,600 --> 00:09:34,560
+exceptions and exceptions to exceptions
+
+217
+00:09:32,560 --> 00:09:36,240
+and other things like that and it actually
+
+218
+00:09:34,560 --> 00:09:38,320
+becomes very computationally expensive
+
+219
+00:09:36,240 --> 00:09:41,640
+to prove anything that's kind of like
+
+220
+00:09:38,320 --> 00:09:43,279
+non-trivial um and so because of that uh
+
+221
+00:09:41,640 --> 00:09:45,839
+I'm not actually going to cover it in
+
+222
+00:09:43,279 --> 00:09:47,880
+the lecture today but recently there are
+
+223
+00:09:45,839 --> 00:09:50,880
+um kind of search algorithms through
+
+224
+00:09:47,880 --> 00:09:54,279
+proof spaces that use uh like neural
+
+225
+00:09:50,880 --> 00:09:55,880
+models to speed up the search by
+
+226
+00:09:54,279 --> 00:09:58,120
+picking the best and most promising
+
+227
+00:09:55,880 --> 00:10:00,800
+hypotheses and uh for example Sean
+
+228
+00:09:58,120 --> 00:10:02,800
+Welleck uh here at CMU is working on that
+
+229
+00:10:00,800 --> 00:10:04,800
+for neural theorem proving where you
+
+230
+00:10:02,800 --> 00:10:05,959
+have uh like mathematical theorem
+
+231
+00:10:04,800 --> 00:10:08,079
+proving and then you use a neural
+
+232
+00:10:05,959 --> 00:10:13,120
+network to pick the best uh paths
+
+233
+00:10:08,079 --> 00:10:14,880
+through logical uh operations so um
+
+234
+00:10:13,120 --> 00:10:19,279
+that's kind of a combination of the more
+
+235
+00:10:14,880 --> 00:10:22,920
+classical and uh modern
+
+236
+00:10:19,279 --> 00:10:26,240
+methods then another thing that's useful
+
+237
+00:10:22,920 --> 00:10:28,079
+to talk about I think this isn't very
+
+238
+00:10:26,240 --> 00:10:31,640
+popular right now but I think it might
+
+239
+00:10:28,079 --> 00:10:34,360
+become more popular uh in the future
+
+240
+00:10:31,640 --> 00:10:36,120
+as we start hitting the limits of uh you
+
+241
+00:10:34,360 --> 00:10:38,560
+know what we can fit into long context
+
+242
+00:10:36,120 --> 00:10:40,040
+windows uh for neural models and stuff
+
+243
+00:10:38,560 --> 00:10:42,600
+like this is memory
+
+244
+00:10:40,040 --> 00:10:48,600
+networks and basically the way that
+
+245
+00:10:42,600 --> 00:10:50,639
+memory networks work is they have write
+
+246
+00:10:48,600 --> 00:10:51,399
+they have the ability to write and read
+
+247
+00:10:50,639 --> 00:10:55,639
+from
+
+248
+00:10:51,399 --> 00:10:57,360
+memory and so this figure is a little
+
+249
+00:10:55,639 --> 00:11:00,440
+bit complex here but
+
+250
+00:10:57,360 --> 00:11:02,880
+basically you have a query and then you
+
+251
+00:11:00,440 --> 00:11:04,560
+get the embedding of the query um you
+
+252
+00:11:02,880 --> 00:11:06,760
+take the inner product you get the soft-
+
+253
+00:11:04,560 --> 00:11:09,720
+max of the inner product so this looks
+
+254
+00:11:06,760 --> 00:11:11,040
+like attention you look up embeddings
+
+255
+00:11:09,720 --> 00:11:12,839
+and you take the weighted sum of the
+
+256
+00:11:11,040 --> 00:11:14,560
+embeddings and you get the like summary
+
+257
+00:11:12,839 --> 00:11:17,680
+of the memory so this is basically
+
+258
+00:11:14,560 --> 00:11:20,320
+attention over a big memory
+
+259
+00:11:17,680 --> 00:11:22,120
+base but then uh memory networks also
+
+260
+00:11:20,320 --> 00:11:24,000
+have the ability to go in and update the
+
+261
+00:11:22,120 --> 00:11:26,639
+memory so they also have write
+
+262
+00:11:24,000 --> 00:11:30,360
+operations so you can read and write
+
+263
+00:11:26,639 --> 00:11:34,320
+from uh from the memory
+
+264
+00:11:30,360 --> 00:11:36,279
+base and so the reason why I say this
+
+265
+00:11:34,320 --> 00:11:40,480
+might become more popular is like one of
+
+266
+00:11:36,279 --> 00:11:42,200
+the big issues with large language
+
+267
+00:11:40,480 --> 00:11:45,320
+models nowadays is they don't get like
+
+268
+00:11:42,200 --> 00:11:47,320
+to continually update their memory um
+
+269
+00:11:45,320 --> 00:11:50,279
+and like one way you can do that is you
+
+270
+00:11:47,320 --> 00:11:52,160
+can just add text to the memory but
+
+271
+00:11:50,279 --> 00:11:54,000
+there are certain limits to that right
+
+272
+00:11:52,160 --> 00:11:56,360
+uh you know text isn't necessarily the
+
+273
+00:11:54,000 --> 00:11:58,959
+best way to encode all of the things
+
+274
+00:11:56,360 --> 00:12:01,880
+that you've seen in the past so I I feel
+
+275
+00:11:58,959 --> 00:12:03,360
+like this kind of architecture might be
+
+276
+00:12:01,880 --> 00:12:04,920
+um how to pin these sorts of
+
+277
+00:12:03,360 --> 00:12:06,480
+architectures onto language models might
+
+278
+00:12:04,920 --> 00:12:08,639
+be an interesting research direction for
+
+279
+00:12:06,480 --> 00:12:08,639
+the future
+
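+In equations-as-code form, the read path just described is attention over a memory matrix, and the write path adds rows to it. A minimal numpy sketch, where the shapes and the append-only write rule are simplifying assumptions rather than the original Memory Networks parameterization:
+
+```python
+import numpy as np
+
+def softmax(x):
+    e = np.exp(x - x.max())
+    return e / e.sum()
+
+class Memory:
+    def __init__(self, dim):
+        self.slots = np.zeros((0, dim))  # one row per stored memory
+
+    def read(self, query):
+        """Attention over memory: softmax of inner products, weighted sum."""
+        if len(self.slots) == 0:
+            return np.zeros_like(query)
+        weights = softmax(self.slots @ query)  # inner products -> softmax
+        return weights @ self.slots            # weighted sum = memory summary
+
+    def write(self, vector):
+        """Simplified write operation: append a new row to the memory."""
+        self.slots = np.vstack([self.slots, vector])
+```
+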
+280
+00:12:08,680 --> 00:12:15,360
+um another thing which I am not
+
+281
+00:12:12,600 --> 00:12:16,720
+going to talk about very much uh
+
+282
+00:12:15,360 --> 00:12:20,560
+because we kind of already talked about
+
+283
+00:12:16,720 --> 00:12:23,560
+it in the code generation um area but
+
+284
+00:12:20,560 --> 00:12:26,959
+it's actually been around for a while is
+
+285
+00:12:23,560 --> 00:12:30,600
+solving questions with sort of symbolic
+
+286
+00:12:26,959 --> 00:12:36,480
+reasoning and the way it works
+
+287
+00:12:30,600 --> 00:12:41,320
+is for example you would have a
+
+288
+00:12:36,480 --> 00:12:43,639
+um you would have a text here and based
+
+289
+00:12:41,320 --> 00:12:47,440
+on the text you can run these sorts of
+
+290
+00:12:43,639 --> 00:12:50,440
+symbolic operations like find and filter
+
+291
+00:12:47,440 --> 00:12:52,720
+and find the max number and relocate and
+
+292
+00:12:50,440 --> 00:12:54,480
+other things like this and this
+
+293
+00:12:52,720 --> 00:12:58,040
+explicitly
+
+294
+00:12:54,480 --> 00:12:59,880
+manipulates uh kind of the attention and
+
+295
+00:12:58,040 --> 00:13:02,519
+the um
+
+296
+00:12:59,880 --> 00:13:03,839
+you can do things like filtering down to
+find the
+
+297
+00:13:03,839 --> 00:13:11,040
+most uh like the highest largest number for
+
+298
+00:13:08,600 --> 00:13:12,800
+example or other things like this and
+
+299
+00:13:11,040 --> 00:13:14,160
+this is kind of interesting because like
+
+300
+00:13:12,800 --> 00:13:17,240
+some of the things that neural networks
+
+301
+00:13:14,160 --> 00:13:20,360
+are bad at are like finding the largest
+
+302
+00:13:17,240 --> 00:13:21,600
+number in a big data set or um finding
+
+303
+00:13:20,360 --> 00:13:23,360
+all of the things where something
+
+304
+00:13:21,600 --> 00:13:26,240
+applies and throwing out all of the
+
+305
+00:13:23,360 --> 00:13:27,959
+things where something doesn't apply so
+
+306
+00:13:26,240 --> 00:13:29,560
+again this isn't used super widely in
+
+307
+00:13:27,959 --> 00:13:31,959
+large language models right now because
+
+308
+00:13:29,560 --> 00:13:33,920
+I feel like um people have been focusing
+
+309
+00:13:31,959 --> 00:13:36,440
+on prompting
+
+310
+00:13:33,920 --> 00:13:38,880
+techniques uh in order to do this sort
+
+311
+00:13:36,440 --> 00:13:41,199
+of reasoning but I think this is another
+
+312
+00:13:38,880 --> 00:13:43,320
+thing that's worth thinking about taking
+
+313
+00:13:41,199 --> 00:13:45,079
+another close look at and seeing if
+
+314
+00:13:43,320 --> 00:13:47,440
+there are ways to incorporate it with
+
+315
+00:13:45,079 --> 00:13:49,320
+the current models because like
+
+316
+00:13:47,440 --> 00:13:50,720
+basically what I wanted to say is like
+
+317
+00:13:49,320 --> 00:13:52,279
+all of the things that I decided to
+
+318
+00:13:50,720 --> 00:13:54,560
+introduce here in this section are
+
+319
+00:13:52,279 --> 00:13:57,600
+things that current models are still not
+
+320
+00:13:54,560 --> 00:14:00,800
+particularly good at like reasoning taking
+
+321
+00:13:57,600 --> 00:14:03,079
+many steps over sets of
+
+322
+00:14:00,800 --> 00:14:05,079
+inputs um reading and writing from
+
+323
+00:14:03,079 --> 00:14:09,839
+memory so that you can remember things
+
+324
+00:14:05,079 --> 00:14:11,720
+over long periods and also um filtering
+
+325
+00:14:09,839 --> 00:14:13,399
+down large pieces of text into smaller
+
+326
+00:14:11,720 --> 00:14:16,040
+pieces of text to find relevant
+
+327
+00:14:13,399 --> 00:14:17,560
+information so um if any of those things
+
+328
+00:14:16,040 --> 00:14:19,880
+sound interesting you can take a look at
+
+329
+00:14:17,560 --> 00:14:22,800
+this but um after this I'd like to go
+
+330
+00:14:19,880 --> 00:14:24,399
+kind of into the you know main event
+
+331
+00:14:22,800 --> 00:14:27,759
+where I talk about the stuff that people
+
+332
+00:14:24,399 --> 00:14:31,040
+are actually using a lot now uh any
+
+333
+00:14:27,759 --> 00:14:31,040
+questions about these three
+
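+As a tiny illustration of why these discrete operations are attractive: the find/filter/max pipeline just described is trivial and exact when run symbolically, in exactly the way it is unreliable when a network has to do it implicitly. Everything below is invented toy data:
+
+```python
+import re
+
+# Toy symbolic pipeline: find numbers, filter by a condition, take the max.
+passage = ("The Bears scored on runs of 7, 13, and 45 yards; "
+           "the kicker added a 32 yard field goal.")
+
+def find_numbers(text):          # FIND: extract all numeric spans
+    return [int(n) for n in re.findall(r"\d+", text)]
+
+def filter_ge(nums, threshold):  # FILTER: keep values meeting a condition
+    return [n for n in nums if n >= threshold]
+
+longest = max(filter_ge(find_numbers(passage), 10))  # MAX: exact by construction
+print(longest)  # 45
+```
+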
+334
+00:14:33,000 --> 00:14:39,120
+okay cool um so now I'd like to go into
+
+335
+00:14:36,399 --> 00:14:40,639
+chain of thought and variants and I
+
+336
+00:14:39,120 --> 00:14:42,279
+actually have already talked about chain
+
+337
+00:14:40,639 --> 00:14:44,199
+of thought in fact we've mentioned it a
+
+338
+00:14:42,279 --> 00:14:47,720
+couple times um but just you know to
+
+339
+00:14:44,199 --> 00:14:49,399
+remind everybody the basic idea is um
+
+340
+00:14:47,720 --> 00:14:52,880
+compared to standard prompting where we
+
+341
+00:14:49,399 --> 00:14:55,519
+have like a question um and an answer in
+
+342
+00:14:52,880 --> 00:14:58,480
+chain of thought we have a question and
+
+343
+00:14:55,519 --> 00:15:01,040
+then we have a derivation for the
+
+344
+00:14:58,480 --> 00:15:02,440
+question so like uh Roger started with
+
+345
+00:15:01,040 --> 00:15:06,120
+five
+
+346
+00:15:02,440 --> 00:15:09,040
+balls two cans of
+
+347
+00:15:06,120 --> 00:15:13,839
+three tennis balls each is six tennis balls
+
+348
+00:15:09,040 --> 00:15:15,639
+5 plus 6 equals 11 the answer is 11 so um you
+
+349
+00:15:13,839 --> 00:15:17,519
+add this to the prompt and by adding
+
+350
+00:15:15,639 --> 00:15:19,240
+this to the prompt you get the model to
+
+351
+00:15:17,519 --> 00:15:22,600
+uh also do these derivations at test
+
+352
+00:15:19,240 --> 00:15:25,199
+time and this greatly improves some
+
+353
+00:15:22,600 --> 00:15:27,759
+tasks it improves tasks where we can't
+
+354
+00:15:25,199 --> 00:15:30,040
+like immediately predict the answer
+
+355
+00:15:27,759 --> 00:15:32,000
+directly and then I also previously
+
+356
+00:15:30,040 --> 00:15:33,440
+talked about zero-shot chain-of-thought
+
+357
+00:15:32,000 --> 00:15:35,880
+uh reasoning where we just prompt the
+
+358
+00:15:33,440 --> 00:15:38,480
+model with something like let's think
+
+359
+00:15:35,880 --> 00:15:42,680
+step by step and then the model becomes
+
+360
+00:15:38,480 --> 00:15:46,240
+able to do this uh chain-of-thought
+
+361
+00:15:42,680 --> 00:15:48,279
+reasoning okay so that was review and
+
+362
+00:15:46,240 --> 00:15:51,680
+now I'd like to talk about some like
+
+363
+00:15:48,279 --> 00:15:53,560
+more advanced methods that people use
+
+364
+00:15:51,680 --> 00:15:55,079
+for uh reasoning as
+
+365
+00:15:53,560 --> 00:15:58,040
+well
+
+366
+00:15:55,079 --> 00:15:59,959
+and this is by no means an exhaustive
+
+367
+00:15:58,040 --> 00:16:01,800
+list they're just the ones that I
+
+368
+00:15:59,959 --> 00:16:03,319
+found interesting so if you know other
+
+369
+00:16:01,800 --> 00:16:04,839
+ones that you'd like to talk about or
+
+370
+00:16:03,319 --> 00:16:07,720
+introduce to the class or something like
+
+371
+00:16:04,839 --> 00:16:10,600
+that I'd also be happy to uh to hear uh
+
+372
+00:16:07,720 --> 00:16:14,120
+which ones you like or have heard about
+
+373
+00:16:10,600 --> 00:16:16,920
+but the first one is um self-ask and one
+
+374
+00:16:14,120 --> 00:16:20,959
+of the issues with large language models
+
+375
+00:16:16,920 --> 00:16:23,240
+nowadays is that they're not uh very
+
+376
+00:16:20,959 --> 00:16:25,519
+good at asking follow-up questions or
+
+377
+00:16:23,240 --> 00:16:27,839
+maybe not that they're not very good at
+
+378
+00:16:25,519 --> 00:16:31,160
+it but just they're not trained to do it
+
+379
+00:16:27,839 --> 00:16:32,880
+so like if you play around with ChatGPT
+
+380
+00:16:31,160 --> 00:16:35,240
+I have never had ChatGPT ask me a
+
+381
+00:16:32,880 --> 00:16:36,680
+follow-up question I don't think it's
+
+382
+00:16:35,240 --> 00:16:38,319
+like it's not because large language
+
+383
+00:16:36,680 --> 00:16:41,920
+models aren't capable of doing it it's
+
+384
+00:16:38,319 --> 00:16:43,519
+just that they like OpenAI must
+
+385
+00:16:41,920 --> 00:16:45,000
+think it's a bad user experience to have
+
+386
+00:16:43,519 --> 00:16:47,680
+a language model that asks you follow-up
+
+387
+00:16:45,000 --> 00:16:51,319
+questions that's the only like you know
+
+388
+00:16:47,680 --> 00:16:53,160
+reason I can think about it um but
+
+389
+00:16:51,319 --> 00:16:56,199
+basically what self-ask does is it
+
+390
+00:16:53,160 --> 00:17:00,000
+explicitly prompts the model to ask
+
+391
+00:16:56,199 --> 00:17:02,360
+if there are follow-up questions so
+
+392
+00:17:00,000 --> 00:17:05,799
+here's an example on the left where the
+
+393
+00:17:02,360 --> 00:17:11,240
+question is uh who lived longer Theodor
+
+394
+00:17:05,799 --> 00:17:12,640
+Haecker or Harry Vaughan uh Watkins and
+
+395
+00:17:11,240 --> 00:17:15,240
+basically it says are follow-up
+
+396
+00:17:12,640 --> 00:17:17,679
+questions needed here yes and then the
+
+397
+00:17:15,240 --> 00:17:20,319
+follow-up is how old was Theodor Haecker
+
+398
+00:17:17,679 --> 00:17:23,640
+when he died and the intermediate answer
+
+399
+00:17:20,319 --> 00:17:26,959
+is Theodor Haecker was 65 years old how
+
+400
+00:17:23,640 --> 00:17:29,000
+old was Harry Vaughan Watkins um Harry Vaughan
+
+401
+00:17:26,959 --> 00:17:32,400
+Watkins was 69 years old and so the
+
+402
+00:17:29,000 --> 00:17:35,240
+final answer is Harry Vaughan Watkins and um
+
+403
+00:17:32,400 --> 00:17:37,520
+in this particular paper this is just
+
+404
+00:17:35,240 --> 00:17:42,520
+like another variety of chain of thought
+
+405
+00:17:37,520 --> 00:17:44,720
+it's like not using it to incorporate
+
+406
+00:17:42,520 --> 00:17:47,400
+any external information or anything
+
+407
+00:17:44,720 --> 00:17:48,720
+like that it's just trying to more
+
+408
+00:17:47,400 --> 00:17:52,360
+directly
+
+409
+00:17:48,720 --> 00:17:53,840
+elicit um information from the model um
+
+410
+00:17:52,360 --> 00:17:55,360
+but nonetheless they demonstrate that
+
+411
+00:17:53,840 --> 00:17:57,760
+this is useful and then there's also
+
+412
+00:17:55,360 --> 00:18:00,120
+other methods that actually try to look
+
+413
+00:17:57,760 --> 00:18:02,240
+up information explicitly to answer these
+
+414
+00:18:00,120 --> 00:18:05,280
+questions um which are even more
+
+415
+00:18:02,240 --> 00:18:05,280
+powerful than what we have
+
+416
+00:18:05,720 --> 00:18:13,200
+here um so that's what I'd like to
+
+417
+00:18:09,960 --> 00:18:16,919
+introduce next and basically the idea um
+
+418
+00:18:13,200 --> 00:18:19,760
+here is this is a method that instead of
+
+419
+00:18:16,919 --> 00:18:22,880
+just doing chain of thought it retrieves
+
+420
+00:18:19,760 --> 00:18:25,480
+relevant sentences when you're doing the
+
+421
+00:18:22,880 --> 00:18:28,919
+chain of thought so like
+
+422
+00:18:25,480 --> 00:18:30,880
+here um
+
+423
+00:18:28,919 --> 00:18:32,960
+uh we have the follow-up are follow-ups
+
+424
+00:18:30,880 --> 00:18:35,159
+needed here yes and then this is the
+
+425
+00:18:32,960 --> 00:18:36,880
+follow-up but if the model itself doesn't
+
+426
+00:18:35,159 --> 00:18:39,440
+know how old somebody was when they died
+
+427
+00:18:36,880 --> 00:18:40,760
+then it won't be able to answer this so
+
+428
+00:18:39,440 --> 00:18:44,400
+what they do in order to make this
+
+429
+00:18:40,760 --> 00:18:47,200
+happen is they um do BM25-based
+
+430
+00:18:44,400 --> 00:18:49,520
+retrieval over Wikipedia for each of the
+
+431
+00:18:47,200 --> 00:18:51,760
+chain-of-thought uh answers and then
+
+432
+00:18:49,520 --> 00:18:53,400
+they use the retrieved uh I think it's
+
+433
+00:18:51,760 --> 00:18:56,039
+like 10 documents or something like that
+
+434
+00:18:53,400 --> 00:18:59,640
+multiple retrieved documents to prompt the
+
+435
+00:18:56,039 --> 00:19:03,200
+model um to basically follow up with its
+
+436
+00:18:59,640 --> 00:19:05,440
+chain of thought so this is another uh
+
+437
+00:19:03,200 --> 00:19:07,880
+variety of things that you can do in
+
+438
+00:19:05,440 --> 00:19:07,880
+order to
+
+439
+00:19:10,720 --> 00:19:16,120
+improve
+
+440
+00:19:13,120 --> 00:19:16,120
+cool
+
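+Both variants share the same scaffold: the model is prompted to decide whether follow-ups are needed, answer them (from its own parameters, or from retrieved documents), and then compose the final answer. A schematic of the prompt, using the lived-longer exemplar from above; the retrieval hook is optional and omitted here:
+
+```python
+# Self-ask style prompt scaffold (schematic, single exemplar).
+SELF_ASK_EXEMPLAR = """Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?
+Are follow up questions needed here: Yes.
+Follow up: How old was Theodor Haecker when he died?
+Intermediate answer: Theodor Haecker was 65 years old when he died.
+Follow up: How old was Harry Vaughan Watkins when he died?
+Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.
+So the final answer is: Harry Vaughan Watkins.
+"""
+
+def self_ask_prompt(question: str) -> str:
+    return (SELF_ASK_EXEMPLAR
+            + f"\nQuestion: {question}\nAre follow up questions needed here:")
+```
+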
+441
+00:19:16,400 --> 00:19:21,440
+um then another one that I'd like to
+
+442
+00:19:18,960 --> 00:19:22,559
+talk about is uh multilingual chain-of-
+
+443
+00:19:21,440 --> 00:19:24,039
+thought reasoning I'm going to be
+
+444
+00:19:22,559 --> 00:19:28,000
+talking more about multilingual things
+
+445
+00:19:24,039 --> 00:19:29,960
+in the multilingual class in a week but
+
+446
+00:19:28,000 --> 00:19:33,559
+the interesting thing about multilingual
+
+447
+00:19:29,960 --> 00:19:37,200
+chain of thought is we have a design
+
+448
+00:19:33,559 --> 00:19:41,280
+decision right like do we want to just
+
+449
+00:19:37,200 --> 00:19:44,000
+answer questions in the language that we
+
+450
+00:19:41,280 --> 00:19:46,679
+are asking questions in like so if I ask
+
+451
+00:19:44,000 --> 00:19:48,080
+a question in Japanese am I going to
+
+452
+00:19:46,679 --> 00:19:49,840
+have it go through the whole chain-of-
+
+453
+00:19:48,080 --> 00:19:52,720
+thought process in Japanese and then
+
+454
+00:19:49,840 --> 00:19:55,840
+answer my question in Japanese or do I
+
+455
+00:19:52,720 --> 00:19:57,120
+want it to uh somehow go through English
+
+456
+00:19:55,840 --> 00:19:59,159
+because the model has been trained on
+
+457
+00:19:57,120 --> 00:20:00,640
+lots of English and it has better
+
+458
+00:19:59,159 --> 00:20:02,120
+it's like a better way to take advantage
+
+459
+00:20:00,640 --> 00:20:04,840
+of its reasoning
+
+460
+00:20:02,120 --> 00:20:07,200
+capabilities does anyone have an idea
+
+461
+00:20:04,840 --> 00:20:07,200
+about the
+
+462
+00:20:07,960 --> 00:20:12,480
+answer who thinks it's better to do it
+
+463
+00:20:10,240 --> 00:20:15,360
+entirely in the language that the
+
+464
+00:20:12,480 --> 00:20:15,360
+question is asked
+
+465
+00:20:15,640 --> 00:20:20,080
+in and who thinks it's better to do
+
+466
+00:20:17,919 --> 00:20:23,000
+something in
+
+467
+00:20:20,080 --> 00:20:28,200
+English
+
+468
+00:20:23,000 --> 00:20:29,159
+okay so um basically the answer is do it
+
+469
+00:20:28,200 --> 00:20:31,440
+in English
+
+470
+00:20:29,159 --> 00:20:34,120
+um and maybe this
+
+471
+00:20:31,440 --> 00:20:35,799
+might be a little bit dependent on
+
+472
+00:20:34,120 --> 00:20:39,840
+the language but for all of the languages
+
+473
+00:20:35,799 --> 00:20:42,880
+they tested that's essentially
+
+474
+00:20:39,840 --> 00:20:44,919
+the conclusion that they came to and
+
+475
+00:20:42,880 --> 00:20:47,679
+it's pretty stark in this particular
+
+476
+00:20:44,919 --> 00:20:50,640
+paper this might change a little bit
+
+477
+00:20:47,679 --> 00:20:52,960
+with um with more powerful models but I
+
+478
+00:20:50,640 --> 00:20:57,360
+still would be very surprised if this
+
+479
+00:20:52,960 --> 00:21:00,440
+didn't hold still so
+
+480
+00:20:57,360 --> 00:21:04,440
+you can see it's like approximately on
+
+481
+00:21:00,440 --> 00:21:08,200
+average uh a seven-point increase in the
+
+482
+00:21:04,440 --> 00:21:11,720
+results and just to be clear here um
+
+483
+00:21:08,200 --> 00:21:13,600
+we have native uh chain of thought so
+
+484
+00:21:11,720 --> 00:21:16,039
+this is doing chain of thought in
+
+485
+00:21:13,600 --> 00:21:16,039
+the language itself
+
+486
+00:21:16,039 --> 00:21:19,240
+this is doing chain of thought in English but then answering
+
+487
+00:21:17,799 --> 00:21:22,200
+in the language itself and this is just
+
+488
+00:21:19,240 --> 00:21:23,799
+like translating everything into
+
+489
+00:21:22,200 --> 00:21:27,440
+English
+
+490
+00:21:23,799 --> 00:21:30,159
+um you can try this out too like if you
+
+491
+00:21:27,440 --> 00:21:31,840
+uh if you speak another language you can um
+
+492
+00:21:30,159 --> 00:21:34,200
+try it myself when I try it in
+
+493
+00:21:31,840 --> 00:21:36,200
+Japanese it's very clear that like the
+
+494
+00:21:34,200 --> 00:21:38,640
+model seems more intelligent in English
+
+495
+00:21:36,200 --> 00:21:41,559
+it just seems like it can do other
+
+496
+00:21:38,640 --> 00:21:43,120
+things even though like intelligence uh
+
+497
+00:21:41,559 --> 00:21:44,640
+shouldn't be a function of the language
+
+498
+00:21:43,120 --> 00:21:47,120
+that you're asking a question in right
+
+499
+00:21:44,640 --> 00:21:49,679
+like the model should have the ability
+
+500
+00:21:47,120 --> 00:21:51,440
+to answer questions because
+
+501
+00:21:49,679 --> 00:21:53,000
+that's how humans work right our
+
+502
+00:21:51,440 --> 00:21:54,520
+intelligence is kind of separated from
+
+503
+00:21:53,000 --> 00:21:57,039
+our language how well we can express
+
+504
+00:21:54,520 --> 00:22:00,480
+ourselves is a little bit different but
+
+505
+00:21:57,039 --> 00:22:02,320
+um yeah for the final question was it
+
+506
+00:22:00,480 --> 00:22:04,840
+translated back to the original language
+
+507
+00:22:02,320 --> 00:22:09,440
+and then evaluated for translate-English
+
+508
+00:22:04,840 --> 00:22:12,559
+I'm not 100% sure about this I think it
+
+509
+00:22:09,440 --> 00:22:13,840
+was not so that might be a confounding
+
+510
+00:22:12,559 --> 00:22:16,799
+factor for this one but it's not a
+
+511
+00:22:13,840 --> 00:22:20,039
+confounding factor for this one anyway
+
+512
+00:22:16,799 --> 00:22:20,039
+yeah any other
+
+513
+00:22:20,679 --> 00:22:23,919
+questions okay
+
+514
+00:22:24,200 --> 00:22:29,559
+cool so this is a pretty interesting
+
+515
+00:22:26,799 --> 00:22:32,000
+result here um
+
+516
+00:22:29,559 --> 00:22:34,120
+and the next kind of series of results
+
+517
+00:22:32,000 --> 00:22:35,360
+that I'm
+
+518
+00:22:34,120 --> 00:22:36,919
+going to talk about are going to be
+
+519
+00:22:35,360 --> 00:22:39,240
+based on the quality of the reasoning
+
+520
+00:22:36,919 --> 00:22:43,480
+chains that the model uses in chain of
+
+521
+00:22:39,240 --> 00:22:45,520
+thought and this one is a simple
+
+522
+00:22:43,480 --> 00:22:46,600
+heuristic for improving the quality of
+
+523
+00:22:45,520 --> 00:22:49,279
+the reasoning
+
+524
+00:22:46,600 --> 00:22:50,640
+chains and um yeah one thing I should
+
+525
+00:22:49,279 --> 00:22:52,480
+mention is that the quality of the
+
+526
+00:22:50,640 --> 00:22:55,760
+reasoning chain is definitely connected
+
+527
+00:22:52,480 --> 00:22:58,080
+to the uh quality of the output like
+
+528
+00:22:55,760 --> 00:23:00,159
+although that's not necessarily the case
+
+529
+00:22:58,080 --> 00:23:04,679
+right it could just say a whole bunch of
+
+530
+00:23:00,159 --> 00:23:07,799
+you know false things uh actually no maybe
+
+531
+00:23:04,679 --> 00:23:07,799
+I'll I'll skip this
+
+532
+00:23:08,200 --> 00:23:14,919
+one and go and explain this one next
+
+533
+00:23:11,919 --> 00:23:14,919
+so
+
+534
+00:23:15,159 --> 00:23:19,039
+um yeah actually sorry the
+
+535
+00:23:17,600 --> 00:23:20,520
+explanation ordering for this is a
+
+536
+00:23:19,039 --> 00:23:25,360
+little bit hard but yeah I'll explain
+
+537
+00:23:20,520 --> 00:23:26,840
+this one next so um very quickly um
+
+538
+00:23:25,360 --> 00:23:29,640
+there's two ways that you could be
+
+539
+00:23:26,840 --> 00:23:32,880
+reasoning one way you could be reasoning
+
+540
+00:23:29,640 --> 00:23:35,000
+is doing an explanation first and then
+
+541
+00:23:32,880 --> 00:23:36,720
+uh predicting the answer the other way
+
+542
+00:23:35,000 --> 00:23:39,080
+you could do it is predicting the answer
+
+543
+00:23:36,720 --> 00:23:43,039
+and then um then giving the
+
+544
+00:23:39,080 --> 00:23:45,559
+explanation and in general if you have a
+
+545
+00:23:43,039 --> 00:23:47,919
+reasonably strong model uh you know any
+
+546
+00:23:45,559 --> 00:23:50,679
+of the modern kind of frontier-level
+
+547
+00:23:47,919 --> 00:23:52,240
+models right now doing the explanation
+
+548
+00:23:50,679 --> 00:23:54,039
+first and then making the prediction is
+
+549
+00:23:52,240 --> 00:23:56,880
+better and the reason why is because
+
+550
+00:23:54,039 --> 00:23:59,240
+chain of thought works and the model is
+
+551
+00:23:56,880 --> 00:24:02,960
+able to break down the um the
+
+552
+00:23:59,240 --> 00:24:07,279
+questions into kind of
+
+553
+00:24:02,960 --> 00:24:10,159
+simpler uh it's able to break down
+
+554
+00:24:07,279 --> 00:24:11,520
+the answer into like simpler
+
+555
+00:24:10,159 --> 00:24:14,080
+questions for like mathematical
+
+556
+00:24:11,520 --> 00:24:15,679
+reasoning or something like that um and
+
+557
+00:24:14,080 --> 00:24:18,039
+then give the answer so like for
+
+558
+00:24:15,679 --> 00:24:20,000
+example for text-davinci-002 which was state
+
+559
+00:24:18,039 --> 00:24:22,679
+of the art at the time of this writing you
+
+560
+00:24:20,000 --> 00:24:24,360
+see a five-point boost from using um
+
+561
+00:24:22,679 --> 00:24:29,080
+explanation first and then prediction
+
+562
+00:24:24,360 --> 00:24:30,640
+after that um in accuracy
+
+563
+00:24:29,080 --> 00:24:34,039
+but for the weaker models that was not
+
+564
+00:24:30,640 --> 00:24:36,039
+the case so if you were using um GPT-3
+
+565
+00:24:34,039 --> 00:24:38,720
+that wasn't trained for chain of thought
+
+566
+00:24:36,039 --> 00:24:40,600
+or you were using OPT uh that was not
+
+567
+00:24:38,720 --> 00:24:42,640
+the case but nowadays I think basically for
+
+568
+00:24:40,600 --> 00:24:45,279
+all models uh doing the explanation
+
+569
+00:24:42,640 --> 00:24:48,120
+first and then the prediction is
+
+570
+00:24:45,279 --> 00:24:49,640
+better
+
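+The two output orderings being compared are literally just two answer formats; sketched below as prompt templates, with illustrative wording that is not from any specific paper:
+
+```python
+# "Explain-then-predict": the model reasons before committing to a label.
+EXPLAIN_FIRST = (
+    "Explain your reasoning step by step, then give the final answer "
+    "on the last line as 'Answer: <label>'.\n\nQuestion: {question}"
+)
+
+# "Predict-then-explain": the label comes first, the rationale after.
+PREDICT_FIRST = (
+    "Give the final answer on the first line as 'Answer: <label>', "
+    "then explain your reasoning.\n\nQuestion: {question}"
+)
+```
+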
+616
+00:26:23,120 --> 00:26:28,880
+um so yeah going back to systematic
+
+617
+00:26:25,960 --> 00:26:31,360
+studies of reasoning in LLMs
+
+618
+00:26:28,880 --> 00:26:33,559
+um one of the big results that's
+
+619
+00:26:31,360 --> 00:26:35,880
+actually really important to know about
+
+620
+00:26:33,559 --> 00:26:39,039
+is this sort of Chain of Thought
+
+621
+00:26:35,880 --> 00:26:41,080
+reasoning um is considered to be an
+
+622
+00:26:39,039 --> 00:26:43,520
+emergent ability
+
+623
+00:26:41,080 --> 00:26:47,080
+in uh large language models and what we
+
+624
+00:26:43,520 --> 00:26:49,360
+mean by an emergent ability is
+
+625
+00:26:47,080 --> 00:26:53,679
+what the name emergent ability
+
+626
+00:26:49,360 --> 00:26:56,399
+typically refers to is that it is
+
+627
+00:26:53,679 --> 00:26:58,640
+something that increases dramatically as
+
+628
+00:26:56,399 --> 00:27:01,679
+the model size gets uh up to a
+
+629
+00:26:58,640 --> 00:27:03,200
+certain point so these actually I'm
+
+630
+00:27:01,679 --> 00:27:06,080
+really sorry I cut off the thing on the
+
+631
+00:27:03,200 --> 00:27:07,360
+bottom here this is like OpenAI does
+
+632
+00:27:06,080 --> 00:27:08,520
+this all the time to 
not tell you how + +633 +00:27:07,360 --> 00:27:11,399 +many parameters they have in their + +634 +00:27:08,520 --> 00:27:12,760 +models but I did not do it intentionally + +635 +00:27:11,399 --> 00:27:15,360 +here because I think it's actually in + +636 +00:27:12,760 --> 00:27:17,320 +here in the paper um but like these ones + +637 +00:27:15,360 --> 00:27:19,399 +over here are kind of the like 175 + +638 +00:27:17,320 --> 00:27:20,640 +billion parameter models and like the + +639 +00:27:19,399 --> 00:27:24,520 +the larger + +640 +00:27:20,640 --> 00:27:25,960 +models um and what you see is like up + +641 +00:27:24,520 --> 00:27:29,919 +until a certain point you get basically + +642 +00:27:25,960 --> 00:27:33,919 +zero accuracy and then uh the outputs + +643 +00:27:29,919 --> 00:27:37,000 +improve and so for a while people were + +644 +00:27:33,919 --> 00:27:39,240 +really like confused about this like why + +645 +00:27:37,000 --> 00:27:41,440 +why does this happen it feels like magic + +646 +00:27:39,240 --> 00:27:44,279 +that you get a really you know powerful + +647 +00:27:41,440 --> 00:27:46,679 +model and then suddenly it gets better + +648 +00:27:44,279 --> 00:27:49,799 +uh uh like at the very + +649 +00:27:46,679 --> 00:27:52,159 +end but actually there's a much simpler + +650 +00:27:49,799 --> 00:27:53,760 +solution there's not not that much magic + +651 +00:27:52,159 --> 00:27:55,960 +to this + +652 +00:27:53,760 --> 00:27:58,399 +and we've known about this for a little + +653 +00:27:55,960 --> 00:28:00,919 +while but this paper from 2023 really + +654 +00:27:58,399 --> 00:28:02,360 +like expressed it very clearly um so I + +655 +00:28:00,919 --> 00:28:04,360 +highly recommend you take a look at this + +656 +00:28:02,360 --> 00:28:07,720 +if you're interested in kind of like the + +657 +00:28:04,360 --> 00:28:10,159 +emerg abilities and language models but + +658 +00:28:07,720 --> 00:28:15,039 +basically the the thing about emergent + +659 +00:28:10,159 --> 00:28:19,720 +abilities is that they're mostly + +660 +00:28:15,039 --> 00:28:20,720 +a matter of how you um how you measure + +661 +00:28:19,720 --> 00:28:22,519 +your + +662 +00:28:20,720 --> 00:28:27,640 +models + +663 +00:28:22,519 --> 00:28:30,120 +accuracy and so let's say as your model + +664 +00:28:27,640 --> 00:28:30,120 +gets better + +665 +00:28:39,039 --> 00:28:45,600 +it gets gradually better at predicting + +666 +00:28:41,200 --> 00:28:45,600 +the like a reasonable next + +667 +00:28:47,799 --> 00:28:54,760 +token so this is like a I don't know + +668 +00:28:50,919 --> 00:28:59,120 +like 200 million parameter model 500 + +669 +00:28:54,760 --> 00:29:03,240 +million 1 billion 3 billion + +670 +00:28:59,120 --> 00:29:06,600 +7 billion and like 70 billion or + +671 +00:29:03,240 --> 00:29:09,600 +something like that um and so this is + +672 +00:29:06,600 --> 00:29:12,640 +like the next token prediction accuracy + +673 +00:29:09,600 --> 00:29:14,320 +um or like the the accuracy of + +674 +00:29:12,640 --> 00:29:16,279 +predicting a reasonable next token that + +675 +00:29:14,320 --> 00:29:18,880 +won't make result in your reasoning + +676 +00:29:16,279 --> 00:29:20,000 +chain being wrong and making a mistake + +677 +00:29:18,880 --> 00:29:24,200 +and + +678 +00:29:20,000 --> 00:29:26,200 +so if you have an accuracy like this in + +679 +00:29:24,200 --> 00:29:28,880 +order to get the correct answer like + +680 +00:29:26,200 --> 00:29:30,559 +let's say there's about five or eight + +681 +00:29:28,880 --> 00:29:33,519 
+places where you could possibly make a
+
+682
+00:29:30,559 --> 00:29:35,080
+mistake in the derivation like one
+
+683
+00:29:33,519 --> 00:29:36,760
+common place to make a mistake in a
+
+684
+00:29:35,080 --> 00:29:38,519
+derivation for math for example is
+
+685
+00:29:36,760 --> 00:29:40,200
+where you predict a number like where
+
+686
+00:29:38,519 --> 00:29:42,679
+you predict the result of an equation
+
+687
+00:29:40,200 --> 00:29:44,120
+and you might have five reasoning steps
+
+688
+00:29:42,679 --> 00:29:47,720
+where you might predict the result of an
+
+689
+00:29:44,120 --> 00:29:53,039
+equation um and so if we do
+
+690
+00:29:47,720 --> 00:29:53,039
+this let's exponentiate all of these by
+
+691
+00:29:54,799 --> 00:29:58,799
+five um
+
+692
+00:30:06,640 --> 00:30:16,120
+uh write Python code to exponentiate
+
+693
+00:30:11,200 --> 00:30:16,120
+these numbers by
+
+694
+00:30:19,600 --> 00:30:27,559
+five I'm lazy enough that I just ask
+
+695
+00:30:22,159 --> 00:30:27,559
+ChatGPT to do this for me now
+
+696
+00:30:30,080 --> 00:30:32,919
+and so if we do
+
+697
+00:30:35,399 --> 00:30:39,840
+this uh go to chat
+
+698
+00:30:50,000 --> 00:30:58,360
+GPT so now we are getting something that
+
+699
+00:30:54,760 --> 00:30:58,360
+looks like zero
+
+700
+00:31:02,159 --> 00:31:07,960
+um basically zero basically
+
+701
+00:31:05,639 --> 00:31:10,960
+zero
+
+702
+00:31:07,960 --> 00:31:10,960
+uh
+
+703
+00:31:13,399 --> 00:31:16,399
+3%
+
+704
+00:31:16,799 --> 00:31:22,440
+23%
+
+705
+00:31:19,080 --> 00:31:22,440
+9% and
+
+706
+00:31:22,559 --> 00:31:28,720
+90%
+
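+The calculation being run here is easy to reproduce; this sketch uses
+made-up per-token accuracies for the different model sizes, chosen only so
+that the outputs roughly match the numbers read out in class:
+
+per_token_accuracy = [0.2, 0.35, 0.5, 0.75, 0.98]  # small -> large models (assumed)
+k = 5  # roughly five places in the derivation where a mistake is possible
+
+for p in per_token_accuracy:
+    # If every step must be right, the whole chain succeeds with p**k.
+    print(f"next-token accuracy {p:.2f} -> full-chain accuracy {p**k:.3f}")
+
+# 0.2**5 and 0.35**5 are basically zero, 0.5**5 is about 3%, 0.75**5 is
+# about 23%, and 0.98**5 is about 90%: a smooth per-token curve looks
+# "emergent" under an all-or-nothing chain metric.
+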
+707
+00:31:26,639 --> 00:31:30,600
+so what you can see is there's actually a pretty steady gradation of
+
+708
+00:31:28,720 --> 00:31:33,120
+like the next token prediction accuracy
+
+709
+00:31:30,600 --> 00:31:36,600
+here but if you need to predict multiple
+
+710
+00:31:33,120 --> 00:31:38,919
+tokens correctly then it looks like it's
+
+711
+00:31:36,600 --> 00:31:41,240
+doing basically nothing until you get up
+
+712
+00:31:38,919 --> 00:31:43,600
+to like 75% next token accuracy and then
+
+713
+00:31:41,240 --> 00:31:45,320
+it starts taking off so that's like uh
+
+714
+00:31:43,600 --> 00:31:46,960
+what happens in emergent abilities and
+
+715
+00:31:45,320 --> 00:31:49,159
+you'll notice that most things that are
+
+716
+00:31:46,960 --> 00:31:50,880
+talking about emergent abilities are
+
+717
+00:31:49,159 --> 00:31:53,559
+usually talking about some sort of Chain
+
+718
+00:31:50,880 --> 00:31:55,799
+of Thought or some sort of reasoning uh
+
+719
+00:31:53,559 --> 00:31:58,480
+reasoning accuracy even if that's not
+
+720
+00:31:55,799 --> 00:32:00,480
+the case um even if they're just
+
+721
+00:31:58,480 --> 00:32:02,639
+predicting a single token it can still
+
+722
+00:32:00,480 --> 00:32:05,399
+happen because
+
+723
+00:32:02,639 --> 00:32:08,559
+basically the probability of a single
+
+724
+00:32:05,399 --> 00:32:11,639
+token can continue to go up smoothly but
+
+725
+00:32:08,559 --> 00:32:13,240
+you only get the token correct after
+
+726
+00:32:11,639 --> 00:32:14,760
+the probability starts getting higher
+
+727
+00:32:13,240 --> 00:32:18,320
+than all the others and that's also a
+
+728
+00:32:14,760 --> 00:32:21,279
+discontinuous function so um so
+
+729
+00:32:18,320 --> 00:32:23,080
+basically what this paper shows is like
+
+730
+00:32:21,279 --> 00:32:26,440
+even if you have like the probability of
+
+731
+00:32:23,080 --> 00:32:28,679
+the correct
+
+732
+00:32:26,440 --> 00:32:30,639
+token going up gradually uh you can see
+
+733
+00:32:28,679 --> 00:32:33,440
+this emergent ability based on how you
+
+734
+00:32:30,639 --> 00:32:37,279
+uh measure it so um that's an important
+
+735
+00:32:33,440 --> 00:32:38,960
+thing to realize about uh this
+
+736
+00:32:37,279 --> 00:32:41,080
+another corollary of this is like let's say you
+
+737
+00:32:38,960 --> 00:32:44,679
+want to do interesting experiments about
+
+738
+00:32:41,080 --> 00:32:45,960
+reasoning on um on smaller models like
+
+739
+00:32:44,679 --> 00:32:47,279
+let's say you want to train a smaller
+
+740
+00:32:45,960 --> 00:32:49,159
+model and see how it improves on
+
+741
+00:32:47,279 --> 00:32:52,159
+reasoning I would definitely encourage
+
+742
+00:32:49,159 --> 00:32:54,799
+you to measure not only accuracy because
+
+743
+00:32:52,159 --> 00:32:57,279
+you might see like very little change in
+
+744
+00:32:54,799 --> 00:32:58,720
+accuracy but also measure like log
+
+745
+00:32:57,279 --> 00:33:00,360
+likelihood of reasoning chains or
+
+746
+00:32:58,720 --> 00:33:02,960
+something like that because you'll see a
+
+747
+00:33:00,360 --> 00:33:02,960
+smoother
+
+748
+00:33:03,799 --> 00:33:09,080
+curve
+
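+As a concrete version of that advice, here is a small sketch; the
+token_logprobs helper is a hypothetical stand-in for whatever API or
+framework call returns per-token log-probabilities:
+
+def token_logprobs(prompt: str, continuation: str) -> list[float]:
+    # Hypothetical: per-token log-probabilities of `continuation`
+    # given `prompt` under the model being evaluated.
+    raise NotImplementedError
+
+def chain_log_likelihood(prompt, reference_chain, normalize=True):
+    lps = token_logprobs(prompt, reference_chain)
+    total = sum(lps)
+    # Length-normalizing gives an average log-likelihood per token, which
+    # moves smoothly with scale even while exact-match accuracy sits at zero.
+    return total / len(lps) if normalize else total
+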
+749
+00:33:11,039 --> 00:33:17,240
+cool um any questions about this okay um sounds
+
+750
+00:33:14,720 --> 00:33:20,559
+good so I talked a little bit about
+
+751
+00:33:17,240 --> 00:33:23,120
+this um one of the things here that
+
+752
+00:33:20,559 --> 00:33:25,320
+I didn't talk about is this paper
+
+753
+00:33:23,120 --> 00:33:28,159
+measures not just the accuracy of the
+
+754
+00:33:25,320 --> 00:33:30,880
+answer with Chain of Thought um but it
+
+755
+00:33:28,159 --> 00:33:35,840
+also measures the factuality of the
+
+756
+00:33:30,880 --> 00:33:40,480
+explanation so basically um whether the
+
+757
+00:33:35,840 --> 00:33:40,480
+explanation is a good explanation for
+
+758
+00:33:40,760 --> 00:33:47,240
+the actual
+
+759
+00:33:43,960 --> 00:33:50,039
+derivation
+
+760
+00:33:47,240 --> 00:33:51,919
+um and also the consistency
+
+761
+00:33:50,039 --> 00:33:53,480
+of the answer and the explanation to
+
+762
+00:33:51,919 --> 00:33:56,120
+figure out whether the answer and the
+
+763
+00:33:53,480 --> 00:33:58,200
+explanation um match up with each other
+
+764
+00:33:56,120 --> 00:33:59,600
+and they did this with some uh
+
+765
+00:33:58,200 --> 00:34:02,320
+synthetic data sets where you could
+
+766
+00:33:59,600 --> 00:34:07,120
+actually measure the um the
+
+767
+00:34:02,320 --> 00:34:13,560
+reasoning steps uh by using math so um
+
+768
+00:34:07,120 --> 00:34:15,760
+what they were able to find is basically
+
+769
+00:34:10,399 --> 00:34:15,760
+the answer and the explanation um
+
+770
+00:34:13,560 --> 00:34:17,639
+the answer and the explanation
+
+771
+00:34:15,760 --> 00:34:22,079
+tended to be consistent especially for
+
+772
+00:34:17,639 --> 00:34:23,760
+the stronger models and let's see yeah
+
+773
+00:34:22,079 --> 00:34:25,399
+the answer and the explanation tended
+
+774
+00:34:23,760 --> 00:34:28,440
+to be consistent especially for the
+
+775
+00:34:25,399 --> 00:34:30,879
+stronger models and um
+
+776
+00:34:28,440 --> 00:34:33,000
+that also meant that if you had higher
+
+777
+00:34:30,879 --> 00:34:35,839
+factuality in the explanation that
+
+778
+00:34:33,000 --> 00:34:38,240
+translates into higher um you know
+
+779
+00:34:35,839 --> 00:34:40,520
+accuracy of the actual
+
+780
+00:34:38,240 --> 00:34:43,159
+prediction um I would bet that these
+
+781
+00:34:40,520 --> 00:34:45,240
+numbers are even higher uh nowadays I
+
+782
+00:34:43,159 --> 00:34:49,040
+bet the consistency is even higher uh
+
+783
+00:34:45,240 --> 00:34:49,040
+with more modern models than text-davinci-
+
+784
+00:34:49,399 --> 00:34:53,200
+002 and the reason being is like
+
+785
+00:34:51,839 --> 00:34:54,760
+number one models are stronger number
+
+786
+00:34:53,200 --> 00:34:56,560
+two all models are like trained for
+
+787
+00:34:54,760 --> 00:35:00,960
+Chain of Thought pretty aggressively now
+
+788
+00:34:56,560 --> 00:35:00,960
+so uh that would make the difference
+
+789
+00:35:02,200 --> 00:35:08,640
+there cool um so the other thing I'd
+
+790
+00:35:07,000 --> 00:35:09,359
+like to talk about is training for Chain
+
+791
+00:35:08,640 --> 00:35:13,079
+of
+
+792
+00:35:09,359 --> 00:35:17,440
+Thought um so there's a fair amount of
+
+793
+00:35:13,079 --> 00:35:19,200
+work in this general direction um from
+
+794
+00:35:17,440 --> 00:35:23,040
+my point of view there's basically two
+
+795
+00:35:19,200 --> 00:35:25,800
+ways that people do this nowadays um the
+
+796
+00:35:23,040 --> 00:35:28,960
+first way is usually through generating
+
+797
+00:35:25,800 --> 00:35:33,480
+lots of synthetic data that represents
+
+798
+00:35:28,960 --> 00:35:37,800
+chains of thought and then using that
+
+799
+00:35:33,480 --> 00:35:39,520
+to um to train models and this is the
+
+800
+00:35:37,800 --> 00:35:41,839
+most famous version of this although
+
+801
+00:35:39,520 --> 00:35:44,079
+this paper cites a lot of
+
+802
+00:35:41,839 --> 00:35:45,760
+other ones but basically they generate a
+
+803
+00:35:44,079 --> 00:35:48,280
+large and diverse uh Chain of Thought
+
+804
+00:35:45,760 --> 00:35:51,240
+data set from GPT-3.5 and
+
+805
+00:35:48,280 --> 00:35:53,200
+GPT-4 um it includes 5 million complex
+
+806
+00:35:51,240 --> 00:35:55,640
+instructions I think they generated 1
+
+807
+00:35:53,200 --> 00:35:59,000
+million from GPT-4 and 4 million from uh
+
+808
+00:35:55,640 --> 00:36:01,640
+GPT-3.5 just because generating long
+
+809
+00:35:59,000 --> 00:36:06,520
+sequences from GPT-4 is expensive and they
+
+810
+00:36:01,640 --> 00:36:09,640
+didn't want to do that many um and
+
+811
+00:36:06,520 --> 00:36:11,760
+then they uh achieved correspondingly high
+
+812
+00:36:09,640 --> 00:36:13,200
+accuracy on Chain of Thought related
+
+813
+00:36:11,760 --> 00:36:16,200
+things compared to other data sets so
+
+814
+00:36:13,200 --> 00:36:17,760
+compared to like Alpaca which is much uh
+
+815
+00:36:16,200 --> 00:36:21,760
+smaller and doesn't have as much Chain
+
+816
+00:36:17,760 --> 00:36:24,079
+of Thought and also um uh Vicuna which
+
+817
+00:36:21,760 --> 00:36:26,640
+is similarly less focused on Chain of
+
+818
+00:36:24,079 --> 00:36:29,359
+Thought they were able to do uh a good
+
+819
+00:36:26,640 --> 00:36:31,599
+job
+
+820
+00:36:29,359 --> 00:36:33,640
+um this paper was by Microsoft and they
+
+821
+00:36:31,599 --> 00:36:36,960
+didn't actually release the Orca data
+
+822
+00:36:33,640 --> 00:36:39,400
+set um for whatever reason uh
+
+823
+00:36:36,960 --> 00:36:41,400
+legal or competitive reasons or whatever
+
+824
+00:36:39,400 --> 00:36:43,000
+but there's another OpenOrca data set
+
+825
+00:36:41,400 --> 00:36:44,359
+that you can download and use uh that
+
+826
+00:36:43,000 --> 00:36:47,480
+attempts to replicate it and it's
+
+827
+00:36:44,359 --> 00:36:50,440
+reasonably good so uh you can uh
+
+828
+00:36:47,480 --> 00:36:50,440
+keep that in mind if you're interested
+
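+The general recipe (not the exact Orca pipeline, whose prompts and filtering
+are more involved) can be sketched in a few lines; teacher_model is a
+hypothetical call to a strong model, and the system instruction wording is
+an assumption:
+
+import json
+
+def teacher_model(prompt: str) -> str:
+    raise NotImplementedError  # hypothetical API call to a strong model
+
+SYSTEM = "You are a helpful assistant. Think step by step and justify your answer."
+
+def make_cot_dataset(questions, path):
+    # Collect (instruction, step-by-step response) pairs for training.
+    with open(path, "w") as f:
+        for q in questions:
+            response = teacher_model(f"{SYSTEM}\n\nQuestion: {q}")
+            f.write(json.dumps({"instruction": q, "response": response}) + "\n")
+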
+829
+00:36:50,800 --> 00:36:59,520
+um this is another really
+
+830
+00:36:53,280 --> 00:36:59,520
+interesting paper on uh trying to create
+
+831
+00:37:00,160 --> 00:37:05,760
+automatic assessments of how
+
+832
+00:37:03,440 --> 00:37:09,880
+good chains of thought are and what they
+
+833
+00:37:05,760 --> 00:37:13,079
+do essentially is it's relatively simple
+
+834
+00:37:09,880 --> 00:37:15,200
+they get human feedback on each step of
+
+835
+00:37:13,079 --> 00:37:17,760
+a derivation so they just basically ask
+
+836
+00:37:15,200 --> 00:37:20,599
+people is this step of the derivation
+
+837
+00:37:17,760 --> 00:37:22,160
+good and uh if the answer is yes then
+
+838
+00:37:20,599 --> 00:37:24,760
+they give it a smiley face if the
+
+839
+00:37:22,160 --> 00:37:26,440
+answer is no they give it a frowny face
+
+840
+00:37:24,760 --> 00:37:28,560
+and they use this to train a reward
+
+841
+00:37:26,440 --> 00:37:32,000
+model where the reward model basically
+
+842
+00:37:28,560 --> 00:37:34,760
+predicts whether each um
+
+843
+00:37:32,000 --> 00:37:36,800
+each step of the derivation is good and
+
+844
+00:37:34,760 --> 00:37:38,160
+so we have two examples over here I know
+
+845
+00:37:36,800 --> 00:37:41,160
+this is really small you might be able
+
+846
+00:37:38,160 --> 00:37:43,200
+to see it um either in the paper or uh
+
+847
+00:37:41,160 --> 00:37:46,359
+the slides on the website but what we
+
+848
+00:37:43,200 --> 00:37:49,000
+can see here is that it assesses each of
+
+849
+00:37:46,359 --> 00:37:52,680
+these steps and uh checks that the
+
+850
+00:37:49,000 --> 00:37:55,760
+answer is good um but it's also able to
+
+851
+00:37:52,680 --> 00:37:57,119
+identify places where uh like steps are
+
+852
+00:37:55,760 --> 00:37:59,560
+incorrect and then the final answer
+
+853
+00:37:57,119 --> 00:38:02,560
+becomes incorrect and then they use this
+
+854
+00:37:59,560 --> 00:38:04,440
+for training um a Chain of Thought style
+
+855
+00:38:02,560 --> 00:38:06,319
+model so they have the model generate
+
+856
+00:38:04,440 --> 00:38:08,520
+chains of thought and they assess them
+
+857
+00:38:06,319 --> 00:38:10,079
+with the reward model and upweight
+
+858
+00:38:08,520 --> 00:38:12,160
+answers that have good chains of thought
+
+859
+00:38:10,079 --> 00:38:15,680
+and so the good thing about this is they
+
+860
+00:38:12,160 --> 00:38:17,440
+actually um they don't need
+
+861
+00:38:15,680 --> 00:38:20,160
+the correct answers to train the model
+
+862
+00:38:17,440 --> 00:38:21,640
+this way and because they don't need the
+
+863
+00:38:20,160 --> 00:38:23,920
+correct answers to train the model this
+
+864
+00:38:21,640 --> 00:38:26,640
+way they can also train the model on
+
+865
+00:38:23,920 --> 00:38:29,200
+lots of other questions the reason why
+
+866
+00:38:26,640 --> 00:38:31,520
+this works is because like Chain of
+
+867
+00:38:29,200 --> 00:38:34,880
+Thought makes it easier to generate each
+
+868
+00:38:31,520 --> 00:38:36,720
+of the steps in the derivation it's also
+
+869
+00:38:34,880 --> 00:38:38,640
+easier to assess whether an individual
+
+870
+00:38:36,720 --> 00:38:40,000
+step in a derivation is wrong than
+
+871
+00:38:38,640 --> 00:38:42,960 +assess whether the answer is correct + +872 +00:38:40,000 --> 00:38:45,319 +overall so um this feedback signal is + +873 +00:38:42,960 --> 00:38:48,640 +easier to get model provided than it is + +874 +00:38:45,319 --> 00:38:51,160 +for um uh like getting feedback on the + +875 +00:38:48,640 --> 00:38:53,839 +answer itself yeah failure in one step + +876 +00:38:51,160 --> 00:38:56,920 +causes all the other steps to fail yep + +877 +00:38:53,839 --> 00:38:57,960 +you just assess the next steps based on + +878 +00:38:56,920 --> 00:39:00,079 +the assumption + +879 +00:38:57,960 --> 00:39:02,920 +the or do + +880 +00:39:00,079 --> 00:39:05,240 +you I I don't think + +881 +00:39:02,920 --> 00:39:07,599 +they I don't think they do that I think + +882 +00:39:05,240 --> 00:39:10,119 +they um it it's a good question I'm not + +883 +00:39:07,599 --> 00:39:12,160 +100% sure about this but I think they um + +884 +00:39:10,119 --> 00:39:14,280 +assess each one of the steps + +885 +00:39:12,160 --> 00:39:15,920 +independently um and it's not + +886 +00:39:14,280 --> 00:39:17,480 +necessarily the case that like failing + +887 +00:39:15,920 --> 00:39:19,000 +on this step means the step is wrong + +888 +00:39:17,480 --> 00:39:21,319 +right it could be just not using it at + +889 +00:39:19,000 --> 00:39:25,240 +all also + +890 +00:39:21,319 --> 00:39:25,240 +so um + +891 +00:39:25,440 --> 00:39:31,119 +cool so a final thing like to talk about + +892 +00:39:28,160 --> 00:39:34,640 +which I think is kind of interesting um + +893 +00:39:31,119 --> 00:39:37,040 +is abductive reasoning uh or learning + +894 +00:39:34,640 --> 00:39:40,040 +explanations from + +895 +00:39:37,040 --> 00:39:40,040 +data + +896 +00:39:46,359 --> 00:39:49,359 +and + +897 +00:39:52,440 --> 00:39:57,119 +sorry + +898 +00:39:54,480 --> 00:40:00,760 +so basically the idea is can we find a + +899 +00:39:57,119 --> 00:40:03,599 +rule that underes a pattern in data + +900 +00:40:00,760 --> 00:40:06,680 +and here are some examples of this the + +901 +00:40:03,599 --> 00:40:11,680 +basic idea is if we have + +902 +00:40:06,680 --> 00:40:16,599 +examples um which are like if I put + +903 +00:40:11,680 --> 00:40:19,960 +a cylinder and a square a cylinder and a + +904 +00:40:16,599 --> 00:40:22,119 +cube on uh this pink block I get a noise + +905 +00:40:19,960 --> 00:40:25,440 +if I put just a cylinder on the pink + +906 +00:40:22,119 --> 00:40:29,359 +block I don't get a noise and you want + +907 +00:40:25,440 --> 00:40:31,800 +to discover underlying rules based on + +908 +00:40:29,359 --> 00:40:33,160 +the data that you observed and so why + +909 +00:40:31,800 --> 00:40:34,720 +would you want to do this there's a + +910 +00:40:33,160 --> 00:40:38,000 +couple reasons why you would want to do + +911 +00:40:34,720 --> 00:40:41,560 +this um the first reason why you would + +912 +00:40:38,000 --> 00:40:42,920 +like to do this is because um you might + +913 +00:40:41,560 --> 00:40:45,119 +want something that you can explain to + +914 +00:40:42,920 --> 00:40:47,760 +humans right you can explain I this + +915 +00:40:45,119 --> 00:40:51,240 +underlying pattern um exists in this + +916 +00:40:47,760 --> 00:40:55,119 +data it explains why the + +917 +00:40:51,240 --> 00:40:57,319 +data you know appears as it does appear + +918 +00:40:55,119 --> 00:40:59,240 +and then humans can go in and analyze it + +919 +00:40:57,319 --> 00:41:02,079 +or something like that so recently + +920 +00:40:59,240 --> 00:41:03,880 +there's been a 
big focus on like using
+
+921
+00:41:02,079 --> 00:41:06,480
+large language models for scientific
+
+922
+00:41:03,880 --> 00:41:08,240
+inquiry and other things like that by
+
+923
+00:41:06,480 --> 00:41:10,920
+coming up with good explanations for why
+
+924
+00:41:08,240 --> 00:41:12,160
+data is the way it is so if we were able
+
+925
+00:41:10,920 --> 00:41:15,599
+to do that that would be really
+
+926
+00:41:12,160 --> 00:41:19,280
+interesting another thing is um language
+
+927
+00:41:15,599 --> 00:41:22,960
+models are not particularly good
+
+928
+00:41:19,280 --> 00:41:24,760
+at um they're not
+
+929
+00:41:22,960 --> 00:41:29,480
+particularly good at being consistent
+
+930
+00:41:24,760 --> 00:41:33,640
+about difficult tasks across very large
+
+931
+00:41:29,480 --> 00:41:35,319
+you know numbers of examples so if you
+
+932
+00:41:33,640 --> 00:41:37,920
+could look at like all of the data at
+
+933
+00:41:35,319 --> 00:41:41,240
+once infer general rules from it put
+
+934
+00:41:37,920 --> 00:41:43,480
+those rules in a prompt and then apply
+
+935
+00:41:41,240 --> 00:41:44,960
+that prompt to make predictions on new
+
+936
+00:41:43,480 --> 00:41:47,880
+examples you might be able to raise your
+
+937
+00:41:44,960 --> 00:41:49,760
+overall accuracy as well so it's kind of
+
+938
+00:41:47,880 --> 00:41:52,480
+like you know that's how humans learn as
+
+939
+00:41:49,760 --> 00:41:55,560
+well right we don't like just memorize
+
+940
+00:41:52,480 --> 00:41:57,400
+each example um if we just look at a few
+
+941
+00:41:55,560 --> 00:41:59,040
+examples then we might you know not
+
+942
+00:41:57,400 --> 00:42:02,560
+generalize well to new examples so we
+
+943
+00:41:59,040 --> 00:42:06,359
+kind of try to abstract away general
+
+944
+00:42:02,560 --> 00:42:08,160
+rules um so this is also similar to
+
+945
+00:42:06,359 --> 00:42:10,200
+program induction from input-output
+
+946
+00:42:08,160 --> 00:42:12,240
+examples which I talked about during the code
+
+947
+00:42:10,200 --> 00:42:14,040
+uh generation class so you have like
+
+948
+00:42:12,240 --> 00:42:16,200
+input-output examples and from them you
+
+949
+00:42:14,040 --> 00:42:18,119
+would like to come up with uh general
+
+950
+00:42:16,200 --> 00:42:19,920
+rules but this is a little bit more
+
+951
+00:42:18,119 --> 00:42:21,920
+general it doesn't necessarily need to
+
+952
+00:42:19,920 --> 00:42:24,160
+be a program that you're inducing it
+
+953
+00:42:21,920 --> 00:42:25,920
+could be you know a grammar or it could
+
+954
+00:42:24,160 --> 00:42:29,119
+be an explanation or it could be
+
+955
+00:42:25,920 --> 00:42:29,119
+anything else like this
+
+956
+00:42:30,079 --> 00:42:34,680
+um so there's a bit of work on rule
+
+957
+00:42:31,960 --> 00:42:36,800
+induction with LLMs it's pretty recent
+
+958
+00:42:34,680 --> 00:42:40,200
+work uh but I think it's pretty
+
+959
+00:42:36,800 --> 00:42:43,400
+interesting so the first one is um
+
+960
+00:42:40,200 --> 00:42:45,119
+hypothesis generation or the first step
+
+961
+00:42:43,400 --> 00:42:47,839
+um of this particular work here is
+
+962
+00:42:45,119 --> 00:42:53,280
+hypothesis generation and basically what
+
+963
+00:42:47,839 --> 00:42:55,480
+it does is it takes all of these uh you
+
+964
+00:42:53,280 --> 00:42:58,119
+know input-output examples and from
+
+965
+00:42:55,480 --> 00:43:01,680
+these input-output examples it predicts
+
+966
+00:42:58,119 --> 00:43:04,720
+these uh rules like the answer is always
+
+967
+00:43:01,680 --> 00:43:06,720
+one or uh you want to pick the smallest
+
+968
+00:43:04,720 --> 00:43:10,839
+one or you want to pick the first
+
+969
+00:43:06,720 --> 00:43:12,880
+element and then you evaluate it um and
+
+970
+00:43:10,839 --> 00:43:14,359
+so you pick the smallest one and you can
+
+971
+00:43:12,880 --> 00:43:16,040
+either evaluate it using another
+
+972
+00:43:14,359 --> 00:43:19,040
+language model or you can evaluate it
+
+973
+00:43:16,040 --> 00:43:21,280
+using uh using a symbolic
+
+974
+00:43:19,040 --> 00:43:23,359
+evaluator um if it's a program you could
+
+975
+00:43:21,280 --> 00:43:24,680
+use a symbolic evaluator if it's a
+
+976
+00:43:23,359 --> 00:43:28,559
+language model you could just ask the
+
+977
+00:43:24,680 --> 00:43:30,960
+language model to pick you know
+
+978
+00:43:28,559 --> 00:43:33,400
+an answer one always or pick the
+
+979
+00:43:30,960 --> 00:43:35,400
+smallest one or pick the first element
+
+980
+00:43:33,400 --> 00:43:37,480
+and then you get lots of outputs and
+
+981
+00:43:35,400 --> 00:43:39,240
+then when you get lots of outputs you
+
+982
+00:43:37,480 --> 00:43:42,079
+then can compare them against the
+
+983
+00:43:39,240 --> 00:43:44,559
+expected outputs and verify whether the
+
+984
+00:43:42,079 --> 00:43:47,920
+rule is correct verify whether the rule
+
+985
+00:43:44,559 --> 00:43:50,160
+gives you the appropriate answer
+
+986
+00:43:47,920 --> 00:43:53,599
+and once you've done that you can go
+
+987
+00:43:50,160 --> 00:43:56,079
+back and do hypothesis refinement um
+
+988
+00:43:53,599 --> 00:43:57,720
+and maybe even give it feedback about
+
+989
+00:43:56,079 --> 00:44:00,079
+like what was wrong
+
+990
+00:43:57,720 --> 00:44:03,280
+and gradually refine you know more
+
+991
+00:44:00,079 --> 00:44:03,280
+accurate and more complex
+
+992
+00:44:04,880 --> 00:44:11,040
+hypotheses
+
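+That generate-verify-refine loop can be sketched as follows; llm is a
+hypothetical text-in, text-out model call, the prompt wordings are assumed,
+and here the model itself plays the role of the rule executor:
+
+def llm(prompt: str) -> str:
+    raise NotImplementedError  # hypothetical LLM call
+
+def induce_rule(examples, rounds=3):
+    # examples: list of (input, expected_output) pairs
+    feedback = ""
+    rule = ""
+    for _ in range(rounds):
+        rule = llm("Propose a rule explaining these input-output examples:\n"
+                   f"{examples}\n{feedback}")
+        # Verify: apply the candidate rule to every input and compare.
+        wrong = [(x, y) for x, y in examples
+                 if llm(f"Apply the rule '{rule}' to the input: {x}").strip() != y]
+        if not wrong:
+            return rule  # verified on all examples
+        feedback = f"The previous rule '{rule}' failed on: {wrong}. Refine it."
+    return rule
+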
+993
+00:44:07,720 --> 00:44:12,760
+this is another variant of this idea um which uses different
+
+994
+00:44:11,040 --> 00:44:14,960
+methodology I think both are completely
+
+995
+00:44:12,760 --> 00:44:17,920
+valid but um this one has a little bit
+
+996
+00:44:14,960 --> 00:44:20,400
+higher data constraints so basically
+
+997
+00:44:17,920 --> 00:44:23,160
+what we do is we use hypotheses in Chain
+
+998
+00:44:20,400 --> 00:44:25,319
+of Thought reasoning and keep ones that
+
+999
+00:44:23,160 --> 00:44:28,480
+result in correct
+
+1000
+00:44:25,319 --> 00:44:30,760
+answers so
+
+1001
+00:44:28,480 --> 00:44:35,880
+uh this is the step where they're trying
+
+1002
+00:44:30,760 --> 00:44:40,440
+to induce rules and so here this says um
+
+1003
+00:44:35,880 --> 00:44:42,599
+in base 9 what is 76 + 14 and they used
+
+1004
+00:44:40,440 --> 00:44:44,079
+base 9 here obviously because if it was
+
+1005
+00:44:42,599 --> 00:44:45,520
+in base 10 the language model would just
+
+1006
+00:44:44,079 --> 00:44:48,400
+solve the problem and it's not very
+
+1007
+00:44:45,520 --> 00:44:54,319
+interesting so uh they did base 9
+
+1008
+00:44:48,400 --> 00:44:55,839
+addition and so the answer is um we have
+
+1009
+00:44:54,319 --> 00:45:00,280
+or the answer provided by the language
+
+1010
+00:44:55,839 --> 00:45:03,319
+model is we have 6 + 4 = 11 um the digit
+
+1011
+00:45:00,280 --> 00:45:07,480
+is 1 and the carry is 1 we have 7 + 1 +
+
+1012
+00:45:03,319 --> 00:45:09,480
+1 = 10 the digit is zero and the carry is one
+
+1013
+00:45:07,480 --> 00:45:13,000
+the leading digit is one so the 
answer is + +1014 +00:45:09,480 --> 00:45:15,240 +101 um and this verifies so they get the + +1015 +00:45:13,000 --> 00:45:17,240 +answer correct and so they know that + +1016 +00:45:15,240 --> 00:45:20,800 +they assume that this derivation is also + +1017 +00:45:17,240 --> 00:45:25,599 +correct and then they extract particular + +1018 +00:45:20,800 --> 00:45:28,200 +rules like 6 + 4 = 11 and 7 + 1 + 1 = 10 + +1019 +00:45:25,599 --> 00:45:30,800 +um and they add this to the rule + +1020 +00:45:28,200 --> 00:45:32,960 +Library so then the question is how do + +1021 +00:45:30,800 --> 00:45:35,000 +they extract the rules the way they + +1022 +00:45:32,960 --> 00:45:37,920 +extract the rules is they have an in + +1023 +00:45:35,000 --> 00:45:40,760 +context prompt which surrounds the rules + +1024 +00:45:37,920 --> 00:45:43,520 +by basically XML tags that says this is + +1025 +00:45:40,760 --> 00:45:46,640 +a rule that should be extracted and so + +1026 +00:45:43,520 --> 00:45:48,400 +then um anything that is in an XML tag + +1027 +00:45:46,640 --> 00:45:50,960 +they when you get the correct answer + +1028 +00:45:48,400 --> 00:45:53,440 +they extract and add that to the rule + +1029 +00:45:50,960 --> 00:45:55,680 +library and then conversely like if the + +1030 +00:45:53,440 --> 00:45:57,800 +derivation um if the answer is wrong + +1031 +00:45:55,680 --> 00:45:59,920 +they just don't add it or they add it as + +1032 +00:45:57,800 --> 00:46:01,079 +a negative example and say this is a + +1033 +00:45:59,920 --> 00:46:04,119 +incorrect + +1034 +00:46:01,079 --> 00:46:05,839 +rule um and then in the final step where + +1035 +00:46:04,119 --> 00:46:07,480 +they do deductive reasoning they can + +1036 +00:46:05,839 --> 00:46:09,119 +then go ahead and use these rules and + +1037 +00:46:07,480 --> 00:46:11,640 +improve accuracy and they demonstrate + +1038 +00:46:09,119 --> 00:46:12,960 +that that helps so basically these are + +1039 +00:46:11,640 --> 00:46:14,520 +two different approaches one is + +1040 +00:46:12,960 --> 00:46:17,400 +extracting directly from the Chain of + +1041 +00:46:14,520 --> 00:46:18,880 +Thought the other is uh a priori trying + +1042 +00:46:17,400 --> 00:46:23,760 +to generate rules from the whole rule + +1043 +00:46:18,880 --> 00:46:27,480 +base and then um then verifying them um + +1044 +00:46:23,760 --> 00:46:31,000 +notably both of these require verifiers + +1045 +00:46:27,480 --> 00:46:33,839 +um and so in some recent work which uh I + +1046 +00:46:31,000 --> 00:46:36,040 +I hope will be on archive very soon uh + +1047 +00:46:33,839 --> 00:46:38,839 +we took a look at whether language + +1048 +00:46:36,040 --> 00:46:42,800 +models themselves can verify their own + +1049 +00:46:38,839 --> 00:46:46,079 +hypothesis and um so that removes the + +1050 +00:46:42,800 --> 00:46:48,000 +symbolic verifier here um by just asking + +1051 +00:46:46,079 --> 00:46:51,480 +the language model whether the output is + +1052 +00:46:48,000 --> 00:46:53,480 +correct or not and um we found that with + +1053 +00:46:51,480 --> 00:46:55,240 +very powerful language models like gp4 + +1054 +00:46:53,480 --> 00:46:57,760 +you can actually do that as well so that + +1055 +00:46:55,240 --> 00:47:01,319 +REM removes the necess necessity to have + +1056 +00:46:57,760 --> 00:47:05,480 +a symbolic verifier in the loop as + +1057 +00:47:01,319 --> 00:47:08,200 +well cool um the reason why I wanted to + +1058 +00:47:05,480 --> 00:47:09,440 +introduce this is I don't know if like + +1059 +00:47:08,200 --> 
00:47:12,359 +like it seems like all of these have + +1060 +00:47:09,440 --> 00:47:16,359 +been applied so far on kind of very toy + +1061 +00:47:12,359 --> 00:47:19,119 +examples like you know + +1062 +00:47:16,359 --> 00:47:22,240 +um like honestly I don't really care + +1063 +00:47:19,119 --> 00:47:25,920 +about whether I can play Tetris or um + +1064 +00:47:22,240 --> 00:47:27,920 +you know uh find the largest or smallest + +1065 +00:47:25,920 --> 00:47:30,880 +number within + +1066 +00:47:27,920 --> 00:47:33,720 +um you know list or something like this + +1067 +00:47:30,880 --> 00:47:36,000 +but I think they have like really exting + +1068 +00:47:33,720 --> 00:47:38,480 +possibilities for how we could extract + +1069 +00:47:36,000 --> 00:47:40,319 +more General patterns and like use these + +1070 +00:47:38,480 --> 00:47:41,720 +to improve language model based systems + +1071 +00:47:40,319 --> 00:47:43,599 +so I think it's a really exciting + +1072 +00:47:41,720 --> 00:47:48,000 +research + +1073 +00:47:43,599 --> 00:47:51,000 +Direction um cool any questions about + +1074 +00:47:48,000 --> 00:47:51,000 +this + +1075 +00:47:54,240 --> 00:48:02,160 +yeah yeah so that's a good question + +1076 +00:47:58,160 --> 00:48:06,079 +um so I I think tool + +1077 +00:48:02,160 --> 00:48:09,359 +learning is maybe kind of a sub subset + +1078 +00:48:06,079 --> 00:48:12,319 +of this possibly like I feel like in + +1079 +00:48:09,359 --> 00:48:13,559 +tool learning you're learning functions + +1080 +00:48:12,319 --> 00:48:15,559 +that + +1081 +00:48:13,559 --> 00:48:17,559 +are I don't know if they are like good + +1082 +00:48:15,559 --> 00:48:19,680 +explanations of the data but at the very + +1083 +00:48:17,559 --> 00:48:23,119 +least they're like useful um they're + +1084 +00:48:19,680 --> 00:48:25,119 +useful rules for solving the task um so + +1085 +00:48:23,119 --> 00:48:26,880 +I I feel like they're approaching it + +1086 +00:48:25,119 --> 00:48:28,760 +from two different motivations but + +1087 +00:48:26,880 --> 00:48:30,960 +actually + +1088 +00:48:28,760 --> 00:48:33,559 +the methods that they're using are + +1089 +00:48:30,960 --> 00:48:36,240 +similar so like for example in our tool + +1090 +00:48:33,559 --> 00:48:38,559 +learning work Trove we generated like + +1091 +00:48:36,240 --> 00:48:42,240 +multiple options for tools and we kept + +1092 +00:48:38,559 --> 00:48:44,000 +the ones that had high self- consistency + +1093 +00:48:42,240 --> 00:48:46,800 +so that's kind of like the verifier step + +1094 +00:48:44,000 --> 00:48:49,040 +right and then um we threw away the ones + +1095 +00:48:46,800 --> 00:48:52,760 +that weren't useful so that helps make a + +1096 +00:48:49,040 --> 00:48:56,760 +concise rule set so + +1097 +00:48:52,760 --> 00:48:59,280 +yeah and then like could we use tools to + +1098 +00:48:56,760 --> 00:49:01,880 +[Music] + +1099 +00:48:59,280 --> 00:49:04,079 +attack kind of the more like conceptual + +1100 +00:49:01,880 --> 00:49:05,319 +reasoning stuff I I don't actually know + +1101 +00:49:04,079 --> 00:49:06,839 +uh the answer to that it's a good + +1102 +00:49:05,319 --> 00:49:10,599 +question + +1103 +00:49:06,839 --> 00:49:10,599 +yeah any any other + +1104 +00:49:11,240 --> 00:49:18,680 +things okay uh another final one that + +1105 +00:49:14,440 --> 00:49:21,680 +I'd like to introduce um this is really + +1106 +00:49:18,680 --> 00:49:23,839 +like I I really really like this paper + +1107 +00:49:21,680 --> 00:49:27,440 +um just from the point of view of its + 
+1108 +00:49:23,839 --> 00:49:29,880 +ambition and motivation um and + +1109 +00:49:27,440 --> 00:49:31,920 +the idea is that they want to learn + +1110 +00:49:29,880 --> 00:49:34,440 +differences between text + +1111 +00:49:31,920 --> 00:49:36,200 +Collections and why would you want to do + +1112 +00:49:34,440 --> 00:49:38,079 +this there's actually a ton of reasons + +1113 +00:49:36,200 --> 00:49:39,720 +why you would want to do this but the + +1114 +00:49:38,079 --> 00:49:44,720 +the best one that they give + +1115 +00:49:39,720 --> 00:49:44,720 +here is actually no sorry maybe I I + +1116 +00:49:46,440 --> 00:49:50,359 +didn't okay so this is a less + +1117 +00:49:48,480 --> 00:49:53,440 +interesting one the the more interesting + +1118 +00:49:50,359 --> 00:49:57,799 +one uh that they give in the paper is um + +1119 +00:49:53,440 --> 00:50:00,200 +examples of reports from patients who + +1120 +00:49:57,799 --> 00:50:04,200 +took an actual drug and took a + +1121 +00:50:00,200 --> 00:50:06,640 +placebo and so patients write about like + +1122 +00:50:04,200 --> 00:50:08,400 +their their symptoms or how they felt or + +1123 +00:50:06,640 --> 00:50:11,000 +they have checkups or things like that + +1124 +00:50:08,400 --> 00:50:13,839 +that are all written in natural language + +1125 +00:50:11,000 --> 00:50:16,319 +so one of the things that doctors try to + +1126 +00:50:13,839 --> 00:50:18,000 +do is they try to look at all of these + +1127 +00:50:16,319 --> 00:50:20,240 +reports and figure out if there's any + +1128 +00:50:18,000 --> 00:50:21,880 +like consistent difference between + +1129 +00:50:20,240 --> 00:50:25,079 +people who took a placebo and people who + +1130 +00:50:21,880 --> 00:50:27,359 +took an actual um actual drug and this + +1131 +00:50:25,079 --> 00:50:31,079 +is like a major part of medical trials + +1132 +00:50:27,359 --> 00:50:32,960 +right um and so the idea is like given + +1133 +00:50:31,079 --> 00:50:35,000 +all of the texts of people who took the + +1134 +00:50:32,960 --> 00:50:36,599 +drug given all the texts of people who + +1135 +00:50:35,000 --> 00:50:38,319 +of people who took the placebo could you + +1136 +00:50:36,599 --> 00:50:40,960 +automatically extract differences + +1137 +00:50:38,319 --> 00:50:45,000 +between them in some way and so the + +1138 +00:50:40,960 --> 00:50:47,760 +methodology that they use for this is um + +1139 +00:50:45,000 --> 00:50:51,359 +they have like group a uh the Manchester + +1140 +00:50:47,760 --> 00:50:53,240 +United soccer Squad welcomes Rising Star + +1141 +00:50:51,359 --> 00:50:54,599 +as Serena Williams joins the UCLA + +1142 +00:50:53,240 --> 00:50:56,920 +women's tennis roster and then you have + +1143 +00:50:54,599 --> 00:51:00,200 +like 20 more examples and then here you + +1144 +00:50:56,920 --> 00:51:03,480 +have Egypt's President uh at the African + +1145 +00:51:00,200 --> 00:51:07,200 +unit Union Summit um and other things + +1146 +00:51:03,480 --> 00:51:12,000 +like that in 20 examples uh not seen + +1147 +00:51:07,200 --> 00:51:14,359 +here and so then if I asked a question + +1148 +00:51:12,000 --> 00:51:16,359 +um the original data set includes news + +1149 +00:51:14,359 --> 00:51:18,680 +summaries the two corpora are generated + +1150 +00:51:16,359 --> 00:51:21,240 +based on when they were published uh + +1151 +00:51:18,680 --> 00:51:24,359 +samples from group a include news from + +1152 +00:51:21,240 --> 00:51:27,480 +2007 while samples from Group B include + +1153 +00:51:24,359 --> 00:51:29,000 +news from 
2008 I'm a journalist trying to
+
+1154
+00:51:27,480 --> 00:51:31,240
+understand what topics are popular
+
+1155
+00:51:29,000 --> 00:51:33,440
+across years please write a list of
+
+1156
+00:51:31,240 --> 00:51:35,280
+hypotheses separated by bullet points of
+
+1157
+00:51:33,440 --> 00:51:39,920
+how data points from group A differ from
+
+1158
+00:51:35,280 --> 00:51:42,400
+those of group B um and then formatting
+
+1159
+00:51:39,920 --> 00:51:44,160
+information
+
+1160
+00:51:42,400 --> 00:51:46,960
+um
+
+1161
+00:51:44,160 --> 00:51:49,680
+and so based on the two sentence groups
+
+1162
+00:51:46,960 --> 00:51:50,559
+A and B from the above more sentences in
+
+1163
+00:51:49,680 --> 00:51:53,400
+group
+
+1164
+00:51:50,559 --> 00:51:55,240
+A mention a sports team or mention
+
+1165
+00:51:53,400 --> 00:51:57,319
+academic relations or things like that
+
+1166
+00:51:55,240 --> 00:51:58,599
+and so what this allows you to do is it
+
+1167
+00:51:57,319 --> 00:52:00,319
+allows you to come up with a whole bunch
+
+1168
+00:51:58,599 --> 00:52:01,400
+of hypotheses about why one might be
+
+1169
+00:52:00,319 --> 00:52:04,920
+better than the
+
+1170
+00:52:01,400 --> 00:52:08,920
+other so the problem with this though is
+
+1171
+00:52:04,920 --> 00:52:10,880
+like because of language model you know
+
+1172
+00:52:08,920 --> 00:52:13,440
+limits number one they might just
+
+1173
+00:52:10,880 --> 00:52:17,119
+hallucinate things and be totally wrong
+
+1174
+00:52:13,440 --> 00:52:19,680
+um number two
+
+1175
+00:52:17,119 --> 00:52:21,040
+the size of the context that they can
+
+1176
+00:52:19,680 --> 00:52:23,960
+take into account when making this
+
+1177
+00:52:21,040 --> 00:52:26,720
+decision is relatively small so the next
+
+1178
+00:52:23,960 --> 00:52:29,280
+thing that they do is then they have a
+
+1179
+00:52:26,720 --> 00:52:32,119
+much larger corpus of
+
+1180
+00:52:29,280 --> 00:52:33,200
+text um with like a thousand examples or
+
+1181
+00:52:32,119 --> 00:52:36,640
+something like
+
+1182
+00:52:33,200 --> 00:52:40,240
+this and then they treat each of these
+
+1183
+00:52:36,640 --> 00:52:42,680
+hypotheses as a
+
+1184
+00:52:40,240 --> 00:52:44,559
+classifier and then they go through all
+
+1185
+00:52:42,680 --> 00:52:47,480
+of the examples from corpus one which is
+
+1186
+00:52:44,559 --> 00:52:50,480
+like maybe year 2000 and then
+
+1187
+00:52:47,480 --> 00:52:52,079
+corpus two which is year 2008 and they ask
+
+1188
+00:52:50,480 --> 00:52:55,880
+the language model with respect to all
+
+1189
+00:52:52,079 --> 00:52:58,119
+of them um does this sentence mention a
+
+1190
+00:52:55,880 --> 00:53:01,400
+sports team recruiting a new
+
+1191
+00:52:58,119 --> 00:53:04,839
+member um and so you get a
+
+1192
+00:53:01,400 --> 00:53:04,839
+classification for each one of
+
+1193
+00:53:12,359 --> 00:53:17,440
+these and you get a certain number of
+
+1194
+00:53:14,520 --> 00:53:18,799
+ones and zeros and so once you have a
+
+1195
+00:53:17,440 --> 00:53:20,839
+certain number of ones and zeros what's
+
+1196
+00:53:18,799 --> 00:53:24,079
+the next thing that you would do
+
+1197
+00:53:20,839 --> 00:53:24,079
+here any
+
+1198
+00:53:24,880 --> 00:53:30,599
+ideas how do you tell there's like
+
+1199
+00:53:27,359 --> 00:53:30,599
+actually a difference between these
+
+1200
+00:53:36,520 --> 00:53:43,319
+two uh between two sets
+
+1201
+00:53:39,319 --> 00:53:45,920
+of numbers like one and
+
+1202
+00:53:43,319 --> 00:53:48,680
+zero a hint is you probably had to do
+
+1203
+00:53:45,920 --> 00:53:48,680
+this for assignment
+
+1204
+00:53:53,720 --> 00:53:58,520
+two yeah
+
+1205
+00:53:56,799 --> 00:54:01,200
+yeah exactly you do a significance
+
+1206
+00:53:58,520 --> 00:54:04,200
+test between the two and so um what you
+
+1207
+00:54:01,200 --> 00:54:06,440
+can then do is you have lots of
+
+1208
+00:54:04,200 --> 00:54:08,839
+hypotheses you have lots of significance
+
+1209
+00:54:06,440 --> 00:54:11,040
+values you can order them by the
+
+1210
+00:54:08,839 --> 00:54:13,839
+significance value and say the most
+
+1211
+00:54:11,040 --> 00:54:17,559
+significant or uh the difference with
+
+1212
+00:54:13,839 --> 00:54:19,160
+the like lowest p-value between them is
+
+1213
+00:54:17,559 --> 00:54:20,480
+the one that's most likely to be an
+
+1214
+00:54:19,160 --> 00:54:26,520
+actual difference between the two and
+
+1215
+00:54:20,480 --> 00:54:29,079
+you can find um like uh the news in 2007
+
+1216
+00:54:26,520 --> 00:54:32,520
+indeed tended to talk about X more than
+
+1217
+00:54:29,079 --> 00:54:34,559
+other things
+
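+For two sets of binary labels like these, the test can be as simple as the
+following sketch (the counts are made-up stand-ins for the labels the
+language model assigned to samples from each corpus):
+
+from scipy.stats import fisher_exact
+
+corpus_a = [1, 1, 0, 1, 1, 0, 1, 1]  # e.g. hypothesis holds on 2007 news
+corpus_b = [0, 0, 1, 0, 0, 0, 1, 0]  # same hypothesis applied to 2008 news
+
+table = [[sum(corpus_a), len(corpus_a) - sum(corpus_a)],
+         [sum(corpus_b), len(corpus_b) - sum(corpus_b)]]
+_, p_value = fisher_exact(table)
+print(p_value)  # rank all hypotheses by p-value; lowest is most reliable
+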
+1218
+00:54:32,520 --> 00:54:36,079
+so I actually used this in one of my
+
+1219
+00:54:34,559 --> 00:54:39,520
+unrelated projects where I wanted to
+
+1220
+00:54:36,079 --> 00:54:42,680
+find the difference between um sentences
+
+1221
+00:54:39,520 --> 00:54:45,640
+where language models
+
+1222
+00:54:42,680 --> 00:54:47,839
+aligned well with human brain signals and
+
+1223
+00:54:45,640 --> 00:54:49,760
+sentences where language models didn't
+
+1224
+00:54:47,839 --> 00:54:52,559
+align well with human brain signals so
+
+1225
+00:54:49,760 --> 00:54:53,799
+we had some data of human brain
+
+1226
+00:54:52,559 --> 00:54:56,880
+signals and we had a measure of
+
+1227
+00:54:53,799 --> 00:54:58,240
+alignment um on each sentence and it
+
+1228
+00:54:56,880 --> 00:55:01,799
+actually found some pretty interesting
+
+1229
+00:54:58,240 --> 00:55:03,359
+hypotheses like um language models
+
+1230
+00:55:01,799 --> 00:55:06,200
+tend to align less well with human brain
+
+1231
+00:55:03,359 --> 00:55:07,319
+signals on metaphorical language or
+
+1232
+00:55:06,200 --> 00:55:10,599
+language that had to do with
+
+1233
+00:55:07,319 --> 00:55:11,799
+interpersonal relations or um other
+
+1234
+00:55:10,599 --> 00:55:15,200
+things like that and then we actually
+
+1235
+00:55:11,799 --> 00:55:17,559
+went in and pursued um you know these to
+
+1236
+00:55:15,200 --> 00:55:21,000
+examine them further and uh we didn't
+
+1237
+00:55:17,559 --> 00:55:22,680
+entirely rely on this um you know like
+
+1238
+00:55:21,000 --> 00:55:25,160
+significance test because I didn't quite
+
+1239
+00:55:22,680 --> 00:55:26,880
+trust language models that much to like
+
+1240
+00:55:25,160 --> 00:55:28,559
+shape my entire
+
+1241
+00:55:26,880 --> 00:55:29,880
+research agenda around them but we came
+
+1242
+00:55:28,559 --> 00:55:31,720
+up with other ways to measure it and
+
+1243
+00:55:29,880 --> 00:55:35,000
+some of the things checked out some of
+
+1244
+00:55:31,720 --> 00:55:36,799
+the things didn't check out so um again
+
+1245
+00:55:35,000 --> 00:55:38,760
+I think this general direction of like
+
+1246
+00:55:36,799 --> 00:55:41,720
+how can language models help us answer
+
+1247
+00:55:38,760 --> 00:55:43,760
+you know uh complex research questions
+
+1248
+00:55:41,720 --> 00:55:45,480
+that we wouldn't be able to easily or + +1249 +00:55:43,760 --> 00:55:47,960 +very efficiently that would require + +1250 +00:55:45,480 --> 00:55:52,200 +normally humans annotating lots of data + +1251 +00:55:47,960 --> 00:55:56,839 +is um an interesting topic as + +1252 +00:55:52,200 --> 00:55:56,839 +well cool um \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (21) Complex Reasoning/transcript.vtt b/CMU Advanced NLP 2024 (21) Complex Reasoning/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..7ef813cca638c5d207c0b8c7df2e74d59ce2fd17 --- /dev/null +++ b/CMU Advanced NLP 2024 (21) Complex Reasoning/transcript.vtt @@ -0,0 +1,3757 @@ +WEBVTT + +00:00:00.280 --> 00:00:05.120 +so I'd like to go ahead with uh complex + +00:00:02.399 --> 00:00:08.719 +reasoning and we've talked a little bit + +00:00:05.120 --> 00:00:10.719 +about uh reasoning in language models uh + +00:00:08.719 --> 00:00:12.160 +up until now and so I'm going to be + +00:00:10.719 --> 00:00:15.280 +talking about stuff that we didn't talk + +00:00:12.160 --> 00:00:17.240 +about yet um this might be a little bit + +00:00:15.280 --> 00:00:19.199 +short because of that because I'm not + +00:00:17.240 --> 00:00:20.640 +talking about like programs because we + +00:00:19.199 --> 00:00:22.080 +talked about that in the code generation + +00:00:20.640 --> 00:00:24.199 +class and we already talked a little bit + +00:00:22.080 --> 00:00:26.320 +about some of the basics here but um you + +00:00:24.199 --> 00:00:30.119 +know if we have time at the end I'd be + +00:00:26.320 --> 00:00:30.840 +happy to discuss free form also so what + +00:00:30.119 --> 00:00:34.320 +is + +00:00:30.840 --> 00:00:35.920 +reasoning um the basic idea is using + +00:00:34.320 --> 00:00:37.680 +evidence and logic to arrive at + +00:00:35.920 --> 00:00:40.200 +conclusions and make + +00:00:37.680 --> 00:00:43.760 +judgments + +00:00:40.200 --> 00:00:48.039 +and what is it in language models is a + +00:00:43.760 --> 00:00:49.399 +little bit um you know less clear uh but + +00:00:48.039 --> 00:00:52.680 +if we talk about it kind of like from + +00:00:49.399 --> 00:00:56.280 +the philosophical standpoint um there + +00:00:52.680 --> 00:00:58.399 +are two varieties of this one is formal + +00:00:56.280 --> 00:01:01.680 +uh reasoning and formal reasoning is + +00:00:58.399 --> 00:01:04.239 +mostly based on strict truth values so + +00:01:01.680 --> 00:01:05.920 +it's kind of like um you can definitely + +00:01:04.239 --> 00:01:08.360 +say this is true you can definitely say + +00:01:05.920 --> 00:01:11.680 +this is not true + +00:01:08.360 --> 00:01:13.799 +and in real life there's very little + +00:01:11.680 --> 00:01:15.759 +actual formal reasoning outside of like + +00:01:13.799 --> 00:01:17.960 +for example mathematics or maybe you + +00:01:15.759 --> 00:01:20.240 +know algorithms computer science and + +00:01:17.960 --> 00:01:21.759 +other things like that um and then + +00:01:20.240 --> 00:01:23.240 +separately from that we have informal + +00:01:21.759 --> 00:01:27.040 +reasoning based on experience and + +00:01:23.240 --> 00:01:30.439 +intuition and actually um this is this + +00:01:27.040 --> 00:01:32.360 +was uh rather elusive uh until + +00:01:30.439 --> 00:01:33.720 +large language models you know people + +00:01:32.360 --> 00:01:35.560 +were working on it but it was really + +00:01:33.720 --> 00:01:38.119 +hard and this is like one of the big + +00:01:35.560 --> 00:01:41.479 +breakthroughs I think of the past 
few + +00:01:38.119 --> 00:01:46.799 +years um I should note that this uh + +00:01:41.479 --> 00:01:48.520 +paper here uh hang and Chan is a kind of + +00:01:46.799 --> 00:01:50.119 +review survey paper of reasoning in + +00:01:48.520 --> 00:01:51.520 +large language models it's on the + +00:01:50.119 --> 00:01:54.719 +references so if you're interested you + +00:01:51.520 --> 00:01:57.600 +can take a look at that too um but + +00:01:54.719 --> 00:01:59.200 +there's three kinds of reasoning uh + +00:01:57.600 --> 00:02:00.840 +there's many kinds of reasoning but + +00:01:59.200 --> 00:02:03.280 +there's three kinds of reasoning in + +00:02:00.840 --> 00:02:06.240 +particular that I'd like to talk about + +00:02:03.280 --> 00:02:08.840 +um from the point of view of today and + +00:02:06.240 --> 00:02:10.360 +the first one is uh deductive reasoning + +00:02:08.840 --> 00:02:13.080 +and deductive reasoning is basically + +00:02:10.360 --> 00:02:16.040 +using logic to go from a premise to a + +00:02:13.080 --> 00:02:18.440 +conclusion and this is largely what + +00:02:16.040 --> 00:02:19.879 +people not entirely but largely what + +00:02:18.440 --> 00:02:22.400 +people talk about when they think about + +00:02:19.879 --> 00:02:25.879 +formal reasoning and so basically you + +00:02:22.400 --> 00:02:28.640 +have several premises um like all + +00:02:25.879 --> 00:02:32.120 +mammals have kidneys and all whales are + +00:02:28.640 --> 00:02:35.239 +mammals and then from this uh you can go + +00:02:32.120 --> 00:02:35.239 +to all whales have + +00:02:35.440 --> 00:02:40.640 +kidneys then separately there's + +00:02:38.000 --> 00:02:44.040 +inductive reasoning and inductive + +00:02:40.640 --> 00:02:46.040 +reasoning is um from + +00:02:44.040 --> 00:02:48.480 +observation uh predict a likely + +00:02:46.040 --> 00:02:50.080 +conclusion or predict a likely kind of + +00:02:48.480 --> 00:02:53.640 +generalized + +00:02:50.080 --> 00:02:55.360 +conclusion um so this is one example uh + +00:02:53.640 --> 00:02:56.920 +when we see a creature with wings it is + +00:02:55.360 --> 00:02:58.599 +usually a bird we see a creature with + +00:02:56.920 --> 00:03:00.400 +wings the creature is likely to be a + +00:02:58.599 --> 00:03:02.879 +bird so it's kind of this is kind of + +00:03:00.400 --> 00:03:05.319 +like a soft version of deduction another + +00:03:02.879 --> 00:03:07.440 +common thing is like every single + +00:03:05.319 --> 00:03:10.760 +creature I have seen with wings is a + +00:03:07.440 --> 00:03:12.480 +bird and then you can kind of um induce + +00:03:10.760 --> 00:03:16.799 +that all + +00:03:12.480 --> 00:03:19.159 +uh like all uh creatures with wings are + +00:03:16.799 --> 00:03:21.120 +birds but that might not be true it's + +00:03:19.159 --> 00:03:23.879 +not necessarily logically entailed but + +00:03:21.120 --> 00:03:27.560 +you you make that kind + +00:03:23.879 --> 00:03:31.000 +of logical conclusion uh without it + +00:03:27.560 --> 00:03:32.840 +being formally uh correct or verif + +00:03:31.000 --> 00:03:34.720 +and then the final one is abductive + +00:03:32.840 --> 00:03:38.000 +reasoning and so this is from an + +00:03:34.720 --> 00:03:40.760 +observation we predict the most likely + +00:03:38.000 --> 00:03:42.760 +explanation and so for example if we + +00:03:40.760 --> 00:03:44.480 +have something like the car cannot start + +00:03:42.760 --> 00:03:48.319 +and there is a puddle of liquid under + +00:03:44.480 --> 00:03:50.200 +the engine um then we might have a + +00:03:48.319 
--> 00:03:53.360
+likely explanation that the car has a
+
+00:03:50.200 --> 00:03:55.280
+leak in the radiator so we're going from
+
+00:03:53.360 --> 00:03:58.760
+kind of uh the
+
+00:03:55.280 --> 00:04:00.879
+car you know these things and then
+
+00:03:58.760 --> 00:04:02.280
+we try to predict the reason why this
+
+00:04:00.879 --> 00:04:05.040
+happens so we're trying to predict like
+
+00:04:02.280 --> 00:04:07.360
+reverse causal links
+
+00:04:05.040 --> 00:04:08.480
+essentially um there's other types of
+
+00:04:07.360 --> 00:04:10.400
+reasoning that I'm not going to talk
+
+00:04:08.480 --> 00:04:12.159
+about as much like analogical reasoning
+
+00:04:10.400 --> 00:04:14.079
+and things like this but uh these
+
+00:04:12.159 --> 00:04:15.440
+are the three main ones I want to talk
+
+00:04:14.079 --> 00:04:17.720
+about
+
+00:04:15.440 --> 00:04:22.040
+today uh one thing I should point out is
+
+00:04:17.720 --> 00:04:24.400
+like even in philosophy or you know
+
+00:04:22.040 --> 00:04:26.240
+like even when you read descriptions
+
+00:04:24.400 --> 00:04:29.280
+about these various types of reasoning
+
+00:04:26.240 --> 00:04:31.880
+the types are a little bit vague so um
+
+00:04:29.280 --> 00:04:35.280
+take these as like
+
+00:04:31.880 --> 00:04:37.240
+general uh you know general directions
+
+00:04:35.280 --> 00:04:39.400
+and not strict rules because like which
+
+00:04:37.240 --> 00:04:42.120
+falls under which category also can
+
+00:04:39.400 --> 00:04:44.880
+be a little bit uh you know unclear uh
+
+00:04:42.120 --> 00:04:44.880
+according to various
+
+00:04:45.479 --> 00:04:53.440
+definitions cool um so first before
+
+00:04:49.840 --> 00:04:55.720
+getting into formal reasoning methods
+
+00:04:53.440 --> 00:04:57.759
+or before getting into the bulk of the
+
+00:04:55.720 --> 00:05:00.000
+talk which is going to be about LLMs I
+
+00:04:57.759 --> 00:05:02.479
+want to talk about some pre-LM reasoning
+
+00:05:00.000 --> 00:05:03.720
+methods and the first one is kind of
+
+00:05:02.479 --> 00:05:05.160
+like formal reasoning within
+
+00:05:03.720 --> 00:05:07.320
+computational
+
+00:05:05.160 --> 00:05:09.840
+semantics and this has been around for a
+
+00:05:07.320 --> 00:05:12.479
+really long time um it's also kind of
+
+00:05:09.840 --> 00:05:15.000
+what powered the things that worked over
+
+00:05:12.479 --> 00:05:21.039
+knowledge bases and other things like
+
+00:05:15.000 --> 00:05:23.639
+this um and the way it works is it does
+
+00:05:21.039 --> 00:05:27.600
+derivational um
+
+00:05:23.639 --> 00:05:31.800
+reasoning by uh sorry I can't read that
+
+00:05:27.600 --> 00:05:34.720
+in the back um by starting out with
+
+00:05:31.800 --> 00:05:36.080
+certain premises and getting to um
+
+00:05:34.720 --> 00:05:40.000
+getting to final
+
+00:05:36.080 --> 00:05:43.039
+conclusions so there's ways that you can
+
+00:05:40.000 --> 00:05:44.060
+write this I think you might have
+
+00:05:43.039 --> 00:05:47.080
+seen
+
+00:05:47.080 --> 00:05:54.240
+um you might have seen
+
+00:05:50.479 --> 00:05:58.319
+uh this in uh another like math class or
+
+00:05:54.240 --> 00:06:02.440
+something but uh we have symbols like
+
+00:05:58.319 --> 00:06:02.440
+all and um
+
+00:06:03.039 --> 00:06:08.280
+exist let's
+
+00:06:04.960 --> 00:06:10.960
+see yeah we have things like all and
+
+00:06:08.280 --> 00:06:13.319
+exist and like all
+
+00:06:10.960 --> 00:06:16.240
+x
+
+00:06:13.319 --> 00:06:20.479
+die(x) means
+
+00:06:16.240 --> 00:06:23.919
+like everything has died and this
+
+00:06:20.479 --> 00:06:27.360
+uh implies that Mia and Zed have
+
+00:06:23.919 --> 00:06:30.440
+died um
+
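+Written out in logical notation (a reconstruction of the board example from
+the spoken description, in the style of Blackburn and Bos), that inference is:
+
+\forall x\, \mathit{die}(x) \;\vdash\; \mathit{die}(\mathit{mia}) \land \mathit{die}(\mathit{zed})
+
+and the whales-and-kidneys example from earlier in the lecture has the same shape:
+
+\forall x\,(\mathit{mammal}(x) \to \mathit{have}(x,\mathit{kidney})),\ \forall x\,(\mathit{whale}(x) \to \mathit{mammal}(x)) \;\vdash\; \forall x\,(\mathit{whale}(x) \to \mathit{have}(x,\mathit{kidney}))
+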
means that everything dies, and from it you can derive that Mia and Zed have died: die(mia) ∧ die(zed). Actually, maybe I won't go through that one; let me go through this one instead. It would be something like ∀x. mammal(x) ⇒ has_kidney(x): for every x, if x is a mammal, then x has a kidney. Then you would have other rules, and you can go through derivations and other things like that. My favorite reference for this is the Blackburn and Bos book shown here; it's really well written, it has lots of good examples, and it explains how to go through derivations and other things like that. Neural networks can do this variety of reasoning, roughly, through chain of thought and the other methods I'm going to talk about today, but it's a very rough approximation, and it doesn't work particularly well for statements that apply to all people, or to sets, or other things like that. Within Prolog, by contrast, you could take a knowledge base and ask it, "Do all people who work at CMU as professors have a PhD?" and you could actually examine that based on the knowledge base. Even a language model that had access to everybody's CV wouldn't necessarily be able to answer that question, and it especially wouldn't be able to answer it if multiple steps were required, say, "Did all the people working at CMU get their PhD after 1990?" The answer to that is obviously no, but the point is that a symbolic system would be able to find the counterevidence, whereas LLMs are not guaranteed to.
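To make that concrete, here is a minimal sketch, in Python rather than actual Prolog, of how a symbolic system answers a universally quantified query: it exhaustively searches the knowledge base for a counterexample. The facts, names, and helper functions here are all made up for illustration.

```python
# Hypothetical toy knowledge base; a real system (e.g., SWI-Prolog) would
# answer with unification and backtracking rather than a Python loop.
facts = {
    ("professor_at", "alice", "cmu"),
    ("professor_at", "bob", "cmu"),
    ("phd_year", "alice", 1995),
    ("phd_year", "bob", 1987),
}

def professors_at(org):
    return [x for (rel, x, o) in facts if rel == "professor_at" and o == org]

def phd_year(person):
    for (rel, x, y) in facts:
        if rel == "phd_year" and x == person:
            return y
    return None

# "Did all people working at CMU get their PhD after 1990?"
# A universally quantified claim is checked by searching for counterexamples.
counterexamples = [p for p in professors_at("cmu")
                   if (y := phd_year(p)) is not None and y <= 1990]
print(counterexamples or "yes, everyone got their PhD after 1990")  # ['bob']
```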
So I think this is a really nice thing to know about, but there are a couple of problems with it. The first is that it only traffics in strictly true or strictly false statements, and that's a really big issue: if anything is soft, this sort of formal reasoning starts breaking down. The second, which is actually a really big problem in practice, is that once you start dealing with more complex things, there are always exceptions, and exceptions to the exceptions, that you never formalized, and it becomes very computationally expensive to prove anything non-trivial. Because of that, I'm not actually going to cover it further in the lecture today. Recently, though, there are search algorithms over proof spaces that use neural models to speed up the search by picking the best and most promising hypotheses. For example, Sean Welleck here at CMU is working on that for neural theorem proving: you have mathematical theorem proving, and you use a neural network to pick the best paths through the logical operations. So that's a combination of the more classical and the modern methods.

Another thing that's useful to talk about, which isn't very popular right now but which I think might become more popular as we start hitting the limits of what we can fit into long context windows for neural models, is memory networks. Basically, the way memory networks work is that they have the ability to read from and write to an external memory. The figure here is a little bit complex, but basically you have a query, and then you
get the embedding of the query, take the inner product between it and the embeddings of the memory slots, and take the softmax of the inner products, so this looks like attention. You then look up the slot embeddings and take their weighted sum, which gives you a summary of the memory. So the read operation is basically attention over a big memory base. But memory networks also have the ability to go in and update the memory: they have write operations, so you can both read from and write to the memory base. The reason I say this might become more popular is that one of the big issues with large language models nowadays is that they don't get to continually update their memory. One way you can do that is to just keep adding text to the context, but there are certain limits to that: text isn't necessarily the best way to encode all of the things you've seen in the past. So I feel that how to pin these sorts of architectures onto language models might be an interesting research direction for the future.
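As a rough picture of the mechanics, here is a minimal numpy sketch of a single-hop read plus a gated write. The shapes, the single-hop setup, and the gating rule are simplifying assumptions of mine, not the design of any specific memory-network paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_slots = 64, 128
memory = rng.normal(size=(n_slots, d))  # one embedding per memory slot

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def read(query):
    # Inner product of the query with every slot, softmax to get attention
    # weights, then a weighted sum of the slot embeddings: attention over
    # a big memory base.
    weights = softmax(memory @ query)  # shape (n_slots,)
    return weights @ memory            # shape (d,), a summary of the memory

def write(slot, value, gate=0.5):
    # The write operation: blend new information into an existing slot.
    memory[slot] = (1 - gate) * memory[slot] + gate * value

summary = read(rng.normal(size=d))
write(3, rng.normal(size=d))
```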
Another thing, which I'm not going to talk about very much because we already covered it in the code generation class, but which has actually been around for a while, is solving questions with symbolic reasoning. The way it works is that you have a text, and based on the text you can run symbolic operations like find, filter, find-max-number, and relocate. This explicitly manipulates the attention, and you can do things like filtering down to find the largest number. This is kind of interesting because some of the things neural networks are bad at are exactly these: finding the largest number in a big data set, or finding all of the items where some condition applies and throwing out all of the items where it doesn't. Again, this isn't used very widely with large language models right now, because I feel people have been focusing on prompting techniques to do this sort of reasoning, but I think it's another thing worth taking another close look at, to see whether there are ways to incorporate it into current models.

Basically, what I wanted to say is that all of the things I decided to introduce in this section are things current models are still not particularly good at: reasoning over sets of inputs in many steps, reading from and writing to memory so you can remember things over long periods, and filtering large pieces of text down into smaller pieces to find the relevant information. So if any of those sound interesting, take a look. After this, I'd like to go into the main event and talk about the things people are actually using a lot now. Any questions about these three?

Okay, cool. So now I'd like to go into chain of thought and its variants. I've actually already talked about chain of thought, in fact we've mentioned it a couple of times, but just to remind everybody, the basic idea is this: compared to standard prompting, where the prompt contains a question and an answer, in chain of thought we have a question and then a derivation leading to the answer, like "Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11." You add this to the prompt, and by adding it you get the model to also produce these derivations at test time, which greatly improves some tasks, namely tasks where we can't immediately predict the answer directly. I also previously talked about zero-shot chain-of-thought reasoning, where we just prompt the model with something like "let's think step by step," and the model becomes able to do this chain-of-thought reasoning without any demonstrations.
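In prompt form, the contrast looks roughly like the sketch below; `generate` is a hypothetical stand-in for whatever completion API you use, not a real library call.

```python
standard = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis
balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: The answer is 11.

Q: {question}
A:"""

# Same demonstration, but with the derivation spelled out before the answer.
chain_of_thought = """Q: Roger has 5 tennis balls. He buys 2 more cans of
tennis balls. Each can has 3 tennis balls. How many tennis balls does he
have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis
balls. 5 + 6 = 11. The answer is 11.

Q: {question}
A:"""

# Zero-shot chain of thought: no demonstrations, just a trigger phrase.
zero_shot_cot = "Q: {question}\nA: Let's think step by step."

# answer = generate(chain_of_thought.format(question="..."))
```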
Okay, so that was review, and now I'd like to talk about some of the more advanced methods people use for reasoning as well. This is by no means an exhaustive list; these are just the ones I found interesting, so if you know other ones you'd like to bring up or introduce to the class, I'd be happy to hear which ones you like or have heard about.

The first one is self-ask. One of the issues with large language models nowadays is that they're not very good at asking follow-up questions, or maybe not that they're not good at it, but that they're not trained to do it. If you play around with ChatGPT, I have never had ChatGPT ask me a follow-up question. I don't think it's because large language models aren't capable of doing it; OpenAI must think it's a bad user experience to have a language model that asks you follow-up questions. That's the only reason I can think of. What self-ask does is explicitly prompt the model to ask whether follow-up questions are needed. Here's the example on the left: the question is "Who lived longer, Theodor Haecker or Harry Vaughan Watkins?" The model first says "Are follow-up questions needed here: yes." Then the follow-up is "How old was Theodor Haecker when he died?" with the intermediate answer "Theodor Haecker was 65 years old," and then "How old was Harry Vaughan Watkins when he died?"
"Harry Vaughan Watkins was 69 years old." So the final answer is Harry Vaughan Watkins. In this particular paper, this is just another variety of chain of thought: it's not using the follow-ups to incorporate any external information or anything like that; it's just trying to elicit information from the model more directly. Nonetheless, they demonstrate that it's useful. There are also other methods that actually try to look up information explicitly to answer these follow-up questions, and those are even more powerful than what we have here.

So that's what I'd like to introduce next. Basically, the idea is a method that, instead of just doing chain of thought, retrieves relevant sentences while you're doing the chain of thought. Here we have "Are follow-ups needed here: yes," and then this is the follow-up, but if the model itself doesn't know how old somebody was when they died, it won't be able to answer. What they do to make this work is BM25-based retrieval over Wikipedia for each of the chain-of-thought follow-ups, and then they use the retrieved documents (I think it's something like 10 documents) to prompt the model to continue its chain of thought. So this is another variety of thing you can do to improve chain-of-thought reasoning. Cool.
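Putting the two ideas together, here is a sketch of a self-ask loop with retrieval plugged in at the intermediate-answer step. `generate` and `bm25_search` are hypothetical stand-ins for an LLM call and a BM25 index over Wikipedia, and the control flow and stop strings are my assumptions, not the exact recipe from either paper.

```python
def self_ask_with_retrieval(question, generate, bm25_search, max_hops=4):
    prompt = f"Question: {question}\nAre follow up questions needed here:"
    for _ in range(max_hops):
        step = generate(prompt, stop=["Intermediate answer:"])
        prompt += step
        if "Follow up:" not in step:
            break  # the model went straight to the final answer
        follow_up = step.split("Follow up:")[-1].strip()
        # Answer the follow-up from retrieved evidence instead of relying
        # on the model's parametric knowledge alone.
        context = "\n".join(bm25_search(follow_up, k=10))
        answer = generate(f"{context}\n\nQ: {follow_up}\nA:")
        prompt += f"\nIntermediate answer: {answer}\n"
    return prompt.split("So the final answer is:")[-1].strip()
```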
Then another one that I'd like to talk about is multilingual chain-of-thought reasoning. I'm going to be talking more about multilingual things in the multilingual class in a week, but the interesting thing about multilingual chain of thought is that we have a design decision. Do we want to just answer questions in the language we are asking questions in, so if I ask a question in Japanese, the model goes through the whole chain-of-thought process in Japanese and then answers my question in Japanese? Or do we want it to somehow go through English, because the model has been trained on lots of English and that may be a better way to take advantage of its reasoning capabilities? Does anyone have an idea about the answer? Who thinks it's better to do it entirely in the language the question is asked in? And who thinks it's better to do something in English?

Okay, so basically the answer is: do it in English. It might be a little bit dependent on the language, but for all the languages they tested, that's essentially the conclusion they came to, and it's pretty stark in this particular paper. This might change a little with more powerful models, but I would still be very surprised if it didn't hold. You can see it's approximately a seven-point increase in the results on average. And just to be clear about the conditions here: "native" chain of thought does the chain of thought in the language itself; the next condition does the chain of thought in English but then answers in the language itself; and the last one just translates everything into English. You can try this out too: if you speak another language, try it yourself. When I try it in Japanese, it's very clear that the model seems more intelligent in English; it just seems like it can do more, even though intelligence shouldn't be a function of the language you ask the question in, and the model should have the ability to answer regardless. That's how humans work, right? Our intelligence is kind of separate from our language; how well we can express ourselves is a little bit different.
There was a question: for the final evaluation, were the answers translated back to the original language and then evaluated, for the translate-to-English condition? I'm not 100% sure about this; I think they were not, so that might be a confounding factor for that condition, but it's not a confounding factor for the English-chain-of-thought condition. Any other questions? Okay, cool. So this is a pretty interesting result.

The next series of results I'm going to talk about are based on the quality of the reasoning chains the model uses in chain of thought, and this one is a simple heuristic for improving that quality. One thing I should mention is that the quality of the reasoning chain is definitely connected to the quality of the output, even though that's not logically necessary: a model could state a whole bunch of false intermediate steps and still answer correctly. Actually, maybe I'll skip this slide and explain this one next; sorry, the ordering of the explanation here is a little bit awkward. So, very quickly: there are two ways you could arrange the reasoning. One is to produce an explanation first and then predict the answer; the other is to predict the answer first and then give the explanation. In general, if you have a reasonably strong model, any of the modern frontier-level models right now, doing the explanation first and then making the prediction is better. The reason is that chain of thought works: the model is able to break the question down into simpler sub-questions,
for mathematical reasoning or something like that, and then give the answer. For example, for text-davinci-002, which was state of the art at the time of that paper's writing, you see a five-point accuracy boost from doing the explanation first and the prediction after. For the weaker models that was not the case: if you were using a GPT-3 variant that wasn't trained for chain of thought, or using OPT, it didn't help. But nowadays, for basically all models, doing the explanation first and then the prediction is better.

Going back: another thing people have noticed is that if your explanation is wrong, your prediction also tends to be wrong; mistakes in the intermediate steps of the explanation tend to mess up the final prediction. One of the interesting ways people have found to improve explanation quality is the simple observation that longer explanations tend to be better. It's kind of interesting: chains with more reasoning steps tend to be more accurate, and they actually demonstrate that in this paper. Here is a simple reasoning chain and here is a more complex reasoning chain for exactly the same problem, and the complex one gets about a 15% boost. These are naturally occurring reasoning chains; they didn't train the model to give longer chains or anything like that. But among the naturally occurring chains, the longer ones tend to be better, and this fact can be used directly to improve accuracy. The way they did it is that they sampled multiple reasoning paths and then performed self-consistency over only the longer reasoning paths.
If you remember what self-consistency is, it's basically majority voting over the answers from multiple reasoning paths. Here they threw out the lower-quality, shorter paths before voting, and that improved overall accuracy. So that's a thing you can do.
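As a sketch, length-filtered self-consistency is only a few lines; `sample_chain` is a hypothetical function returning a (steps, answer) pair from one sampled chain of thought.

```python
from collections import Counter

def length_filtered_self_consistency(question, sample_chain, n=10):
    # Sample several reasoning chains, keep the longer (and, empirically,
    # better) half, and majority-vote over the surviving final answers.
    chains = [sample_chain(question) for _ in range(n)]
    chains.sort(key=lambda c: len(c[0]), reverse=True)  # longest chains first
    kept = chains[: n // 2]
    votes = Counter(answer for _, answer in kept)
    return votes.most_common(1)[0][0]
```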
So, going back to systematic studies of reasoning in LLMs: one of the big results that is really important to know about is that this sort of chain-of-thought reasoning is considered an emergent ability in large language models. What the name "emergent ability" typically refers to is something that increases dramatically once the model size gets up to a certain point. (I'm really sorry I cut off the legend at the bottom of this figure. OpenAI does this all the time, not telling you how many parameters their models have, but I did not do it intentionally; it's in the paper. The ones over here are roughly the 175-billion-parameter models and larger.) What you see is that up until a certain size you get basically zero accuracy, and then the outputs improve. For a while people were really confused about this: why does it happen? It feels like magic that you get a really powerful model and then suddenly it gets better right at the end. But there's a much simpler explanation; there's not that much magic to it. We've known about this for a little while, but a paper from 2023 expressed it very clearly, and I highly recommend you take a look at it if you're interested in emergent abilities in language models. Basically, emergent abilities are mostly a matter of how you measure your model's accuracy.

Say that as your model gets better, it gets gradually better at predicting a reasonable next token. Take, I don't know, a 200-million-parameter model, then 500 million, 1 billion, 3 billion, 7 billion, and something like 70 billion, and suppose their next-token prediction accuracies rise steadily, where "accuracy" here means the probability of predicting a reasonable next token that won't make your reasoning chain go wrong. To get the correct final answer, though, suppose there are about five places where you could possibly make a mistake in the derivation. One common place to make a mistake in a math derivation is wherever you predict the result of an equation, and you might have five reasoning steps that each do that. So let's exponentiate all of these accuracies by five. (I'm lazy enough now that I just ask ChatGPT to write the Python for me.) When we do that, we get something that looks like zero, basically zero, 3%, 9%, 23%, and 90%. So there's actually a pretty steady gradation in the next-token prediction accuracy, but if you need to predict multiple tokens correctly, it looks like the model is doing basically nothing until you get up to around 75% next-token accuracy, and then it starts taking off. That's what happens with emergent abilities.
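Here is a reconstruction of that in-class computation. The per-step accuracies are illustrative numbers chosen to produce a smooth ramp; they aren't measurements from any real model.

```python
# If the final answer is only right when all five error-prone steps are
# right, answer-level accuracy is per-step accuracy to the fifth power.
per_step_accuracy = [0.20, 0.35, 0.50, 0.62, 0.75, 0.98]  # smooth improvement
for acc in per_step_accuracy:
    print(f"per-step {acc:.0%} -> full answer {acc ** 5:.1%}")

# per-step 20% -> full answer 0.0%
# per-step 35% -> full answer 0.5%
# per-step 50% -> full answer 3.1%
# per-step 62% -> full answer 9.2%
# per-step 75% -> full answer 23.7%
# per-step 98% -> full answer 90.4%
```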
You'll notice that most discussions of emergent abilities are usually talking about some sort of chain-of-thought or reasoning accuracy, but even when a task is just predicting a single token, the same thing can happen: the probability of the correct token can continue to go up smoothly, while you only get the token correct once its probability becomes higher than all the others, and that is also a discontinuous function. So basically what this paper shows is that even if the probability of the correct token goes up gradually, you can see this "emergent ability" depending on how you measure it. That's an important thing to realize. A corollary: say you want to do interesting experiments about reasoning with smaller models, for example training a smaller model and seeing how it improves at reasoning. I would definitely encourage you to measure not only accuracy, where you might see very little change, but also something like the log-likelihood of the reasoning chains, because there you'll see a smoother curve. Cool. Any questions about this? Okay, sounds good.

So I talked a little bit about this already, but one thing I didn't talk about is that this paper measures not just the accuracy of the answer with chain of thought; it also measures the factuality of the explanation, basically whether the explanation is a good explanation for the actual derivation, and the consistency between the answer and the explanation, to figure out whether the two match up. They did this with some synthetic data sets where you can actually measure the
reasoning steps by using math. What they found is basically that the answer and the explanation tended to be consistent, especially for the stronger models, and that higher factuality in the explanation translates into higher accuracy of the actual prediction. I would bet these numbers are even higher nowadays; I bet the consistency is higher with more modern models than text-davinci-002, for two reasons: number one, models are stronger, and number two, all models are now trained for chain of thought pretty aggressively. That would make the difference.

Cool. The other thing I'd like to talk about is training for chain of thought. There's a fair amount of work in this general direction, and from my point of view there are basically two ways people do this nowadays. The first way is usually to generate lots of synthetic data representing chains of thought and then use that to train models. Orca is the most famous version of this, although the paper cites a lot of other ones. Basically, they generate a large and diverse chain-of-thought dataset from GPT-3.5 and GPT-4: 5 million complex instructions, I think 1 million generated from GPT-4 and 4 million from GPT-3.5, because generating long sequences from GPT-4 is expensive and they didn't want to do that many.
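The data-collection step, stripped to its core, looks something like the sketch below. `teacher_generate` is a hypothetical stand-in for a GPT-4-class API, and the system prompt is just an example of the kind of "explain your reasoning" instruction used to elicit chains of thought.

```python
import json

SYSTEM = ("You are a helpful assistant. Think step by step and justify "
          "each step of your reasoning before giving the final answer.")

def build_cot_dataset(instructions, teacher_generate, path="cot_data.jsonl"):
    # Ask a strong teacher model for step-by-step responses, then save the
    # pairs for fine-tuning a smaller student model on chains of thought.
    with open(path, "w") as f:
        for instruction in instructions:
            response = teacher_generate(system=SYSTEM, prompt=instruction)
            f.write(json.dumps({"instruction": instruction,
                                "response": response}) + "\n")
```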
Using this, they achieved correspondingly high accuracy on chain-of-thought-related evaluations compared to other datasets: compared to Alpaca, which is much smaller and doesn't have as much chain of thought, and also Vicuna, which is similarly less focused on chain of thought, they were able to do a good job. This paper was by Microsoft, and they didn't actually release the Orca dataset, for legal or competitive reasons or whatever, but there's an OpenOrca dataset that you can download and use that attempts to replicate it, and it's reasonably good, so you can keep that in mind if you're interested.

This is another really interesting paper, on trying to create automatic assessments of how good chains of thought are. What they do is relatively simple: they get human feedback on each step of a derivation. They basically ask people, "Is this step of the derivation good?" If the answer is yes, it gets a smiley face; if the answer is no, it gets a frowny face. They use this to train a reward model that predicts whether each step of a derivation is good. We have two examples over here; I know this is really small, but you might be able to see it in the paper or in the slides on the website. What we can see is that it assesses each of these steps and checks that the answer is good, but it's also able to identify places where steps are incorrect, making the final answer incorrect. They then use this for training a chain-of-thought-style model: they have the model generate chains of thought, assess them with the reward model, and upweight answers that have good chains of thought. The good thing about this is that they don't actually need the correct answers to train the model this way, and because they don't need the correct answers, they can also train the model on lots of other questions.
The reason this works is that chain of thought makes it easier to generate each of the steps in the derivation, and it's also easier to assess whether an individual step in a derivation is wrong than to assess whether the answer is correct overall. So this feedback signal is easier to get, whether model-provided or human-provided, than feedback on the answer itself. (Question: if a failure in one step causes all the following steps to fail, do you assess the next steps under the assumption that the earlier ones were right?) That's a good question; I'm not 100% sure about this, but I don't think they do that. I think they assess each of the steps independently, and it's not necessarily the case that a failure in one step makes a later step wrong; the later step might just not use it at all.
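A minimal sketch of how such a step-level reward model might be used at training time is below. `sample_chain` and `step_reward` are hypothetical, and scoring a chain by its worst-scoring step prefix is my own simplification, not necessarily what the paper does.

```python
def weight_chains(question, sample_chain, step_reward, n=8):
    # Sample chains, score every derivation step with the reward model,
    # and rank chains so the best ones can be upweighted for training.
    weighted = []
    for _ in range(n):
        steps, answer = sample_chain(question)
        # Score each step given the steps so far, mirroring the per-step
        # human feedback the reward model was trained on.
        score = min((step_reward(question, steps[: i + 1])
                     for i in range(len(steps))), default=0.0)
        weighted.append((score, steps, answer))
    return sorted(weighted, key=lambda w: w[0], reverse=True)
```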
Cool. So a final thing I'd like to talk about, which I think is kind of interesting, is abductive reasoning, or learning explanations from data. Basically, the idea is: can we find a rule that underlies a pattern in data? Here are some examples of this. The basic idea is that we have observations like: if I put a cylinder and a cube on this pink block, I get a noise; if I put just a cylinder on the pink block, I don't get a noise. And you want to discover the underlying rules based on the data you observed. Why would you want to do this? There are a couple of reasons. The first is that you might want something you can explain to humans: "this underlying pattern exists in this data, and it explains why the data appears the way it does," and then humans can go in and analyze it, or something like that. Recently there's been a big focus on using large language models for scientific inquiry and other things like that, by coming up with good explanations for why data is the way it is, so if we were able to do that, it would be really interesting. Another reason is that language models are not particularly good at being consistent on difficult tasks across very large numbers of examples. If you could look at all of the data at once, infer general rules from it, put those rules in a prompt, and then apply that prompt to make predictions on new examples, you might be able to raise your overall accuracy as well. That's how humans learn too, right? We don't just memorize each example; if we only look at a few examples we might not generalize well to new ones, so we try to abstract away general rules. This is also similar to program induction from input-output examples, which I talked about during the code generation class, where you have input-output examples and from them you would like to come up with general rules. But this is a little bit more general: it doesn't necessarily need to be a program you're inducing; it could be a grammar, or an explanation, or anything else like that.

So there's a bit of work on rule induction with LLMs. It's pretty recent, but I think it's pretty interesting. The first step of this particular work is hypothesis generation: it takes all of these input-output examples and from them predicts candidate rules, like "the answer is always one," "pick the smallest element," or "pick the first element."
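Here is a minimal sketch of that generate-and-verify loop with a symbolic evaluator as the verifier. The examples and candidate rules are made up; in the actual setup, an LLM proposes the hypotheses and may also be the thing that applies them.

```python
# Hypothetical input-output examples the rules should explain.
examples = [([3, 1, 2], 1), ([7, 5, 9], 5), ([4, 8, 6], 4)]

# Candidate hypotheses (in practice, proposed by an LLM), paired with
# executable implementations so a symbolic evaluator can check them.
candidate_rules = {
    "the answer is always one": lambda xs: 1,
    "pick the smallest element": lambda xs: min(xs),
    "pick the first element": lambda xs: xs[0],
}

def verify(rule):
    # Run the rule on every input and compare with the expected output.
    return all(rule(xs) == y for xs, y in examples)

survivors = [name for name, rule in candidate_rules.items() if verify(rule)]
print(survivors)  # -> ['pick the smallest element']
```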
Then you evaluate each hypothesis, say "pick the smallest one," either by asking another language model to apply it or by using a symbolic evaluator: if the hypothesis is a program, you can execute it, and if it's natural language, you can just ask the language model to apply "the answer is always one," "pick the smallest one," or "pick the first element" to the inputs. You get lots of outputs, compare them against the expected outputs, and verify whether the rule gives you the appropriate answers. Once you've done that, you can go back and do hypothesis refinement, maybe even giving feedback about what was wrong, and gradually refine toward more accurate and more complex hypotheses.

This is another variant of the same idea that uses a different methodology. I think both are completely valid, but this one has somewhat higher data requirements. Basically, they use hypotheses during chain-of-thought reasoning and keep the ones that result in correct answers. This is the step where they try to induce rules. The prompt says: in base 9, what is 76 + 14? (They used base 9 here, obviously, because if it were base 10 the language model would just solve the problem and it wouldn't be very interesting.) The answer provided by the language model is: we have 6 + 4 = 11, the digit is 1 and the carry is 1; we have 7 + 1 + 1 = 10, the digit is 0 and the carry is 1; the leading digit is 1; so the answer is 101. This verifies, so they get the answer correct, and they assume the derivation is also correct. They then extract the particular rules, like 6 + 4 = 11 and 7 + 1 + 1 = 10, and add them to the rule library. The way they extract the rules is with an in-context prompt that surrounds each rule with XML-style tags saying, in effect, "this is a rule that should be extracted."
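Mechanically, the extraction can be as simple as the following sketch. The `<rule>` tag name here is illustrative; the paper's actual tag format may differ.

```python
import re

output = """We have <rule>6 + 4 = 11</rule>. The digit is 1 and the carry
is 1. We have <rule>7 + 1 + 1 = 10</rule>. The digit is 0 and the carry
is 1. The leading digit is 1. So the answer is 101."""

def extract_rules(text, answer_is_correct):
    # Only harvest rules from chains of thought that reached a verified
    # correct answer; wrong chains contribute nothing (or negatives).
    rules = re.findall(r"<rule>(.*?)</rule>", text, flags=re.DOTALL)
    return rules if answer_is_correct else []

rule_library = set(extract_rules(output, answer_is_correct=True))
print(rule_library)  # {'6 + 4 = 11', '7 + 1 + 1 = 10'}
```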
+00:45:25.599 --> 00:45:30.800
+um and they add this to the rule
+
+00:45:28.200 --> 00:45:32.960
+Library so then the question is how do
+
+00:45:30.800 --> 00:45:35.000
+they extract the rules the way they
+
+00:45:32.960 --> 00:45:37.920
+extract the rules is they have an in
+
+00:45:35.000 --> 00:45:40.760
+context prompt which surrounds the rules
+
+00:45:37.920 --> 00:45:43.520
+by basically XML tags that says this is
+
+00:45:40.760 --> 00:45:46.640
+a rule that should be extracted and so
+
+00:45:43.520 --> 00:45:48.400
+then um anything that is in an XML tag
+
+00:45:46.640 --> 00:45:50.960
+when you get the correct answer
+
+00:45:48.400 --> 00:45:53.440
+they extract and add that to the rule
+
+00:45:50.960 --> 00:45:55.680
+library and then conversely like if the
+
+00:45:53.440 --> 00:45:57.800
+derivation um if the answer is wrong
+
+00:45:55.680 --> 00:45:59.920
+they just don't add it or they add it as
+
+00:45:57.800 --> 00:46:01.079
+a negative example and say this is an
+
+00:45:59.920 --> 00:46:04.119
+incorrect
+
+00:46:01.079 --> 00:46:05.839
+rule um and then in the final step where
+
+00:46:04.119 --> 00:46:07.480
+they do deductive reasoning they can
+
+00:46:05.839 --> 00:46:09.119
+then go ahead and use these rules and
+
+00:46:07.480 --> 00:46:11.640
+improve accuracy and they demonstrate
+
+00:46:09.119 --> 00:46:12.960
+that that helps so basically these are
+
+00:46:11.640 --> 00:46:14.520
+two different approaches one is
+
+00:46:12.960 --> 00:46:17.400
+extracting directly from the Chain of
+
+00:46:14.520 --> 00:46:18.880
+Thought the other is uh a priori trying
+
+00:46:17.400 --> 00:46:23.760
+to generate rules from the whole rule
+
+00:46:18.880 --> 00:46:27.480
+base and then um then verifying them um
+
+00:46:23.760 --> 00:46:31.000
+notably both of these require verifiers
+
+00:46:27.480 --> 00:46:33.839
+um and so in some recent work which uh I
+
+00:46:31.000 --> 00:46:36.040
+I hope will be on arXiv very soon uh
+
+00:46:33.839 --> 00:46:38.839
+we took a look at whether language
+
+00:46:36.040 --> 00:46:42.800
+models themselves can verify their own
+
+00:46:38.839 --> 00:46:46.079
+hypotheses and um so that removes the
+
+00:46:42.800 --> 00:46:48.000
+symbolic verifier here um by just asking
+
+00:46:46.079 --> 00:46:51.480
+the language model whether the output is
+
+00:46:48.000 --> 00:46:53.480
+correct or not and um we found that with
+
+00:46:51.480 --> 00:46:55.240
+very powerful language models like GPT-4
+
+00:46:53.480 --> 00:46:57.760
+you can actually do that as well so that
+
+00:46:55.240 --> 00:47:01.319
+removes the necessity to have
+
+00:46:57.760 --> 00:47:05.480
+a symbolic verifier in the loop as
+
+00:47:01.319 --> 00:47:08.200
+well cool um the reason why I wanted to
+
+00:47:05.480 --> 00:47:09.440
+introduce this is I don't know if like
+
+00:47:08.200 --> 00:47:12.359
+like it seems like all of these have
+
+00:47:09.440 --> 00:47:16.359
+been applied so far on kind of very toy
+
+00:47:12.359 --> 00:47:19.119
+examples like you know
+
+00:47:16.359 --> 00:47:22.240
+um like honestly I don't really care
+
+00:47:19.119 --> 00:47:25.920
+about whether I can play Tetris or um
+
+00:47:22.240 --> 00:47:27.920
+you know uh find the largest or smallest
+
+00:47:25.920 --> 00:47:30.880
+number within
+
+00:47:27.920 --> 00:47:33.720
+um you know a list or something like this
+
+00:47:30.880 --> 00:47:36.000
+but I think they have like really exciting
+
+00:47:33.720 --> 00:47:38.480
+possibilities for how we could
extract + +00:47:36.000 --> 00:47:40.319 +more General patterns and like use these + +00:47:38.480 --> 00:47:41.720 +to improve language model based systems + +00:47:40.319 --> 00:47:43.599 +so I think it's a really exciting + +00:47:41.720 --> 00:47:48.000 +research + +00:47:43.599 --> 00:47:51.000 +Direction um cool any questions about + +00:47:48.000 --> 00:47:51.000 +this + +00:47:54.240 --> 00:48:02.160 +yeah yeah so that's a good question + +00:47:58.160 --> 00:48:06.079 +um so I I think tool + +00:48:02.160 --> 00:48:09.359 +learning is maybe kind of a sub subset + +00:48:06.079 --> 00:48:12.319 +of this possibly like I feel like in + +00:48:09.359 --> 00:48:13.559 +tool learning you're learning functions + +00:48:12.319 --> 00:48:15.559 +that + +00:48:13.559 --> 00:48:17.559 +are I don't know if they are like good + +00:48:15.559 --> 00:48:19.680 +explanations of the data but at the very + +00:48:17.559 --> 00:48:23.119 +least they're like useful um they're + +00:48:19.680 --> 00:48:25.119 +useful rules for solving the task um so + +00:48:23.119 --> 00:48:26.880 +I I feel like they're approaching it + +00:48:25.119 --> 00:48:28.760 +from two different motivations but + +00:48:26.880 --> 00:48:30.960 +actually + +00:48:28.760 --> 00:48:33.559 +the methods that they're using are + +00:48:30.960 --> 00:48:36.240 +similar so like for example in our tool + +00:48:33.559 --> 00:48:38.559 +learning work Trove we generated like + +00:48:36.240 --> 00:48:42.240 +multiple options for tools and we kept + +00:48:38.559 --> 00:48:44.000 +the ones that had high self- consistency + +00:48:42.240 --> 00:48:46.800 +so that's kind of like the verifier step + +00:48:44.000 --> 00:48:49.040 +right and then um we threw away the ones + +00:48:46.800 --> 00:48:52.760 +that weren't useful so that helps make a + +00:48:49.040 --> 00:48:56.760 +concise rule set so + +00:48:52.760 --> 00:48:59.280 +yeah and then like could we use tools to + +00:48:56.760 --> 00:49:01.880 +[Music] + +00:48:59.280 --> 00:49:04.079 +attack kind of the more like conceptual + +00:49:01.880 --> 00:49:05.319 +reasoning stuff I I don't actually know + +00:49:04.079 --> 00:49:06.839 +uh the answer to that it's a good + +00:49:05.319 --> 00:49:10.599 +question + +00:49:06.839 --> 00:49:10.599 +yeah any any other + +00:49:11.240 --> 00:49:18.680 +things okay uh another final one that + +00:49:14.440 --> 00:49:21.680 +I'd like to introduce um this is really + +00:49:18.680 --> 00:49:23.839 +like I I really really like this paper + +00:49:21.680 --> 00:49:27.440 +um just from the point of view of its + +00:49:23.839 --> 00:49:29.880 +ambition and motivation um and + +00:49:27.440 --> 00:49:31.920 +the idea is that they want to learn + +00:49:29.880 --> 00:49:34.440 +differences between text + +00:49:31.920 --> 00:49:36.200 +Collections and why would you want to do + +00:49:34.440 --> 00:49:38.079 +this there's actually a ton of reasons + +00:49:36.200 --> 00:49:39.720 +why you would want to do this but the + +00:49:38.079 --> 00:49:44.720 +the best one that they give + +00:49:39.720 --> 00:49:44.720 +here is actually no sorry maybe I I + +00:49:46.440 --> 00:49:50.359 +didn't okay so this is a less + +00:49:48.480 --> 00:49:53.440 +interesting one the the more interesting + +00:49:50.359 --> 00:49:57.799 +one uh that they give in the paper is um + +00:49:53.440 --> 00:50:00.200 +examples of reports from patients who + +00:49:57.799 --> 00:50:04.200 +took an actual drug and took a + +00:50:00.200 --> 00:50:06.640 +placebo and so patients 
write about like + +00:50:04.200 --> 00:50:08.400 +their their symptoms or how they felt or + +00:50:06.640 --> 00:50:11.000 +they have checkups or things like that + +00:50:08.400 --> 00:50:13.839 +that are all written in natural language + +00:50:11.000 --> 00:50:16.319 +so one of the things that doctors try to + +00:50:13.839 --> 00:50:18.000 +do is they try to look at all of these + +00:50:16.319 --> 00:50:20.240 +reports and figure out if there's any + +00:50:18.000 --> 00:50:21.880 +like consistent difference between + +00:50:20.240 --> 00:50:25.079 +people who took a placebo and people who + +00:50:21.880 --> 00:50:27.359 +took an actual um actual drug and this + +00:50:25.079 --> 00:50:31.079 +is like a major part of medical trials + +00:50:27.359 --> 00:50:32.960 +right um and so the idea is like given + +00:50:31.079 --> 00:50:35.000 +all of the texts of people who took the + +00:50:32.960 --> 00:50:36.599 +drug given all the texts of people who + +00:50:35.000 --> 00:50:38.319 +of people who took the placebo could you + +00:50:36.599 --> 00:50:40.960 +automatically extract differences + +00:50:38.319 --> 00:50:45.000 +between them in some way and so the + +00:50:40.960 --> 00:50:47.760 +methodology that they use for this is um + +00:50:45.000 --> 00:50:51.359 +they have like group a uh the Manchester + +00:50:47.760 --> 00:50:53.240 +United soccer Squad welcomes Rising Star + +00:50:51.359 --> 00:50:54.599 +as Serena Williams joins the UCLA + +00:50:53.240 --> 00:50:56.920 +women's tennis roster and then you have + +00:50:54.599 --> 00:51:00.200 +like 20 more examples and then here you + +00:50:56.920 --> 00:51:03.480 +have Egypt's President uh at the African + +00:51:00.200 --> 00:51:07.200 +unit Union Summit um and other things + +00:51:03.480 --> 00:51:12.000 +like that in 20 examples uh not seen + +00:51:07.200 --> 00:51:14.359 +here and so then if I asked a question + +00:51:12.000 --> 00:51:16.359 +um the original data set includes news + +00:51:14.359 --> 00:51:18.680 +summaries the two corpora are generated + +00:51:16.359 --> 00:51:21.240 +based on when they were published uh + +00:51:18.680 --> 00:51:24.359 +samples from group a include news from + +00:51:21.240 --> 00:51:27.480 +2007 while samples from Group B include + +00:51:24.359 --> 00:51:29.000 +news from 2008 I'm a joural trying to + +00:51:27.480 --> 00:51:31.240 +understand what topics are popular + +00:51:29.000 --> 00:51:33.440 +across years please write a list of + +00:51:31.240 --> 00:51:35.280 +hypotheses separated by bullet points of + +00:51:33.440 --> 00:51:39.920 +how data points from group a differ from + +00:51:35.280 --> 00:51:42.400 +those of group b um and then formatting + +00:51:39.920 --> 00:51:44.160 +information + +00:51:42.400 --> 00:51:46.960 +um + +00:51:44.160 --> 00:51:49.680 +and so based on the two sentence groups + +00:51:46.960 --> 00:51:50.559 +A and B from the above more sentences in + +00:51:49.680 --> 00:51:53.400 +group + +00:51:50.559 --> 00:51:55.240 +a mention a sports team or mention about + +00:51:53.400 --> 00:51:57.319 +academic relations or things like that + +00:51:55.240 --> 00:51:58.599 +and so what this allows you to do is it + +00:51:57.319 --> 00:52:00.319 +allows you to come up with a whole bunch + +00:51:58.599 --> 00:52:01.400 +of hypotheses about why one might be + +00:52:00.319 --> 00:52:04.920 +better than the + +00:52:01.400 --> 00:52:08.920 +other so the problem with this though is + +00:52:04.920 --> 00:52:10.880 +like because of language model you know 
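A rough sketch of how the hypothesis-proposal prompt described here might be assembled is below. The function name, the group labels, and the exact wording are inventions for this example; the paper's actual prompt differs in its details.

```python
import random

def proposal_prompt(group_a, group_b, research_goal, n=20):
    """Build a prompt asking an LM for hypotheses about how two text
    collections differ, from a small sample of each (a sketch only)."""
    sample_a = "\n".join(random.sample(group_a, min(n, len(group_a))))
    sample_b = "\n".join(random.sample(group_b, min(n, len(group_b))))
    return (
        "Group A samples:\n" + sample_a + "\n\n"
        "Group B samples:\n" + sample_b + "\n\n"
        + research_goal + "\n"
        "Please write a list of hypotheses, separated by bullet points, "
        "of how data points from Group A differ from those of Group B. "
        "Phrase each one as: 'More sentences in Group A ...'"
    )
```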
+ +00:52:08.920 --> 00:52:13.440 +limits number one they might just + +00:52:10.880 --> 00:52:17.119 +hallucinate things and be totally wrong + +00:52:13.440 --> 00:52:19.680 +um number two + +00:52:17.119 --> 00:52:21.040 +the size of the context so that they can + +00:52:19.680 --> 00:52:23.960 +take into account when making this + +00:52:21.040 --> 00:52:26.720 +decision is relatively small so the next + +00:52:23.960 --> 00:52:29.280 +thing that they do is then they have a a + +00:52:26.720 --> 00:52:32.119 +much larger Corpus of + +00:52:29.280 --> 00:52:33.200 +text um with like a thousand examples or + +00:52:32.119 --> 00:52:36.640 +something like + +00:52:33.200 --> 00:52:40.240 +this and then they treat each of these + +00:52:36.640 --> 00:52:42.680 +hypotheses as a + +00:52:40.240 --> 00:52:44.559 +classifier and then they go through all + +00:52:42.680 --> 00:52:47.480 +of the examples from Corpus one which is + +00:52:44.559 --> 00:52:50.480 +like maybe 2000 year 2000 and then + +00:52:47.480 --> 00:52:52.079 +Corpus 2 which is year 2008 and they ask + +00:52:50.480 --> 00:52:55.880 +the language model with respect to all + +00:52:52.079 --> 00:52:58.119 +of them um does this sentence mention a + +00:52:55.880 --> 00:53:01.400 +sports team recording recruiting a new + +00:52:58.119 --> 00:53:04.839 +member um and so you get a + +00:53:01.400 --> 00:53:04.839 +classification for each one of + +00:53:12.359 --> 00:53:17.440 +these and you get a certain number of + +00:53:14.520 --> 00:53:18.799 +ones and zeros and so once you have a + +00:53:17.440 --> 00:53:20.839 +certain number of ones and zeros what's + +00:53:18.799 --> 00:53:24.079 +the next thing that you would do + +00:53:20.839 --> 00:53:24.079 +here any + +00:53:24.880 --> 00:53:30.599 +ideas how do you tell there's like + +00:53:27.359 --> 00:53:30.599 +actually a difference between these + +00:53:36.520 --> 00:53:43.319 +two between two sets + +00:53:39.319 --> 00:53:45.920 +of numbers like one and + +00:53:43.319 --> 00:53:48.680 +zero a hint is you probably had to do + +00:53:45.920 --> 00:53:48.680 +this for assignment + +00:53:53.720 --> 00:53:58.520 +two yeah + +00:53:56.799 --> 00:54:01.200 +yeah exactly you you do a significance + +00:53:58.520 --> 00:54:04.200 +test between the two and so um what you + +00:54:01.200 --> 00:54:06.440 +can then do is you have lots of + +00:54:04.200 --> 00:54:08.839 +hypotheses you have lots of significance + +00:54:06.440 --> 00:54:11.040 +values you can order them by the + +00:54:08.839 --> 00:54:13.839 +significance value and say the most + +00:54:11.040 --> 00:54:17.559 +significance or the the difference with + +00:54:13.839 --> 00:54:19.160 +the like lowest P value between them is + +00:54:17.559 --> 00:54:20.480 +the one that's most likely to be an + +00:54:19.160 --> 00:54:26.520 +actual difference between the two and + +00:54:20.480 --> 00:54:29.079 +you can find um like uh the news in 2007 + +00:54:26.520 --> 00:54:32.520 +indeed tended to talk about X more than + +00:54:29.079 --> 00:54:34.559 +uh than other things so I uh I actually + +00:54:32.520 --> 00:54:36.079 +used this in one of my uh one of my + +00:54:34.559 --> 00:54:39.520 +unrelated projects where I wanted to + +00:54:36.079 --> 00:54:42.680 +find the difference between um language + +00:54:39.520 --> 00:54:45.640 +models sentences that language models + +00:54:42.680 --> 00:54:47.839 +aligned well with human brain signals in + +00:54:45.640 --> 00:54:49.760 +sentences where language models didn't + 
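The ranking step just described can be sketched as follows. Each hypothesis has already been applied as a binary classifier to both corpora, yielding 0/1 judgments per example; the lecture only says "significance test," so Fisher's exact test on the resulting 2x2 table is one reasonable concrete choice, not the prescribed one.

```python
from scipy.stats import fisher_exact

def rank_hypotheses(judgments):
    """judgments: {hypothesis: (labels_a, labels_b)} with 0/1 labels."""
    ranked = []
    for hyp, (a, b) in judgments.items():
        table = [[sum(a), len(a) - sum(a)],   # corpus A: hits, misses
                 [sum(b), len(b) - sum(b)]]   # corpus B: hits, misses
        _, p_value = fisher_exact(table)
        ranked.append((p_value, hyp))
    return sorted(ranked)  # lowest p-value = most likely a real difference

# Toy usage with made-up counts: 40/1000 hits in corpus A vs 12/1000 in B.
example = {"mentions a sports team recruiting a new member":
           ([1] * 40 + [0] * 960, [1] * 12 + [0] * 988)}
print(rank_hypotheses(example)[0])
```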
+00:54:47.839 --> 00:54:52.559 +align well with human brain signals so + +00:54:49.760 --> 00:54:53.799 +we like we had some data of human brain + +00:54:52.559 --> 00:54:56.880 +signals and we had a measure of + +00:54:53.799 --> 00:54:58.240 +alignment um on each sentence and it + +00:54:56.880 --> 00:55:01.799 +actually found some pretty interesting + +00:54:58.240 --> 00:55:03.359 +hypothesis like um uh language models + +00:55:01.799 --> 00:55:06.200 +tend to align less well with human brain + +00:55:03.359 --> 00:55:07.319 +signals on metaphorical language or a + +00:55:06.200 --> 00:55:10.599 +language that had to do with + +00:55:07.319 --> 00:55:11.799 +interpersonal relations or um or other + +00:55:10.599 --> 00:55:15.200 +things like that and then we actually + +00:55:11.799 --> 00:55:17.559 +went in and pursued um you know these to + +00:55:15.200 --> 00:55:21.000 +examine them further and uh we didn't + +00:55:17.559 --> 00:55:22.680 +entirely rely on this um you know like + +00:55:21.000 --> 00:55:25.160 +significance test because I didn't quite + +00:55:22.680 --> 00:55:26.880 +trust language models that much to like + +00:55:25.160 --> 00:55:28.559 +shape my entire resource + +00:55:26.880 --> 00:55:29.880 +research agenda around them but we came + +00:55:28.559 --> 00:55:31.720 +up with other ways to measure it and + +00:55:29.880 --> 00:55:35.000 +some of the things checked out some of + +00:55:31.720 --> 00:55:36.799 +the things didn't check out so um again + +00:55:35.000 --> 00:55:38.760 +I think this general direction of like + +00:55:36.799 --> 00:55:41.720 +how can language models help us answer + +00:55:38.760 --> 00:55:43.760 +you know uh complex research questions + +00:55:41.720 --> 00:55:45.480 +that we wouldn't be able to easily or + +00:55:43.760 --> 00:55:47.960 +very efficiently that would require + +00:55:45.480 --> 00:55:52.200 +normally humans annotating lots of data + +00:55:47.960 --> 00:55:56.839 +is um an interesting topic as + +00:55:52.200 --> 00:55:56.839 +well cool um diff --git a/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics.mp4 b/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d9cb67bd6bc2e1cf46cdc93437a01281b06eb776 --- /dev/null +++ b/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2ba1e151ead63c27e40f5b4e74afe57d221e1c0234298fceb10208d60fa6783 +size 91209687 diff --git a/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/metadata.json b/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..6ccdd6ef1b0d096f190e59c4850f1e37fa2894d9 --- /dev/null +++ b/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/metadata.json @@ -0,0 +1,4 @@ +{ + "url": "https://www.youtube.com/watch?v=7Sse6P5xbEc", + "title": "CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics" +} \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/transcript.srt b/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/transcript.srt new file mode 
100644 index 0000000000000000000000000000000000000000..29455ec57862839dd1cd729de4b67ada7fefb24f --- /dev/null +++ b/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/transcript.srt @@ -0,0 +1,8195 @@ +1 +00:00:01,000 --> 00:00:06,080 +okay yeah cool so I'll be giving a + +2 +00:00:03,399 --> 00:00:07,720 +really Whirlwind tour of linguistics as + +3 +00:00:06,080 --> 00:00:10,240 +Graham said it's a very broad field but + +4 +00:00:07,720 --> 00:00:14,040 +I'll try my best to cover some major + +5 +00:00:10,240 --> 00:00:16,800 +parts of it um yeah uh so to begin what + +6 +00:00:14,040 --> 00:00:18,520 +is linguistics um Linguistics as a field + +7 +00:00:16,800 --> 00:00:21,320 +is the scientific study of language and + +8 +00:00:18,520 --> 00:00:23,240 +its structure um at a very high level + +9 +00:00:21,320 --> 00:00:25,680 +theoretical Linguistics aims to find a + +10 +00:00:23,240 --> 00:00:28,119 +very general theory that explains the + +11 +00:00:25,680 --> 00:00:29,359 +structure underlying languages um and a + +12 +00:00:28,119 --> 00:00:31,840 +framework in which we can describe + +13 +00:00:29,359 --> 00:00:34,160 +language as a structure um now we can + +14 +00:00:31,840 --> 00:00:36,120 +describe individual rules and the types + +15 +00:00:34,160 --> 00:00:39,200 +of structures that occur in specific + +16 +00:00:36,120 --> 00:00:41,000 +languages however um one very important + +17 +00:00:39,200 --> 00:00:43,120 +aspect of theoretical Linguistics is to + +18 +00:00:41,000 --> 00:00:46,079 +try and find things that Encompass all + +19 +00:00:43,120 --> 00:00:48,440 +natural languages um and for this reason + +20 +00:00:46,079 --> 00:00:50,320 +uh one like topic that some linguists + +21 +00:00:48,440 --> 00:00:51,960 +are concerned about are things like uh + +22 +00:00:50,320 --> 00:00:53,440 +universals in linguistics like what + +23 +00:00:51,960 --> 00:00:55,320 +concepts are present in all natural + +24 +00:00:53,440 --> 00:00:57,199 +languages how do they come about how do + +25 +00:00:55,320 --> 00:00:59,600 +they express + +26 +00:00:57,199 --> 00:01:02,039 +themselves um and insights from Theory + +27 +00:00:59,600 --> 00:01:04,280 +can Al inform more applied research so + +28 +00:01:02,039 --> 00:01:06,040 +we can ask questions like what are the + +29 +00:01:04,280 --> 00:01:08,200 +uh variations between speakers in a + +30 +00:01:06,040 --> 00:01:09,759 +single language um how does this come + +31 +00:01:08,200 --> 00:01:11,840 +about from social factors how does this + +32 +00:01:09,759 --> 00:01:12,799 +come about from language change also + +33 +00:01:11,840 --> 00:01:15,040 +what are the different types of + +34 +00:01:12,799 --> 00:01:16,799 +linguistic structures within and across + +35 +00:01:15,040 --> 00:01:19,479 +languages and how are they processed by + +36 +00:01:16,799 --> 00:01:20,960 +the brain um and also a really + +37 +00:01:19,479 --> 00:01:22,280 +interesting question is how do people + +38 +00:01:20,960 --> 00:01:24,240 +acquire a new language at different + +39 +00:01:22,280 --> 00:01:26,200 +stages of their life and how does this + +40 +00:01:24,240 --> 00:01:28,479 +change from like infancy when you're + +41 +00:01:26,200 --> 00:01:30,680 +acing acquiring your native language + +42 +00:01:28,479 --> 00:01:35,439 +versus your second your third + +43 +00:01:30,680 --> 00:01:35,439 +um at ages like 10 50 + +44 +00:01:35,600 --> 00:01:40,560 +Etc um now this is a class on NLP and + +45 +00:01:39,040 --> 00:01:42,240 +many of 
you might be asking like why + +46 +00:01:40,560 --> 00:01:45,040 +should you care about Linguistics in the + +47 +00:01:42,240 --> 00:01:46,439 +age of llms where everything can be fed + +48 +00:01:45,040 --> 00:01:48,799 +into a Transformer and then you get a + +49 +00:01:46,439 --> 00:01:50,399 +bunch of coherent English texts um I'd + +50 +00:01:48,799 --> 00:01:52,159 +like to argue that there are reasons why + +51 +00:01:50,399 --> 00:01:54,280 +you should be aware of linguistics um + +52 +00:01:52,159 --> 00:01:55,840 +first at minimum it allows you to + +53 +00:01:54,280 --> 00:01:57,759 +understand your data better and more + +54 +00:01:55,840 --> 00:01:59,079 +thoroughly um I think this is especially + +55 +00:01:57,759 --> 00:02:01,200 +important when you're characterizing + +56 +00:01:59,079 --> 00:02:03,119 +specific failure of your model like you + +57 +00:02:01,200 --> 00:02:04,640 +have certain errors um how do you + +58 +00:02:03,119 --> 00:02:06,200 +classify them how can you characterize + +59 +00:02:04,640 --> 00:02:08,160 +them can you look to previous literature + +60 +00:02:06,200 --> 00:02:11,000 +to see how people have explained this + +61 +00:02:08,160 --> 00:02:12,480 +from a human perspective um along that + +62 +00:02:11,000 --> 00:02:14,440 +point it also gives you interesting test + +63 +00:02:12,480 --> 00:02:16,640 +cases and Frameworks to explore um + +64 +00:02:14,440 --> 00:02:19,400 +linguists like to explore really + +65 +00:02:16,640 --> 00:02:21,680 +specific strange phenomena and this is a + +66 +00:02:19,400 --> 00:02:24,280 +great test bed for a lot of things that + +67 +00:02:21,680 --> 00:02:26,319 +your model might fail on um another + +68 +00:02:24,280 --> 00:02:28,160 +thing is as models become more and more + +69 +00:02:26,319 --> 00:02:30,599 +advanced people are now drawing + +70 +00:02:28,160 --> 00:02:33,920 +connections between human capabilities + +71 +00:02:30,599 --> 00:02:35,519 +cognitive and linguistic with models um + +72 +00:02:33,920 --> 00:02:37,560 +I would like to say that if you want to + +73 +00:02:35,519 --> 00:02:39,480 +make such claims about how your models + +74 +00:02:37,560 --> 00:02:41,080 +or systems are similar to humans at + +75 +00:02:39,480 --> 00:02:42,840 +least being aware of these theories as + +76 +00:02:41,080 --> 00:02:44,640 +going to be a necessary starting point + +77 +00:02:42,840 --> 00:02:46,560 +even if you don't agree with them and + +78 +00:02:44,640 --> 00:02:48,879 +you just really really hate chomskyan + +79 +00:02:46,560 --> 00:02:51,080 +Syntax for example um and another thing + +80 +00:02:48,879 --> 00:02:53,959 +is just like it's fun um it's cool to + +81 +00:02:51,080 --> 00:02:56,360 +learn about and it's a cool like party + +82 +00:02:53,959 --> 00:02:59,280 +conversation uh yeah so that's why you + +83 +00:02:56,360 --> 00:03:00,680 +should care um so as a lecture road map + +84 +00:02:59,280 --> 00:03:02,480 +I'm going to give a a brief overview of + +85 +00:03:00,680 --> 00:03:04,440 +subfields and coverage over various + +86 +00:03:02,480 --> 00:03:05,959 +topics um for each topic group we're + +87 +00:03:04,440 --> 00:03:08,120 +going to go over main Concepts and + +88 +00:03:05,959 --> 00:03:09,680 +research questions that linguists ask + +89 +00:03:08,120 --> 00:03:12,000 +also some current and previous + +90 +00:03:09,680 --> 00:03:13,519 +computational approaches and then some + +91 +00:03:12,000 --> 00:03:15,519 +applications to NLP that you might be + +92 +00:03:13,519 --> 
00:03:18,200 +interested in um and of course because + +93 +00:03:15,519 --> 00:03:20,120 +there's a lot to cover in only about 80 + +94 +00:03:18,200 --> 00:03:22,120 +now 75 minutes this is going to be very + +95 +00:03:20,120 --> 00:03:25,280 +dun in certain areas so apologies in + +96 +00:03:22,120 --> 00:03:27,720 +advance and please feel free to ask + +97 +00:03:25,280 --> 00:03:29,360 +questions so how do we break down + +98 +00:03:27,720 --> 00:03:30,280 +Linguistics as a field into separate + +99 +00:03:29,360 --> 00:03:32,280 +subfields + +100 +00:03:30,280 --> 00:03:34,480 +one way we can do this is by looking at + +101 +00:03:32,280 --> 00:03:36,680 +the structures that we are studying so + +102 +00:03:34,480 --> 00:03:39,400 +here I have a list of different uh + +103 +00:03:36,680 --> 00:03:40,799 +subfields increasing an abstraction uh + +104 +00:03:39,400 --> 00:03:43,120 +first we have phonetics which is the + +105 +00:03:40,799 --> 00:03:46,280 +study of individual speech sounds um and + +106 +00:03:43,120 --> 00:03:47,879 +for sign languages gestures um one level + +107 +00:03:46,280 --> 00:03:49,319 +Above This is phenology how do we + +108 +00:03:47,879 --> 00:03:51,680 +actually organize these sounds and + +109 +00:03:49,319 --> 00:03:52,959 +gestures in the mind what makes coherent + +110 +00:03:51,680 --> 00:03:55,239 +categories and + +111 +00:03:52,959 --> 00:03:58,079 +languages then up we go up to the word + +112 +00:03:55,239 --> 00:04:00,760 +level how are words formed then from + +113 +00:03:58,079 --> 00:04:02,760 +words to phrases and sentences + +114 +00:04:00,760 --> 00:04:04,879 +and then combining a bunch of different + +115 +00:04:02,760 --> 00:04:07,200 +parts together how do we extract meaning + +116 +00:04:04,879 --> 00:04:09,000 +from these different forms and then how + +117 +00:04:07,200 --> 00:04:11,200 +does meaning change and adapt in + +118 +00:04:09,000 --> 00:04:13,439 +language use in + +119 +00:04:11,200 --> 00:04:15,400 +context um now I presented these + +120 +00:04:13,439 --> 00:04:16,400 +categories in very discreet boxes but + +121 +00:04:15,400 --> 00:04:18,320 +it's really important to remember + +122 +00:04:16,400 --> 00:04:21,120 +there's a lot of like bleeding between + +123 +00:04:18,320 --> 00:04:22,560 +categories um like between morphology + +124 +00:04:21,120 --> 00:04:24,680 +and syntax there's a whole field of + +125 +00:04:22,560 --> 00:04:26,960 +study called morphosyntax same thing + +126 +00:04:24,680 --> 00:04:28,759 +with the syntax semantics interface um + +127 +00:04:26,960 --> 00:04:30,240 +we can argue for days about what the + +128 +00:04:28,759 --> 00:04:32,320 +actual difference between semantics and + +129 +00:04:30,240 --> 00:04:34,440 +pragmatics is I'm going to ignore that + +130 +00:04:32,320 --> 00:04:36,320 +for now along with a lot of other things + +131 +00:04:34,440 --> 00:04:37,800 +and we can even span the whole gradient + +132 +00:04:36,320 --> 00:04:39,759 +from phonetics to pragmatics when we + +133 +00:04:37,800 --> 00:04:41,520 +talk about like proy inflection and + +134 +00:04:39,759 --> 00:04:43,680 +stress so there's lots of different + +135 +00:04:41,520 --> 00:04:45,199 +interactions that can occur here um + +136 +00:04:43,680 --> 00:04:48,360 +while I am presenting them in very + +137 +00:04:45,199 --> 00:04:51,440 +discrete forms um do keep that in + +138 +00:04:48,360 --> 00:04:53,320 +mind so I have described kind of like + +139 +00:04:51,440 --> 00:04:55,080 +the separate subfields 
based on + +140 +00:04:53,320 --> 00:04:57,560 +structures but we can also apply these + +141 +00:04:55,080 --> 00:04:59,960 +to other areas like neurolinguistics how + +142 +00:04:57,560 --> 00:05:01,240 +does language uh work in the brain psych + +143 +00:04:59,960 --> 00:05:02,880 +Linguistics like what is the + +144 +00:05:01,240 --> 00:05:05,120 +psychological reality of structures how + +145 +00:05:02,880 --> 00:05:07,039 +do we process them social Linguistics + +146 +00:05:05,120 --> 00:05:09,080 +deals with like social context how do + +147 +00:05:07,039 --> 00:05:11,360 +speakers vary based on like their social + +148 +00:05:09,080 --> 00:05:12,840 +setting a linguistic typology what are + +149 +00:05:11,360 --> 00:05:14,680 +the different variations between + +150 +00:05:12,840 --> 00:05:17,080 +languages and historical Linguistics how + +151 +00:05:14,680 --> 00:05:18,479 +has language changed over time um and as + +152 +00:05:17,080 --> 00:05:20,360 +much as I would love to cover all these + +153 +00:05:18,479 --> 00:05:23,400 +things I'm going to mainly focus on the + +154 +00:05:20,360 --> 00:05:24,960 +things on the left um and across all of + +155 +00:05:23,400 --> 00:05:26,759 +these different subfields we can use + +156 +00:05:24,960 --> 00:05:28,720 +computational methods to explore + +157 +00:05:26,759 --> 00:05:30,680 +questions within and then also across + +158 +00:05:28,720 --> 00:05:32,000 +subfields + +159 +00:05:30,680 --> 00:05:34,440 +um so in this lecture I'm going to break + +160 +00:05:32,000 --> 00:05:36,120 +down uh things into three main parts + +161 +00:05:34,440 --> 00:05:37,840 +first we'll start with sound and gesture + +162 +00:05:36,120 --> 00:05:39,759 +then we'll move on to subwords and + +163 +00:05:37,840 --> 00:05:42,080 +constituents and then we'll move on to + +164 +00:05:39,759 --> 00:05:43,800 +meaning and intent um roughly broken up + +165 +00:05:42,080 --> 00:05:46,720 +in these parts but like I said + +166 +00:05:43,800 --> 00:05:49,080 +everything's a Continuum so things will + +167 +00:05:46,720 --> 00:05:52,160 +uh be referred to or like we might + +168 +00:05:49,080 --> 00:05:54,880 +Advance a little bit in certain + +169 +00:05:52,160 --> 00:05:56,919 +sections so with that being said let's + +170 +00:05:54,880 --> 00:05:59,759 +start with sound and + +171 +00:05:56,919 --> 00:06:02,440 +gesture okay so at the very basic level + +172 +00:05:59,759 --> 00:06:05,120 +at phonetics we can study speech sounds + +173 +00:06:02,440 --> 00:06:07,639 +and gestures specifically how we produce + +174 +00:06:05,120 --> 00:06:09,919 +them um like what are the functions in + +175 +00:06:07,639 --> 00:06:12,759 +our body uh that we do to actually + +176 +00:06:09,919 --> 00:06:14,520 +create sounds how we perceive them um + +177 +00:06:12,759 --> 00:06:16,759 +like how does the actual physical + +178 +00:06:14,520 --> 00:06:19,039 +property of the waveform for example uh + +179 +00:06:16,759 --> 00:06:21,520 +turn into our perceptions of like pitch + +180 +00:06:19,039 --> 00:06:24,080 +and volume and then how we can analyze + +181 +00:06:21,520 --> 00:06:25,759 +them so we can look at physical + +182 +00:06:24,080 --> 00:06:27,960 +properties and mathematical properties + +183 +00:06:25,759 --> 00:06:31,560 +of these waveforms break them down into + +184 +00:06:27,960 --> 00:06:31,560 +like their spectral components Etc + +185 +00:06:32,280 --> 00:06:36,479 +um so one very important distinction + +186 +00:06:35,080 --> 00:06:38,440 +distinction 
that we need to make when we + +187 +00:06:36,479 --> 00:06:41,560 +study things like phonetics is that + +188 +00:06:38,440 --> 00:06:43,319 +there is a discret uh separation between + +189 +00:06:41,560 --> 00:06:46,360 +how things actually sound and how they + +190 +00:06:43,319 --> 00:06:48,000 +are spelled in phonetics the actual like + +191 +00:06:46,360 --> 00:06:49,960 +Atomic unit that we study are things + +192 +00:06:48,000 --> 00:06:53,160 +called phones and these are individual + +193 +00:06:49,960 --> 00:06:56,039 +speech sounds um so it' be like H in the + +194 +00:06:53,160 --> 00:06:59,680 +sound for the English word + +195 +00:06:56,039 --> 00:07:01,919 +hat um we need to keep in mind as like + +196 +00:06:59,680 --> 00:07:03,280 +we work with text a lot is that uh one + +197 +00:07:01,919 --> 00:07:05,360 +thing to keep in mind is that text is + +198 +00:07:03,280 --> 00:07:06,840 +not a onetoone mapping between + +199 +00:07:05,360 --> 00:07:08,919 +characters and these sounds and this is + +200 +00:07:06,840 --> 00:07:10,120 +very obvious in certain scripts so for + +201 +00:07:08,919 --> 00:07:12,960 +those of you that know how to read + +202 +00:07:10,120 --> 00:07:14,759 +Chinese for example um Chinese is very + +203 +00:07:12,960 --> 00:07:16,479 +logographic um even though there are + +204 +00:07:14,759 --> 00:07:18,440 +some indications in the character of + +205 +00:07:16,479 --> 00:07:21,360 +like how you might pronounce it it's + +206 +00:07:18,440 --> 00:07:22,800 +very uh sparse in certain uh for certain + +207 +00:07:21,360 --> 00:07:24,879 +characters and there's little indication + +208 +00:07:22,800 --> 00:07:27,280 +of how you would actually say certain + +209 +00:07:24,879 --> 00:07:29,280 +words um other scripts have very + +210 +00:07:27,280 --> 00:07:32,199 +consistent spellings for sounds that are + +211 +00:07:29,280 --> 00:07:34,639 +one one um so we can uh determine the + +212 +00:07:32,199 --> 00:07:36,520 +exact pronunciation of a word uh based + +213 +00:07:34,639 --> 00:07:38,680 +on its characters so this would be + +214 +00:07:36,520 --> 00:07:41,720 +things like Japanese Kaa which are syll + +215 +00:07:38,680 --> 00:07:43,599 +like uh that did like show each syllable + +216 +00:07:41,720 --> 00:07:45,120 +uh Spanish is also very easy to + +217 +00:07:43,599 --> 00:07:46,879 +pronounce once you know the rules and + +218 +00:07:45,120 --> 00:07:50,759 +Hindi also falls in this category as + +219 +00:07:46,879 --> 00:07:54,400 +well um some other scripts oh yes does + +220 +00:07:50,759 --> 00:07:57,560 +that mean that sound more + +221 +00:07:54,400 --> 00:08:01,560 +grinding sound is more well it depends + +222 +00:07:57,560 --> 00:08:03,039 +on the script uh like sometimes you can + +223 +00:08:01,560 --> 00:08:04,960 +like there is a script called the IPA + +224 +00:08:03,039 --> 00:08:06,400 +which is exactly one to one between the + +225 +00:08:04,960 --> 00:08:10,919 +sound that you produce and how it's + +226 +00:08:06,400 --> 00:08:12,120 +spelled um but for the most part um your + +227 +00:08:10,919 --> 00:08:13,599 +the way that you would represent the + +228 +00:08:12,120 --> 00:08:15,479 +exact sound is always going to be more + +229 +00:08:13,599 --> 00:08:19,199 +granular than how it's spelled in + +230 +00:08:15,479 --> 00:08:21,400 +orthography yeah thank you um and then + +231 +00:08:19,199 --> 00:08:23,080 +finally uh especially for those of you + +232 +00:08:21,400 --> 00:08:24,960 +who acquired English as a second + 
+233 +00:08:23,080 --> 00:08:26,960 +language or even if you are a native + +234 +00:08:24,960 --> 00:08:28,800 +English speaker you know that certain + +235 +00:08:26,960 --> 00:08:30,960 +words are really weird to spell and + +236 +00:08:28,800 --> 00:08:32,680 +really weird to pronounce for the + +237 +00:08:30,960 --> 00:08:34,440 +longest time when I was a kid up until I + +238 +00:08:32,680 --> 00:08:38,159 +was maybe seven or eight I thought chaos + +239 +00:08:34,440 --> 00:08:39,880 +was pronounced Chows um so uh even + +240 +00:08:38,159 --> 00:08:41,080 +though there are very general rules + +241 +00:08:39,880 --> 00:08:42,599 +about how you would say something in + +242 +00:08:41,080 --> 00:08:45,680 +English there are exceptions that have + +243 +00:08:42,599 --> 00:08:48,880 +to be made and have to be learned um and + +244 +00:08:45,680 --> 00:08:51,640 +this happens as well in French um so + +245 +00:08:48,880 --> 00:08:54,360 +between scripts like English and French + +246 +00:08:51,640 --> 00:08:56,720 +uh which are harder to uh get those + +247 +00:08:54,360 --> 00:08:58,640 +irregular forms from we call those deep + +248 +00:08:56,720 --> 00:09:00,920 +orthographies and then the ones that are + +249 +00:08:58,640 --> 00:09:03,519 +very onetoone like Japanese Ka we call + +250 +00:09:00,920 --> 00:09:03,519 +them shallow + +251 +00:09:03,680 --> 00:09:07,600 +orthographies so because there are so + +252 +00:09:06,040 --> 00:09:10,760 +many different ways to represent sounds + +253 +00:09:07,600 --> 00:09:12,040 +across languages and scripts um we as + +254 +00:09:10,760 --> 00:09:14,360 +linguists use something called The + +255 +00:09:12,040 --> 00:09:16,680 +International Phonetic Alphabet and in + +256 +00:09:14,360 --> 00:09:21,600 +Brackets here are how you would actually + +257 +00:09:16,680 --> 00:09:24,959 +write IPA using um IPA uh so this is + +258 +00:09:21,600 --> 00:09:27,680 +like the updated chart from I believe + +259 +00:09:24,959 --> 00:09:29,640 +2022 I think it says but my video is + +260 +00:09:27,680 --> 00:09:31,560 +blocking it from what year it's from um + +261 +00:09:29,640 --> 00:09:33,600 +but it basically categorizes a bunch of + +262 +00:09:31,560 --> 00:09:35,600 +sounds and then shows you the exact + +263 +00:09:33,600 --> 00:09:38,120 +character to write to represent that + +264 +00:09:35,600 --> 00:09:40,800 +sound um one computational tool that you + +265 +00:09:38,120 --> 00:09:42,760 +can use to convert from some orthography + +266 +00:09:40,800 --> 00:09:46,240 +to IPA text is epan which is actually + +267 +00:09:42,760 --> 00:09:49,279 +developed by David mortson here in + +268 +00:09:46,240 --> 00:09:51,839 +LTI so another aspect of phonetics that + +269 +00:09:49,279 --> 00:09:54,200 +I touched upon in the interest slide is + +270 +00:09:51,839 --> 00:09:56,760 +uh how we actually produce sounds with + +271 +00:09:54,200 --> 00:10:00,000 +the body so this is what articulatory + +272 +00:09:56,760 --> 00:10:03,160 +phonetics studies um basically uh with + +273 +00:10:00,000 --> 00:10:06,079 +spoken language uh various organs in + +274 +00:10:03,160 --> 00:10:08,040 +your mouth nose and throat can modify + +275 +00:10:06,079 --> 00:10:09,240 +air flow from your lungs into your lungs + +276 +00:10:08,040 --> 00:10:11,360 +and that's how we produce different + +277 +00:10:09,240 --> 00:10:13,040 +sounds and based on how these different + +278 +00:10:11,360 --> 00:10:15,839 +modifications occur we can get different + +279 +00:10:13,040 
--> 00:10:18,000 +types um so very coarsely we can + +280 +00:10:15,839 --> 00:10:19,800 +categorize sounds into vowels which are + +281 +00:10:18,000 --> 00:10:21,560 +produced without much restriction + +282 +00:10:19,800 --> 00:10:23,000 +consonants which are produced with some + +283 +00:10:21,560 --> 00:10:25,240 +partial or full restriction and then + +284 +00:10:23,000 --> 00:10:26,720 +finally semivowels which are kind of + +285 +00:10:25,240 --> 00:10:30,440 +between a consonant and a vowel these + +286 +00:10:26,720 --> 00:10:32,600 +are sounds like Y and W + +287 +00:10:30,440 --> 00:10:34,440 +um and then we can break down some of + +288 +00:10:32,600 --> 00:10:36,360 +these categories even more so for + +289 +00:10:34,440 --> 00:10:37,600 +consonants we can categorize them based + +290 +00:10:36,360 --> 00:10:40,399 +on their place and manner of + +291 +00:10:37,600 --> 00:10:43,079 +articulation like the sound M I create + +292 +00:10:40,399 --> 00:10:46,440 +by putting my lips together and then + +293 +00:10:43,079 --> 00:10:48,720 +like nasiz uh we can get into that + +294 +00:10:46,440 --> 00:10:50,360 +another time um but basically we can + +295 +00:10:48,720 --> 00:10:52,160 +categorize based on the placement and + +296 +00:10:50,360 --> 00:10:55,040 +manner as well as whether they are + +297 +00:10:52,160 --> 00:10:58,360 +voiced or voiceless so the difference + +298 +00:10:55,040 --> 00:11:00,360 +between like T and du is that there is + +299 +00:10:58,360 --> 00:11:02,600 +vibration in my vocal cords and that's + +300 +00:11:00,360 --> 00:11:05,519 +the distinction between voice and + +301 +00:11:02,600 --> 00:11:07,320 +voiceless um vowels can be categorized + +302 +00:11:05,519 --> 00:11:09,000 +in a different way uh based on the + +303 +00:11:07,320 --> 00:11:10,560 +position of your tongue how open your + +304 +00:11:09,000 --> 00:11:12,760 +mouth is and the roundedness of your + +305 +00:11:10,560 --> 00:11:14,760 +lips I really like the IPA chart for + +306 +00:11:12,760 --> 00:11:16,920 +vowels um because it's actually very + +307 +00:11:14,760 --> 00:11:18,560 +intuitive imagine someone splits you + +308 +00:11:16,920 --> 00:11:20,680 +right down the middle like this and then + +309 +00:11:18,560 --> 00:11:22,639 +takes a cross-section and that's kind of + +310 +00:11:20,680 --> 00:11:25,000 +how you can map the vowels to the vowel + +311 +00:11:22,639 --> 00:11:29,440 +chart vowels are typically voice but + +312 +00:11:25,000 --> 00:11:31,440 +voiceless vowels do actually exist + +313 +00:11:29,440 --> 00:11:33,360 +um so we've covered the basics of + +314 +00:11:31,440 --> 00:11:35,160 +phonetics now we can move on to phology + +315 +00:11:33,360 --> 00:11:37,360 +which is the study of the categorization + +316 +00:11:35,160 --> 00:11:40,360 +of speech sounds or equivalent gestures + +317 +00:11:37,360 --> 00:11:42,079 +and sign languages now in contrast to + +318 +00:11:40,360 --> 00:11:44,720 +phonetics which deals with the physical + +319 +00:11:42,079 --> 00:11:46,040 +properties of sounds regardless of their + +320 +00:11:44,720 --> 00:11:48,079 +context regardless of what language + +321 +00:11:46,040 --> 00:11:49,839 +you're actually speaking them in phology + +322 +00:11:48,079 --> 00:11:51,959 +Now deals with abstract rules or + +323 +00:11:49,839 --> 00:11:53,399 +constraints that govern interactions of + +324 +00:11:51,959 --> 00:11:55,839 +sounds within a language and also like + +325 +00:11:53,399 --> 00:11:57,920 +your mental reality of how you 
perceive + +326 +00:11:55,839 --> 00:12:00,120 +sounds so some questions that + +327 +00:11:57,920 --> 00:12:01,920 +phonologists might asks are are like + +328 +00:12:00,120 --> 00:12:04,079 +what sounds are meaningfully distinct in + +329 +00:12:01,920 --> 00:12:06,560 +a language how are sounds organized into + +330 +00:12:04,079 --> 00:12:09,880 +syllables and then what rules govern + +331 +00:12:06,560 --> 00:12:09,880 +allowable sequences of + +332 +00:12:10,519 --> 00:12:17,680 +sounds so like I said before phones uh + +333 +00:12:14,199 --> 00:12:20,040 +which are the uh like basic unit of + +334 +00:12:17,680 --> 00:12:22,320 +sounds for phonetics they're individual + +335 +00:12:20,040 --> 00:12:23,880 +speech sounds U but in phology what + +336 +00:12:22,320 --> 00:12:26,360 +we're actually concerned with are things + +337 +00:12:23,880 --> 00:12:28,839 +called phones and these are perceptually + +338 +00:12:26,360 --> 00:12:31,320 +distinct units of sound in a language um + +339 +00:12:28,839 --> 00:12:34,480 +pH are sounds that can distinguish one + +340 +00:12:31,320 --> 00:12:36,680 +word from another so in English um pit + +341 +00:12:34,480 --> 00:12:38,800 +is a different word from lit so we can + +342 +00:12:36,680 --> 00:12:42,600 +say that P and L are different + +343 +00:12:38,800 --> 00:12:43,920 +phones uh If We Gather a set of all of + +344 +00:12:42,600 --> 00:12:45,800 +the sounds that can create these + +345 +00:12:43,920 --> 00:12:48,519 +distinct these distinctions of meanings + +346 +00:12:45,800 --> 00:12:50,199 +in a language we have the phon inventory + +347 +00:12:48,519 --> 00:12:52,839 +of that + +348 +00:12:50,199 --> 00:12:54,880 +language so a really fun fact here + +349 +00:12:52,839 --> 00:12:58,040 +connecting phonetics and phenology and + +350 +00:12:54,880 --> 00:12:59,240 +psych Linguistics is that over time uh + +351 +00:12:58,040 --> 00:13:02,240 +with the languages that you speak + +352 +00:12:59,240 --> 00:13:04,800 +regularly we are conditioned um to limit + +353 +00:13:02,240 --> 00:13:07,519 +our mental distinction of sounds to uh + +354 +00:13:04,800 --> 00:13:08,680 +as well as production sometimes to those + +355 +00:13:07,519 --> 00:13:11,040 +that are distinct in our native + +356 +00:13:08,680 --> 00:13:12,839 +languages whereas babies can + +357 +00:13:11,040 --> 00:13:14,560 +perceptually easily distinguish between + +358 +00:13:12,839 --> 00:13:16,440 +all the different phones and they did + +359 +00:13:14,560 --> 00:13:19,279 +they did the study where they had like + +360 +00:13:16,440 --> 00:13:22,120 +babies like either changing attention or + +361 +00:13:19,279 --> 00:13:24,240 +like sucking on a pacifier and if their + +362 +00:13:22,120 --> 00:13:25,839 +like rate of sucking increased it means + +363 +00:13:24,240 --> 00:13:27,399 +they've like sensed a new thing in their + +364 +00:13:25,839 --> 00:13:29,480 +environment so they tested with a bunch + +365 +00:13:27,399 --> 00:13:30,680 +of different phones and they saw that + +366 +00:13:29,480 --> 00:13:33,160 +like the babies could distinguish + +367 +00:13:30,680 --> 00:13:34,600 +between them but if especially if you're + +368 +00:13:33,160 --> 00:13:36,199 +like not a native speaker of a tonal + +369 +00:13:34,600 --> 00:13:37,680 +language and you try and distinguish + +370 +00:13:36,199 --> 00:13:39,839 +between tones and Chinese or something + +371 +00:13:37,680 --> 00:13:42,480 +it's really hard to tell um and your + +372 +00:13:39,839 --> 00:13:43,760 +brain has 
like learned to abstract away + +373 +00:13:42,480 --> 00:13:46,000 +from all of those things that are not + +374 +00:13:43,760 --> 00:13:47,560 +distinct in your language um but we can + +375 +00:13:46,000 --> 00:13:50,759 +still relearn these things it's just a + +376 +00:13:47,560 --> 00:13:54,680 +fun uh language acquisition fact I threw + +377 +00:13:50,759 --> 00:13:56,079 +in there um yeah so let's run through an + +378 +00:13:54,680 --> 00:13:58,560 +example oh + +379 +00:13:56,079 --> 00:14:03,519 +yes try and summarize that is it right + +380 +00:13:58,560 --> 00:14:03,519 +to assum the of language specific + +381 +00:14:04,959 --> 00:14:13,680 +M okay so how can we like + +382 +00:14:09,639 --> 00:14:17,160 +formalize um uh phon names and how they + +383 +00:14:13,680 --> 00:14:18,360 +operate with specific sounds like phones + +384 +00:14:17,160 --> 00:14:22,360 +uh let's run through an example in + +385 +00:14:18,360 --> 00:14:24,440 +English so p with no puff of air and a P + +386 +00:14:22,360 --> 00:14:26,480 +with a puff of air are two distinct + +387 +00:14:24,440 --> 00:14:29,480 +phones that are actually used in English + +388 +00:14:26,480 --> 00:14:30,839 +speech so if you hold like your hand or + +389 +00:14:29,480 --> 00:14:33,519 +a piece of paper up to your mouth and + +390 +00:14:30,839 --> 00:14:35,720 +you say the word spat you probably won't + +391 +00:14:33,519 --> 00:14:37,560 +feel much air come out of your mouth but + +392 +00:14:35,720 --> 00:14:39,880 +if you say the word Pat there's like a + +393 +00:14:37,560 --> 00:14:43,000 +bit of a + +394 +00:14:39,880 --> 00:14:46,040 +puff however if we were to like + +395 +00:14:43,000 --> 00:14:48,639 +manipulate a sound and change the Pu + +396 +00:14:46,040 --> 00:14:52,279 +with no puff of air which is unaspirated + +397 +00:14:48,639 --> 00:14:53,680 +for the puff of air p and vice versa we + +398 +00:14:52,279 --> 00:14:57,360 +wouldn't change the meaning of the word + +399 +00:14:53,680 --> 00:14:59,079 +like it can say um spat and spat one + +400 +00:14:57,360 --> 00:15:01,240 +with a puff and one without and they + +401 +00:14:59,079 --> 00:15:04,279 +mean the same thing to + +402 +00:15:01,240 --> 00:15:06,680 +me so what does this tell us um this + +403 +00:15:04,279 --> 00:15:09,160 +shows that the p with no puff of air and + +404 +00:15:06,680 --> 00:15:11,839 +the p with the puff of air are instances + +405 +00:15:09,160 --> 00:15:14,800 +of the same phon um in other words they + +406 +00:15:11,839 --> 00:15:16,959 +are alphones in English alphones are + +407 +00:15:14,800 --> 00:15:19,560 +phones that map to the same or phones + +408 +00:15:16,959 --> 00:15:22,560 +that map to the same phon um in other + +409 +00:15:19,560 --> 00:15:24,639 +languages though uh we can distinguish + +410 +00:15:22,560 --> 00:15:26,880 +between these two sounds like in tide + +411 +00:15:24,639 --> 00:15:28,160 +the p with no puff of air and the p with + +412 +00:15:26,880 --> 00:15:29,720 +the puff of air would actually change a + +413 +00:15:28,160 --> 00:15:31,720 +meaning of a word + +414 +00:15:29,720 --> 00:15:34,720 +um so their phon name inventory would + +415 +00:15:31,720 --> 00:15:34,720 +include both + +416 +00:15:35,120 --> 00:15:43,920 +P's um and how these uh uh how these + +417 +00:15:40,319 --> 00:15:46,319 +phones actually occur like what actual + +418 +00:15:43,920 --> 00:15:49,279 +sound you make is determined by the + +419 +00:15:46,319 --> 00:15:51,759 +context that it is in so whether this + 
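As a toy rendering of the aspiration rule the lecture formalizes next (purely illustrative: real phonological rules are written over feature bundles, not Python strings), the unvoiced stops /p t k/ surface as aspirated only at the beginning of a stressed syllable:

```python
UNVOICED_STOPS = {"p", "t", "k"}

def surface_form(syllables):
    """syllables: list of (phoneme_list, stressed) pairs for one word."""
    out = []
    for phonemes, stressed in syllables:
        for i, ph in enumerate(phonemes):
            if ph in UNVOICED_STOPS and stressed and i == 0:
                out.append(ph + "\u02b0")   # aspirated allophone, e.g. pʰ
            else:
                out.append(ph)              # unaspirated allophone
    return "".join(out)

print(surface_form([(["p", "\u00e6", "t"], True)]))       # pʰæt  ("pat")
print(surface_form([(["s", "p", "\u00e6", "t"], True)]))  # spæt  ("spat")
```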
+420 +00:15:49,279 --> 00:15:54,040 +phon p is produced as a p without a puff + +421 +00:15:51,759 --> 00:15:55,759 +or a P with a puff can be determined by + +422 +00:15:54,040 --> 00:15:56,800 +the sounds that surround it which we + +423 +00:15:55,759 --> 00:15:59,160 +call its + +424 +00:15:56,800 --> 00:16:02,079 +environment so an observation that we + +425 +00:15:59,160 --> 00:16:04,959 +can make is generally um for standard + +426 +00:16:02,079 --> 00:16:06,880 +American English aspiration only occurs + +427 +00:16:04,959 --> 00:16:09,199 +when this p phon is at the beginning of + +428 +00:16:06,880 --> 00:16:10,399 +a stress syllable um we can also see + +429 +00:16:09,199 --> 00:16:12,160 +though that this happens with other + +430 +00:16:10,399 --> 00:16:16,040 +sounds like T and + +431 +00:16:12,160 --> 00:16:18,519 +C um it turns out that P T and C form a + +432 +00:16:16,040 --> 00:16:20,560 +very Salient group of sounds called + +433 +00:16:18,519 --> 00:16:23,199 +unvoiced stops so we can write a + +434 +00:16:20,560 --> 00:16:24,680 +phonological rule these unvoiced stops + +435 +00:16:23,199 --> 00:16:26,639 +will be aspirated at the beginning of a + +436 +00:16:24,680 --> 00:16:29,639 +stress syllable otherwise they will be + +437 +00:16:26,639 --> 00:16:31,440 +unaspirated + +438 +00:16:29,639 --> 00:16:33,639 +okay so kind of ran through the basics + +439 +00:16:31,440 --> 00:16:37,079 +of phonetics and phology what are some + +440 +00:16:33,639 --> 00:16:38,399 +applications in NLP um a really cool one + +441 +00:16:37,079 --> 00:16:40,880 +which also ties into historical + +442 +00:16:38,399 --> 00:16:43,880 +Linguistics is automatic protol language + +443 +00:16:40,880 --> 00:16:46,199 +reconstruction um over time uh because + +444 +00:16:43,880 --> 00:16:49,680 +of like just the physical properties of + +445 +00:16:46,199 --> 00:16:52,120 +your mouth and social factors um you + +446 +00:16:49,680 --> 00:16:55,079 +will have phonological changes of how + +447 +00:16:52,120 --> 00:16:57,079 +you like produce words and sounds over + +448 +00:16:55,079 --> 00:16:58,600 +generations and this can give us Clues + +449 +00:16:57,079 --> 00:17:00,839 +as to how languages have evolved over + +450 +00:16:58,600 --> 00:17:03,160 +time and how they're related um so there + +451 +00:17:00,839 --> 00:17:05,799 +have been some work to uh uncover these + +452 +00:17:03,160 --> 00:17:07,120 +types of patterns computationally um + +453 +00:17:05,799 --> 00:17:08,559 +there's also really cool work on + +454 +00:17:07,120 --> 00:17:10,760 +cognitive models of human speech + +455 +00:17:08,559 --> 00:17:13,240 +production so this recent work uh from + +456 +00:17:10,760 --> 00:17:15,240 +Berkeley was training an unsupervised uh + +457 +00:17:13,240 --> 00:17:17,959 +Speech synthesis model which instead of + +458 +00:17:15,240 --> 00:17:19,679 +producing from raw waveforms uh instead + +459 +00:17:17,959 --> 00:17:21,799 +trained uh their model to produce + +460 +00:17:19,679 --> 00:17:23,319 +humanik articulatory gestures based on + +461 +00:17:21,799 --> 00:17:26,120 +like electrical signals that people have + +462 +00:17:23,319 --> 00:17:27,559 +measured from people's mouths um it can + +463 +00:17:26,120 --> 00:17:29,000 +also serve as a form of linguistic + +464 +00:17:27,559 --> 00:17:31,120 +evaluation of things like phone + +465 +00:17:29,000 --> 00:17:32,840 +embeddings uh if you have embeddings of + +466 +00:17:31,120 --> 00:17:35,120 +phones or the individual sounds do they + 
+[00:17:32] actually also encode phonological relations, and then finally, um, we can also incorporate phonetic information into word embeddings, so this can be applied to tasks like cognate and loanword detection, um, multilingual named entity recognition, language ID, etc.
+[00:17:49] okay, so now popping one level up in abstraction, we can move on to subwords and constituents. so morphology is the study of word formation and structure, and if you've ever, like, armchair-philosophized or thought a little bit too hard about what you were saying, you might be like, um, what is a word? do words exist? um, and this is actually a very valid question, lots of linguists have thought about this, lots of linguists continue to debate about it, if you ask someone who's really opinionated they will go for a very, very long time talking about whether or not they believe a word exists, but for now we're going to forego that debate and just go with our intuitions for what a word is.
+[00:18:28] um, words are formed from linguistic units, um, called morphemes, um, and like how a phone was the smallest unit we would study in phonology, a morpheme is the smallest unit we study in morphology, um, these are the smallest meaningful linguistic units. so I can break the word morphology down into two morphemes: morph, which means form and shape, and ology, the study of.
+[00:18:51] um, so one thing I'd like to note is that in this lecture most of my examples are going to be English, because I know all of us speak English, um, but English morphology is super duper boring, um, so you can check out some really cool, fun polysynthetic languages, or ones that have a lot of, like, morphological processes that go on, um, including many indigenous American languages, for more fun examples.
+[00:19:13] yeah, so for the most part we can break down morphemes into two categories based on the following properties. first we can ask, can a morpheme occur by itself? um, if a morpheme can occur by itself then it's basically a word and we call it a free morpheme, um, but if it can't then it's bound. so like in dogs, uh, we have a free morpheme dog and a bound morpheme for the plural s, because that just can't occur on its own, but we can also form words that are composed of all bound morphemes, like multilingual.
+[00:19:44] um, another thing we can ask is whether or not it comprises the main meaning of the word. um, if it does, it's the root; if it's not, it's an affix. so like in dogs, um, the affix on its own just indicates that it's a plural, but not really of what, so the main meaning is with dog. um, another, like, weird thing is cranberry morphemes: so if we try and split cran and berry, cran doesn't really mean anything, um, so it's a bound morpheme without, like, a real meaning, which kind of contradicts what I said about a morpheme having a meaning, um, but this type of thing comes about from historical change, so cran actually did have a meaning back in the day, um, but it no longer does now.
+[00:20:29] so one type of morphological process is inflection, which can create a new form of the same word. um, basically the main concept and meaning of the word will remain the same, but we're basically flipping a switch for some grammatical feature in the word. so like in the dogs example, appending an s makes it plural; uh, for person, I can say I run to he runs; tense, I climb, I climbed, etc.
+[00:20:59] in contrast we have processes of word formation. so one of these, first, is derivation: it's a process that creates a semantically related new word by operating on a base form, often through things like affixation. so now the main concept and meaning of the word is going to change, and oftentimes part of speech will change too: so you have to teach, a verb, to teacher, uh, someone who teaches; uh, intense to intensify; easy, easily; but we can also have derivations that don't change part of speech, like unlucky and lucky, where they now kind of have opposite meanings.
+[00:21:32] another word formation strategy is compounding, and this is a process that creates semantically new words by combining two already separate words, like blackbird, ice cream, skyscraper. um, German is infamous for this, so this, like, super long example here, um, means cattle marking and beef labeling supervision duties delegation law. um, yeah, that's fun.
+[00:21:55] um, so all of the things I've shown you, especially with the English examples, are pretty simple, we just, like, attach things sequentially to the root, boom, word. um, but not all morphological processes are this straightforward. so, um, in English we do have something called apophony, um, so this is like a vowel change: tooth to teeth for the plural, goose to geese, unfortunately not moose to meese, but that would be fun. um, infixation, also fun, there's a fun example in English, you can read it, I won't read it out loud. um, transfixation, uh, is when you have, like, uh, some root but it's actually split when you put the new, um, affix in there, so this happens with Arabic and Hebrew roots. reduplication, um, in Indonesian this happens pretty often, um, we have the verb berjalan, to walk, um, and then we reduplicate the root to become to stroll, berjalan-jalan. uh, and then there's also a lot of other processes that I haven't listed here.
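+a tiny sketch to make that contrast concrete (my own toy Python, not something from the lecture): naive suffix stripping captures concatenative morphology like dog+s, but an apophony form has no suffix to strip, so a rule list like this can only handle it with an explicit lookup table. the rules and word lists below are invented for illustration.

```python
# Toy illustration: suffix rules handle concatenative plurals,
# but apophony (an internal vowel change) needs a case-by-case table.
SUFFIX_RULES = [("ies", "y"), ("es", ""), ("s", "")]   # hypothetical rules
APOPHONY = {"teeth": "tooth", "geese": "goose"}        # listed one by one

def naive_singularize(word: str) -> str:
    if word in APOPHONY:                 # non-concatenative: table lookup
        return APOPHONY[word]
    for suffix, replacement in SUFFIX_RULES:
        if word.endswith(suffix):        # concatenative: peel the affix off
            return word[: -len(suffix)] + replacement
    return word

for w in ["dogs", "buses", "teeth", "geese"]:
    print(w, "->", naive_singularize(w))   # dog, bus, tooth, goose
```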
+[00:22:59] um, computationally we have, uh, useful tools for this called morphological analyzers, which take as input a word form and then output all possible morphological parses of that word. um, so traditionally this is done with FSTs, um, and it's a two-step creation process, it's actually pretty, uh, arduous. first you've got to map your lemma and a morphosyntactic description of the morphemes to an intermediate form that represents, like, the basic morpheme representation of that label: so for example you have, like, bus the lemma, PL for plural for that description, and then that maps to bus and then s as the canonical morpheme representation of the plural. and then you have another one that maps from intermediate form to surface form according to some, like, orthographic rules, uh, phonological rules, etc., so now you go from bus, uh, with the plural s, but it's actually written out as buses. um, you put these together and then you can have as input your bus PL and then get as output buses, but you can also invert it, so now you can use it as an analyzer. um, and some tools to construct these types of FSTs are foma, rustfst, and OpenFst.
+[00:24:17] um, but obviously we don't really use a ton of FSTs anymore in modern NLP, um, so more recently there are neural models, um, like sequence-to-sequence models, that just do this with the word as a raw input and the analysis as the output. but we can still combine approaches: so you can combine an FST built with your predetermined lexicon with a neural guesser that can handle unseen word forms, um, we can also use FSTs to generate additional training data, uh, that can be used as input to your neural model. um, even though, like, I wouldn't guarantee that none of us use FSTs, more generally the NLP community doesn't really deal with them anymore.
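+here is a minimal Python sketch of that two-step generate-then-invert idea, simulating the transducer composition with plain dictionaries (the bus+PL example follows the lecture, but the code is my own toy; a real system would compose actual transducers in foma or OpenFst):

```python
# Step 1: lemma + morphosyntactic tag -> intermediate morpheme sequence.
LEXICON = {("bus", "PL"): "bus^s", ("dog", "PL"): "dog^s"}

# Step 2: orthographic rewrite, intermediate -> surface form.
def orthography(intermediate: str) -> str:
    stem, _, suffix = intermediate.partition("^")
    if suffix == "s" and stem.endswith(("s", "x", "z")):
        return stem + "es"           # bus^s -> buses (e-insertion rule)
    return stem + suffix             # dog^s -> dogs

GENERATE = {key: orthography(mid) for key, mid in LEXICON.items()}
ANALYZE = {surface: key for key, surface in GENERATE.items()}  # inverted

print(GENERATE[("bus", "PL")])   # buses
print(ANALYZE["buses"])          # ('bus', 'PL')
```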
+[00:25:02] these tools are really, really useful for low-resource languages and for annotation of those low-resource languages, um, like in this example for, uh, Yupik. okay, so we've gone... yeah, sorry, one quick follow-up: um, as was pointed out, English morphology is really boring, um, Chinese morphology is even more boring, uh, so if you speak Chinese, uh, you also don't, you know, deal with a lot of morphology. but most other languages in the world have more complex morphology than English, and especially the ones, if you could go back a few slides, the ones that have, like, infixation, for example, um, where you're changing the characters in, like, the middle of the word, actually they break some of the underlying assumptions that we have in our, like, neural models nowadays. like, for example, we're using BPE or, um, SentencePiece or something to split words; that works really well in English, where we mostly have concatenative morphology, where you just stick two things together, but it doesn't work well when you're inserting characters in the middle of the words. so, like, it's kind of interesting to know these differences from the point of view of modeling, if you're modeling one of these languages, because that actually becomes a really big problem if you start doing, like, Arabic or something like that and you just use our existing models. so that's just another point about why knowing this is important here. yeah, and if you want to know more, David teaches a really cool class on subwords, where you deal with, like, tokenization stuff and morphological processes.
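+a toy sketch of the BPE idea just mentioned (my own simplified Python, not the lecture's): repeatedly merge the most frequent adjacent pair of symbols. on a small concatenative corpus the stem and suffix fall out as contiguous pieces, which is exactly the assumption that infixing or templatic morphology breaks.

```python
from collections import Counter

corpus = ["walk", "walked", "walks", "talk", "talked", "talks"]
words = [list(w) + ["</w>"] for w in corpus]   # start from characters

def merge_step(words):
    """Merge the single most frequent adjacent symbol pair everywhere."""
    pairs = Counter((a, b) for w in words for a, b in zip(w, w[1:]))
    if not pairs:
        return words
    (a, b), _ = pairs.most_common(1)[0]
    merged = []
    for w in words:
        out, i = [], 0
        while i < len(w):
            if i + 1 < len(w) and (w[i], w[i + 1]) == (a, b):
                out.append(w[i] + w[i + 1]); i += 2
            else:
                out.append(w[i]); i += 1
        merged.append(out)
    return merged

for _ in range(8):                 # learn 8 merges
    words = merge_step(words)
print(words[1])                    # e.g. ['walk', 'ed</w>']: stem + suffix
```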
+[00:26:40] um, all right, so we've covered words, now let's put them together. um, syntax is the study of how words form phrases and sentences, so, like, a question that a syntactician might ask is: what are the principles governing phrase and sentence structure within a language, and then also across languages? so aspects of syntax include word order, like does your subject, uh, come before your verb and then followed by your object, or some other combination; um, agreement, like subject-verb agreement; and then also, what is the nature of the hierarchical structure of the syntax? um, and then I have a fun example from Twitter about how some English sentences look like nine consecutive nouns, um, which I thought was funny.
+[00:27:25] um, so words, um, like, there's a lot of categorization, um, in this lecture, and words are no exception: we can categorize them based on their morphological, syntactic, and semantic properties, and we refer to these categories as parts of speech, like nouns, verbs, and adjectives, I'm sure you all are very familiar. um, however, one thing to note is that this categorization is, like, not a very strict one at all, um, and it should not be taken for granted: as you study more and more complex languages, or languages that are just completely different from English, you'll realize that some of these boundaries between, like, nouns and adjectives, or verbs and some nouns, are actually really, really blurry. um, so even though, like, this is very, you know, a noun is a person, place, or thing, and a verb is, like, an action, it actually is a bit more complicated when you factor, like, morphosyntax into it, so keep that in mind.
+[00:28:15] um, a very broad distinction we can make over all sorts of words is whether it's an open-class word or a closed-class word. so open classes of words are, um, classes where we can add new items over time, um, with relative ease. so if any of you guys are, like, online, uh, we have a new word like rizz, derived from charisma, and it can be a noun, like, oh, he has so much rizz, or it can be a verb, like, oh, he rizzed her up, you know, fun. um, and then closed-class words, you have a much smaller number of words and it's a lot harder to add new items over time. um, one exception to this, though, recently, is with pronouns: like, people are a bit more productive in how they use pronouns, a bit more flexible, at least in English, in the US.
+[00:29:03] um, and even based on how words act in context, we can often infer, um, the part of speech, even though we've never seen the word before, um (let me move this), um, as in this example, which you might have seen if you've ever taken a linguistics class, it's, like, everyone's favorite part of speech example: um, it's from this poem called Jabberwocky by Lewis Carroll, where he has a bunch of nonce words, um, but we can kind of tell, even though we've never seen the word before, what its function in the sentence is. like, um, all mimsy were the borogoves: like, borogoves has to be a noun here, it has to be something, and they were being mimsy, whatever that means.
+[00:29:40] um, so yeah, here's a list of canonical parts of speech, um, sometimes, based on a linguist's desired annotations, we can get more narrow than this, but this is, like, pretty standard; and then, like, a sentence annotated for each part of speech: like, they is a pronoun, had is an auxiliary verb, argued is a verb, etc., etc. um, I won't spend too much time on this, because, um, I think it's a bit too nitty-gritty in the details.
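+as a quick illustration of automatic part-of-speech tagging on exactly this kind of nonce word (my own example, assuming NLTK plus its tokenizer and tagger resources are installed; resource names can differ a bit across NLTK versions):

```python
import nltk

nltk.download("punkt", quiet=True)                        # tokenizer model
nltk.download("averaged_perceptron_tagger", quiet=True)   # tagger model

tokens = nltk.word_tokenize("All mimsy were the borogoves")
print(nltk.pos_tag(tokens))
# The tagger guesses from context, much as human readers do:
# "borogoves" typically comes out as a plural noun (NNS).
```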
+[00:30:05] um, so a big part of syntax is phrases, um, like, what are the types of phrases we have, how are they formed? um, here are three very, very basic ones. so a noun phrase, obviously, as the name suggests, contains a noun, but it can also include a determiner, to tell you, like, what set of nouns you are referring to, and then also things that modify the noun, like adjectives: so here is a noun phrase, the old man; old man is also a noun phrase; man is also a noun phrase. a prepositional phrase, um, has a preposition followed by a noun phrase; uh, for some prepositional phrases we can extend the rules to be more complicated, but here's a very simple example, like, to school. um, and then verb phrases contain a verb and any other noun phrase or prepositional phrase that the verb requires or has a slot for (um, my video keeps getting in the way of my slides), um, as well as any other adverbial modifiers, like, uh, sold a car to me. very simple.
+[00:31:01] so constituents consist of at least one contiguous word and behave as a single unit, um, this is, like, one theoretical unit that is very, um, important in generative syntax. so let's look at this example: Beyonce released a new country album. a new country album, as we saw in the previous slide, is a noun phrase. um, a crucial observation to make is that we can continually replace a new country album with smaller and smaller units, all the way down to the word level: like, Beyonce released a new language model, um, even shorter, Beyonce released a balloon, or Beyonce released lanternflies. so, um, yeah, we can see that all of these things, like, act as a single unit, and they all act in very similar ways.
+[00:31:49] um, so as a follow-up to that, some people have developed a theory of language in the context of context-free grammars, um, in linguistics specifically these are called phrase structure grammars, and this was introduced by Noam Chomsky, who I think actually gets a lot of flak from NLP, but people need to realize he was actually really, really important for linguistics, like, he has some pretty good ideas, even if not all of them are correct. um, so Noam Chomsky defined a phrase structure grammar as, um, having a finite vocabulary, a finite set of strings that are part of this vocabulary, and then a finite set of rules that operate on the vocabulary to produce strings, or more strings; and we can use these rules over and over and over again, recursively, to create infinitely long strings that are still parsable. um, so here are some very simple phrase structure rules for English: a sentence is defined to be a noun phrase followed by a verb phrase; a noun phrase can be, uh, constructed by having an optional determiner and another noun phrase; and then we can take that noun phrase, um, and decompose it into another optional adjective phrase, a noun, a prepositional phrase, so on and so forth.
+[00:33:00] now, using such a set of rules in our phrase structure grammar, we can generate lots and lots of English sentences, including those that are syntactically proper even if they are semantically nonsensical, so, like, playing Mad Libs with these phrase structure rules. a very famous example from Chomsky is colorless green ideas sleep furiously: doesn't really mean anything, but as a native English speaker you're like, you know, it sounds right, even if I don't know what it means, um, which is a really cool observation, like, uh, speakers have an intuition for when things sound syntactically correct even if they have no meaning.
+[00:33:32] um, but if you're wondering, you know, all these, like, sets of rules, it seems way too simple, like, people mess things up all the time, or there are lots of other constructions that can't be explained very easily by these rules: well, yeah, you're right, um, some phenomena are very difficult to model in this fashion.
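+a minimal sketch of such phrase structure rules in code (toy grammar and lexicon of my own, using NLTK's CFG utilities), generating well-formed but nonsensical strings in exactly the Mad Libs spirit described above:

```python
from nltk import CFG
from nltk.parse.generate import generate

grammar = CFG.fromstring("""
S -> NP VP
NP -> N | Adj NP
VP -> V Adv
Adj -> 'colorless' | 'green'
N -> 'ideas'
V -> 'sleep'
Adv -> 'furiously'
""")

# Recursively applying the NP rule yields a handful of grammatical
# strings, among them "colorless green ideas sleep furiously".
for sentence in generate(grammar, depth=6):
    print(" ".join(sentence))
```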
+[00:33:52] um, but you know, this is a very, very old grammar, back from the 50s. there are lots of new frameworks in theoretical linguistics, such as minimalism, uh, which is in the Chomskyan tradition; there are other formalisms like HPSG, uh, cognitive linguistics approaches like construction grammars, etc., so it's a lot more wide and varied than just what you see canonically in intro linguistics lectures and NLP classes like this one. um, however, it's still very conceptually powerful and remains, um, influential.
+[00:34:23] so a very important aspect of this line of work, and a lot of subsequent and competing theories, is the idea of hierarchical structure in syntax. so, using these phrase structure rules, we can break down the sentence into a tree, where the sentence node S is the root, and the words are the terminal nodes, and their part of speech is the one right above it.
+[00:34:45] we can also have syntactic trees that reflect ambiguity: so here we have two trees for the same surface-form sentence, um, I saw a girl with a telescope, um, but they mean slightly different things depending on how you interpret, uh, with a telescope. um, are you seeing the girl with it, like, do you have a telescope in your hand and are you seeing the girl with it, or are you seeing a girl who has a telescope? um, you can represent these two interpretations differently in syntax.
+[00:35:16] um, so what I just described earlier were, like, uh, phrase structure grammars, um, that are based on constituency relations, um, but there are other types of ways we can parse sentences, uh, such as dependency parsers. uh, dependency trees are based on dependency relations, sometimes referred to as grammatical relations, and these are binary, asymmetrical relations that connect words and phrases. um, so in the abstract, a relation: A goes to B, A is the head and B is the dependent of A. um, you can define a relation in many different ways, syntactic, semantic, morphological, prosodic, um, but most frameworks will focus on syntactic relations, with the main verb serving as the root of the tree. so we can have clausal relations, like whether, um, the dependent is a nominal subject, direct object, indirect object; we can also have modification relations, like, is something modifying a noun, or, like, is a noun modifying another thing, is an adjective modifying another thing, etc.
+[00:36:19] so here is an example of a dependency parse from, uh, Universal Dependencies: um, here, like, in the top example in English, chased is the head of the tree, and you can, like, follow the arrows to see, like, what are its dependents.
+[00:36:35] so part-of-speech tagging and syntactic parsing used to be a big deal in NLP, um, but there's a reason why you don't have a lecture on that anymore, um, especially because when we're dealing with high-resource languages like English, it's not a super big deal. um, but it's still a very valuable resource for people studying lower-resource languages, um, and having lots of linguistically annotated corpora over a wide variety of languages can also enable us to do, like, broad-range linguistic studies. um, so here are just some examples of corpora in English: like, some major ones are the Brown Corpus and COCA for part-of-speech tagging; uh, for constituency parses there's the Penn Treebank; uh, for dependency parses we also have Google Syntactic N-grams and then Universal Dependencies. um, this one is actually quite interesting: there's over 140 languages with a bunch of dependency parses, um, and it's still a continual effort to develop more and more descriptive annotations.
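+for reference, a short sketch of getting a dependency parse from an off-the-shelf parser (my own example; assumes spaCy and its small English model are installed via pip install spacy and python -m spacy download en_core_web_sm):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The dog chased the cat")

for tok in doc:
    # token, part of speech, relation label, and the head it depends on
    print(f"{tok.text:8} {tok.pos_:6} {tok.dep_:10} head={tok.head.text}")
# spaCy marks the main verb "chased" as its own head, with dep_ == "ROOT".
```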
+[00:37:33] so, very recently, this paper came out to, like, have a layer of constructions on top of existing Universal Dependencies parses. okay, so we've gone... yes? [a question, roughly: can we assume the concepts in Universal Dependencies are universal across languages?] um, this is a question that, like, typologists are, like, very concerned with, um, I think it's at the forefront for construction grammars and people who deal with, um, like, how certain semantic concepts emerge in the structure of a language. I think there's some assumption to be made on, like, very basic forms of meaning that all humans try to communicate, and then, from there, um, how that's actually expressed in the grammar might differ, but because there are those base meanings, there are going to be comparisons that can be made across languages, um, but that is, like, a very hot topic in linguistics, yeah.
+[00:38:33] yeah, I think there's another comment: there are universal part-of-speech tags, uh, kind of like the ones that Mia showed in Universal Dependencies; these are, like, aggressively simplified to only have the things that occur in most languages, and so then there's other languages that have other things, basically, that go beyond this, but by aggressively simplifying you can at least, like, make it very easy to do comparative studies. yeah, a lot of times they'll have, like, what Graham said, these very coarse-grained tags, and then, as another column, um, in the annotation, have language-specific tags, uh, to be a bit more descriptive and, like, um, show when things might not necessarily align with the broader label.
+[00:39:15] okay, um, I think now we're going to enter things that are a bit more computationally, like, NLP-relevant, um, with meaning and intent. um, so semantics is the study of linguistic meaning, and we can study this at various levels: um, as we saw, we could see what a morpheme means, we can ask what a word means, what a sentence means, um, and this often interacts with morphology and syntax, as we saw, like, appending certain morphemes causes your word to mean something else now. um, a really active area in linguistics as well is the syntax-semantics interface: like, what is the relationship between syntactic form and meaning, how do different meanings of words, uh, change how they act syntactically in a larger structure?
+[00:40:01] now, semantics is obviously a very, very broad field, um, and we can very easily veer into philosophy of language, semiotics, uh, what is meaning anyways; um, we're going to stick to computationally relevant topics here, but even then, like, I still don't have time to cover certain topics like propositional and first-order logic, so, um, if these things sound interesting to you, I encourage you to, like, look them up, um, they're quite fun.
+[00:40:29] um, so let's start with lexical semantics, um, which I think is a very intuitive notion for people. um, a sense of a word is a distinct meaning of a word, um, and as we all know, words can have multiple semantically related senses, and we refer to this as word polysemy. so I can say, like, they run experiments, they run races, candidates run for office, can I run this idea by you: um, we're all using run, we kind of have an intuition of how they're similar but also a bit different, um, etc., etc.
+[00:41:06] um, a related concept, but not exactly the same, is homonyms, um, and this actually is kind of a blurry distinction: so, uh, a canonical homonym in English is something like bank, like river bank versus I went to the bank and got some money. um, but if we actually look at the historical, uh, traces of, like, the meaning of bank in these two settings, we'd actually see that they were actually polysemous, um, but over time, as our uses of these words have changed in context, we've seen these meanings drift further and further apart, so now they're homonyms. um, but in the context of NLP, whether it's a polysemous word or a homonym, um, they kind of give us the same issue, uh, which is we have two surface forms which in text appear exactly the same, um, but they have different senses.
+[00:41:57] so not only can we talk about, like, how a word has many different senses, but we can compare a word and its sense to other words and their senses, um, and we call these, like, lexical relations. so, uh, in a thesaurus you'll have lots of synonyms and antonyms, where synonyms are things that are about the same meaning, antonyms are things that are opposite, like hot, cold, very simple. there's also other relations, like super-subordinate: like, uh, if I say vehicle, um, that, uh, encompasses all cars, uh, and all SUVs are encompassed by cars, and you can go all the way down into, like, a very, very specific car, like my, uh, dad's old Honda Odyssey, it's, like, a very specific instance that is covered by the large umbrella of vehicles. um, we can also talk about part-whole relations, like a toe is a part of a foot, and a foot is a part of a leg, and a leg is a part of a body, etc., etc.
+[00:42:55] so one, uh, really large project, back in 1985, was to take a bunch of English words and kind of categorize them based on their relations to other words: uh, this is WordNet, which is a very large database of English words, where they basically took all the content words, like nouns, verbs, adjectives, and adverbs, and they grouped them into sets of synonyms, which they call synsets.
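+a quick sketch of querying those synsets and relations with NLTK (my own example; it requires downloading the WordNet data once with nltk.download("wordnet")):

```python
from nltk.corpus import wordnet as wn

for syn in wn.synsets("bank")[:4]:       # a few senses of "bank"
    print(syn.name(), "-", syn.definition())

car = wn.synsets("car")[0]               # car.n.01
print(car.hypernyms())                   # superordinate: motor vehicle
print(wn.synsets("vehicle")[0].hyponyms()[:3])   # a few subordinates
```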
+[00:43:23] um, and then for each grouping, they would link, uh, one grouping to another through these conceptual-semantic and lexical relations. so a very common, uh, relation that would link one group of synonyms to another group of synonyms is, like, the super-subordinate relations, like I talked about: so all the things that are grouped together with vehicle are, uh, above things that are grouped together with car, and so on and so forth. um, they also distinguish, uh, between types, which are common nouns, like car, uh, or president, uh, and instances, which are proper nouns: so, uh, president is a type; Obama, Trump, Biden are instances of that type, um, and instances will always occur as the terminal node in these WordNet hierarchies. um, since then they've created lots of WordNets in other languages, um, and an interesting, like, consequence of WordNet was ImageNet, which based its hierarchy of images according to the groupings of nouns in WordNet. um, the one caveat I have for this is, because WordNet was constructed with, um, English, and then with very specific annotators, uh, that spoke a certain type of English, your hierarchies are not going to map very well conceptually across cultures and languages, so that's one consideration to be made about, like, grouping words in this way.
+[00:44:46] uh, one really important, uh, theory that comes from semantics is the distributional hypothesis, by, um, Zellig Harris, who, fun fact, was actually Chomsky's PhD adviser, uh, but they birthed two completely different, like, uh, schools of thought in linguistics that are often at odds. um, and this hypothesis was that, um, linguistic items that have similar distributions in use, uh, will end up having similar meanings. uh, a very, uh, famous quotation, uh, that kind of rephrases this is: you shall know a word by the company it keeps. um, and this idea is the, uh, foundation for lots of statistical approaches to semantics, both lexical and otherwise.
+[00:45:32] so I think you guys have already talked about this, but we can, uh, given a large corpus, form vector representations of words based on their relationships and where they appear, uh, in context, to other words. uh, with these vector representations we can show sense relations, um, with cosine similarity: so if they're synonyms, they'll probably end up very close in space, so if we take their, uh, cosine similarity, it will be very high; and then, if we do vector arithmetic operations, we can see, like, analogical relationships: so one example was, uh, king to queen, uh, is, uh, like man to woman, where the resulting differences are, uh, very similar. uh, we have, in modern NLP, I guess, two main types of word embeddings: dense static embeddings, like word2vec and GloVe, and then contextual embeddings, like ELMo and BERT. um, the drawback from using things like static embeddings is that they won't capture polysemy, um, and homonyms, so it would just be one representation for all instances of that, uh, written word, whereas with contextual embeddings we can make that distinction.
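+a tiny sketch of that distributional idea in code (toy corpus of my own): build co-occurrence count vectors and compare them with cosine similarity, the same measure used for the sense relations and analogies just described.

```python
import numpy as np
from collections import defaultdict

corpus = ("the cat drinks milk . the dog drinks water . "
          "the cat chases the dog").split()
vocab = {w: i for i, w in enumerate(sorted(set(corpus)))}
vectors = defaultdict(lambda: np.zeros(len(vocab)))

for i, w in enumerate(corpus):           # window of one word on each side
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            vectors[w][vocab[corpus[j]]] += 1

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["cat"], vectors["dog"]))   # higher: shared contexts
print(cosine(vectors["cat"], vectors["milk"]))  # lower
```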
+[00:46:47] uh, another very, very important concept in semantics, and I guess in linguistics in general, is this idea of compositionality. um, so from all the other slides that I've shown, especially with morphology and syntax, it seems that a lot of natural language is basically built by, like, taking smaller units, putting them together, and then we can kind of see how, as a whole, we have, um, a certain structure that conveys a certain meaning that is derived from the meaning of individual parts. um, so this was very clear in things like morphology, and a bit less clear, but still there, in syntax. um, so in sentences, uh, for example, we can combine the meaning of individual lexical items and phrases, and then maybe even other constructions. um, a very important thing as well is that we can create novel sentences and structures systematically through compositionality, um, and, similarly, we can determine the meaning of novel sentences and structures. um, and this is still an open question, as to, like, whether modern models can do this, and to what extent, if they do. um, there are a couple of compositionality benchmarks: one of them is COGS, uh, which uses, like, semantic representations; um, there's also another paper, like, kind of a survey paper, on, like, compositionality in NLP, and, like, how people have evaluated these types of things, and, uh, I guess they pressed for more evaluations that occur on natural data, as opposed to, like, artificial data.
+[00:48:17] um, but there are also exceptions to compositionality, uh, such as idioms and figurative language. so this is a really funny viral tweet on, um, the Chinese McDonald's menu translations: so, like, this one says unsuspecting tyrant double-decker beef fort. um, so, like, if you've ever looked at a Chinese menu and been, like, wow, that is a really strange dish name, it's because, like, they're very figurative dish names, and they don't translate very well when you do, like, a word-by-word translation. so obviously this is a challenge for applications like machine translation, but also, like, um, more culturally sensitive language technologies as well.
+[00:49:00] so, um, kind of switching gears, uh, one aspect, uh, of an expression's meaning, although it doesn't encompass everything, um, is the truth conditions, or the conditions under which the expression would be considered to be true. so, for example, if I say it rained in Pittsburgh yesterday, this would be true only if it actually rained here yesterday, and because it did, this sentence is true: pretty, pretty straightforward.
+[00:49:28] um, but we can also have, uh, relationships between expressions that, uh, consider truth conditions. so, uh, the definition of an entailment is that if A entails B, then B must be true if A is true; um, in other words, we can say that B is a truth condition of A. um, so if I say something like Emmy is my adorable little orange cat, which is 100% true, um, this entails that Emmy is a cat, and she is indeed a cat.
+[00:50:00] um, so entailment is, uh, something that's, uh, an object of study in a task called natural language inference. um, this is an NLP task where, given some premise sentence or text, determine if a hypothesis is entailed, or contradicted, or neutral, um, given that premise. um, so there are lots of datasets that deal with this, like SNLI, MultiNLI, SciTail, etc. um, here are just some examples, I don't know if you guys can read, but it's, like, one of the text premises is a man inspects the uniform of a figure in some East Asian country (um, these are all annotations of images, by the way, if they're a bit strange), um, and then the hypothesis given that text is the man is sleeping: well, since he cannot be sleeping if, uh, he's inspecting uniforms, um, this is a contradiction given the premise. so, um, NLI isn't really a super popular task these days, um, but it's still pretty useful.
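+a hedged sketch of running an off-the-shelf NLI model on that premise-hypothesis pair with Hugging Face transformers (the model choice is illustrative; any MNLI-trained classifier works similarly):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "A man inspects the uniform of a figure in some East Asian country."
hypothesis = "The man is sleeping."

inputs = tok(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

for i, p in enumerate(probs):
    print(model.config.id2label[i], f"{float(p):.3f}")
# CONTRADICTION should get most of the probability mass here.
```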
+00:51:03,760 --> 00:51:08,280 +text or uh seeing if two sources agree + +1308 +00:51:06,319 --> 00:51:10,960 +in like something like fake news + +1309 +00:51:08,280 --> 00:51:12,799 +detection um so especially for like + +1310 +00:51:10,960 --> 00:51:14,440 +maybe retrieval augmented systems we can + +1311 +00:51:12,799 --> 00:51:17,040 +check if the generated answer is + +1312 +00:51:14,440 --> 00:51:19,400 +entailed by some rece like retrieved + +1313 +00:51:17,040 --> 00:51:19,400 +Source + +1314 +00:51:20,040 --> 00:51:25,599 +text okay so I think I'm actually going + +1315 +00:51:23,280 --> 00:51:27,760 +a lot faster than I thought I would but + +1316 +00:51:25,599 --> 00:51:30,079 +um pragmatic is an area where you can + +1317 +00:51:27,760 --> 00:51:31,799 +ask a lot of questions so and there's + +1318 +00:51:30,079 --> 00:51:35,799 +also a lot of ripe things that we can + +1319 +00:51:31,799 --> 00:51:38,280 +look at in NLP uh so this as opposed to + +1320 +00:51:35,799 --> 00:51:41,040 +semantics um pragmatics deals more with + +1321 +00:51:38,280 --> 00:51:43,520 +language use in context um so how is + +1322 +00:51:41,040 --> 00:51:45,880 +language used in social interactions How + +1323 +00:51:43,520 --> 00:51:48,680 +does context linguistic or otherwise + +1324 +00:51:45,880 --> 00:51:50,319 +actually influence how we say things um + +1325 +00:51:48,680 --> 00:51:52,440 +what do we actually intend to mean when + +1326 +00:51:50,319 --> 00:51:54,559 +we say something and how does this in + +1327 +00:51:52,440 --> 00:51:57,960 +influence the interpretation by The + +1328 +00:51:54,559 --> 00:51:59,400 +Listener um a very uh prominent theory + +1329 +00:51:57,960 --> 00:52:02,000 +in pragmatics is something called the + +1330 +00:51:59,400 --> 00:52:04,480 +speech act Theory um which says that the + +1331 +00:52:02,000 --> 00:52:06,559 +meaning of like something that you say + +1332 +00:52:04,480 --> 00:52:09,359 +is not just comprised of the statement + +1333 +00:52:06,559 --> 00:52:12,280 +itself but also of the intended effect + +1334 +00:52:09,359 --> 00:52:15,040 +that you meant to have on the listener + +1335 +00:52:12,280 --> 00:52:18,599 +um so a very simple example of this is + +1336 +00:52:15,040 --> 00:52:21,280 +asking like can you pass me the salt um + +1337 +00:52:18,599 --> 00:52:23,559 +like I'm not asking if you can literally + +1338 +00:52:21,280 --> 00:52:26,559 +physically pass me the salt I'm + +1339 +00:52:23,559 --> 00:52:28,880 +requesting you to pass me the salt um + +1340 +00:52:26,559 --> 00:52:32,280 +another funny one in English is like the + +1341 +00:52:28,880 --> 00:52:34,240 +do you mind type of construction um + +1342 +00:52:32,280 --> 00:52:37,640 +where like if I say do you mind if I sit + +1343 +00:52:34,240 --> 00:52:39,280 +next to you and you say yes like if + +1344 +00:52:37,640 --> 00:52:40,960 +you're answering literally you're saying + +1345 +00:52:39,280 --> 00:52:42,640 +that you mind so you would rather me not + +1346 +00:52:40,960 --> 00:52:44,480 +sit there but most people when they just + +1347 +00:52:42,640 --> 00:52:47,240 +say yes in isolation they actually mean + +1348 +00:52:44,480 --> 00:52:48,520 +go ahead um but if they say no it's a + +1349 +00:52:47,240 --> 00:52:51,280 +bit weird and they have to follow up + +1350 +00:52:48,520 --> 00:52:52,599 +with I don't mind um so this is a + +1351 +00:52:51,280 --> 00:52:54,760 +difference between like what you + +1352 +00:52:52,599 --> 00:52:57,400 +literally say versus what you 
actually
+
+1353
+00:52:54,760 --> 00:52:59,200
+mean
+
+1354
+00:52:57,400 --> 00:53:00,799
+and of course like given this like
+
+1355
+00:52:59,200 --> 00:53:02,319
+difference between what's literally
+
+1356
+00:53:00,799 --> 00:53:04,280
+written and what's actually meant to be
+
+1357
+00:53:02,319 --> 00:53:07,040
+said you can see where a lot of uh
+
+1358
+00:53:04,280 --> 00:53:10,240
+problems and like ripe research areas
+
+1359
+00:53:07,040 --> 00:53:14,000
+might uh be in
+
+1360
+00:53:10,240 --> 00:53:16,440
+NLP um so one uh important thing to
+
+1361
+00:53:14,000 --> 00:53:18,720
+remember in pragmatics is the idea of
+
+1362
+00:53:16,440 --> 00:53:21,400
+presuppositions so in discourse we kind
+
+1363
+00:53:18,720 --> 00:53:23,799
+of have like an agreement on what we
+
+1364
+00:53:21,400 --> 00:53:25,760
+know and like what we're talking about
+
+1365
+00:53:23,799 --> 00:53:28,160
+at the moment and we have implicit
+
+1366
+00:53:25,760 --> 00:53:31,839
+assumptions about the world uh that we
+
+1367
+00:53:28,160 --> 00:53:34,480
+act on so I already told you guys and
+
+1368
+00:53:31,839 --> 00:53:36,079
+showed you a photo of my adorable cat so
+
+1369
+00:53:34,480 --> 00:53:38,799
+if I say something like everyone thinks
+
+1370
+00:53:36,079 --> 00:53:40,839
+my cat is cute which is true this
+
+1371
+00:53:38,799 --> 00:53:42,480
+presupposes that I have a cat um it
+
+1372
+00:53:40,839 --> 00:53:44,839
+would be super strange for me to say
+
+1373
+00:53:42,480 --> 00:53:49,720
+something like this if I didn't have a
+
+1374
+00:53:44,839 --> 00:53:52,319
+cat and also if no one knew about my
+
+1375
+00:53:49,720 --> 00:53:54,359
+cat um so presuppositions can be
+
+1376
+00:53:52,319 --> 00:53:56,559
+triggered by certain lexical items or
+
+1377
+00:53:54,359 --> 00:53:59,079
+constructions so some examples of this
+
+1378
+00:53:56,559 --> 00:54:01,160
+are definite descriptions like the
+
+1379
+00:53:59,079 --> 00:54:02,720
+current King of France current is
+
+1380
+00:54:01,160 --> 00:54:06,319
+missing here but if I said the current
+
+1381
+00:54:02,720 --> 00:54:08,280
+King of France um the word the kind of
+
+1382
+00:54:06,319 --> 00:54:10,359
+presupposes that I'm referring to one
+
+1383
+00:54:08,280 --> 00:54:12,799
+thing and that that one thing actually
+
+1384
+00:54:10,359 --> 00:54:15,240
+exists but France does not have a king
+
+1385
+00:54:12,799 --> 00:54:18,079
+at the moment so saying the current King
+
+1386
+00:54:15,240 --> 00:54:19,799
+of France kind of has a false
+
+1387
+00:54:18,079 --> 00:54:21,920
+presupposition uh another thing is
+
+1388
+00:54:19,799 --> 00:54:23,520
+factives which kind of when you use
+
+1389
+00:54:21,920 --> 00:54:26,040
+certain verbs you're kind of relaying
+
+1390
+00:54:23,520 --> 00:54:29,559
+the subsequent information as things
+
+1391
+00:54:26,040 --> 00:54:31,920
+that are necessarily facts so if I say
+
+1392
+00:54:29,559 --> 00:54:34,000
+something like which is also 100%
+
+1393
+00:54:31,920 --> 00:54:36,280
+true I regret drinking the Vietnamese
+
+1394
+00:54:34,000 --> 00:54:38,440
+cold brew from Red Hawk I couldn't sleep
+
+1395
+00:54:36,280 --> 00:54:40,280
+um this presupposes that I did in fact
+
+1396
+00:54:38,440 --> 00:54:43,160
+drink cold brew from Red Hawk and that
+
+1397
+00:54:40,280 --> 00:54:45,599
+did happen um but if I didn't drink cold
+
+1398
+00:54:43,160 --> 00:54:46,839
+brew from Red Hawk then this would also
+
+1399
+00:54:45,599 --> 
00:54:48,920
+have a false
+
+1400
+00:54:46,839 --> 00:54:51,200
+presupposition um this also happens in
+
+1401
+00:54:48,920 --> 00:54:53,280
+questions like which linguist invented
+
+1402
+00:54:51,200 --> 00:54:56,440
+the light bulb presupposes that some
+
+1403
+00:54:53,280 --> 00:54:57,799
+linguist invented the light bulb um but
+
+1404
+00:54:56,440 --> 00:54:58,960
+there is no linguist that invented the
+
+1405
+00:54:57,799 --> 00:55:02,440
+light bulb and this is actually the
+
+1406
+00:54:58,960 --> 00:55:03,640
+title of a paper uh by Najoung Kim um
+
+1407
+00:55:02,440 --> 00:55:05,240
+which looked at a bunch of question
+
+1408
+00:55:03,640 --> 00:55:08,079
+answering data sets and they found that
+
+1409
+00:55:05,240 --> 00:55:10,520
+certain questions are just unanswerable
+
+1410
+00:55:08,079 --> 00:55:12,280
+uh so it makes no sense to evaluate um
+
+1411
+00:55:10,520 --> 00:55:14,040
+NLP systems on these questions because
+
+1412
+00:55:12,280 --> 00:55:15,200
+they have false presuppositions there
+
+1413
+00:55:14,040 --> 00:55:17,280
+there's no way you can answer it
+
+1414
+00:55:15,200 --> 00:55:18,319
+factually um and there's a lot of other
+
+1415
+00:55:17,280 --> 00:55:22,160
+different types of
+
+1416
+00:55:18,319 --> 00:55:22,160
+triggers um but these are just a
+
+1417
+00:55:22,480 --> 00:55:28,359
+few um another uh concept is implicature
+
+1418
+00:55:26,920 --> 00:55:29,920
+so in semantics we talked about
+
+1419
+00:55:28,359 --> 00:55:32,160
+entailment like something that must
+
+1420
+00:55:29,920 --> 00:55:34,680
+necessarily be true if I had said something
+
+1421
+00:55:32,160 --> 00:55:36,319
+before that um but implicatures are slightly
+
+1422
+00:55:34,680 --> 00:55:38,480
+different these are things that are
+
+1423
+00:55:36,319 --> 00:55:40,920
+suggested but they're not
+
+1424
+00:55:38,480 --> 00:55:42,640
+necessarily literally expressed so I can
+
+1425
+00:55:40,920 --> 00:55:45,119
+say something like if it's lightly
+
+1426
+00:55:42,640 --> 00:55:48,200
+raining outside cloudy and kind of gross
+
+1427
+00:55:45,119 --> 00:55:50,039
+um like oh today's weather is the worst
+
+1428
+00:55:48,200 --> 00:55:52,520
+um I don't actually mean that it's
+
+1429
+00:55:50,039 --> 00:55:56,359
+literally the worst um like it could be
+
+1430
+00:55:52,520 --> 00:55:59,000
+a lot worse um but I'm basically
+
+1431
+00:55:56,359 --> 00:56:01,440
+implying that it's bad and I don't like
+
+1432
+00:55:59,000 --> 00:56:03,680
+it I have a distaste for it um but
+
+1433
+00:56:01,440 --> 00:56:06,720
+that's not actually what's present in
+
+1434
+00:56:03,680 --> 00:56:06,720
+the thing I said
+
+1435
+00:56:11,359 --> 00:56:15,440
+yes is this like bad to have in an
+
+1436
+00:56:14,039 --> 00:56:17,000
+evaluation data set because wouldn't
+
+1437
+00:56:15,440 --> 00:56:19,359
+this be a good indication of whether
+
+1438
+00:56:17,000 --> 00:56:21,280
+your model can identify the presuppositions
+
+1439
+00:56:19,359 --> 00:56:23,440
+and then identify whether that presupposition is
+
+1440
+00:56:21,280 --> 00:56:26,400
+true yeah I think the issue was that
+
+1441
+00:56:23,440 --> 00:56:28,280
+that was not they were um trying to like
+
+1442
+00:56:26,400 --> 00:56:30,520
+with question answer evaluation data
+
+1443
+00:56:28,280 --> 00:56:32,280
+sets that wasn't the thing that they
+
+1444
+00:56:30,520 --> 00:56:37,119
+were looking for they were looking for
+
+1445
+00:56:32,280 --> 00:56:38,680
+like some string um so I
think
+
+1446
+00:56:37,119 --> 00:56:40,559
+investigating whether question answering
+
+1447
+00:56:38,680 --> 00:56:43,200
+models can detect such false
+
+1448
+00:56:40,559 --> 00:56:44,920
+presuppositions is like a good thing um
+
+1449
+00:56:43,200 --> 00:56:47,119
+but if you don't actually identify that
+
+1450
+00:56:44,920 --> 00:56:50,079
+as a problem in your data to begin with
+
+1451
+00:56:47,119 --> 00:56:51,599
+um then it kind of messes with what like
+
+1452
+00:56:50,079 --> 00:56:53,799
+if you're just doing like a raw question
+
+1453
+00:56:51,599 --> 00:56:56,440
+answering evaluation it will mess with
+
+1454
+00:56:53,799 --> 00:56:58,440
+your end results so it was more of a
+
+1455
+00:56:56,440 --> 00:57:02,359
+mismatch between issues in the data and
+
+1456
+00:56:58,440 --> 00:57:04,760
+what people were looking for
+
+1457
+00:57:02,359 --> 00:57:09,680
+yeah but that's a good
+
+1458
+00:57:04,760 --> 00:57:12,400
+question um right so another example um
+
+1459
+00:57:09,680 --> 00:57:13,960
+which is a bit more salient um like
+
+1460
+00:57:12,400 --> 00:57:15,799
+let's say I ask you in conversation are
+
+1461
+00:57:13,960 --> 00:57:19,559
+you going to see the eclipse on Monday
+
+1462
+00:57:15,799 --> 00:57:23,760
+and you respond I have work um what what
+
+1463
+00:57:19,559 --> 00:57:26,720
+is being implied by I have work like
+
+1464
+00:57:23,760 --> 00:57:28,760
+can't go no right like you know I have to go
+
+1465
+00:57:26,720 --> 00:57:31,039
+to work therefore like you know I can't
+
+1466
+00:57:28,760 --> 00:57:35,480
+go out and skip whatever to see the
+
+1467
+00:57:31,039 --> 00:57:38,640
+eclipse um but you could also say I have
+
+1468
+00:57:35,480 --> 00:57:41,440
+work but I'm going to go anyways um so
+
+1469
+00:57:38,640 --> 00:57:45,559
+unlike entailments which are necessarily
+
+1470
+00:57:41,440 --> 00:57:47,559
+true if the premise is true implicatures
+
+1471
+00:57:45,559 --> 00:57:50,559
+are defeasible if we add additional
+
+1472
+00:57:47,559 --> 00:57:54,160
+context be it linguistic like in more
+
+1473
+00:57:50,559 --> 00:57:56,880
+words or like in just a social context
+
+1474
+00:57:54,160 --> 00:57:58,839
+we can uh change
+
+1475
+00:57:56,880 --> 00:58:00,760
+uh the implied the thing that you're
+
+1476
+00:57:58,839 --> 00:58:03,760
+actually going to imply uh so one
+
+1477
+00:58:00,760 --> 00:58:05,960
+example is like let's say um you're a
+
+1478
+00:58:03,760 --> 00:58:10,520
+real estate agent and you're showing
+
+1479
+00:58:05,960 --> 00:58:12,960
+people houses uh you can say um oh this
+
+1480
+00:58:10,520 --> 00:58:14,599
+house has two bedrooms and what you're
+
+1481
+00:58:12,960 --> 00:58:17,319
+actually saying is like this house has
+
+1482
+00:58:14,599 --> 00:58:20,160
+exactly two bedrooms um but let's say
+
+1483
+00:58:17,319 --> 00:58:23,559
+you're hosting uh some guests at your
+
+1484
+00:58:20,160 --> 00:58:25,359
+house and you have exactly five bedrooms
+
+1485
+00:58:23,559 --> 00:58:26,720
+um but they only need two bedrooms to
+
+1486
+00:58:25,359 --> 00:58:28,359
+stay at your house
+
+1487
+00:58:26,720 --> 00:58:29,680
+um they ask you like oh do you have
+
+1488
+00:58:28,359 --> 00:58:31,240
+space for us how many bedrooms do you
+
+1489
+00:58:29,680 --> 00:58:33,559
+have and you could be like oh well I
+
+1490
+00:58:31,240 --> 00:58:35,480
+have two bedrooms like it doesn't have
+
+1491
+00:58:33,559 --> 00:58:37,520
+to be true that you have exactly two but
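A toy sketch of how the presupposition triggers discussed earlier (definite descriptions, factives, wh-questions) might be flagged when auditing a question-answering dataset. The regex inventory is an illustrative assumption; a real verifier would need parsing plus an entailment check of the presupposed content.

```python
import re

# Illustrative trigger patterns only; real trigger inventories are much larger.
TRIGGERS = {
    "definite description": re.compile(r"\bthe (current|present|reigning) \w+"),
    "factive verb": re.compile(r"\b(regret|realize|know|discover)(s|ed)?\b"),
    "wh-question": re.compile(r"^(which|who|what|when|where)\b"),
}

def flag_triggers(question: str) -> list[str]:
    q = question.lower()
    return [name for name, pattern in TRIGGERS.items() if pattern.search(q)]

print(flag_triggers("Which linguist invented the light bulb?"))
# -> ['wh-question']; the embedded presupposition ("some linguist invented
#    the light bulb") would still need to be verified against world knowledge
```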
+
+1492
+00:58:35,480 --> 00:58:39,400
+you have two available bedrooms for them
+
+1493
+00:58:37,520 --> 00:58:41,880
+so this is an example of where like
+
+1494
+00:58:39,400 --> 00:58:44,240
+extra linguistic contexts can change
+
+1495
+00:58:41,880 --> 00:58:47,480
+what you're implying in your statement
+
+1496
+00:58:44,240 --> 00:58:49,760
+yeah and I I also want to point out that
+
+1497
+00:58:47,480 --> 00:58:51,799
+um this is a super good example of how
+
+1498
+00:58:49,760 --> 00:58:54,200
+different varieties of linguistics
+
+1499
+00:58:51,799 --> 00:58:56,760
+interact with each other because Lindia
+
+1500
+00:58:54,200 --> 00:58:59,559
+changed the way she said I have two
+
+1501
+00:58:56,760 --> 00:59:01,200
+bedrooms in those two cases right and
+
+1502
+00:58:59,559 --> 00:59:03,839
+like the tone of your voice and and
+
+1503
+00:59:01,200 --> 00:59:06,280
+stuff like that that's called prosody and
+
+1504
+00:59:03,839 --> 00:59:09,240
+like you can change what you mean you
+
+1505
+00:59:06,280 --> 00:59:10,799
+can change uh semantics through prosody and
+
+1506
+00:59:09,240 --> 00:59:11,640
+like I just thought this was too good of
+
+1507
+00:59:10,799 --> 00:59:15,039
+an
+
+1508
+00:59:11,640 --> 00:59:17,640
+example because like you can't really
+
+1509
+00:59:15,039 --> 00:59:19,799
+say yeah you you'll change the way you
+
+1510
+00:59:17,640 --> 00:59:22,640
+say something if you want to like make
+
+1511
+00:59:19,799 --> 00:59:25,039
+it clear that the nuance is different
+
+1512
+00:59:22,640 --> 00:59:26,720
+yeah we have more examples of this later
+
+1513
+00:59:25,039 --> 00:59:28,240
+sorry no but like
+
+1514
+00:59:26,720 --> 00:59:31,000
+it's a different type of example but
+
+1515
+00:59:28,240 --> 00:59:36,039
+like it's good to have that pointed out
+
+1516
+00:59:31,000 --> 00:59:38,160
+um yeah so this kind of thing relates to
+
+1517
+00:59:36,039 --> 00:59:40,039
+uh the the overall question of how do
+
+1518
+00:59:38,160 --> 00:59:41,680
+people actually conduct conversations
+
+1519
+00:59:40,039 --> 00:59:43,200
+and how do they achieve efficient
+
+1520
+00:59:41,680 --> 00:59:45,400
+communication like it's kind of a
+
+1521
+00:59:43,200 --> 00:59:47,480
+miracle that like as people we produce
+
+1522
+00:59:45,400 --> 00:59:49,520
+like a bunch of funky sounds and we know
+
+1523
+00:59:47,480 --> 00:59:52,160
+exactly or kind of approximate what
+
+1524
+00:59:49,520 --> 00:59:53,960
+those funky sounds mean right super cool
+
+1525
+00:59:52,160 --> 00:59:57,039
+um but how does this actually
+
+1526
+00:59:53,960 --> 00:59:59,359
+work um so there's a guy named Paul
+
+1527
+00:59:57,039 --> 01:00:01,680
+Grice and he asked this question and he
+
+1528
+00:59:59,359 --> 01:00:04,359
+came up with the idea that people are
+
+1529
+01:00:01,680 --> 01:00:06,760
+generally rational speakers I would hope
+
+1530
+01:00:04,359 --> 01:00:09,000
+um and to be a rational speaker you kind
+
+1531
+01:00:06,760 --> 01:00:10,839
+of expect and follow certain uh
+
+1532
+01:00:09,000 --> 01:00:13,480
+conventions in conversations which we
+
+1533
+01:00:10,839 --> 01:00:16,240
+refer to as maxims uh so the first is
+
+1534
+01:00:13,480 --> 01:00:18,599
+quantity uh which is to not undershare or
+
+1535
+01:00:16,240 --> 01:00:20,079
+overshare like if I ask you um what did
+
+1536
+01:00:18,599 --> 01:00:22,319
+you do yesterday you're not going to
+
+1537
+01:00:20,079 --> 01:00:24,280
+relay like minute-by-minute playthroughs
+
+1538
+01:00:22,319 --> 01:00:26,079
+of what you did from when you woke up to
+
+1539
+01:00:24,280 --> 01:00:27,440
+when you went to bed right but at the
+
+1540
+01:00:26,079 --> 01:00:30,880
+same time you're not going to
+
+1541
+01:00:27,440 --> 01:00:32,559
+undershare and be like I lived like you know
+
+1542
+01:00:30,880 --> 01:00:34,720
+like we kind of we already know that you
+
+1543
+01:00:32,559 --> 01:00:36,839
+lived like tell me a bit more um the
+
+1544
+01:00:34,720 --> 01:00:38,960
+second uh very straightforward don't lie
+
+1545
+01:00:36,839 --> 01:00:41,160
+or at least like don't relay information
+
+1546
+01:00:38,960 --> 01:00:44,000
+that you know to not be factually
+
+1547
+01:00:41,160 --> 01:00:45,880
+correct um the third is relation be
+
+1548
+01:00:44,000 --> 01:00:47,160
+relevant so same question like if I
+
+1549
+01:00:45,880 --> 01:00:49,319
+asked you what you did yesterday you're
+
+1550
+01:00:47,160 --> 01:00:51,079
+not going to go back to your diary flip
+
+1551
+01:00:49,319 --> 01:00:54,319
+back one year and be like this is what I
+
+1552
+01:00:51,079 --> 01:00:56,839
+did one year ago um and then finally
+
+1553
+01:00:54,319 --> 01:00:58,799
+manner be clear which is something I'm
+
+1554
+01:00:56,839 --> 01:01:01,039
+hoping I'm doing in this lecture but you
+
+1555
+01:00:58,799 --> 01:01:03,559
+know don't say things in such a way that
+
+1556
+01:01:01,039 --> 01:01:06,680
+it's going to be uh super difficult for
+
+1557
+01:01:03,559 --> 01:01:08,480
+your listener to parse um now it would
+
+1558
+01:01:06,680 --> 01:01:10,680
+be great if people followed these
+
+1559
+01:01:08,480 --> 01:01:11,799
+conventions all the time because then we
+
+1560
+01:01:10,680 --> 01:01:14,319
+would know exactly what people are
+
+1561
+01:01:11,799 --> 01:01:17,680
+trying to convey at every given moment
+
+1562
+01:01:14,319 --> 01:01:20,680
+but this is not always the case um so
+
+1563
+01:01:17,680 --> 01:01:23,680
+people can intentionally uh violate or
+
+1564
+01:01:20,680 --> 01:01:26,280
+flout um we're doing flouting first
+
+1565
+01:01:23,680 --> 01:01:28,319
+flout one of these maxims to convey like
+
+1566
+01:01:26,280 --> 01:01:30,160
+another layer of meaning but usually
+
+1567
+01:01:28,319 --> 01:01:31,520
+with the intention that the listener
+
+1568
+01:01:30,160 --> 01:01:34,160
+will actually understand what they're
+
+1569
+01:01:31,520 --> 01:01:37,760
+trying to convey so a very good example
+
+1570
+01:01:34,160 --> 01:01:40,799
+of this is sarcasm like
+
+1571
+01:01:37,760 --> 01:01:43,640
+um I can't come up with one on the fly
+
+1572
+01:01:40,799 --> 01:01:46,319
+um I'm totally not blanking right now
+
+1573
+01:01:43,640 --> 01:01:49,280
+that's like you know flouting um uh but
+
+1574
+01:01:46,319 --> 01:01:52,520
+you can also break maxims covertly like
+
+1575
+01:01:49,280 --> 01:01:55,520
+um outright lying this is done when like
+
+1576
+01:01:52,520 --> 01:01:57,400
+you are violating one of the maxims and
+
+1577
+01:01:55,520 --> 01:01:59,599
+you do not want the listener to know
+
+1578
+01:01:57,400 --> 01:02:01,680
+that you're violating a maxim so other
+
+1579
+01:01:59,599 --> 01:02:03,880
+things like half truths which relates to
+
+1580
+01:02:01,680 --> 01:02:06,359
+quantity when you're undersharing so
+
+1581
+01:02:03,880 --> 01:02:08,359
+that people don't uh
+
+1582
+01:02:06,359 --> 01:02:10,079
+come up with a certain conclusion or
+
+1583
+01:02:08,359 --> 
01:02:12,480 +with manner over complicating making + +1584 +01:02:10,079 --> 01:02:13,960 +your syntax really hard to parse uh like + +1585 +01:02:12,480 --> 01:02:15,720 +in a court proceeding when you're trying + +1586 +01:02:13,960 --> 01:02:19,200 +to convince the judge by overloading + +1587 +01:02:15,720 --> 01:02:19,200 +them with information or something like + +1588 +01:02:19,559 --> 01:02:27,039 +that um so in relation to like the + +1589 +01:02:24,559 --> 01:02:29,400 +conventions we have in conversation + +1590 +01:02:27,039 --> 01:02:31,839 +there's often times multiple ways of + +1591 +01:02:29,400 --> 01:02:33,640 +saying the thing that we want to convey + +1592 +01:02:31,839 --> 01:02:35,480 +um but how do we actually choose which + +1593 +01:02:33,640 --> 01:02:38,279 +one of these options is the best one to + +1594 +01:02:35,480 --> 01:02:40,520 +pick um so for example we can choose + +1595 +01:02:38,279 --> 01:02:42,279 +between different grammatical structures + +1596 +01:02:40,520 --> 01:02:43,720 +uh like in writing certain Fields may + +1597 +01:02:42,279 --> 01:02:46,079 +want you to construct everything in the + +1598 +01:02:43,720 --> 01:02:49,960 +passive voice like this experiment was + +1599 +01:02:46,079 --> 01:02:52,680 +conducted or uh the bacteria were uh + +1600 +01:02:49,960 --> 01:02:54,559 +incubated for x amount of time versus + +1601 +01:02:52,680 --> 01:02:57,839 +like in CS we use active voice + +1602 +01:02:54,559 --> 01:03:00,520 +constructions like um we uh modeled or + +1603 +01:02:57,839 --> 01:03:02,279 +we trained Etc uh we can also vary + +1604 +01:03:00,520 --> 01:03:03,640 +intonation and stress patterns like what + +1605 +01:03:02,279 --> 01:03:06,119 +Graham brought up with the like two + +1606 +01:03:03,640 --> 01:03:07,559 +bedrooms example um and then we can also + +1607 +01:03:06,119 --> 01:03:09,359 +choose different vocabulary and + +1608 +01:03:07,559 --> 01:03:11,079 +constructions like I can say certain + +1609 +01:03:09,359 --> 01:03:13,359 +words that I think are more simple and + +1610 +01:03:11,079 --> 01:03:16,400 +easy to digest if someone is unfamiliar + +1611 +01:03:13,359 --> 01:03:18,240 +with a topic um so we have all of these + +1612 +01:03:16,400 --> 01:03:20,359 +different things to pick from how do we + +1613 +01:03:18,240 --> 01:03:23,319 +actually choose + +1614 +01:03:20,359 --> 01:03:25,440 +them um so this in large part it comes + +1615 +01:03:23,319 --> 01:03:27,720 +from three different things first a + +1616 +01:03:25,440 --> 01:03:29,839 +speaker knowledge of Common Ground uh + +1617 +01:03:27,720 --> 01:03:32,599 +like what are the implicit assumptions + +1618 +01:03:29,839 --> 01:03:35,119 +they are making about like what they and + +1619 +01:03:32,599 --> 01:03:36,520 +the speaker like interacts with or they + +1620 +01:03:35,119 --> 01:03:39,359 +and The Listener like interact with what + +1621 +01:03:36,520 --> 01:03:41,200 +they have in their environment uh second + +1622 +01:03:39,359 --> 01:03:44,279 +what is their communicative goal like is + +1623 +01:03:41,200 --> 01:03:46,559 +your goal to acquire something is it to + +1624 +01:03:44,279 --> 01:03:48,520 +answer a question and then finally + +1625 +01:03:46,559 --> 01:03:50,160 +related to like the question what is + +1626 +01:03:48,520 --> 01:03:54,079 +actually desired by The + +1627 +01:03:50,160 --> 01:03:55,880 +Listener um so for common ground uh as + +1628 +01:03:54,079 --> 01:03:57,680 +an example I heard this in a meeting the + +1629 
+01:03:55,880 --> 01:04:00,920
+other day we can launch a bunch of small
+
+1630
+01:03:57,680 --> 01:04:03,839
+llamas like if I said this in a random
+
+1631
+01:04:00,920 --> 01:04:05,400
+Pittsburgh restaurant and like
+
+1632
+01:04:03,839 --> 01:04:07,079
+someone was eavesdropping and they have
+
+1633
+01:04:05,400 --> 01:04:09,599
+no idea about the current state of NLP
+
+1634
+01:04:07,079 --> 01:04:12,319
+they'll be like what are you doing why
+
+1635
+01:04:09,599 --> 01:04:14,839
+are you doing that um so this is a very
+
+1636
+01:04:12,319 --> 01:04:17,880
+illustrative example of like them not
+
+1637
+01:04:14,839 --> 01:04:20,240
+having like a common uh knowledge about
+
+1638
+01:04:17,880 --> 01:04:22,079
+what you're talking about um another
+
+1639
+01:04:20,240 --> 01:04:23,720
+example is like I've recently been
+
+1640
+01:04:22,079 --> 01:04:24,960
+watching The Bear and if you guys are
+
+1641
+01:04:23,720 --> 01:04:27,319
+familiar with that show it's like about
+
+1642
+01:04:24,960 --> 01:04:29,480
+this chef and he's like in really high
+
+1643
+01:04:27,319 --> 01:04:32,000
+stakes high stress situations so he'll
+
+1644
+01:04:29,480 --> 01:04:34,240
+just like yell and if he really wants
+
+1645
+01:04:32,000 --> 01:04:36,359
+salt he'll be like salt versus like if
+
+1646
+01:04:34,240 --> 01:04:37,920
+I'm at dinner and I want someone to pass
+
+1647
+01:04:36,359 --> 01:04:39,119
+me salt I'll be really polite and be
+
+1648
+01:04:37,920 --> 01:04:41,520
+like hey could you please pass me the
+
+1649
+01:04:39,119 --> 01:04:44,359
+salt so the first one is a very urgent
+
+1650
+01:04:41,520 --> 01:04:46,839
+command um like their goal is to get
+
+1651
+01:04:44,359 --> 01:04:48,720
+salt as quickly as possible versus mine
+
+1652
+01:04:46,839 --> 01:04:51,000
+is like well I'm kind of chilling with
+
+1653
+01:04:48,720 --> 01:04:53,440
+my dinner I don't need salt immediately
+
+1654
+01:04:51,000 --> 01:04:57,400
+so I can make this very polite
+
+1655
+01:04:53,440 --> 01:04:59,920
+request um and then finally when
+
+1656
+01:04:57,400 --> 01:05:01,640
+a listener desires certain information
+
+1657
+01:04:59,920 --> 01:05:05,279
+we can change what we focus on in our
+
+1658
+01:05:01,640 --> 01:05:07,520
+answer um so if someone asks like who
+
+1659
+01:05:05,279 --> 01:05:09,760
+trains llamas I can be like I train
+
+1660
+01:05:07,520 --> 01:05:11,559
+llamas but if someone asks you like what
+
+1661
+01:05:09,760 --> 01:05:13,720
+do you do with llamas I can say I train
+
+1662
+01:05:11,559 --> 01:05:17,119
+llamas and if someone asks what do I
+
+1663
+01:05:13,720 --> 01:05:19,160
+train I can say I train llamas so like
+
+1664
+01:05:17,119 --> 01:05:22,319
+surface form they're all the same but
+
+1665
+01:05:19,160 --> 01:05:24,039
+how I say it changes to um
+
+1666
+01:05:22,319 --> 01:05:27,640
+like put focus on different parts of my
+
+1667
+01:05:24,039 --> 01:05:27,640
+answer depending on what the listener
+
+1668
+01:05:28,119 --> 01:05:33,079
+wants okay um we're going to go to
+
+1669
+01:05:30,599 --> 01:05:36,559
+something super computational um and
+
+1670
+01:05:33,079 --> 01:05:38,720
+this is rational speech acts um so Grice
+
+1671
+01:05:36,559 --> 01:05:41,240
+in his maxims says that like okay we
+
+1672
+01:05:38,720 --> 01:05:43,680
+kind of follow these conventions um
+
+1673
+01:05:41,240 --> 01:05:45,160
+but they're just very like nebulous
+
+1674
+01:05:43,680 --> 
01:05:47,000
+conventions they don't really tell us
+
+1675
+01:05:45,160 --> 01:05:49,119
+about how to operationalize them in any
+
+1676
+01:05:47,000 --> 01:05:51,559
+computational setting um this is a
+
+1677
+01:05:49,119 --> 01:05:54,000
+computational theory uh for how
+
+1678
+01:05:51,559 --> 01:05:57,599
+communication may work um and it's a
+
+1679
+01:05:54,000 --> 01:05:59,160
+Bayesian model uh so I'd also like to
+
+1680
+01:05:57,599 --> 01:06:00,880
+mention there are other competing models
+
+1681
+01:05:59,160 --> 01:06:03,279
+and not everyone believes this but it is
+
+1682
+01:06:00,880 --> 01:06:07,920
+pretty useful um in modeling
+
+1683
+01:06:03,279 --> 01:06:10,119
+settings so uh RSA views communication
+
+1684
+01:06:07,920 --> 01:06:13,200
+um as a recursive reasoning process
+
+1685
+01:06:10,119 --> 01:06:15,480
+between a speaker and a listener so it's
+
+1686
+01:06:13,200 --> 01:06:18,480
+like am I thinking what you're thinking
+
+1687
+01:06:15,480 --> 01:06:21,520
+I'm thinking that you're thinking I'm
+
+1688
+01:06:18,480 --> 01:06:23,880
+thinking etc etc um and this is
+
+1689
+01:06:21,520 --> 01:06:27,760
+closely tied uh to another concept from
+
+1690
+01:06:23,880 --> 01:06:30,839
+psychology does anyone have a guess as to
+
+1691
+01:06:27,760 --> 01:06:30,839
+what that concept may
+
+1692
+01:06:31,960 --> 01:06:35,520
+be you know like what am I thinking
+
+1693
+01:06:34,160 --> 01:06:37,319
+you're thinking or like what are you
+
+1694
+01:06:35,520 --> 01:06:40,960
+thinking I'm
+
+1695
+01:06:37,319 --> 01:06:43,480
+thinking yeah yeah theory of mind um so
+
+1696
+01:06:40,960 --> 01:06:45,319
+yeah these like pragmatics and concepts
+
+1697
+01:06:43,480 --> 01:06:46,520
+like theory of mind do overlap um
+
+1698
+01:06:45,319 --> 01:06:48,000
+there's still like lots of debate on
+
+1699
+01:06:46,520 --> 01:06:50,039
+like how they actually interact and how
+
+1700
+01:06:48,000 --> 01:06:51,599
+they are actually operationalized like
+
+1701
+01:06:50,039 --> 01:06:54,520
+what are the psychological realities of
+
+1702
+01:06:51,599 --> 01:06:56,720
+these two things um but yeah they do
+
+1703
+01:06:54,520 --> 01:06:59,160
+overlap
+
+1704
+01:06:56,720 --> 01:07:02,039
+um so for simplicity we can consider a
+
+1705
+01:06:59,160 --> 01:07:05,039
+setting of a reference game so let's say
+
+1706
+01:07:02,039 --> 01:07:08,359
+like uh me and some other person we have
+
+1707
+01:07:05,039 --> 01:07:10,480
+like a basket of colored balls um with
+
+1708
+01:07:08,359 --> 01:07:11,960
+different properties I'm thinking of a
+
+1709
+01:07:10,480 --> 01:07:13,960
+ball I want to communicate that I'm
+
+1710
+01:07:11,960 --> 01:07:15,920
+thinking of a certain ball um what
+
+1711
+01:07:13,960 --> 01:07:18,079
+should I say so that the listener will
+
+1712
+01:07:15,920 --> 01:07:21,400
+pick the same ball this is a reference
+
+1713
+01:07:18,079 --> 01:07:23,640
+game um so as I said this is a
+
+1714
+01:07:21,400 --> 01:07:25,799
+recursive model and the base case for
+
+1715
+01:07:23,640 --> 01:07:29,720
+this recursion is a literal listener so
+
+1716
+01:07:25,799 --> 01:07:32,440
+they will select um something in their
+
+1717
+01:07:29,720 --> 01:07:34,440
+like set of references only considering
+
+1718
+01:07:32,440 --> 01:07:35,920
+what that person has said literally so
+
+1719
+01:07:34,440 --> 01:07:38,039
+like in this example the literal
+
+1720
+01:07:35,920 --> 01:07:39,319
+listener is like the
innermost bubble
+
+1721
+01:07:38,039 --> 01:07:41,680
+there's three different items they can
+
+1722
+01:07:39,319 --> 01:07:43,880
+select from a smiley face with nothing
+
+1723
+01:07:41,680 --> 01:07:47,279
+on it a smiley face with glasses and a
+
+1724
+01:07:43,880 --> 01:07:50,520
+smiley face with glasses and a hat
+
+1725
+01:07:47,279 --> 01:07:52,799
+um they take as input something that the
+
+1726
+01:07:50,520 --> 01:07:55,640
+speaker said so the speaker said my
+
+1727
+01:07:52,799 --> 01:07:58,799
+friend has glasses the literal listener
+
+1728
+01:07:55,640 --> 01:08:00,880
+will pick the subset of items with
+
+1729
+01:07:58,799 --> 01:08:05,079
+glasses um so these are two out of the
+
+1730
+01:08:00,880 --> 01:08:08,160
+three items um so very basic um one
+
+1731
+01:08:05,079 --> 01:08:10,240
+level up is the speaker um the speaker
+
+1732
+01:08:08,160 --> 01:08:12,039
+at the lowest level will reason about
+
+1733
+01:08:10,240 --> 01:08:14,039
+potential interpretations by the
+
+1734
+01:08:12,039 --> 01:08:16,520
+listener with the base case being this
+
+1735
+01:08:14,039 --> 01:08:18,880
+literal listener and choose from a
+
+1736
+01:08:16,520 --> 01:08:20,679
+selection of utterances such that the
+
+1737
+01:08:18,880 --> 01:08:25,199
+listener is most likely to pick the
+
+1738
+01:08:20,679 --> 01:08:27,319
+correct option so if the speaker was
+
+1739
+01:08:25,199 --> 01:08:29,520
+thinking about okay how can I maximally
+
+1740
+01:08:27,319 --> 01:08:31,120
+identify the smiley face with glasses
+
+1741
+01:08:29,520 --> 01:08:33,400
+they're not going to say my friend is
+
+1742
+01:08:31,120 --> 01:08:35,480
+smiling because all of them are smiling
+
+1743
+01:08:33,400 --> 01:08:38,759
+so that gives no additional information
+
+1744
+01:08:35,480 --> 01:08:40,640
+but by saying um and then the other
+
+1745
+01:08:38,759 --> 01:08:42,000
+option is to say okay my friend has a
+
+1746
+01:08:40,640 --> 01:08:43,239
+hat but that's not actually what they
+
+1747
+01:08:42,000 --> 01:08:46,440
+want to refer to so they're going to
+
+1748
+01:08:43,239 --> 01:08:47,640
+throw that option out um so from all the
+
+1749
+01:08:46,440 --> 01:08:50,359
+options that they have they're like okay
+
+1750
+01:08:47,640 --> 01:08:52,400
+glasses might be the best one one level
+
+1751
+01:08:50,359 --> 01:08:55,799
+up from that is a listener who is
+
+1752
+01:08:52,400 --> 01:08:58,719
+reasoning about potential states of the
+
+1753
+01:08:55,799 --> 01:09:00,880
+speaker who is reasoning about the base
+
+1754
+01:08:58,719 --> 01:09:02,440
+listener and thinking okay the speaker
+
+1755
+01:09:00,880 --> 01:09:04,159
+is attempting to be maximally
+
+1756
+01:09:02,440 --> 01:09:06,640
+informative out of all the things that
+
+1757
+01:09:04,159 --> 01:09:08,000
+they could tell me um why would they
+
+1758
+01:09:06,640 --> 01:09:09,920
+pick this one to be maximally
+
+1759
+01:09:08,000 --> 01:09:12,480
+informative and then we can iterate over
+
+1760
+01:09:09,920 --> 01:09:14,239
+this process as many times as we want so
+
+1761
+01:09:12,480 --> 01:09:15,440
+we go from base listener to speaker to
+
+1762
+01:09:14,239 --> 01:09:16,839
+listener to speaker to listener to
+
+1763
+01:09:15,440 --> 01:09:20,120
+speaker ad
+
+1764
+01:09:16,839 --> 01:09:22,319
+infinitum um this is actually used I think
+
+1765
+01:09:20,120 --> 01:09:25,640
+in some of Daniel Fried's work on like
+
+1766
+01:09:22,319 --> 01:09:29,359
+vision QA or
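A minimal sketch of the RSA recursion just described, using the smiley-face reference game (a plain face, a face with glasses, a face with glasses and a hat). The rationality parameter alpha and the uniform prior over referents are assumptions; real applications typically add utterance costs and deeper recursion.

```python
import numpy as np

objects = ["plain", "glasses", "glasses+hat"]
utterances = ["smiling", "glasses", "hat"]
# truth[u][o] = 1 if utterance u is literally true of object o
truth = np.array([
    [1, 1, 1],   # "my friend is smiling" is true of every face
    [0, 1, 1],   # "my friend has glasses"
    [0, 0, 1],   # "my friend has a hat"
], dtype=float)

def literal_listener(truth):
    # L0: condition on literal truth, renormalize over objects
    return truth / truth.sum(axis=1, keepdims=True)

def speaker(listener, alpha=4.0):
    # S1: pick utterances in proportion to how well L0 recovers the object
    util = np.exp(alpha * np.log(listener + 1e-12))
    return (util / util.sum(axis=0, keepdims=True)).T  # rows: objects

def pragmatic_listener(speaker_probs):
    # L1: Bayes' rule over objects given the speaker model (uniform prior)
    likelihood = speaker_probs.T  # rows: utterances, cols: objects
    return likelihood / likelihood.sum(axis=1, keepdims=True)

L1 = pragmatic_listener(speaker(literal_listener(truth)))
print(np.round(L1, 2))  # rows are utterances; see note below
```

On hearing "glasses" the literal listener is split 50/50 between the two bespectacled faces, but the pragmatic listener reasons that a speaker who meant the hat-wearer would have said "hat", and so resolves to the glasses-only face.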
visual uh visual like
+
+1767
+01:09:25,640 --> 01:09:31,480
+captioning I believe um so it is pretty
+
+1768
+01:09:29,359 --> 01:09:35,000
+interesting and like is an information
+
+1769
+01:09:31,480 --> 01:09:35,000
+theory based perspective on
+
+1770
+01:09:35,199 --> 01:09:40,239
+pragmatics um so yeah these are some
+
+1771
+01:09:38,279 --> 01:09:41,759
+interesting things that I did not have
+
+1772
+01:09:40,239 --> 01:09:44,319
+time for and lots of things that remain
+
+1773
+01:09:41,759 --> 01:09:47,080
+to be studied so as I mentioned earlier
+
+1774
+01:09:44,319 --> 01:09:48,920
+in like the intro slides there's um
+
+1775
+01:09:47,080 --> 01:09:50,199
+other fields like neurolinguistics
+
+1776
+01:09:48,920 --> 01:09:51,880
+psycholinguistics and
+
+1777
+01:09:50,199 --> 01:09:54,320
+sociolinguistics as well as linguistic
+
+1778
+01:09:51,880 --> 01:09:56,679
+typology um and I think recently
+
+1779
+01:09:54,320 --> 01:09:58,199
+especially there's a lot of overlap between
+
+1780
+01:09:56,679 --> 01:09:59,679
+questions in some of these more applied
+
+1781
+01:09:58,199 --> 01:10:02,120
+fields and current
+
+1782
+01:09:59,679 --> 01:10:04,040
+NLP um so here are some example
+
+1783
+01:10:02,120 --> 01:10:06,960
+questions that uh I think would be
+
+1784
+01:10:04,040 --> 01:10:09,000
+interesting to explore um first is that
+
+1785
+01:10:06,960 --> 01:10:11,199
+humans seem to be really data efficient
+
+1786
+01:10:09,000 --> 01:10:12,960
+in terms of their linguistic input um
+
+1787
+01:10:11,199 --> 01:10:14,600
+Chomsky even had a hypothesis for this
+
+1788
+01:10:12,960 --> 01:10:17,239
+called Poverty of the stimulus he's like
+
+1789
+01:10:14,600 --> 01:10:20,239
+we must have grammar in our brains
+
+1790
+01:10:17,239 --> 01:10:21,920
+imbued at birth because there are so
+
+1791
+01:10:20,239 --> 01:10:24,400
+few negative examples that we get
+
+1792
+01:10:21,920 --> 01:10:26,000
+like how do we actually generalize rules
+
+1793
+01:10:24,400 --> 01:10:27,719
+um whether or not you believe in that or
+
+1794
+01:10:26,000 --> 01:10:29,760
+not it is quite true that like
+
+1795
+01:10:27,719 --> 01:10:32,120
+literal linguistic input is a lot more
+
+1796
+01:10:29,760 --> 01:10:33,440
+sparse for humans than it is for like a
+
+1797
+01:10:32,120 --> 01:10:34,840
+language model trained on trillions and
+
+1798
+01:10:33,440 --> 01:10:37,840
+trillions of tokens which is just not
+
+1799
+01:10:34,840 --> 01:10:39,840
+acquisitionally reasonable um so how can we
+
+1800
+01:10:37,840 --> 01:10:41,920
+imbue this type of data efficiency in
+
+1801
+01:10:39,840 --> 01:10:43,520
+models and like can we learn something
+
+1802
+01:10:41,920 --> 01:10:47,440
+from human processes and human learning
+
+1803
+01:10:43,520 --> 01:10:49,520
+to do that um I guess one more specific
+
+1804
+01:10:47,440 --> 01:10:52,320
+question from this is how do we learn to
+
+1805
+01:10:49,520 --> 01:10:54,640
+generalize uh from linguistic exemplars
+
+1806
+01:10:52,320 --> 01:10:56,800
+like if you think about um like past
+
+1807
+01:10:54,640 --> 01:10:58,280
+tenses a past tense inflection in
+
+1808
+01:10:56,800 --> 01:11:03,760
+English there are some that are
+
+1809
+01:10:58,280 --> 01:11:08,040
+irregular like um I went uh like
+
+1810
+01:11:03,760 --> 01:11:11,040
+I uh talk versus I talked but if I say I
+
+1811
+01:11:08,040 --> 01:11:13,679
+go it's not I goed it's I went um how do
+
+1812
+01:11:11,040 --> 01:11:15,320
+we figure out when
to create a rule + +1813 +01:11:13,679 --> 01:11:17,239 +given that there are exceptions do we + +1814 +01:11:15,320 --> 01:11:19,440 +create a rule at all and how many + +1815 +01:11:17,239 --> 01:11:23,080 +exemplars do we need to create a + +1816 +01:11:19,440 --> 01:11:24,760 +rule um another one is uh very broad how + +1817 +01:11:23,080 --> 01:11:26,120 +can we make NLP systems that work better + +1818 +01:11:24,760 --> 01:11:27,840 +for everyone + +1819 +01:11:26,120 --> 01:11:31,239 +including people who speak non-standard + +1820 +01:11:27,840 --> 01:11:33,640 +dialects and marginalized languages so + +1821 +01:11:31,239 --> 01:11:35,560 +um in sociol linguistics uh this is + +1822 +01:11:33,640 --> 01:11:37,280 +something that's studied uh like the + +1823 +01:11:35,560 --> 01:11:38,679 +type of variation that you would have in + +1824 +01:11:37,280 --> 01:11:41,360 +communities and across communities is + +1825 +01:11:38,679 --> 01:11:43,639 +something that's uh studied um and when + +1826 +01:11:41,360 --> 01:11:45,360 +we have like characterizations of why + +1827 +01:11:43,639 --> 01:11:48,800 +certain speakers would say things in a + +1828 +01:11:45,360 --> 01:11:51,840 +different way or like why um that change + +1829 +01:11:48,800 --> 01:11:54,760 +may occur we can have more informed uh + +1830 +01:11:51,840 --> 01:11:57,520 +data collection uh like data collection + +1831 +01:11:54,760 --> 01:12:00,360 +strategy for NLP systems um and we could + +1832 +01:11:57,520 --> 01:12:02,480 +also talk about like why certain uh + +1833 +01:12:00,360 --> 01:12:04,719 +systems might not work well for others + +1834 +01:12:02,480 --> 01:12:06,639 +by talking about the actual linguistic + +1835 +01:12:04,719 --> 01:12:08,360 +variations that occur as opposed to just + +1836 +01:12:06,639 --> 01:12:11,199 +saying oh they're + +1837 +01:12:08,360 --> 01:12:12,600 +different um and then finally uh one + +1838 +01:12:11,199 --> 01:12:14,840 +thing that I kind of touched upon is + +1839 +01:12:12,600 --> 01:12:16,560 +like uh nowadays people are making a lot + +1840 +01:12:14,840 --> 01:12:19,639 +of comparisons between oh like language + +1841 +01:12:16,560 --> 01:12:23,080 +models can do this humans can do this um + +1842 +01:12:19,639 --> 01:12:24,800 +are language models super human um but a + +1843 +01:12:23,080 --> 01:12:26,880 +lot of these things are questions around + +1844 +01:12:24,800 --> 01:12:29,080 +evaluation like how do we actually make + +1845 +01:12:26,880 --> 01:12:31,199 +Fair comparisons between human and model + +1846 +01:12:29,080 --> 01:12:33,440 +language competence um how do we test + +1847 +01:12:31,199 --> 01:12:36,920 +for this type of linguistic knowledge um + +1848 +01:12:33,440 --> 01:12:39,400 +and I think this is a very like ripe uh + +1849 +01:12:36,920 --> 01:12:41,679 +active field and would be great if you + +1850 +01:12:39,400 --> 01:12:43,920 +guys were interested in exploring more + +1851 +01:12:41,679 --> 01:12:47,320 +um yeah I think that is actually all my + +1852 +01:12:43,920 --> 01:12:47,320 +content for today so I made + +1853 +01:12:50,600 --> 01:12:56,480 +it so yeah we we actually have a little + +1854 +01:12:54,040 --> 01:13:01,080 +bit of time uh for for questions about + +1855 +01:12:56,480 --> 01:13:01,080 +any of this uh stuff here if anybody + +1856 +01:13:03,360 --> 01:13:10,639 +has um we had a few questions along the + +1857 +01:13:06,040 --> 01:13:10,639 +way but there's a lot of + +1858 +01:13:11,000 --> 01:13:16,159 +um yeah 
anybody has things you wanted to + +1859 +01:13:16,199 --> 01:13:21,800 +ask okay um we can also take questions + +1860 +01:13:19,000 --> 01:13:24,639 +up front uh if you'd like to just ask + +1861 +01:13:21,800 --> 01:13:26,199 +privately I I think um oh yeah got one + +1862 +01:13:24,639 --> 01:13:28,960 +there + +1863 +01:13:26,199 --> 01:13:31,080 +these SES are actually + +1864 +01:13:28,960 --> 01:13:33,920 +different oh yeah I I might have + +1865 +01:13:31,080 --> 01:13:35,960 +uploaded a vision so yeah I'll I'll + +1866 +01:13:33,920 --> 01:13:38,360 +upload the ones here and we'll also + +1867 +01:13:35,960 --> 01:13:41,840 +upload all the references that she said + +1868 +01:13:38,360 --> 01:13:44,040 +to so yeah yeah so how do we + +1869 +01:13:41,840 --> 01:13:47,480 +take we have + +1870 +01:13:44,040 --> 01:13:47,480 +today language + +1871 +01:13:48,199 --> 01:13:53,719 +model doing data preparation or what + +1872 +01:13:51,000 --> 01:13:56,760 +kind of model do you need to um adapt in + +1873 +01:13:53,719 --> 01:13:59,760 +order to accom those kind of + +1874 +01:13:56,760 --> 01:13:59,760 +listic + +1875 +01:14:00,320 --> 01:14:04,679 +ideas yeah I mean it I'm going to talk + +1876 +01:14:02,840 --> 01:14:07,120 +about multilingual NLP where we talk + +1877 +01:14:04,679 --> 01:14:11,159 +more about it next time um although I'm + +1878 +01:14:07,120 --> 01:14:11,159 +not going to talk about it a whole lot + +1879 +01:14:11,560 --> 01:14:15,600 +um if you have any comments about that + +1880 +01:14:14,440 --> 01:14:18,600 +you + +1881 +01:14:15,600 --> 01:14:21,040 +can yeah I think like so the point + +1882 +01:14:18,600 --> 01:14:23,920 +Graham made about um like + +1883 +01:14:21,040 --> 01:14:27,280 +tokenization uh that that's a really big + +1884 +01:14:23,920 --> 01:14:30,800 +one um like some people have shifted to + +1885 +01:14:27,280 --> 01:14:32,080 +like bite based models uh for like + +1886 +01:14:30,800 --> 01:14:35,360 +multilingual but also if you have + +1887 +01:14:32,080 --> 01:14:37,360 +scripts that are just like not Roman + +1888 +01:14:35,360 --> 01:14:38,239 +scripts or very un like uncommon in your + +1889 +01:14:37,360 --> 01:14:42,480 +training + +1890 +01:14:38,239 --> 01:14:45,080 +data um I think it's a very broad + +1891 +01:14:42,480 --> 01:14:48,480 +question uh it really depends on what + +1892 +01:14:45,080 --> 01:14:50,719 +you want your model to be a model of + +1893 +01:14:48,480 --> 01:14:53,320 +like if you want your model to be a + +1894 +01:14:50,719 --> 01:14:55,440 +model of human language and cognition + +1895 +01:14:53,320 --> 01:14:57,239 +then you know you would want want to + +1896 +01:14:55,440 --> 01:14:58,760 +make sure your data scale is similar + +1897 +01:14:57,239 --> 01:15:00,840 +that your inputs are similar like people + +1898 +01:14:58,760 --> 01:15:02,719 +that train on child directed speech for + +1899 +01:15:00,840 --> 01:15:04,320 +example to study like whether language + +1900 +01:15:02,719 --> 01:15:05,880 +models can acquire similar linguistic + +1901 +01:15:04,320 --> 01:15:09,880 +capabilities to children at a certain + +1902 +01:15:05,880 --> 01:15:12,320 +age um if you want like an NLP system + +1903 +01:15:09,880 --> 01:15:15,239 +that's more culturally aware or like uh + +1904 +01:15:12,320 --> 01:15:18,199 +can hand handle non-standard dialects + +1905 +01:15:15,239 --> 01:15:19,600 +and non-standard uh uh ways of saying + +1906 +01:15:18,199 --> 01:15:22,960 +things then I think that would be 
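A minimal sketch of why the byte-based models mentioned above sidestep script coverage: every Unicode string, whatever its script, reduces to the same 256-symbol byte vocabulary, at the cost of longer sequences.

```python
# Any script maps onto the same 256 byte values via UTF-8.
for word in ["hello", "नमस्ते", "こんにちは"]:
    encoded = word.encode("utf-8")
    print(f"{word!r}: {len(word)} chars -> {len(encoded)} bytes")
# A subword vocabulary trained mostly on English text may shatter the
# non-Latin strings into many rare pieces, while a byte-level model always
# sees in-vocabulary symbols, just over longer sequences.
```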
more + +1907 +01:15:19,600 --> 01:15:24,280 +on the data collection side um like + +1908 +01:15:22,960 --> 01:15:25,880 +figuring out like what appropriate + +1909 +01:15:24,280 --> 01:15:27,800 +balance of data is like what were those + +1910 +01:15:25,880 --> 01:15:29,719 +sources and also just like ethical + +1911 +01:15:27,800 --> 01:15:32,679 +considerations of where you're sourcing + +1912 +01:15:29,719 --> 01:15:35,400 +that data from um I don't know if you + +1913 +01:15:32,679 --> 01:15:38,600 +had any specific like tasks or examples + +1914 +01:15:35,400 --> 01:15:40,600 +in mind yeah I I can also follow up a + +1915 +01:15:38,600 --> 01:15:43,320 +little bit like when you saw the tour of + +1916 +01:15:40,600 --> 01:15:44,600 +large language models class that I gave + +1917 +01:15:43,320 --> 01:15:46,320 +one thing you might have noticed there + +1918 +01:15:44,600 --> 01:15:47,800 +is that almost nobody is actually + +1919 +01:15:46,320 --> 01:15:50,480 +messing around with the architecture + +1920 +01:15:47,800 --> 01:15:52,320 +anymore um of language models they're + +1921 +01:15:50,480 --> 01:15:56,159 +all very very similar and I think the + +1922 +01:15:52,320 --> 01:15:59,719 +reason why is um people are training on + +1923 +01:15:56,159 --> 01:16:01,960 +like more and more data and + +1924 +01:15:59,719 --> 01:16:03,320 +um you you can mess around with + +1925 +01:16:01,960 --> 01:16:05,080 +architectures but the differences + +1926 +01:16:03,320 --> 01:16:07,120 +between architectures grow smaller as + +1927 +01:16:05,080 --> 01:16:10,040 +you train on more data and larger + +1928 +01:16:07,120 --> 01:16:13,639 +architectures and um also Transformers + +1929 +01:16:10,040 --> 01:16:15,199 +scale well all the like GPU based + +1930 +01:16:13,639 --> 01:16:16,800 +tooling is around them and stuff like + +1931 +01:16:15,199 --> 01:16:18,920 +that so because of that what do people + +1932 +01:16:16,800 --> 01:16:21,199 +mess around with they mess around with + +1933 +01:16:18,920 --> 01:16:22,639 +data what do what do they look at when + +1934 +01:16:21,199 --> 01:16:24,520 +they mess around with data they look at + +1935 +01:16:22,639 --> 01:16:26,400 +evaluations and how do people make + +1936 +01:16:24,520 --> 01:16:28,400 +evaluations well a lot of evaluations + +1937 +01:16:26,400 --> 01:16:32,960 +were designed based on listic principles + +1938 +01:16:28,400 --> 01:16:34,480 +so kind of I feel like compared to 20 + +1939 +01:16:32,960 --> 01:16:38,800 +years ago the + +1940 +01:16:34,480 --> 01:16:40,560 +connection the in well like 20 years ago + +1941 +01:16:38,800 --> 01:16:41,719 +for example there were people working on + +1942 +01:16:40,560 --> 01:16:45,000 +things where you would actually like + +1943 +01:16:41,719 --> 01:16:46,520 +parse the input and based on a parse + +1944 +01:16:45,000 --> 01:16:47,960 +tree of the input you would extract + +1945 +01:16:46,520 --> 01:16:50,159 +semantic structure and then you would + +1946 +01:16:47,960 --> 01:16:51,679 +manipulate that to do translation or + +1947 +01:16:50,159 --> 01:16:53,199 +something like that but I feel like + +1948 +01:16:51,679 --> 01:16:55,800 +we're not doing that anymore because we + +1949 +01:16:53,199 --> 01:16:57,920 +have a lot of endtoend systems + +1950 +01:16:55,800 --> 01:17:00,400 +um and so I think a lot of this goes + +1951 +01:16:57,920 --> 01:17:02,360 +into guiding evaluation the other thing + +1952 +01:17:00,400 --> 01:17:03,960 +is it is still really important for + +1953 
+01:17:02,360 --> 01:17:06,960
+multilingual stuff where we don't have a
+
+1954
+01:17:03,960 --> 01:17:10,239
+lot of data in the other languages um
+
+1955
+01:17:06,960 --> 01:17:11,600
+and if you speak any language that's not
+
+1956
+01:17:10,239 --> 01:17:14,800
+English or
+
+1957
+01:17:11,600 --> 01:17:16,520
+Chinese um and you or actually if you
+
+1958
+01:17:14,800 --> 01:17:18,400
+speak any language that's not English
+
+1959
+01:17:16,520 --> 01:17:22,320
+and you use many of the
+
+1960
+01:17:18,400 --> 01:17:23,719
+open-source uh like language models
+
+1961
+01:17:22,320 --> 01:17:26,679
+you'll notice they're not even good at
+
+1962
+01:17:23,719 --> 01:17:28,960
+syntax in languages they still mess up
+
+1963
+01:17:26,679 --> 01:17:31,080
+syntax in non-English languages in
+
+1964
+01:17:28,960 --> 01:17:33,719
+English they mostly don't but sometimes
+
+1965
+01:17:31,080 --> 01:17:36,440
+do um and then if you go up to the
+
+1966
+01:17:33,719 --> 01:17:38,239
+really big models like GPT-4 and stuff
+
+1967
+01:17:36,440 --> 01:17:41,080
+like that they still mess up syntax in
+
+1968
+01:17:38,239 --> 01:17:43,400
+lower resource languages so you know if
+
+1969
+01:17:41,080 --> 01:17:45,560
+there are ways we can go in and enforce
+
+1970
+01:17:43,400 --> 01:17:46,760
+syntax um and then semantics is even
+
+1971
+01:17:45,560 --> 01:17:48,320
+harder because the dependencies are
+
+1972
+01:17:46,760 --> 01:17:49,880
+longer they're more complex and stuff
+
+1973
+01:17:48,320 --> 01:17:52,480
+like that so I think there are still
+
+1974
+01:17:49,880 --> 01:17:54,960
+modeling things to be done there
+
+1975
+01:17:52,480 --> 01:17:56,960
+too my question is kind of related to
+
+1976
+01:17:54,960 --> 01:17:59,080
+what you just said about like how architectures
+
+1977
+01:17:56,960 --> 01:18:01,320
+are like not really frozen but they're
+
+1978
+01:17:59,080 --> 01:18:04,080
+kind of set at this point like why is
+
+1979
+01:18:01,320 --> 01:18:06,000
+scalability the only reason why like
+
+1980
+01:18:04,080 --> 01:18:07,840
+more drastic experimentation with
+
+1981
+01:18:06,000 --> 01:18:09,920
+architecture
+
+1982
+01:18:07,840 --> 01:18:11,639
+isn't happening so I I don't think they're
+
+1983
+01:18:09,920 --> 01:18:15,000
+entirely set like Mamba is a good
+
+1984
+01:18:11,639 --> 01:18:18,000
+example of that um and like RWKV and
+
+1985
+01:18:15,000 --> 01:18:19,880
+other things like this and I think so
+
+1986
+01:18:18,000 --> 01:18:22,719
+there is still some innovation going on
+
+1987
+01:18:19,880 --> 01:18:25,040
+in in uh
+
+1988
+01:18:22,719 --> 01:18:27,639
+architectures I think we don't have a
+
+1989
+01:18:25,040 --> 01:18:31,679
+good enough pipeline
+
+1990
+01:18:27,639 --> 01:18:34,600
+from experiments on smaller models to
+
+1991
+01:18:31,679 --> 01:18:36,679
+larger models so like what we would
+
+1992
+01:18:34,600 --> 01:18:40,440
+really like to be able to do because I
+
+1993
+01:18:36,679 --> 01:18:43,040
+mean training a state-of-the-art LM costs
+
+1994
+01:18:40,440 --> 01:18:45,480
+like 10 to a hundred million dollars and you
+
+1995
+01:18:43,040 --> 01:18:46,880
+don't want to run that over and over
+
+1996
+01:18:45,480 --> 01:18:48,840
+again to try different architectures so
+
+1997
+01:18:46,880 --> 01:18:51,480
+what we really need is we need some way
+
+1998
+01:18:48,840 --> 01:18:53,000
+to do like cheap experimentation with
+
+1999
+01:18:51,480 --> 01:18:55,360
+new model architectures that are better
+
+2000
+01:18:53,000 --> 01:18:58,719
+in some way and then like gradually
+
+2001
+01:18:55,360 --> 01:19:01,679
+scale it up and I just went to the sorry
+
+2002
+01:18:58,719 --> 01:19:05,840
+um very quickly I I just went to an open
+
+2003
+01:19:01,679 --> 01:19:08,960
+source uh like generative AI workshop
+
+2004
+01:19:05,840 --> 01:19:10,320
+and there's an architecture called RWKV
+
+2005
+01:19:08,960 --> 01:19:12,639
+it's not from academia it's from the
+
+2006
+01:19:10,320 --> 01:19:14,280
+open source community and they had this
+
+2007
+01:19:12,639 --> 01:19:15,840
+really interesting presentation which is
+
+2008
+01:19:14,280 --> 01:19:17,719
+also on YouTube now if you want to see
+
+2009
+01:19:15,840 --> 01:19:19,719
+it where they basically say they have
+
+2010
+01:19:17,719 --> 01:19:21,880
+the whole community experimenting on 500
+
+2011
+01:19:19,719 --> 01:19:24,880
+million models then once they have a
+
+2012
+01:19:21,880 --> 01:19:27,480
+kind of like nice looking 500 million
+
+2013
+01:19:24,880 --> 01:19:29,400
+model they then press a button on a
+
+2014
+01:19:27,480 --> 01:19:32,120
+larger experiment and run it on 1.3
+
+2015
+01:19:29,400 --> 01:19:34,080
+billion and then seven billion and then
+
+2016
+01:19:32,120 --> 01:19:35,280
+they gradually funnel to the ones that
+
+2017
+01:19:34,080 --> 01:19:38,760
+they can actually run on like the
+
+2018
+01:19:35,280 --> 01:19:40,679
+biggest parameter sizes so I think not
+
+2019
+01:19:38,760 --> 01:19:41,840
+many people can do that effectively
+
+2020
+01:19:40,679 --> 01:19:43,520
+right now and I think that's a big
+
+2021
+01:19:41,840 --> 01:19:45,760
+reason why you know all of the really
+
+2022
+01:19:43,520 --> 01:19:48,199
+competitive models all look exactly like
+
+2023
+01:19:45,760 --> 01:19:49,679
+w to basically so I guess I was asking
+
+2024
+01:19:48,199 --> 01:19:50,880
+because at what point is it like
+
+2025
+01:19:49,679 --> 01:19:52,960
+something novel we're doing with
+
+2026
+01:19:50,880 --> 01:19:55,320
+architecture or like or like something
+
+2027
+01:19:52,960 --> 01:19:58,159
+we're doing with the input
+
+2028
+01:19:55,320 --> 01:20:00,800
+one is it just buying performance
+
+2029
+01:19:58,159 --> 01:20:02,600
+with yeah I mean you can buy some
+
+2030
+01:20:00,800 --> 01:20:05,840
+performance with parameters but you can
+
+2031
+01:20:02,600 --> 01:20:07,679
+buy more performance for cheaper with
+
+2032
+01:20:05,840 --> 01:20:09,880
+better data better architectures and
+
+2033
+01:20:07,679 --> 01:20:13,679
+stuff like that so I think like there's
+
+2034
+01:20:09,880 --> 01:20:17,639
+actually a bet by Sasha Rush and um and
+
+2035
+01:20:13,679 --> 01:20:19,199
+Jonathan Frankle uh Sasha Rush being at
+
+2036
+01:20:17,639 --> 01:20:21,840
+Cornell and Hugging Face and Jonathan
+
+2037
+01:20:19,199 --> 01:20:24,960
+Frankle being at Databricks now where
+
+2038
+01:20:21,840 --> 01:20:26,040
+um Jonathan Frankle said Transformers
+
+2039
+01:20:24,960 --> 01:20:27,600
+like attention is all you need
+
+2040
+01:20:26,040 --> 01:20:29,639
+Transformers are all you need and Sasha
+
+2041
+01:20:27,600 --> 01:20:31,120
+Rush said you don't and in three years
+
+2042
+01:20:29,639 --> 01:20:32,960
+we're going to see which ones are on top
+
+2043
+01:20:31,120 --> 01:20:34,760
+of the leaderboard so I'm really looking
+
+2044
+01:20:32,960 --> 01:20:37,000
+forward to what the result of that bet
+
+2045
+01:20:34,760 --> 01:20:38,639
+is like in three years 
our Transformers + +2046 +01:20:37,000 --> 01:20:40,400 +is going to be up top or something else + +2047 +01:20:38,639 --> 01:20:42,000 +so yeah we'll see I don't want to keep + +2048 +01:20:40,400 --> 01:20:45,000 +everybody for too long but thank you for + +2049 +01:20:42,000 --> 01:20:45,000 +the the question \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/transcript.vtt b/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..4f37e2b65649c563fd79a6fa2748c3434ab3b8be --- /dev/null +++ b/CMU Advanced NLP 2024 (22) Linguistics and Computational Linguistics/transcript.vtt @@ -0,0 +1,6148 @@ +WEBVTT + +00:00:01.000 --> 00:00:06.080 +okay yeah cool so I'll be giving a + +00:00:03.399 --> 00:00:07.720 +really Whirlwind tour of linguistics as + +00:00:06.080 --> 00:00:10.240 +Graham said it's a very broad field but + +00:00:07.720 --> 00:00:14.040 +I'll try my best to cover some major + +00:00:10.240 --> 00:00:16.800 +parts of it um yeah uh so to begin what + +00:00:14.040 --> 00:00:18.520 +is linguistics um Linguistics as a field + +00:00:16.800 --> 00:00:21.320 +is the scientific study of language and + +00:00:18.520 --> 00:00:23.240 +its structure um at a very high level + +00:00:21.320 --> 00:00:25.680 +theoretical Linguistics aims to find a + +00:00:23.240 --> 00:00:28.119 +very general theory that explains the + +00:00:25.680 --> 00:00:29.359 +structure underlying languages um and a + +00:00:28.119 --> 00:00:31.840 +framework in which we can describe + +00:00:29.359 --> 00:00:34.160 +language as a structure um now we can + +00:00:31.840 --> 00:00:36.120 +describe individual rules and the types + +00:00:34.160 --> 00:00:39.200 +of structures that occur in specific + +00:00:36.120 --> 00:00:41.000 +languages however um one very important + +00:00:39.200 --> 00:00:43.120 +aspect of theoretical Linguistics is to + +00:00:41.000 --> 00:00:46.079 +try and find things that Encompass all + +00:00:43.120 --> 00:00:48.440 +natural languages um and for this reason + +00:00:46.079 --> 00:00:50.320 +uh one like topic that some linguists + +00:00:48.440 --> 00:00:51.960 +are concerned about are things like uh + +00:00:50.320 --> 00:00:53.440 +universals in linguistics like what + +00:00:51.960 --> 00:00:55.320 +concepts are present in all natural + +00:00:53.440 --> 00:00:57.199 +languages how do they come about how do + +00:00:55.320 --> 00:00:59.600 +they express + +00:00:57.199 --> 00:01:02.039 +themselves um and insights from Theory + +00:00:59.600 --> 00:01:04.280 +can Al inform more applied research so + +00:01:02.039 --> 00:01:06.040 +we can ask questions like what are the + +00:01:04.280 --> 00:01:08.200 +uh variations between speakers in a + +00:01:06.040 --> 00:01:09.759 +single language um how does this come + +00:01:08.200 --> 00:01:11.840 +about from social factors how does this + +00:01:09.759 --> 00:01:12.799 +come about from language change also + +00:01:11.840 --> 00:01:15.040 +what are the different types of + +00:01:12.799 --> 00:01:16.799 +linguistic structures within and across + +00:01:15.040 --> 00:01:19.479 +languages and how are they processed by + +00:01:16.799 --> 00:01:20.960 +the brain um and also a really + +00:01:19.479 --> 00:01:22.280 +interesting question is how do people + +00:01:20.960 --> 00:01:24.240 +acquire a new language at different + +00:01:22.280 --> 00:01:26.200 +stages of their life and how does 
this + +00:01:24.240 --> 00:01:28.479 +change from like infancy when you're + +00:01:26.200 --> 00:01:30.680 +acing acquiring your native language + +00:01:28.479 --> 00:01:35.439 +versus your second your third + +00:01:30.680 --> 00:01:35.439 +um at ages like 10 50 + +00:01:35.600 --> 00:01:40.560 +Etc um now this is a class on NLP and + +00:01:39.040 --> 00:01:42.240 +many of you might be asking like why + +00:01:40.560 --> 00:01:45.040 +should you care about Linguistics in the + +00:01:42.240 --> 00:01:46.439 +age of llms where everything can be fed + +00:01:45.040 --> 00:01:48.799 +into a Transformer and then you get a + +00:01:46.439 --> 00:01:50.399 +bunch of coherent English texts um I'd + +00:01:48.799 --> 00:01:52.159 +like to argue that there are reasons why + +00:01:50.399 --> 00:01:54.280 +you should be aware of linguistics um + +00:01:52.159 --> 00:01:55.840 +first at minimum it allows you to + +00:01:54.280 --> 00:01:57.759 +understand your data better and more + +00:01:55.840 --> 00:01:59.079 +thoroughly um I think this is especially + +00:01:57.759 --> 00:02:01.200 +important when you're characterizing + +00:01:59.079 --> 00:02:03.119 +specific failure of your model like you + +00:02:01.200 --> 00:02:04.640 +have certain errors um how do you + +00:02:03.119 --> 00:02:06.200 +classify them how can you characterize + +00:02:04.640 --> 00:02:08.160 +them can you look to previous literature + +00:02:06.200 --> 00:02:11.000 +to see how people have explained this + +00:02:08.160 --> 00:02:12.480 +from a human perspective um along that + +00:02:11.000 --> 00:02:14.440 +point it also gives you interesting test + +00:02:12.480 --> 00:02:16.640 +cases and Frameworks to explore um + +00:02:14.440 --> 00:02:19.400 +linguists like to explore really + +00:02:16.640 --> 00:02:21.680 +specific strange phenomena and this is a + +00:02:19.400 --> 00:02:24.280 +great test bed for a lot of things that + +00:02:21.680 --> 00:02:26.319 +your model might fail on um another + +00:02:24.280 --> 00:02:28.160 +thing is as models become more and more + +00:02:26.319 --> 00:02:30.599 +advanced people are now drawing + +00:02:28.160 --> 00:02:33.920 +connections between human capabilities + +00:02:30.599 --> 00:02:35.519 +cognitive and linguistic with models um + +00:02:33.920 --> 00:02:37.560 +I would like to say that if you want to + +00:02:35.519 --> 00:02:39.480 +make such claims about how your models + +00:02:37.560 --> 00:02:41.080 +or systems are similar to humans at + +00:02:39.480 --> 00:02:42.840 +least being aware of these theories as + +00:02:41.080 --> 00:02:44.640 +going to be a necessary starting point + +00:02:42.840 --> 00:02:46.560 +even if you don't agree with them and + +00:02:44.640 --> 00:02:48.879 +you just really really hate chomskyan + +00:02:46.560 --> 00:02:51.080 +Syntax for example um and another thing + +00:02:48.879 --> 00:02:53.959 +is just like it's fun um it's cool to + +00:02:51.080 --> 00:02:56.360 +learn about and it's a cool like party + +00:02:53.959 --> 00:02:59.280 +conversation uh yeah so that's why you + +00:02:56.360 --> 00:03:00.680 +should care um so as a lecture road map + +00:02:59.280 --> 00:03:02.480 +I'm going to give a a brief overview of + +00:03:00.680 --> 00:03:04.440 +subfields and coverage over various + +00:03:02.480 --> 00:03:05.959 +topics um for each topic group we're + +00:03:04.440 --> 00:03:08.120 +going to go over main Concepts and + +00:03:05.959 --> 00:03:09.680 +research questions that linguists ask + +00:03:08.120 --> 00:03:12.000 
+also some current and previous + +00:03:09.680 --> 00:03:13.519 +computational approaches and then some + +00:03:12.000 --> 00:03:15.519 +applications to NLP that you might be + +00:03:13.519 --> 00:03:18.200 +interested in um and of course because + +00:03:15.519 --> 00:03:20.120 +there's a lot to cover in only about 80 + +00:03:18.200 --> 00:03:22.120 +now 75 minutes this is going to be very + +00:03:20.120 --> 00:03:25.280 +dun in certain areas so apologies in + +00:03:22.120 --> 00:03:27.720 +advance and please feel free to ask + +00:03:25.280 --> 00:03:29.360 +questions so how do we break down + +00:03:27.720 --> 00:03:30.280 +Linguistics as a field into separate + +00:03:29.360 --> 00:03:32.280 +subfields + +00:03:30.280 --> 00:03:34.480 +one way we can do this is by looking at + +00:03:32.280 --> 00:03:36.680 +the structures that we are studying so + +00:03:34.480 --> 00:03:39.400 +here I have a list of different uh + +00:03:36.680 --> 00:03:40.799 +subfields increasing an abstraction uh + +00:03:39.400 --> 00:03:43.120 +first we have phonetics which is the + +00:03:40.799 --> 00:03:46.280 +study of individual speech sounds um and + +00:03:43.120 --> 00:03:47.879 +for sign languages gestures um one level + +00:03:46.280 --> 00:03:49.319 +Above This is phenology how do we + +00:03:47.879 --> 00:03:51.680 +actually organize these sounds and + +00:03:49.319 --> 00:03:52.959 +gestures in the mind what makes coherent + +00:03:51.680 --> 00:03:55.239 +categories and + +00:03:52.959 --> 00:03:58.079 +languages then up we go up to the word + +00:03:55.239 --> 00:04:00.760 +level how are words formed then from + +00:03:58.079 --> 00:04:02.760 +words to phrases and sentences + +00:04:00.760 --> 00:04:04.879 +and then combining a bunch of different + +00:04:02.760 --> 00:04:07.200 +parts together how do we extract meaning + +00:04:04.879 --> 00:04:09.000 +from these different forms and then how + +00:04:07.200 --> 00:04:11.200 +does meaning change and adapt in + +00:04:09.000 --> 00:04:13.439 +language use in + +00:04:11.200 --> 00:04:15.400 +context um now I presented these + +00:04:13.439 --> 00:04:16.400 +categories in very discreet boxes but + +00:04:15.400 --> 00:04:18.320 +it's really important to remember + +00:04:16.400 --> 00:04:21.120 +there's a lot of like bleeding between + +00:04:18.320 --> 00:04:22.560 +categories um like between morphology + +00:04:21.120 --> 00:04:24.680 +and syntax there's a whole field of + +00:04:22.560 --> 00:04:26.960 +study called morphosyntax same thing + +00:04:24.680 --> 00:04:28.759 +with the syntax semantics interface um + +00:04:26.960 --> 00:04:30.240 +we can argue for days about what the + +00:04:28.759 --> 00:04:32.320 +actual difference between semantics and + +00:04:30.240 --> 00:04:34.440 +pragmatics is I'm going to ignore that + +00:04:32.320 --> 00:04:36.320 +for now along with a lot of other things + +00:04:34.440 --> 00:04:37.800 +and we can even span the whole gradient + +00:04:36.320 --> 00:04:39.759 +from phonetics to pragmatics when we + +00:04:37.800 --> 00:04:41.520 +talk about like proy inflection and + +00:04:39.759 --> 00:04:43.680 +stress so there's lots of different + +00:04:41.520 --> 00:04:45.199 +interactions that can occur here um + +00:04:43.680 --> 00:04:48.360 +while I am presenting them in very + +00:04:45.199 --> 00:04:51.440 +discrete forms um do keep that in + +00:04:48.360 --> 00:04:53.320 +mind so I have described kind of like + +00:04:51.440 --> 00:04:55.080 +the separate subfields based on + +00:04:53.320 --> 
00:04:57.560 +structures but we can also apply these + +00:04:55.080 --> 00:04:59.960 +to other areas like neurolinguistics how + +00:04:57.560 --> 00:05:01.240 +does language uh work in the brain psych + +00:04:59.960 --> 00:05:02.880 +Linguistics like what is the + +00:05:01.240 --> 00:05:05.120 +psychological reality of structures how + +00:05:02.880 --> 00:05:07.039 +do we process them social Linguistics + +00:05:05.120 --> 00:05:09.080 +deals with like social context how do + +00:05:07.039 --> 00:05:11.360 +speakers vary based on like their social + +00:05:09.080 --> 00:05:12.840 +setting a linguistic typology what are + +00:05:11.360 --> 00:05:14.680 +the different variations between + +00:05:12.840 --> 00:05:17.080 +languages and historical Linguistics how + +00:05:14.680 --> 00:05:18.479 +has language changed over time um and as + +00:05:17.080 --> 00:05:20.360 +much as I would love to cover all these + +00:05:18.479 --> 00:05:23.400 +things I'm going to mainly focus on the + +00:05:20.360 --> 00:05:24.960 +things on the left um and across all of + +00:05:23.400 --> 00:05:26.759 +these different subfields we can use + +00:05:24.960 --> 00:05:28.720 +computational methods to explore + +00:05:26.759 --> 00:05:30.680 +questions within and then also across + +00:05:28.720 --> 00:05:32.000 +subfields + +00:05:30.680 --> 00:05:34.440 +um so in this lecture I'm going to break + +00:05:32.000 --> 00:05:36.120 +down uh things into three main parts + +00:05:34.440 --> 00:05:37.840 +first we'll start with sound and gesture + +00:05:36.120 --> 00:05:39.759 +then we'll move on to subwords and + +00:05:37.840 --> 00:05:42.080 +constituents and then we'll move on to + +00:05:39.759 --> 00:05:43.800 +meaning and intent um roughly broken up + +00:05:42.080 --> 00:05:46.720 +in these parts but like I said + +00:05:43.800 --> 00:05:49.080 +everything's a Continuum so things will + +00:05:46.720 --> 00:05:52.160 +uh be referred to or like we might + +00:05:49.080 --> 00:05:54.880 +Advance a little bit in certain + +00:05:52.160 --> 00:05:56.919 +sections so with that being said let's + +00:05:54.880 --> 00:05:59.759 +start with sound and + +00:05:56.919 --> 00:06:02.440 +gesture okay so at the very basic level + +00:05:59.759 --> 00:06:05.120 +at phonetics we can study speech sounds + +00:06:02.440 --> 00:06:07.639 +and gestures specifically how we produce + +00:06:05.120 --> 00:06:09.919 +them um like what are the functions in + +00:06:07.639 --> 00:06:12.759 +our body uh that we do to actually + +00:06:09.919 --> 00:06:14.520 +create sounds how we perceive them um + +00:06:12.759 --> 00:06:16.759 +like how does the actual physical + +00:06:14.520 --> 00:06:19.039 +property of the waveform for example uh + +00:06:16.759 --> 00:06:21.520 +turn into our perceptions of like pitch + +00:06:19.039 --> 00:06:24.080 +and volume and then how we can analyze + +00:06:21.520 --> 00:06:25.759 +them so we can look at physical + +00:06:24.080 --> 00:06:27.960 +properties and mathematical properties + +00:06:25.759 --> 00:06:31.560 +of these waveforms break them down into + +00:06:27.960 --> 00:06:31.560 +like their spectral components Etc + +00:06:32.280 --> 00:06:36.479 +um so one very important distinction + +00:06:35.080 --> 00:06:38.440 +distinction that we need to make when we + +00:06:36.479 --> 00:06:41.560 +study things like phonetics is that + +00:06:38.440 --> 00:06:43.319 +there is a discret uh separation between + +00:06:41.560 --> 00:06:46.360 +how things actually sound and how they + +00:06:43.319 
--> 00:06:48.000 +are spelled in phonetics the actual like + +00:06:46.360 --> 00:06:49.960 +Atomic unit that we study are things + +00:06:48.000 --> 00:06:53.160 +called phones and these are individual + +00:06:49.960 --> 00:06:56.039 +speech sounds um so it' be like H in the + +00:06:53.160 --> 00:06:59.680 +sound for the English word + +00:06:56.039 --> 00:07:01.919 +hat um we need to keep in mind as like + +00:06:59.680 --> 00:07:03.280 +we work with text a lot is that uh one + +00:07:01.919 --> 00:07:05.360 +thing to keep in mind is that text is + +00:07:03.280 --> 00:07:06.840 +not a onetoone mapping between + +00:07:05.360 --> 00:07:08.919 +characters and these sounds and this is + +00:07:06.840 --> 00:07:10.120 +very obvious in certain scripts so for + +00:07:08.919 --> 00:07:12.960 +those of you that know how to read + +00:07:10.120 --> 00:07:14.759 +Chinese for example um Chinese is very + +00:07:12.960 --> 00:07:16.479 +logographic um even though there are + +00:07:14.759 --> 00:07:18.440 +some indications in the character of + +00:07:16.479 --> 00:07:21.360 +like how you might pronounce it it's + +00:07:18.440 --> 00:07:22.800 +very uh sparse in certain uh for certain + +00:07:21.360 --> 00:07:24.879 +characters and there's little indication + +00:07:22.800 --> 00:07:27.280 +of how you would actually say certain + +00:07:24.879 --> 00:07:29.280 +words um other scripts have very + +00:07:27.280 --> 00:07:32.199 +consistent spellings for sounds that are + +00:07:29.280 --> 00:07:34.639 +one one um so we can uh determine the + +00:07:32.199 --> 00:07:36.520 +exact pronunciation of a word uh based + +00:07:34.639 --> 00:07:38.680 +on its characters so this would be + +00:07:36.520 --> 00:07:41.720 +things like Japanese Kaa which are syll + +00:07:38.680 --> 00:07:43.599 +like uh that did like show each syllable + +00:07:41.720 --> 00:07:45.120 +uh Spanish is also very easy to + +00:07:43.599 --> 00:07:46.879 +pronounce once you know the rules and + +00:07:45.120 --> 00:07:50.759 +Hindi also falls in this category as + +00:07:46.879 --> 00:07:54.400 +well um some other scripts oh yes does + +00:07:50.759 --> 00:07:57.560 +that mean that sound more + +00:07:54.400 --> 00:08:01.560 +grinding sound is more well it depends + +00:07:57.560 --> 00:08:03.039 +on the script uh like sometimes you can + +00:08:01.560 --> 00:08:04.960 +like there is a script called the IPA + +00:08:03.039 --> 00:08:06.400 +which is exactly one to one between the + +00:08:04.960 --> 00:08:10.919 +sound that you produce and how it's + +00:08:06.400 --> 00:08:12.120 +spelled um but for the most part um your + +00:08:10.919 --> 00:08:13.599 +the way that you would represent the + +00:08:12.120 --> 00:08:15.479 +exact sound is always going to be more + +00:08:13.599 --> 00:08:19.199 +granular than how it's spelled in + +00:08:15.479 --> 00:08:21.400 +orthography yeah thank you um and then + +00:08:19.199 --> 00:08:23.080 +finally uh especially for those of you + +00:08:21.400 --> 00:08:24.960 +who acquired English as a second + +00:08:23.080 --> 00:08:26.960 +language or even if you are a native + +00:08:24.960 --> 00:08:28.800 +English speaker you know that certain + +00:08:26.960 --> 00:08:30.960 +words are really weird to spell and + +00:08:28.800 --> 00:08:32.680 +really weird to pronounce for the + +00:08:30.960 --> 00:08:34.440 +longest time when I was a kid up until I + +00:08:32.680 --> 00:08:38.159 +was maybe seven or eight I thought chaos + +00:08:34.440 --> 00:08:39.880 +was pronounced Chows um so uh 
even + +00:08:38.159 --> 00:08:41.080 +though there are very general rules + +00:08:39.880 --> 00:08:42.599 +about how you would say something in + +00:08:41.080 --> 00:08:45.680 +English there are exceptions that have + +00:08:42.599 --> 00:08:48.880 +to be made and have to be learned um and + +00:08:45.680 --> 00:08:51.640 +this happens as well in French um so + +00:08:48.880 --> 00:08:54.360 +between scripts like English and French + +00:08:51.640 --> 00:08:56.720 +uh which are harder to uh get those + +00:08:54.360 --> 00:08:58.640 +irregular forms from we call those deep + +00:08:56.720 --> 00:09:00.920 +orthographies and then the ones that are + +00:08:58.640 --> 00:09:03.519 +very onetoone like Japanese Ka we call + +00:09:00.920 --> 00:09:03.519 +them shallow + +00:09:03.680 --> 00:09:07.600 +orthographies so because there are so + +00:09:06.040 --> 00:09:10.760 +many different ways to represent sounds + +00:09:07.600 --> 00:09:12.040 +across languages and scripts um we as + +00:09:10.760 --> 00:09:14.360 +linguists use something called The + +00:09:12.040 --> 00:09:16.680 +International Phonetic Alphabet and in + +00:09:14.360 --> 00:09:21.600 +Brackets here are how you would actually + +00:09:16.680 --> 00:09:24.959 +write IPA using um IPA uh so this is + +00:09:21.600 --> 00:09:27.680 +like the updated chart from I believe + +00:09:24.959 --> 00:09:29.640 +2022 I think it says but my video is + +00:09:27.680 --> 00:09:31.560 +blocking it from what year it's from um + +00:09:29.640 --> 00:09:33.600 +but it basically categorizes a bunch of + +00:09:31.560 --> 00:09:35.600 +sounds and then shows you the exact + +00:09:33.600 --> 00:09:38.120 +character to write to represent that + +00:09:35.600 --> 00:09:40.800 +sound um one computational tool that you + +00:09:38.120 --> 00:09:42.760 +can use to convert from some orthography + +00:09:40.800 --> 00:09:46.240 +to IPA text is epan which is actually + +00:09:42.760 --> 00:09:49.279 +developed by David mortson here in + +00:09:46.240 --> 00:09:51.839 +LTI so another aspect of phonetics that + +00:09:49.279 --> 00:09:54.200 +I touched upon in the interest slide is + +00:09:51.839 --> 00:09:56.760 +uh how we actually produce sounds with + +00:09:54.200 --> 00:10:00.000 +the body so this is what articulatory + +00:09:56.760 --> 00:10:03.160 +phonetics studies um basically uh with + +00:10:00.000 --> 00:10:06.079 +spoken language uh various organs in + +00:10:03.160 --> 00:10:08.040 +your mouth nose and throat can modify + +00:10:06.079 --> 00:10:09.240 +air flow from your lungs into your lungs + +00:10:08.040 --> 00:10:11.360 +and that's how we produce different + +00:10:09.240 --> 00:10:13.040 +sounds and based on how these different + +00:10:11.360 --> 00:10:15.839 +modifications occur we can get different + +00:10:13.040 --> 00:10:18.000 +types um so very coarsely we can + +00:10:15.839 --> 00:10:19.800 +categorize sounds into vowels which are + +00:10:18.000 --> 00:10:21.560 +produced without much restriction + +00:10:19.800 --> 00:10:23.000 +consonants which are produced with some + +00:10:21.560 --> 00:10:25.240 +partial or full restriction and then + +00:10:23.000 --> 00:10:26.720 +finally semivowels which are kind of + +00:10:25.240 --> 00:10:30.440 +between a consonant and a vowel these + +00:10:26.720 --> 00:10:32.600 +are sounds like Y and W + +00:10:30.440 --> 00:10:34.440 +um and then we can break down some of + +00:10:32.600 --> 00:10:36.360 +these categories even more so for + +00:10:34.440 --> 00:10:37.600 +consonants we 
can categorize them based + +00:10:36.360 --> 00:10:40.399 +on their place and manner of + +00:10:37.600 --> 00:10:43.079 +articulation like the sound M I create + +00:10:40.399 --> 00:10:46.440 +by putting my lips together and then + +00:10:43.079 --> 00:10:48.720 +like nasiz uh we can get into that + +00:10:46.440 --> 00:10:50.360 +another time um but basically we can + +00:10:48.720 --> 00:10:52.160 +categorize based on the placement and + +00:10:50.360 --> 00:10:55.040 +manner as well as whether they are + +00:10:52.160 --> 00:10:58.360 +voiced or voiceless so the difference + +00:10:55.040 --> 00:11:00.360 +between like T and du is that there is + +00:10:58.360 --> 00:11:02.600 +vibration in my vocal cords and that's + +00:11:00.360 --> 00:11:05.519 +the distinction between voice and + +00:11:02.600 --> 00:11:07.320 +voiceless um vowels can be categorized + +00:11:05.519 --> 00:11:09.000 +in a different way uh based on the + +00:11:07.320 --> 00:11:10.560 +position of your tongue how open your + +00:11:09.000 --> 00:11:12.760 +mouth is and the roundedness of your + +00:11:10.560 --> 00:11:14.760 +lips I really like the IPA chart for + +00:11:12.760 --> 00:11:16.920 +vowels um because it's actually very + +00:11:14.760 --> 00:11:18.560 +intuitive imagine someone splits you + +00:11:16.920 --> 00:11:20.680 +right down the middle like this and then + +00:11:18.560 --> 00:11:22.639 +takes a cross-section and that's kind of + +00:11:20.680 --> 00:11:25.000 +how you can map the vowels to the vowel + +00:11:22.639 --> 00:11:29.440 +chart vowels are typically voice but + +00:11:25.000 --> 00:11:31.440 +voiceless vowels do actually exist + +00:11:29.440 --> 00:11:33.360 +um so we've covered the basics of + +00:11:31.440 --> 00:11:35.160 +phonetics now we can move on to phology + +00:11:33.360 --> 00:11:37.360 +which is the study of the categorization + +00:11:35.160 --> 00:11:40.360 +of speech sounds or equivalent gestures + +00:11:37.360 --> 00:11:42.079 +and sign languages now in contrast to + +00:11:40.360 --> 00:11:44.720 +phonetics which deals with the physical + +00:11:42.079 --> 00:11:46.040 +properties of sounds regardless of their + +00:11:44.720 --> 00:11:48.079 +context regardless of what language + +00:11:46.040 --> 00:11:49.839 +you're actually speaking them in phology + +00:11:48.079 --> 00:11:51.959 +Now deals with abstract rules or + +00:11:49.839 --> 00:11:53.399 +constraints that govern interactions of + +00:11:51.959 --> 00:11:55.839 +sounds within a language and also like + +00:11:53.399 --> 00:11:57.920 +your mental reality of how you perceive + +00:11:55.839 --> 00:12:00.120 +sounds so some questions that + +00:11:57.920 --> 00:12:01.920 +phonologists might asks are are like + +00:12:00.120 --> 00:12:04.079 +what sounds are meaningfully distinct in + +00:12:01.920 --> 00:12:06.560 +a language how are sounds organized into + +00:12:04.079 --> 00:12:09.880 +syllables and then what rules govern + +00:12:06.560 --> 00:12:09.880 +allowable sequences of + +00:12:10.519 --> 00:12:17.680 +sounds so like I said before phones uh + +00:12:14.199 --> 00:12:20.040 +which are the uh like basic unit of + +00:12:17.680 --> 00:12:22.320 +sounds for phonetics they're individual + +00:12:20.040 --> 00:12:23.880 +speech sounds U but in phology what + +00:12:22.320 --> 00:12:26.360 +we're actually concerned with are things + +00:12:23.880 --> 00:12:28.839 +called phones and these are perceptually + +00:12:26.360 --> 00:12:31.320 +distinct units of sound in a language um + +00:12:28.839 --> 
00:12:34.480
+Phonemes are sounds that can distinguish one
+
+00:12:31.320 --> 00:12:36.680
+word from another. so in English, um, "pit"
+
+00:12:34.480 --> 00:12:38.800
+is a different word from "lit", so we can
+
+00:12:36.680 --> 00:12:42.600
+say that "p" and "l" are different
+
+00:12:38.800 --> 00:12:43.920
+phonemes. uh, if we gather a set of all of
+
+00:12:42.600 --> 00:12:45.800
+the sounds that can create these
+
+00:12:43.920 --> 00:12:48.519
+distinctions of meaning
+
+00:12:45.800 --> 00:12:50.199
+in a language, we have the phoneme inventory
+
+00:12:48.519 --> 00:12:52.839
+of that
+
+00:12:50.199 --> 00:12:54.880
+language.
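To make the minimal-pair test just described concrete, here is a tiny Python sketch (not from the lecture; the toy transcriptions, one symbol per segment, are invented for illustration):

```python
from itertools import combinations

def minimal_pairs(transcriptions):
    """Yield pairs of equal-length transcriptions that differ in exactly one segment."""
    for a, b in combinations(transcriptions, 2):
        if len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1:
            yield a, b

# Toy phonemic transcriptions, one character per segment.
lexicon = ["pɪt", "lɪt", "pæt", "bæt"]
for a, b in minimal_pairs(lexicon):
    print(a, "~", b)  # e.g. pɪt ~ lɪt: evidence that /p/ and /l/ contrast
```

Each pair that survives this filter is evidence that the two differing segments contrast, i.e. belong to different phonemes in that language.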
+00:12:52.839 --> 00:12:58.040
+so, a really fun fact here connecting phonetics and phonology and
+
+00:12:54.880 --> 00:12:59.240
+psycholinguistics is that over time, uh,
+
+00:12:58.040 --> 00:13:02.240
+with the languages that you speak
+
+00:12:59.240 --> 00:13:04.800
+regularly, we are conditioned, um, to limit
+
+00:13:02.240 --> 00:13:07.519
+our mental distinction of sounds — uh,
+
+00:13:04.800 --> 00:13:08.680
+as well as production, sometimes — to those
+
+00:13:07.519 --> 00:13:11.040
+that are distinct in our native
+
+00:13:08.680 --> 00:13:12.839
+languages, whereas babies can
+
+00:13:11.040 --> 00:13:14.560
+perceptually easily distinguish between
+
+00:13:12.839 --> 00:13:16.440
+all the different phones. and they did
+
+00:13:14.560 --> 00:13:19.279
+the study where they had, like,
+
+00:13:16.440 --> 00:13:22.120
+babies either changing attention or,
+
+00:13:19.279 --> 00:13:24.240
+like, sucking on a pacifier, and if their
+
+00:13:22.120 --> 00:13:25.839
+rate of sucking increased it means
+
+00:13:24.240 --> 00:13:27.399
+they've sensed a new thing in their
+
+00:13:25.839 --> 00:13:29.480
+environment. so they tested with a bunch
+
+00:13:27.399 --> 00:13:30.680
+of different phones, and they saw that
+
+00:13:29.480 --> 00:13:33.160
+the babies could distinguish
+
+00:13:30.680 --> 00:13:34.600
+between them. but especially if you're,
+
+00:13:33.160 --> 00:13:36.199
+like, not a native speaker of a tonal
+
+00:13:34.600 --> 00:13:37.680
+language and you try and distinguish
+
+00:13:36.199 --> 00:13:39.839
+between tones in Chinese or something,
+
+00:13:37.680 --> 00:13:42.480
+it's really hard to tell, um, and your
+
+00:13:39.839 --> 00:13:43.760
+brain has, like, learned to abstract away
+
+00:13:42.480 --> 00:13:46.000
+from all of those things that are not
+
+00:13:43.760 --> 00:13:47.560
+distinct in your language. um, but we can
+
+00:13:46.000 --> 00:13:50.759
+still relearn these things — it's just a
+
+00:13:47.560 --> 00:13:54.680
+fun, uh, language acquisition fact I threw
+
+00:13:50.759 --> 00:13:56.079
+in there. um, yeah, so let's run through an
+
+00:13:54.680 --> 00:13:58.560
+example — oh,
+
+00:13:56.079 --> 00:14:03.519
+yes? [Student question, partly inaudible:
+
+00:13:58.560 --> 00:14:03.519
+is it right to assume that phonemes are language-specific?]
+
+00:14:04.959 --> 00:14:13.680
+Mhm. okay, so how can we, like,
+
+00:14:09.639 --> 00:14:17.160
+formalize, um, uh, phonemes and how they
+
+00:14:13.680 --> 00:14:18.360
+operate with specific sounds, like phones?
+
+00:14:17.160 --> 00:14:22.360
+uh, let's run through an example in
+
+00:14:18.360 --> 00:14:24.440
+English. so a "p" with no puff of air and a "p"
+
+00:14:22.360 --> 00:14:26.480
+with a puff of air are two distinct
+
+00:14:24.440 --> 00:14:29.480
+phones that are actually used in English
+
+00:14:26.480 --> 00:14:30.839
+speech. so if you hold, like, your hand or
+
+00:14:29.480 --> 00:14:33.519
+a piece of paper up to your mouth and
+
+00:14:30.839 --> 00:14:35.720
+you say the word "spat", you probably won't
+
+00:14:33.519 --> 00:14:37.560
+feel much air come out of your mouth, but
+
+00:14:35.720 --> 00:14:39.880
+if you say the word "pat", there's like a
+
+00:14:37.560 --> 00:14:43.000
+bit of a
+
+00:14:39.880 --> 00:14:46.040
+puff. however, if we were to, like,
+
+00:14:43.000 --> 00:14:48.639
+manipulate a sound and swap the p
+
+00:14:46.040 --> 00:14:52.279
+with no puff of air, which is unaspirated,
+
+00:14:48.639 --> 00:14:53.680
+for the puff-of-air p, and vice versa, we
+
+00:14:52.279 --> 00:14:57.360
+wouldn't change the meaning of the word —
+
+00:14:53.680 --> 00:14:59.079
+like, I can say, um, "spat" and "spat", one
+
+00:14:57.360 --> 00:15:01.240
+with a puff and one without, and they
+
+00:14:59.079 --> 00:15:04.279
+mean the same thing to
+
+00:15:01.240 --> 00:15:06.680
+me. so what does this tell us? um, this
+
+00:15:04.279 --> 00:15:09.160
+shows that the p with no puff of air and
+
+00:15:06.680 --> 00:15:11.839
+the p with the puff of air are instances
+
+00:15:09.160 --> 00:15:14.800
+of the same phoneme. um, in other words, they
+
+00:15:11.839 --> 00:15:16.959
+are allophones in English. allophones are
+
+00:15:14.800 --> 00:15:19.560
+phones
+
+00:15:16.959 --> 00:15:22.560
+that map to the same phoneme. um, in other
+
+00:15:19.560 --> 00:15:24.639
+languages, though, uh, we can distinguish
+
+00:15:22.560 --> 00:15:26.880
+between these two sounds — like in Thai,
+
+00:15:24.639 --> 00:15:28.160
+the p with no puff of air and the p with
+
+00:15:26.880 --> 00:15:29.720
+the puff of air would actually change the
+
+00:15:28.160 --> 00:15:31.720
+meaning of a word,
+
+00:15:29.720 --> 00:15:34.720
+um, so their phoneme inventory would
+
+00:15:31.720 --> 00:15:34.720
+include both
+
+00:15:35.120 --> 00:15:43.920
+p's. um, and how these, uh, how these
+
+00:15:40.319 --> 00:15:46.319
+phonemes actually occur — like, what actual
+
+00:15:43.920 --> 00:15:49.279
+sound you make — is determined by the
+
+00:15:46.319 --> 00:15:51.759
+context that it is in. so whether this
+
+00:15:49.279 --> 00:15:54.040
+phoneme p is produced as a p without a puff
+
+00:15:51.759 --> 00:15:55.759
+or a p with a puff can be determined by
+
+00:15:54.040 --> 00:15:56.800
+the sounds that surround it, which we
+
+00:15:55.759 --> 00:15:59.160
+call its
+
+00:15:56.800 --> 00:16:02.079
+environment. so an observation that we
+
+00:15:59.160 --> 00:16:04.959
+can make is: generally, um, for standard
+
+00:16:02.079 --> 00:16:06.880
+American English, aspiration only occurs
+
+00:16:04.959 --> 00:16:09.199
+when this p phoneme is at the beginning of
+
+00:16:06.880 --> 00:16:10.399
+a stressed syllable. um, we can also see,
+
+00:16:09.199 --> 00:16:12.160
+though, that this happens with other
+
+00:16:10.399 --> 00:16:16.040
+sounds, like t and
+
+00:16:12.160 --> 00:16:18.519
+k. um, it turns out that p, t, and k form a
+
+00:16:16.040 --> 00:16:20.560
+very salient group of sounds called
+
+00:16:18.519 --> 00:16:23.199
+unvoiced stops, so we can write a
+
+00:16:20.560 --> 00:16:24.680
+phonological rule: these unvoiced stops
+
+00:16:23.199 --> 00:16:26.639
+will be aspirated at the beginning of a
+
+00:16:24.680 --> 00:16:29.639
+stressed syllable; otherwise they will be
+
+00:16:26.639 --> 00:16:31.440
+unaspirated.
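As a rough illustration of that rule — a sketch, not the lecture's material — assuming a toy representation where a word is a list of segments and "ˈ" marks the onset of a stressed syllable:

```python
UNVOICED_STOPS = {"p", "t", "k"}

def aspirate(segments):
    """Map phonemes to surface phones: an unvoiced stop becomes aspirated (+ʰ)
    right after a stressed-syllable marker "ˈ"; otherwise it is unchanged."""
    out = []
    for i, seg in enumerate(segments):
        if seg in UNVOICED_STOPS and i > 0 and segments[i - 1] == "ˈ":
            out.append(seg + "ʰ")  # aspirated allophone
        else:
            out.append(seg)
    return out

print(aspirate(["ˈ", "p", "æ", "t"]))       # ['ˈ', 'pʰ', 'æ', 't']   ~ "pat"
print(aspirate(["ˈ", "s", "p", "æ", "t"]))  # ['ˈ', 's', 'p', 'æ', 't'] ~ "spat"
```

The same underlying phoneme /p/ surfaces as two different phones purely as a function of its environment, which is exactly what the rule states.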
+00:16:29.639 --> 00:16:33.639
+okay, so we kind of ran through the basics
+
+00:16:31.440 --> 00:16:37.079
+of phonetics and phonology — what are some
+applications in NLP? um, a really cool one,
+
+00:16:37.079 --> 00:16:40.880
+which also ties into historical
+
+00:16:38.399 --> 00:16:43.880
+linguistics, is automatic proto-language
+
+00:16:40.880 --> 00:16:46.199
+reconstruction. um, over time, uh, because
+
+00:16:43.880 --> 00:16:49.680
+of, like, just the physical properties of
+
+00:16:46.199 --> 00:16:52.120
+your mouth and social factors, um, you
+
+00:16:49.680 --> 00:16:55.079
+will have phonological changes in how
+
+00:16:52.120 --> 00:16:57.079
+you produce words and sounds over
+
+00:16:55.079 --> 00:16:58.600
+generations, and this can give us clues
+
+00:16:57.079 --> 00:17:00.839
+as to how languages have evolved over
+
+00:16:58.600 --> 00:17:03.160
+time and how they're related. um, so there
+
+00:17:00.839 --> 00:17:05.799
+has been some work to, uh, uncover these
+
+00:17:03.160 --> 00:17:07.120
+types of patterns computationally. um,
+
+00:17:05.799 --> 00:17:08.559
+there's also really cool work on
+
+00:17:07.120 --> 00:17:10.760
+cognitive models of human speech
+
+00:17:08.559 --> 00:17:13.240
+production. so this recent work, uh, from
+
+00:17:10.760 --> 00:17:15.240
+Berkeley was training an unsupervised, uh,
+
+00:17:13.240 --> 00:17:17.959
+speech synthesis model which, instead of
+
+00:17:15.240 --> 00:17:19.679
+producing raw waveforms, uh, instead
+
+00:17:17.959 --> 00:17:21.799
+trained, uh, their model to produce
+
+00:17:19.679 --> 00:17:23.319
+human-like articulatory gestures based on,
+
+00:17:21.799 --> 00:17:26.120
+like, electrical signals that people have
+
+00:17:23.319 --> 00:17:27.559
+measured from people's mouths. um, it can
+
+00:17:26.120 --> 00:17:29.000
+also serve as a form of linguistic
+
+00:17:27.559 --> 00:17:31.120
+evaluation of things like phone
+
+00:17:29.000 --> 00:17:32.840
+embeddings — uh, if you have embeddings of
+
+00:17:31.120 --> 00:17:35.120
+phones, or the individual sounds, do they
+
+00:17:32.840 --> 00:17:37.799
+actually also encode phonological
+
+00:17:35.120 --> 00:17:39.360
+relations? and then finally, um, we can
+
+00:17:37.799 --> 00:17:41.280
+also incorporate phonetic information
+
+00:17:39.360 --> 00:17:43.039
+into word embeddings, so this can be
+
+00:17:41.280 --> 00:17:45.280
+applied to tasks like cognate and loan-
+
+00:17:43.039 --> 00:17:49.080
+word detection, um, multilingual named
+
+00:17:45.280 --> 00:17:53.200
+entity recognition, language ID,
+
+00:17:49.080 --> 00:17:55.080
+etc. okay, so now, popping one level up in
+
+00:17:53.200 --> 00:17:56.440
+abstraction, we can move on to subwords
+
+00:17:55.080 --> 00:17:59.080
+and
+
+00:17:56.440 --> 00:18:01.200
+constituents. so morphology is the
+
+00:17:59.080 --> 00:18:02.960
+study of word formation and structure,
+
+00:18:01.200 --> 00:18:04.679
+and if you've ever, like, armchair-
+
+00:18:02.960 --> 00:18:06.280
+philosophized or thought a little bit
+
+00:18:04.679 --> 00:18:09.520
+too hard about what you were saying, you
+
+00:18:06.280 --> 00:18:12.000
+might be like, um, what is a word? do words
+
+00:18:09.520 --> 00:18:14.080
+exist? um, and this is actually a very
+
+00:18:12.000 --> 00:18:15.360
+valid question — lots of linguists have
+
+00:18:14.080 --> 00:18:17.320
+thought about this, lots of linguists
+
+00:18:15.360 --> 00:18:18.720
+continue to debate about it. if you ask
+
+00:18:17.320 --> 00:18:20.640
+someone who's really opinionated, they
+
+00:18:18.720 --> 00:18:22.080
+will go for a very, very long time
+
+00:18:20.640 --> 00:18:24.080
+talking about whether or not they
+
+00:18:22.080 --> 00:18:25.960
+believe a word exists.
but for now, we're
+
+00:18:24.080 --> 00:18:28.520
+going to forgo that debate and just go
+
+00:18:25.960 --> 00:18:30.520
+with our intuitions for what a word is.
+
+00:18:28.520 --> 00:18:33.600
+um, words are formed from linguistic
+
+00:18:30.520 --> 00:18:35.840
+units, um, called morphemes. um, and like
+
+00:18:33.600 --> 00:18:38.320
+how a phone was, like, the smallest unit
+
+00:18:35.840 --> 00:18:40.159
+we would study in phonology, a morpheme is the
+
+00:18:38.320 --> 00:18:42.159
+smallest unit we study in
+
+00:18:40.159 --> 00:18:44.400
+morphology. um, these are the smallest
+
+00:18:42.159 --> 00:18:46.159
+meaningful linguistic units, so I can
+
+00:18:44.400 --> 00:18:48.799
+break the word "morphology" down into two
+
+00:18:46.159 --> 00:18:51.159
+morphemes: "morph", which means form and
+
+00:18:48.799 --> 00:18:53.679
+shape, and "ology", the study
+
+00:18:51.159 --> 00:18:55.280
+of. um, so one thing I'd like to note is
+
+00:18:53.679 --> 00:18:57.159
+that in this lecture most of my examples
+
+00:18:55.280 --> 00:18:59.400
+are going to be English, because I know
+
+00:18:57.159 --> 00:19:02.000
+all of us speak English. um, but English
+
+00:18:59.400 --> 00:19:04.000
+morphology is super duper boring, um, so
+
+00:19:02.000 --> 00:19:05.679
+you can check out some really cool, fun
+
+00:19:04.000 --> 00:19:07.159
+polysynthetic languages, or ones that
+
+00:19:05.679 --> 00:19:09.280
+have a lot of, like, morphological
+
+00:19:07.159 --> 00:19:11.240
+processes that go on, um, including many
+
+00:19:09.280 --> 00:19:13.640
+indigenous American languages, uh, for
+
+00:19:11.240 --> 00:19:17.080
+more fun
+
+00:19:13.640 --> 00:19:19.520
+examples. yeah, so for the most part we
+
+00:19:17.080 --> 00:19:20.840
+can break down morphemes into two
+
+00:19:19.520 --> 00:19:22.840
+categories based on the following
+
+00:19:20.840 --> 00:19:25.320
+properties. first we can ask: can a
+
+00:19:22.840 --> 00:19:27.200
+morpheme occur by itself? um, if a
+
+00:19:25.320 --> 00:19:28.440
+morpheme can occur by itself, then it's
+
+00:19:27.200 --> 00:19:30.480
+basically a word, and we call it a free
+
+00:19:28.440 --> 00:19:33.880
+morpheme, um, but if it can't, then it's
+
+00:19:30.480 --> 00:19:36.760
+bound. so like in "dogs", uh, we have a free
+
+00:19:33.880 --> 00:19:38.320
+morpheme, "dog", and a bound morpheme for
+
+00:19:36.760 --> 00:19:40.400
+the plural "s", because that just can't
+
+00:19:38.320 --> 00:19:43.200
+occur on its own. but we can also form
+
+00:19:40.400 --> 00:19:44.679
+words that are composed of all bound
+
+00:19:43.200 --> 00:19:47.919
+morphemes, like
+
+00:19:44.679 --> 00:19:49.880
+"multilingual". um, another thing we can ask
+
+00:19:47.919 --> 00:19:52.600
+is whether or not it comprises the main
+
+00:19:49.880 --> 00:19:54.720
+meaning of the word. um, if it does, it's
+
+00:19:52.600 --> 00:19:57.799
+the root; if it's not, it's an affix. so like
+
+00:19:54.720 --> 00:19:59.240
+in "dogs", um, the affix on its own just
+
+00:19:57.799 --> 00:20:01.280
+indicates that it's a plural, but not
+
+00:19:59.240 --> 00:20:06.240
+really of what, so the main meaning is
+
+00:20:01.280 --> 00:20:08.880
+with "dog". um, another, like, weird thing is
+
+00:20:06.240 --> 00:20:11.880
+cranberry morphemes. so if we try and split
+
+00:20:08.880 --> 00:20:14.400
+"cran" and "berry", "cran" doesn't really mean
+
+00:20:11.880 --> 00:20:16.360
+anything, um, so it's a bound morpheme
+
+00:20:14.400 --> 00:20:17.720
+without, like, a real meaning, which kind
+
+00:20:16.360 --> 00:20:19.880
+of contradicts what I said about
a
+
+00:20:17.720 --> 00:20:21.880
+morpheme having a meaning. um, but this type
+
+00:20:19.880 --> 00:20:23.799
+of thing comes about from historical
+
+00:20:21.880 --> 00:20:26.480
+change — so "cran" actually did have a
+
+00:20:23.799 --> 00:20:29.960
+meaning back in the day, um, but it no
+
+00:20:26.480 --> 00:20:32.400
+longer does now.
+
+00:20:29.960 --> 00:20:34.840
+so one type of morphological process is
+
+00:20:32.400 --> 00:20:37.440
+inflection, which can create a new form
+
+00:20:34.840 --> 00:20:39.159
+of the same word. um, basically, the main
+
+00:20:37.440 --> 00:20:40.720
+concept and meaning of the word will
+
+00:20:39.159 --> 00:20:42.840
+remain the same, but we're basically, like,
+
+00:20:40.720 --> 00:20:45.280
+flipping a switch for some grammatical
+
+00:20:42.840 --> 00:20:47.200
+feature
+
+00:20:45.280 --> 00:20:50.159
+in the word. so like in the "dogs"
+
+00:20:47.200 --> 00:20:53.400
+example, appending an "s" makes it plural;
+
+00:20:50.159 --> 00:20:58.880
+uh, for person, I can go from "I run" to "he runs";
+
+00:20:53.400 --> 00:20:58.880
+tense, "I climb", "I climbed", etc.
+
+00:20:59.039 --> 00:21:03.440
+in contrast, we have processes of word
+
+00:21:01.120 --> 00:21:05.440
+formation. so the first of these is
+
+00:21:03.440 --> 00:21:07.440
+derivation — it's a process that creates a
+
+00:21:05.440 --> 00:21:09.600
+semantically related new word by
+
+00:21:07.440 --> 00:21:12.120
+operating on a base form, often through
+
+00:21:09.600 --> 00:21:13.679
+things like affixation. so now the main
+
+00:21:12.120 --> 00:21:15.600
+concept and meaning of the word is going
+
+00:21:13.679 --> 00:21:18.039
+to change, and oftentimes part of speech
+
+00:21:15.600 --> 00:21:21.120
+will change too. so you have "to teach", a
+
+00:21:18.039 --> 00:21:23.960
+verb, to "teacher", uh, someone who teaches;
+
+00:21:21.120 --> 00:21:26.200
+uh, "intense" to "intensify"; "easy", "easily". but
+
+00:21:23.960 --> 00:21:28.960
+we can also have derivations that don't
+
+00:21:26.200 --> 00:21:31.120
+change part of speech, like "unlucky"
+
+00:21:28.960 --> 00:21:32.039
+and "lucky", where they now kind of have
+
+00:21:31.120 --> 00:21:35.200
+opposite
+
+00:21:32.039 --> 00:21:37.120
+meanings. another word formation strategy
+
+00:21:35.200 --> 00:21:39.159
+is compounding, and this is a process
+
+00:21:37.120 --> 00:21:41.880
+that creates semantically new words by
+
+00:21:39.159 --> 00:21:44.559
+combining two already separate words,
+
+00:21:41.880 --> 00:21:46.240
+like "blackbird", "ice cream", "skyscraper". um,
+
+00:21:44.559 --> 00:21:49.080
+German is infamous for this, so this, like,
+
+00:21:46.240 --> 00:21:51.240
+super long example here, um, means "cattle
+
+00:21:49.080 --> 00:21:55.480
+marking and beef labeling supervision
+
+00:21:51.240 --> 00:21:58.360
+duties delegation law". um, yeah, that's
+
+00:21:55.480 --> 00:22:01.200
+fun. um, so all of the things I've shown
+
+00:21:58.360 --> 00:22:04.200
+you, especially with the English examples,
+
+00:22:01.200 --> 00:22:06.159
+are pretty simple — we just, like,
+
+00:22:04.200 --> 00:22:09.480
+attach things sequentially to the root —
+
+00:22:06.159 --> 00:22:12.120
+boom, word. um, but not all morphological
+
+00:22:09.480 --> 00:22:13.600
+processes are this straightforward. so, um,
+
+00:22:12.120 --> 00:22:15.600
+in English we do have something called
+
+00:22:13.600 --> 00:22:18.000
+apophony, um — so this is like a vowel
+
+00:22:15.600 --> 00:22:20.039
+change: "tooth" to "teeth" plural, "goose" to
+
+00:22:18.000 --> 00:22:24.279
+"geese" — unfortunately not
"mice" to "meese", but
+
+00:22:20.039 --> 00:22:25.960
+that would be fun. um, infixation — also fun;
+
+00:22:24.279 --> 00:22:28.880
+there's a fun example in English, you can
+
+00:22:25.960 --> 00:22:32.480
+read it, I won't read it out loud. um,
+
+00:22:28.880 --> 00:22:35.440
+transfixation, uh, is when you have, like,
+
+00:22:32.480 --> 00:22:38.559
+uh, some root, but it's actually split
+
+00:22:35.440 --> 00:22:40.000
+when you put the new, um, affix in there, so
+
+00:22:38.559 --> 00:22:43.039
+this happens with Arabic and Hebrew
+
+00:22:40.000 --> 00:22:45.440
+roots. reduplication — um, in Indonesian
+
+00:22:43.039 --> 00:22:49.080
+this happens pretty often; um, we have the
+
+00:22:45.440 --> 00:22:51.919
+verb "berjalan", to walk, um, and then we
+
+00:22:49.080 --> 00:22:53.400
+reduplicate the root to become "to
+
+00:22:51.919 --> 00:22:55.440
+stroll",
+
+00:22:53.400 --> 00:22:57.039
+"berjalan-jalan". uh, and then there's also a lot
+
+00:22:55.440 --> 00:22:59.640
+of other processes that I haven't listed
+
+00:22:57.039 --> 00:23:03.080
+here.
+
+00:22:59.640 --> 00:23:04.799
+um, computationally, we have, uh, useful
+
+00:23:03.080 --> 00:23:07.120
+tools for this called morphological
+
+00:23:04.799 --> 00:23:09.279
+analyzers, which take as input a word
+
+00:23:07.120 --> 00:23:11.799
+form and then output all possible
+
+00:23:09.279 --> 00:23:14.039
+morphological, uh, parses of that
+
+00:23:11.799 --> 00:23:16.440
+word. um, so traditionally this is done
+
+00:23:14.039 --> 00:23:18.880
+with FSTs, um, and it's a two-step
+
+00:23:16.440 --> 00:23:21.840
+creation process — it's actually pretty, uh,
+
+00:23:18.880 --> 00:23:24.039
+arduous. first you've got to map your lemma
+
+00:23:21.840 --> 00:23:26.320
+and a morphosyntactic description of
+
+00:23:24.039 --> 00:23:29.080
+the morphemes to an intermediate form
+
+00:23:26.320 --> 00:23:32.440
+that represents, like, the basic morpheme
+
+00:23:29.080 --> 00:23:35.559
+representation of, uh, that label. so for
+
+00:23:32.440 --> 00:23:38.039
+example, you have, like, "bus" the lemma, "PL"
+
+00:23:35.559 --> 00:23:40.960
+for plural, for that description, and then
+
+00:23:38.039 --> 00:23:42.679
+that maps to "bus" and then "s" as the
+
+00:23:40.960 --> 00:23:45.159
+canonical morpheme representation of the
+
+00:23:42.679 --> 00:23:48.000
+plural. and then you have another one
+
+00:23:45.159 --> 00:23:49.600
+that maps from intermediate form to
+
+00:23:48.000 --> 00:23:52.799
+surface form according to some, like,
+
+00:23:49.600 --> 00:23:55.520
+orthographic rules, uh, phonological
+
+00:23:52.799 --> 00:23:58.200
+rules, etc. so now you go from "bus"
+
+00:23:55.520 --> 00:23:59.840
+with, uh, with the plural "s", but it's
+
+00:23:58.200 --> 00:24:03.000
+actually written out as
+
+00:23:59.840 --> 00:24:06.000
+"buses". um, you put these together, and then
+
+00:24:03.000 --> 00:24:08.400
+you can have as input your "bus PL" and
+
+00:24:06.000 --> 00:24:10.440
+then get as output "buses", but you can
+
+00:24:08.400 --> 00:24:12.640
+also invert it, so now you can use it as
+
+00:24:10.440 --> 00:24:15.400
+an analyzer. um, and some tools to
+
+00:24:12.640 --> 00:24:17.600
+construct these types of FSTs are
+
+00:24:15.400 --> 00:24:21.559
+Foma, rustfst, and OpenFst.
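The real systems compose finite-state transducers with tools like Foma or OpenFst; as a loose, dictionary-based Python sketch of the same two-step generate-then-invert idea (everything below is invented for illustration, not the lecture's code):

```python
# Step 1: lemma + morphosyntactic description -> intermediate form,
# with "^" separating canonical morphemes ("bus" + PL -> "bus^s").
def to_intermediate(lemma, msd):
    return lemma + "^s" if msd == "PL" else lemma

# Step 2: intermediate form -> surface form, via a toy orthographic rule
# (insert "e" before the plural "s" after a sibilant: bus -> buses).
def to_surface(intermediate):
    stem, _, suffix = intermediate.partition("^")
    if suffix == "s" and stem.endswith(("s", "x", "z", "sh", "ch")):
        return stem + "es"
    return stem + suffix

def generate(lemma, msd):
    return to_surface(to_intermediate(lemma, msd))

# "Invert" the composed mapping by brute force to get an analyzer.
def analyze(surface, lexicon=("bus", "dog")):
    return [(lemma, msd) for lemma in lexicon for msd in ("SG", "PL")
            if generate(lemma, msd) == surface]

print(generate("bus", "PL"))  # buses
print(analyze("buses"))       # [('bus', 'PL')]
```

An FST gives you the same generator/analyzer pair declaratively and efficiently, without enumerating the lexicon by brute force.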
+00:24:17.600 --> 00:24:24.720
+um, but obviously we don't really use
+
+00:24:21.559 --> 00:24:27.720
+a ton of FSTs anymore in modern NLP, um,
+
+00:24:24.720 --> 00:24:29.559
+so more recently there are neural models,
+
+00:24:27.720 --> 00:24:31.919
+um, that, uh — like, sequence-to-sequence
+
+00:24:29.559 --> 00:24:35.200
+models that just do this with the word
+
+00:24:31.919 --> 00:24:36.919
+as a raw input and the analysis as the
+
+00:24:35.200 --> 00:24:39.919
+output. but we can still combine
+
+00:24:36.919 --> 00:24:42.159
+approaches, so you can combine an FST
+
+00:24:39.919 --> 00:24:43.799
+with your predetermined lexicon
+
+00:24:42.159 --> 00:24:46.559
+with a neural guesser that can handle
+
+00:24:43.799 --> 00:24:48.360
+unseen word forms. um, we can also use
+
+00:24:46.559 --> 00:24:50.480
+FSTs to generate additional training
+
+00:24:48.360 --> 00:24:54.080
+data, uh, that can be used as input to
+
+00:24:50.480 --> 00:24:56.399
+your neural model. um — even though, like — we,
+
+00:24:54.080 --> 00:24:58.919
+or I, wouldn't guarantee that not all of
+
+00:24:56.399 --> 00:25:00.200
+us use FSTs, but, like, more generally the
+
+00:24:58.919 --> 00:25:02.120
+NLP community doesn't really deal with
+
+00:25:00.200 --> 00:25:04.080
+them anymore — these tools are really,
+
+00:25:02.120 --> 00:25:05.840
+really useful for low-resource languages
+
+00:25:04.080 --> 00:25:10.559
+and for annotation of those low-resource
+
+00:25:05.840 --> 00:25:14.200
+languages, um, like in this example for, uh,
+
+00:25:10.559 --> 00:25:17.760
+Yupik. okay, so we've gone — yeah, sorry, one
+
+00:25:14.200 --> 00:25:19.600
+quick followup. um — like, Lindia
+
+00:25:17.760 --> 00:25:22.399
+pointed out that English morphology is
+
+00:25:19.600 --> 00:25:24.279
+really boring; um, Chinese morphology is
+
+00:25:22.399 --> 00:25:27.960
+even more boring, uh, so if you speak
+
+00:25:24.279 --> 00:25:29.600
+Chinese, uh, you also don't, you know,
+
+00:25:27.960 --> 00:25:31.480
+deal with a lot of morphology. but most
+
+00:25:29.600 --> 00:25:33.840
+other languages in the world have more
+
+00:25:31.480 --> 00:25:35.640
+complex morphology than English, and
+
+00:25:33.840 --> 00:25:38.679
+especially the ones that — if you could go
+
+00:25:35.640 --> 00:25:41.960
+back a few slides — the ones that have,
+
+00:25:38.679 --> 00:25:43.640
+like, infixation, for example, um, where
+
+00:25:41.960 --> 00:25:46.919
+you're changing the characters in, like,
+
+00:25:43.640 --> 00:25:49.000
+the middle of the word —
+
+00:25:46.919 --> 00:25:50.679
+actually, they break some of the
+
+00:25:49.000 --> 00:25:53.640
+underlying assumptions that we have in
+
+00:25:50.679 --> 00:25:57.760
+our, like, neural models nowadays. like, for
+
+00:25:53.640 --> 00:26:00.279
+example, we're using BPE or something, um,
+
+00:25:57.760 --> 00:26:01.960
+or SentencePiece or something, to split words.
+
+00:26:00.279 --> 00:26:03.919
+that works really well in English, where
+
+00:26:01.960 --> 00:26:05.279
+we mostly have concatenative morphology,
+
+00:26:03.919 --> 00:26:06.840
+where you just stick two things together,
+
+00:26:05.279 --> 00:26:08.320
+but it doesn't work well when you're
+
+00:26:06.840 --> 00:26:11.080
+inserting characters in the middle of
+
+00:26:08.320 --> 00:26:13.000
+the words. so, like, it's kind of
+
+00:26:11.080 --> 00:26:14.360
+interesting to know these differences
+
+00:26:13.000 --> 00:26:15.919
+from the point of view of modeling, if
+
+00:26:14.360 --> 00:26:17.360
+you're modeling one of these languages,
+
+00:26:15.919 --> 00:26:18.679
+because that actually becomes a really
+
+00:26:17.360 --> 00:26:20.120
+big problem if you start doing, like,
+
+00:26:18.679 --> 00:26:22.360
+Arabic or something like that and you
+
+00:26:20.120 --> 00:26:24.000
+just use our existing models. so that's
+
+00:26:22.360 --> 00:26:25.919
+just
another point about why knowing + +00:26:24.000 --> 00:26:28.440 +this is important + +00:26:25.919 --> 00:26:30.880 +here yeah and if you want to know more + +00:26:28.440 --> 00:26:33.440 +David teaches a really cool class on + +00:26:30.880 --> 00:26:37.320 +subwords or we deal you deal with like + +00:26:33.440 --> 00:26:40.799 +tokenization stuff and morphological + +00:26:37.320 --> 00:26:43.120 +processes um all right so we've covered + +00:26:40.799 --> 00:26:45.039 +words now let's put them together um + +00:26:43.120 --> 00:26:48.200 +syntax is a study of how words form + +00:26:45.039 --> 00:26:50.520 +phrases and sentences so like a question + +00:26:48.200 --> 00:26:52.480 +that a syntax might ask is like what are + +00:26:50.520 --> 00:26:54.240 +the principles governing phrase and + +00:26:52.480 --> 00:26:57.440 +sentence structure within a language and + +00:26:54.240 --> 00:26:59.960 +then also across languages so aspects of + +00:26:57.440 --> 00:27:02.200 +syntax include word order like does your + +00:26:59.960 --> 00:27:04.200 +subject uh come before your verb and + +00:27:02.200 --> 00:27:05.039 +Then followed by your object or some + +00:27:04.200 --> 00:27:07.360 +other + +00:27:05.039 --> 00:27:09.440 +combination um agreement like subject + +00:27:07.360 --> 00:27:11.640 +verb agreement and then also what is the + +00:27:09.440 --> 00:27:14.080 +nature of the hierarchical structure of + +00:27:11.640 --> 00:27:16.159 +the syntax um and then I have a fun + +00:27:14.080 --> 00:27:17.720 +example from Twitter about how some + +00:27:16.159 --> 00:27:21.679 +English sentences look like nine + +00:27:17.720 --> 00:27:25.320 +consecutive nouns um which I thought was + +00:27:21.679 --> 00:27:27.919 +funny um so words um like there's a lot + +00:27:25.320 --> 00:27:29.720 +of categorization um in this lecture and + +00:27:27.919 --> 00:27:31.559 +where are no exception we can categorize + +00:27:29.720 --> 00:27:34.279 +them based on their morphological + +00:27:31.559 --> 00:27:36.159 +syntactic and semantic properties and we + +00:27:34.279 --> 00:27:37.880 +refer to these categories as parts of + +00:27:36.159 --> 00:27:40.960 +speech like nouns verbs and adjectives + +00:27:37.880 --> 00:27:42.519 +I'm sure you all are very familiar um + +00:27:40.960 --> 00:27:45.399 +however one thing to note is that this + +00:27:42.519 --> 00:27:47.320 +categorization is like not not a very + +00:27:45.399 --> 00:27:49.320 +strict one at all um and it should not + +00:27:47.320 --> 00:27:51.279 +be taken for granted as you study like + +00:27:49.320 --> 00:27:52.880 +more and more complex or languages that + +00:27:51.279 --> 00:27:54.519 +are just completely different from + +00:27:52.880 --> 00:27:57.120 +English you'll realize that some of + +00:27:54.519 --> 00:28:00.440 +these boundaries between like nouns and + +00:27:57.120 --> 00:28:03.440 +adjectives or verbs and some nouns is + +00:28:00.440 --> 00:28:05.159 +actually really really blurry um so even + +00:28:03.440 --> 00:28:07.279 +though like this this is very like you + +00:28:05.159 --> 00:28:09.120 +know announ is a person place or thing + +00:28:07.279 --> 00:28:10.840 +and a verb is like an action it actually + +00:28:09.120 --> 00:28:13.559 +is a bit more complicated when you + +00:28:10.840 --> 00:28:15.120 +factor like morphosyntax into it so keep + +00:28:13.559 --> 00:28:18.000 +that in + +00:28:15.120 --> 00:28:20.399 +mind um a very broad distinction we can + +00:28:18.000 --> 00:28:22.279 +make over all 
sorts of words is whether + +00:28:20.399 --> 00:28:25.480 +it's an Open Class word or a closed + +00:28:22.279 --> 00:28:28.080 +class word so open classes of words are + +00:28:25.480 --> 00:28:30.440 +um classes where we can add new items + +00:28:28.080 --> 00:28:32.399 +items easily over time um and with + +00:28:30.440 --> 00:28:35.320 +relative ease so if any of you guys are + +00:28:32.399 --> 00:28:37.840 +like online uh we have a new word like + +00:28:35.320 --> 00:28:40.399 +Riz derived from Charisma and it can be + +00:28:37.840 --> 00:28:42.480 +a noun like oh he has so much RZ or it + +00:28:40.399 --> 00:28:45.840 +can be a verb like oh he rised her up + +00:28:42.480 --> 00:28:47.600 +you know fun um and then closed class + +00:28:45.840 --> 00:28:49.640 +words you have a much smaller number of + +00:28:47.600 --> 00:28:52.840 +words and it's a lot harder to add new + +00:28:49.640 --> 00:28:55.000 +items over time um one exception to this + +00:28:52.840 --> 00:28:56.679 +though recently is with pronouns like + +00:28:55.000 --> 00:28:59.200 +people are a bit more productive and how + +00:28:56.679 --> 00:29:03.559 +they use pronouns a bit more flexible + +00:28:59.200 --> 00:29:05.640 +at least in English in in the US um and + +00:29:03.559 --> 00:29:08.440 +even based on how words act in context + +00:29:05.640 --> 00:29:09.919 +we can often infer um the part of speech + +00:29:08.440 --> 00:29:12.120 +even though we've never seen it before + +00:29:09.919 --> 00:29:15.240 +um let me move + +00:29:12.120 --> 00:29:16.840 +this um as in this example which you + +00:29:15.240 --> 00:29:18.399 +might have seen if you've ever taken a + +00:29:16.840 --> 00:29:20.720 +Linguistics class it's like everyone's + +00:29:18.399 --> 00:29:22.960 +favorite part of speech example um it's + +00:29:20.720 --> 00:29:24.880 +by this poem called Jabberwocky by Lis + +00:29:22.960 --> 00:29:27.279 +Caroll where he has a bunch of nons + +00:29:24.880 --> 00:29:29.120 +words um but we can kind of tell even + +00:29:27.279 --> 00:29:30.799 +though we've never seen the word before + +00:29:29.120 --> 00:29:33.960 +what its function in the sentence is + +00:29:30.799 --> 00:29:35.720 +like um all Mimsy were the borov like + +00:29:33.960 --> 00:29:38.559 +borgov has to be a noun here it has to + +00:29:35.720 --> 00:29:40.080 +be something and they were being Mimsy + +00:29:38.559 --> 00:29:42.840 +whatever that + +00:29:40.080 --> 00:29:45.360 +means um so yeah here's a list of + +00:29:42.840 --> 00:29:47.840 +canonical parts of speech um sometimes + +00:29:45.360 --> 00:29:49.320 +based on a linguist desired annotations + +00:29:47.840 --> 00:29:51.519 +we can get more narrow than this but + +00:29:49.320 --> 00:29:53.640 +this is like pretty standard and then + +00:29:51.519 --> 00:29:55.640 +like a sentence annotated for each part + +00:29:53.640 --> 00:29:58.840 +of speech like they is a pronoun had is + +00:29:55.640 --> 00:30:01.159 +an auxiliary verb argued as a verb etc + +00:29:58.840 --> 00:30:03.360 +etc um I won't spend too much time on + +00:30:01.159 --> 00:30:05.440 +this because um I think it's a bit too + +00:30:03.360 --> 00:30:08.559 +nitty-gritty in the + +00:30:05.440 --> 00:30:10.600 +details um so a big part of syntax is + +00:30:08.559 --> 00:30:12.760 +phrases um like what are the types of + +00:30:10.600 --> 00:30:14.960 +phrases we have how are they formed um + +00:30:12.760 --> 00:30:17.080 +here are three very very basic ones so a + +00:30:14.960 --> 00:30:19.120 
+noun phrase obviously as a name suggests + +00:30:17.080 --> 00:30:20.720 +contains a noun but it can also include + +00:30:19.120 --> 00:30:22.519 +a determiner to tell you like what set + +00:30:20.720 --> 00:30:24.159 +of nouns are you referring to and then + +00:30:22.519 --> 00:30:26.760 +also things that modify the noun like + +00:30:24.159 --> 00:30:29.080 +adjectives so here is a noun phrase the + +00:30:26.760 --> 00:30:31.320 +old man old man is also a noun phrase + +00:30:29.080 --> 00:30:33.120 +man is also a noun phrase a + +00:30:31.320 --> 00:30:36.080 +prepositional phrase um has a + +00:30:33.120 --> 00:30:38.159 +preposition followed by a noun phrase uh + +00:30:36.080 --> 00:30:39.720 +some preposition phrases we can extend + +00:30:38.159 --> 00:30:42.159 +the rules to be more complicated but + +00:30:39.720 --> 00:30:44.600 +here's a very simple example like to + +00:30:42.159 --> 00:30:46.799 +school um and then verb phrases contain + +00:30:44.600 --> 00:30:48.159 +a verb than any other noun phrase or + +00:30:46.799 --> 00:30:51.200 +prepositional phrase that the verb + +00:30:48.159 --> 00:30:53.559 +requires or has a slot for um my video + +00:30:51.200 --> 00:30:56.760 +keeps getting in the way of my + +00:30:53.559 --> 00:30:58.600 +slides um so as well as any other + +00:30:56.760 --> 00:31:01.960 +adverbial modifiers like + +00:30:58.600 --> 00:31:05.000 +uh sold a car to me very + +00:31:01.960 --> 00:31:07.200 +simple so constituents consist of at + +00:31:05.000 --> 00:31:10.120 +least one contiguous word and behaves as + +00:31:07.200 --> 00:31:13.720 +a single unit um this is like one + +00:31:10.120 --> 00:31:16.200 +theoretical unit that is very um + +00:31:13.720 --> 00:31:17.799 +important in generative syntax so let's + +00:31:16.200 --> 00:31:20.360 +look at this example Beyonce released a + +00:31:17.799 --> 00:31:21.720 +new country album a new country album as + +00:31:20.360 --> 00:31:24.080 +we saw in the previous slide is an out + +00:31:21.720 --> 00:31:26.039 +phrase um a crucial observation to make + +00:31:24.080 --> 00:31:27.639 +is that we can continually replace a new + +00:31:26.039 --> 00:31:28.519 +country album with smaller and smaller + +00:31:27.639 --> 00:31:30.360 +units + +00:31:28.519 --> 00:31:33.720 +all the way down to the word level like + +00:31:30.360 --> 00:31:35.720 +Beyonce released a new language model um + +00:31:33.720 --> 00:31:39.399 +even shorter Beyonce released a balloon + +00:31:35.720 --> 00:31:41.600 +or Beyonce released Lantern flies so um + +00:31:39.399 --> 00:31:43.320 +yeah we can we can see that all of these + +00:31:41.600 --> 00:31:46.200 +things like act as a single unit and + +00:31:43.320 --> 00:31:49.919 +they all act in very similar + +00:31:46.200 --> 00:31:51.720 +ways um so as a followup to that some + +00:31:49.919 --> 00:31:55.600 +people have developed a theory of + +00:31:51.720 --> 00:31:57.880 +language in the context of context free + +00:31:55.600 --> 00:31:59.039 +grammars um in languis spe specific + +00:31:57.880 --> 00:32:01.120 +speically these are called phrase + +00:31:59.039 --> 00:32:03.399 +structure grammars and it's introduced + +00:32:01.120 --> 00:32:05.679 +by no trky who I think actually gets a + +00:32:03.399 --> 00:32:07.039 +lot of flack from NLP but people need to + +00:32:05.679 --> 00:32:08.679 +realize he was actually really really + +00:32:07.039 --> 00:32:10.240 +important for Linguistics like he has + +00:32:08.679 --> 00:32:13.080 +some pretty good ideas 
even if not all
+
+00:32:10.240 --> 00:32:15.799
+of them are correct. um, so Noam Chomsky
+
+00:32:13.080 --> 00:32:18.240
+defined a phrase structure grammar as, um,
+
+00:32:15.799 --> 00:32:19.960
+having a finite vocabulary, a finite set
+
+00:32:18.240 --> 00:32:21.320
+of strings that are part of this
+
+00:32:19.960 --> 00:32:23.320
+vocabulary, and then a finite set of
+
+00:32:21.320 --> 00:32:24.919
+rules that operate on the vocabulary to
+
+00:32:23.320 --> 00:32:28.480
+produce
+
+00:32:24.919 --> 00:32:30.600
+more strings. and we can use
+
+00:32:28.480 --> 00:32:32.639
+these rules over and over and over again,
+
+00:32:30.600 --> 00:32:35.240
+recursively, to create arbitrarily long
+
+00:32:32.639 --> 00:32:37.320
+strings that are still parsable. um, so
+
+00:32:35.240 --> 00:32:39.799
+here are some very simple phrase
+
+00:32:37.320 --> 00:32:42.600
+structure rules for English: a sentence
+
+00:32:39.799 --> 00:32:45.559
+is defined to be a noun phrase followed
+
+00:32:42.600 --> 00:32:47.080
+by a verb phrase; a noun phrase can be, uh,
+
+00:32:45.559 --> 00:32:49.240
+constructed by having an optional
+
+00:32:47.080 --> 00:32:51.440
+determiner and another noun phrase; and
+
+00:32:49.240 --> 00:32:53.919
+then we can take that noun phrase, um, and
+
+00:32:51.440 --> 00:32:55.919
+decompose it into another — like, an
+
+00:32:53.919 --> 00:32:59.840
+optional adjective phrase, a noun, a
+
+00:32:55.919 --> 00:32:59.840
+prepositional phrase — so on and so forth.
+
+00:33:00.399 --> 00:33:04.279
+now, using such a set of rules in our phrase
+
+00:33:02.600 --> 00:33:06.480
+structure grammar, we can generate lots
+
+00:33:04.279 --> 00:33:08.519
+and lots of English sentences, including
+
+00:33:06.480 --> 00:33:10.679
+those that are syntactically proper even
+
+00:33:08.519 --> 00:33:12.960
+if they are semantically nonsensical. so,
+
+00:33:10.679 --> 00:33:15.240
+like, playing Mad Libs with these phrase
+
+00:33:12.960 --> 00:33:17.120
+structure rules, a very famous example
+
+00:33:15.240 --> 00:33:19.080
+from Chomsky is "colorless green ideas
+
+00:33:17.120 --> 00:33:20.880
+sleep furiously" — doesn't really mean
+
+00:33:19.080 --> 00:33:22.240
+anything, but as a native English speaker
+
+00:33:20.880 --> 00:33:24.440
+you're like, you know, it sounds right
+
+00:33:22.240 --> 00:33:26.679
+even if I don't know what it means. um,
+
+00:33:24.440 --> 00:33:28.760
+which is a really cool observation — like,
+
+00:33:26.679 --> 00:33:30.840
+uh, speakers have an intuition for when things
+
+00:33:28.760 --> 00:33:32.600
+sound syntactically correct even if they
+
+00:33:30.840 --> 00:33:34.720
+have no
+
+00:33:32.600 --> 00:33:36.600
+meaning.
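As a quick illustration of that "Mad Libs" generation — a sketch, not from the lecture; the mini-grammar and lexicon are invented:

```python
import random

# Toy phrase structure rules (S -> NP VP, NP -> Det (Adj) N, VP -> V NP)
# plus an invented mini-lexicon for the terminal categories.
RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "Adj", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "Adj": [["colorless"], ["green"]],
    "N":   [["ideas"], ["girl"], ["album"]],
    "V":   [["released"], ["saw"]],
}

def expand(symbol):
    """Recursively rewrite a symbol until only terminal words remain."""
    if symbol not in RULES:
        return [symbol]
    production = random.choice(RULES[symbol])
    return [word for child in production for word in expand(child)]

print(" ".join(expand("S")))  # e.g. "a colorless girl saw the green ideas"
```

Because the rules never consult meaning, the generator happily produces grammatical nonsense in exactly the "colorless green ideas" spirit.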
+00:33:34.720 --> 00:33:39.559
+um, but if you're wondering — you know, all these,
+
+00:33:36.600 --> 00:33:41.519
+like, sets of rules seem way too simple; like, people mess
+
+00:33:39.559 --> 00:33:43.399
+things up all the time, or there are lots
+
+00:33:41.519 --> 00:33:45.399
+of other constructions that can't be
+
+00:33:43.399 --> 00:33:48.200
+explained very easily by these rules —
+
+00:33:45.399 --> 00:33:49.600
+well, yeah, you're right. um, some phenomena
+
+00:33:48.200 --> 00:33:52.720
+are very difficult to model in this
+
+00:33:49.600 --> 00:33:55.639
+fashion. um, but you know, this is
+
+00:33:52.720 --> 00:33:58.840
+like a very, very old grammar, back from
+
+00:33:55.639 --> 00:34:00.559
+the 50s. there are lots of new frameworks
+
+00:33:58.840 --> 00:34:02.679
+in theoretical linguistics, such as
+minimalism, uh, which is in the Chomskyan
+
+00:34:02.679 --> 00:34:07.519
+tradition; there's other formalisms like
+
+00:34:05.080 --> 00:34:09.520
+HPSG, uh, cognitive linguistics approaches
+
+00:34:07.519 --> 00:34:11.599
+like construction grammars, etc. so it's a
+
+00:34:09.520 --> 00:34:15.159
+lot more wide and varied than just what
+
+00:34:11.599 --> 00:34:17.119
+you see canonically in intro linguistics, or, uh,
+
+00:34:15.159 --> 00:34:19.879
+linguistics lectures in NLP classes
+
+00:34:17.119 --> 00:34:22.240
+like this one. um, however, it's still
+
+00:34:19.879 --> 00:34:23.720
+very conceptually powerful and remains,
+
+00:34:22.240 --> 00:34:26.800
+um,
+
+00:34:23.720 --> 00:34:29.200
+influential. so a very important aspect
+
+00:34:26.800 --> 00:34:30.720
+of this line of work, and a lot of
+
+00:34:29.200 --> 00:34:33.320
+subsequent and competing theories, is the
+
+00:34:30.720 --> 00:34:35.079
+idea of hierarchical structure in syntax.
+
+00:34:33.320 --> 00:34:37.200
+so using these phrase structure rules we
+
+00:34:35.079 --> 00:34:39.240
+can break down the sentence into a tree,
+
+00:34:37.200 --> 00:34:41.520
+where the sentence node S is the root
+
+00:34:39.240 --> 00:34:43.480
+and the words are the terminal nodes, and
+
+00:34:41.520 --> 00:34:45.000
+their part of speech is the node right
+
+00:34:43.480 --> 00:34:49.359
+above
+
+00:34:45.000 --> 00:34:52.000
+it. we can also have syntactic trees that
+
+00:34:49.359 --> 00:34:54.359
+reflect ambiguity. so here we have two
+
+00:34:52.000 --> 00:34:56.960
+trees for the same surface-form sentence,
+
+00:34:54.359 --> 00:34:58.359
+um, "I saw a girl with a telescope", um, but
+
+00:34:56.960 --> 00:35:01.000
+they mean slightly different things
+
+00:34:58.359 --> 00:35:03.760
+depending on how you interpret, uh, "with a
+
+00:35:01.000 --> 00:35:06.040
+telescope". um, are you seeing the
+
+00:35:03.760 --> 00:35:07.440
+girl with — like, do you have a
+
+00:35:06.040 --> 00:35:09.119
+telescope in your hand, and are you
+
+00:35:07.440 --> 00:35:12.079
+seeing the girl with it? or are you
+
+00:35:09.119 --> 00:35:13.760
+seeing a girl who has a telescope? um, you
+
+00:35:12.079 --> 00:35:16.520
+can represent these two interpretations
+
+00:35:13.760 --> 00:35:16.520
+differently in
+
+00:35:16.920 --> 00:35:22.079
+syntax.
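The two readings can be written down as bracketed constituency trees; a small sketch, assuming NLTK is installed (the bracketings below paraphrase the lecture's slide, they are not taken from it):

```python
from nltk import Tree

# Reading 1: the PP attaches to the VP (I used the telescope to see her).
instrument = Tree.fromstring(
    "(S (NP I) (VP (V saw) (NP (Det a) (N girl)) (PP (P with) (NP (Det a) (N telescope)))))"
)
# Reading 2: the PP attaches inside the NP (the girl has the telescope).
modifier = Tree.fromstring(
    "(S (NP I) (VP (V saw) (NP (NP (Det a) (N girl)) (PP (P with) (NP (Det a) (N telescope))))))"
)

instrument.pretty_print()  # renders each tree as ASCII art
modifier.pretty_print()
```

Same words, same surface string — the ambiguity lives entirely in where the PP node hangs in the hierarchy.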
serving as
+00:36:00.960 --> 00:36:05.440
+the root of the tree so we can have
+
+00:36:02.880 --> 00:36:07.440
+clausal relations like whether um the
+
+00:36:05.440 --> 00:36:09.960
+dependent is a nominal subject direct
+
+00:36:07.440 --> 00:36:12.280
+object indirect object we can also have
+
+00:36:09.960 --> 00:36:15.200
+modification relations like is something
+
+00:36:12.280 --> 00:36:16.960
+modifying a noun or like is a noun
+
+00:36:15.200 --> 00:36:19.359
+modifying another thing is an adjective
+
+00:36:16.960 --> 00:36:22.119
+modifying another thing
+
+00:36:19.359 --> 00:36:24.720
+etc so here is an example of a
+
+00:36:22.119 --> 00:36:27.079
+dependency parse from uh Universal
+
+00:36:24.720 --> 00:36:29.680
+Dependencies um here like in the top
+
+00:36:27.079 --> 00:36:31.800
+example in English chased is the head of
+
+00:36:29.680 --> 00:36:35.160
+the tree and you can like follow the
+
+00:36:31.800 --> 00:36:35.160
+arrows to see like what are its
+
+00:36:35.839 --> 00:36:41.040
+dependents so part-of-speech tagging and
+
+00:36:39.000 --> 00:36:43.160
+syntactic parsing used to be a big deal
+
+00:36:41.040 --> 00:36:45.800
+in NLP um but there's a reason why you
+
+00:36:43.160 --> 00:36:47.440
+don't have a lecture on that anymore um
+
+00:36:45.800 --> 00:36:49.119
+especially because when we're
+
+00:36:47.440 --> 00:36:51.800
+dealing with high resource languages
+
+00:36:49.119 --> 00:36:54.319
+like English it's not a super big deal
+
+00:36:51.800 --> 00:36:55.720
+um but it's still a very valuable
+
+00:36:54.319 --> 00:36:58.079
+resource for people studying lower
+
+00:36:55.720 --> 00:36:59.880
+resource languages um and having lots of
+
+00:36:58.079 --> 00:37:01.839
+linguistically annotated corpora over a
+
+00:36:59.880 --> 00:37:03.880
+wide variety of languages can also
+
+00:37:01.839 --> 00:37:05.720
+enable us to do like broad range
+
+00:37:03.880 --> 00:37:07.720
+linguistic
+
+00:37:05.720 --> 00:37:10.359
+studies um so here are just some
+
+00:37:07.720 --> 00:37:12.440
+examples of corpora in English like some
+
+00:37:10.359 --> 00:37:15.079
+major ones are the Brown Corpus and COCA
+
+00:37:12.440 --> 00:37:17.079
+for part-of-speech tagging uh for
+
+00:37:15.079 --> 00:37:19.599
+constituency parses there's the Penn
+
+00:37:17.079 --> 00:37:21.720
+Treebank uh for dependency parses we also
+
+00:37:19.599 --> 00:37:23.960
+have Google Syntactic N-grams and then
+
+00:37:21.720 --> 00:37:25.760
+Universal Dependencies um this one is
+
+00:37:23.960 --> 00:37:28.040
+actually quite interesting there's over
+
+00:37:25.760 --> 00:37:30.200
+140 languages with a bunch of dependency
+
+00:37:28.040 --> 00:37:31.440
+parses um and it's still a continual
+
+00:37:30.200 --> 00:37:33.680
+effort to develop more and more
+
+00:37:31.440 --> 00:37:35.920
+descriptive annotations so very recently
+
+00:37:33.680 --> 00:37:38.200
+this paper came out to like have a layer
+
+00:37:35.920 --> 00:37:41.680
+of constructions on top of existing
+
+00:37:38.200 --> 00:37:41.680
+Universal Dependencies
+
+00:37:41.960 --> 00:37:49.240
+parses okay
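[Editor's note: for readers who want to poke at dependency structures like the "chased" example above, here is a small hypothetical sketch using spaCy; spaCy's label inventory is similar in spirit to, though not identical with, the Universal Dependencies relations discussed here.]

```python
# Dependency parse sketch (assumes: pip install spacy, then
# python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The dog chased the cat")

# Each word points at its head word via a labeled, asymmetric relation;
# the main verb serves as the root of the tree.
for token in doc:
    print(f"{token.text:>7} --{token.dep_}--> {token.head.text}")
```

The exact labels printed depend on the model version, but the shape of the output mirrors the arrows in the slide's example: determiners attach to their nouns, the subject and object nouns attach to "chased", and "chased" is its own head.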
+
+00:37:44.319 --> 00:37:52.680
+yes [audience question, partly inaudible: do you
+
+00:37:49.240 --> 00:37:55.560
+assume the concepts in Universal Dependencies are
+universal across languages?] um this is a question that like
+
+00:37:52.680 --> 00:37:59.119
+typologists are like very concerned with
+
+00:37:55.560 --> 00:38:03.880
+um I think it's at the forefront for construction
+
+00:37:59.119 --> 00:38:06.480
+grammars and people who deal with um
+
+00:38:03.880 --> 00:38:09.040
+like this how are certain semantic
+
+00:38:06.480 --> 00:38:10.880
+concepts like how do they emerge in the
+
+00:38:09.040 --> 00:38:12.640
+structure of a language I think there's
+
+00:38:10.880 --> 00:38:15.280
+some assumption to be made on like very
+
+00:38:12.640 --> 00:38:17.880
+basic forms of meaning that all humans
+
+00:38:15.280 --> 00:38:19.359
+try to communicate and then from there
+
+00:38:17.880 --> 00:38:21.560
+um how that's actually expressed in the
+
+00:38:19.359 --> 00:38:23.280
+grammar might differ but like
+
+00:38:21.560 --> 00:38:25.359
+because there are those base meanings
+
+00:38:23.280 --> 00:38:28.119
+there are going to be comparisons
+
+00:38:25.359 --> 00:38:31.440
+that can be made across languages um
+
+00:38:28.119 --> 00:38:33.960
+but that is like a very hot topic in
+
+00:38:31.440 --> 00:38:35.560
+linguistics yeah yeah I think uh
+
+00:38:33.960 --> 00:38:38.440
+another comment there's universal parts
+
+00:38:35.560 --> 00:38:41.079
+of speech uh kind of like the ones that
+
+00:38:38.440 --> 00:38:43.000
+Lindia showed in Universal Dependencies
+
+00:38:41.079 --> 00:38:45.240
+these are like aggressively simplified
+
+00:38:43.000 --> 00:38:47.400
+to only have the things that occur in
+
+00:38:45.240 --> 00:38:49.720
+most languages and so then there's other
+
+00:38:47.400 --> 00:38:53.000
+languages that have other things
+
+00:38:49.720 --> 00:38:54.760
+basically that go beyond this but by
+
+00:38:53.000 --> 00:38:56.720
+aggressively simplifying you can at
+
+00:38:54.760 --> 00:38:59.280
+least like make it very easy to do
+
+00:38:56.720 --> 00:39:00.920
+comparative studies
+
+00:38:59.280 --> 00:39:02.319
+yeah a lot of times they'll have like
+
+00:39:00.920 --> 00:39:05.160
+what Graham said these very coarse-grained
+
+00:39:02.319 --> 00:39:06.880
+tags and then as another column um in
+
+00:39:05.160 --> 00:39:09.119
+the annotation have language specific
+
+00:39:06.880 --> 00:39:11.119
+tags uh to be a bit more descriptive and
+
+00:39:09.119 --> 00:39:15.400
+like um show when things might not
+
+00:39:11.119 --> 00:39:15.400
+necessarily align with the broader
+
+00:39:15.960 --> 00:39:21.720
+label okay um I think now we're going to
+
+00:39:19.520 --> 00:39:24.119
+enter things that are a bit more
+
+00:39:21.720 --> 00:39:25.440
+computationally like NLP relevant um
+
+00:39:24.119 --> 00:39:29.560
+with meaning and
+
+00:39:25.440 --> 00:39:31.839
+intent um so semantics is the study of
+
+00:39:29.560 --> 00:39:34.319
+linguistic meaning and we can study this
+
+00:39:31.839 --> 00:39:36.839
+at various levels um as we saw we could
+
+00:39:34.319 --> 00:39:39.280
+see what a morpheme means we can ask what
+
+00:39:36.839 --> 00:39:41.119
+a word means what a sentence means um
+
+00:39:39.280 --> 00:39:43.440
+and this often interacts with morphology
+
+00:39:41.119 --> 00:39:45.520
+and syntax as we saw like appending
+
+00:39:43.440 --> 00:39:47.200
+certain morphemes causes your word to
+
+00:39:45.520 --> 00:39:49.839
+mean something else
+
+00:39:47.200 --> 00:39:51.359
+now um a really active area in
+
+00:39:49.839 --> 00:39:52.920
+linguistics as well is the syntax
+
+00:39:51.359 --> 00:39:54.960
+semantics interface like what is the
+
+00:39:52.920 --> 00:39:57.000
+relationship between syntactic form and
+
+00:39:54.960 --> 00:39:57.960
+meaning how do different meanings of
+
+00:39:57.000 --> 00:40:00.920
+words
+
+00:39:57.960 --> 00:40:01.960
+uh change how they act syntactically in
+00:40:00.920 --> 00:40:04.880
+a larger
+structure now semantics is obviously a
+
+00:40:04.880 --> 00:40:10.680
+very very broad field um and we can very
+
+00:40:08.119 --> 00:40:14.040
+easily veer into philosophy of language
+
+00:40:10.680 --> 00:40:15.599
+semiotics uh what is meaning anyways um
+
+00:40:14.040 --> 00:40:18.000
+we're going to stick to computationally
+
+00:40:15.599 --> 00:40:19.920
+relevant topics here but even then like
+
+00:40:18.000 --> 00:40:21.440
+I still don't have time to cover
+
+00:40:19.920 --> 00:40:24.359
+certain topics like propositional and
+
+00:40:21.440 --> 00:40:25.960
+first order logic so um if these things
+
+00:40:24.359 --> 00:40:29.280
+sound interesting to you I encourage you
+
+00:40:25.960 --> 00:40:32.480
+to like look them up um they're quite
+
+00:40:29.280 --> 00:40:34.560
+fun um so let's start with lexical
+
+00:40:32.480 --> 00:40:37.480
+semantics um which I think is a very
+
+00:40:34.560 --> 00:40:39.680
+intuitive notion for people um a sense
+
+00:40:37.480 --> 00:40:42.640
+of a word is a distinct meaning of a
+
+00:40:39.680 --> 00:40:44.960
+word um and as we all know words can
+
+00:40:42.640 --> 00:40:48.640
+have multiple semantically related
+
+00:40:44.960 --> 00:40:51.960
+senses and we refer to this as word
+
+00:40:48.640 --> 00:40:55.280
+polysemy so I can say like they run
+
+00:40:51.960 --> 00:40:57.720
+experiments they run races candidates
+
+00:40:55.280 --> 00:41:00.119
+run for office can I run this idea by
+
+00:40:57.720 --> 00:41:02.280
+you um we're all using run we kind of
+
+00:41:00.119 --> 00:41:06.400
+have an intuition of how they're similar
+
+00:41:02.280 --> 00:41:09.599
+but also a bit different um etc
+
+00:41:06.400 --> 00:41:11.960
+etc um a related concept but not exactly
+
+00:41:09.599 --> 00:41:13.599
+the same is homonyms um and this
+
+00:41:11.960 --> 00:41:16.440
+actually is kind of a blurry distinction
+
+00:41:13.599 --> 00:41:18.760
+so uh a canonical homonym in English is
+
+00:41:16.440 --> 00:41:20.680
+something like bank like river bank
+
+00:41:18.760 --> 00:41:24.280
+versus I went to the bank and got some
+
+00:41:20.680 --> 00:41:26.680
+money um but if we actually look at the
+
+00:41:24.280 --> 00:41:28.520
+historical uh traces of like the meaning
+
+00:41:26.680 --> 00:41:31.319
+of bank in these two settings we'd
+
+00:41:28.520 --> 00:41:33.520
+actually see that they're actually polysemous
+
+00:41:31.319 --> 00:41:35.760
+um but over time as our uses of these
+
+00:41:33.520 --> 00:41:37.119
+words have changed in context we've seen
+
+00:41:35.760 --> 00:41:38.960
+these meanings drift further and further
+
+00:41:37.119 --> 00:41:41.640
+apart so now they're
+
+00:41:38.960 --> 00:41:44.319
+homonyms um but in the context of NLP
+
+00:41:41.640 --> 00:41:46.720
+whether it's a polysemous word or a
+
+00:41:44.319 --> 00:41:49.440
+homonym um they kind of give us the same
+
+00:41:46.720 --> 00:41:51.800
+issue uh which is we have two
+
+00:41:49.440 --> 00:41:54.359
+surface forms which in text
+
+00:41:51.800 --> 00:41:57.680
+appear exactly the same um but they have
+
+00:41:54.359 --> 00:41:57.680
+different senses
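[Editor's note: the senses of a polysemous word like "run" can be browsed programmatically. A hypothetical sketch with NLTK's WordNet interface, the resource the lecture turns to next.]

```python
# Word sense sketch (assumes: pip install nltk, then nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

# Each synset below is one distinct sense of "run"; WordNet lists dozens,
# spanning noun uses ("a run of bad luck") and verb uses ("run a race").
for syn in wn.synsets("run")[:5]:
    print(syn.name(), "-", syn.definition())
```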
+
+00:41:57.880 --> 00:42:02.119
+so not only can we talk about like how a
+
+00:42:00.240 --> 00:42:04.640
+word has many different senses but we
+
+00:42:02.119 --> 00:42:07.280
+can compare a word and its sense to
+
+00:42:04.640 --> 00:42:10.680
+other words and their senses um and we
+
+00:42:07.280 --> 00:42:12.200
+call these like lexical relations so uh
+
+00:42:10.680 --> 00:42:14.760
+in a thesaurus you'll have lots of
+
+00:42:12.200 --> 00:42:17.119
+synonyms and antonyms where synonyms are
+
+00:42:14.760 --> 00:42:18.640
+things that are about the same meaning
+
+00:42:17.119 --> 00:42:21.520
+antonyms are things that are opposite
+
+00:42:18.640 --> 00:42:24.200
+like hot cold very simple there's also
+
+00:42:21.520 --> 00:42:28.119
+other relations like superordinate-subordinate
+
+00:42:24.200 --> 00:42:31.559
+like uh if I say vehicle um that uh
+
+00:42:28.119 --> 00:42:34.079
+encompasses all cars
+
+00:42:31.559 --> 00:42:35.559
+and all SUVs are encompassed by cars and
+
+00:42:34.079 --> 00:42:39.280
+you can go all the way down into like a
+
+00:42:35.559 --> 00:42:41.440
+very very specific car like my uh dad's
+
+00:42:39.280 --> 00:42:43.040
+old Honda Odyssey it's like a very
+
+00:42:41.440 --> 00:42:44.520
+specific instance that is covered by the
+
+00:42:43.040 --> 00:42:47.200
+large umbrella of
+
+00:42:44.520 --> 00:42:50.000
+vehicles um we can also talk about part-whole
+
+00:42:47.200 --> 00:42:51.760
+relations like a toe is a part of a
+
+00:42:50.000 --> 00:42:55.599
+foot and a foot is a part of a leg and a
+
+00:42:51.760 --> 00:42:55.599
+leg is a part of a body etc
+
+00:42:55.880 --> 00:43:02.800
+etc so one uh really large project
+
+00:43:00.200 --> 00:43:05.359
+back in 2005 was to take a bunch of
+
+00:43:02.800 --> 00:43:06.960
+English words and kind of categorize
+
+00:43:05.359 --> 00:43:09.920
+them based on their relations to other
+
+00:43:06.960 --> 00:43:12.040
+words uh this is WordNet which is a very
+
+00:43:09.920 --> 00:43:14.760
+large database of English words where
+
+00:43:12.040 --> 00:43:17.359
+they basically took all the
+
+00:43:14.760 --> 00:43:19.640
+content words like nouns verbs
+
+00:43:17.359 --> 00:43:21.640
+adjectives and adverbs and they grouped
+
+00:43:19.640 --> 00:43:23.079
+them into sets of synonyms which they
+
+00:43:21.640 --> 00:43:26.000
+call
+
+00:43:23.079 --> 00:43:28.559
+synsets um and then for each grouping
+
+00:43:26.000 --> 00:43:30.319
+they would link uh one grouping to
+
+00:43:28.559 --> 00:43:33.160
+another through these conceptual
+
+00:43:30.319 --> 00:43:35.559
+semantic and lexical relations so a very
+
+00:43:33.160 --> 00:43:37.119
+common uh relation that would link one
+
+00:43:35.559 --> 00:43:39.680
+group of synonyms to another group of
+
+00:43:37.119 --> 00:43:41.960
+synonyms is like the superordinate-subordinate
+
+00:43:39.680 --> 00:43:43.559
+relations like I talked about so all the
+
+00:43:41.960 --> 00:43:46.680
+things that are grouped together with
+
+00:43:43.559 --> 00:43:49.160
+vehicle are uh above things that are
+
+00:43:46.680 --> 00:43:52.280
+grouped together with car and so on and
+
+00:43:49.160 --> 00:43:54.599
+so forth um they also distinguish uh
+
+00:43:52.280 --> 00:43:57.920
+between types which are common nouns
+
+00:43:54.599 --> 00:44:00.160
+like car uh or president
+
+00:43:57.920 --> 00:44:03.520
+uh and instances which are proper
+
+00:44:00.160 --> 00:44:06.800
+nouns so uh president is a type Obama
+
+00:44:03.520 --> 00:44:08.920
+Trump Biden are instances of that type
+
+00:44:06.800 --> 00:44:11.960
+um and instances will always occur as
+
+00:44:08.920 --> 00:44:13.960
+the terminal node in these WordNet
+
+00:44:11.960 --> 00:44:15.720
+hierarchies um
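[Editor's note: the synsets and superordinate-subordinate links just described can be walked directly. A small hypothetical sketch, again with NLTK's WordNet interface; exact outputs depend on the WordNet version shipped with NLTK.]

```python
# Lexical relations sketch (assumes NLTK plus the WordNet data, as above).
from nltk.corpus import wordnet as wn

car = wn.synset("car.n.01")

# Synonyms grouped into a single synset:
print(car.lemma_names())  # e.g. ['car', 'auto', 'automobile', ...]

# Follow hypernym (superordinate) links upward toward 'vehicle' and beyond:
for path in car.hypernym_paths()[:1]:
    print(" -> ".join(s.name() for s in path))

# Part-whole relations (meronyms/holonyms) are also encoded:
print(wn.synset("foot.n.01").part_holonyms())
```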
+00:44:13.960 --> 00:44:18.200
+since then they've created lots of
+WordNets in other
+
+00:44:15.720 --> 00:44:20.240
+languages um and an interesting like
+
+00:44:18.200 --> 00:44:21.920
+consequence of WordNet was ImageNet
+
+00:44:20.240 --> 00:44:24.000
+which based its hierarchy of images
+
+00:44:21.920 --> 00:44:26.680
+according to the groupings of nouns in
+
+00:44:24.000 --> 00:44:29.119
+WordNet um the one caveat I have for
+
+00:44:26.680 --> 00:44:31.480
+this is because WordNet was constructed
+
+00:44:29.119 --> 00:44:34.200
+with um English and then with very
+
+00:44:31.480 --> 00:44:36.160
+specific annotators uh that spoke a
+
+00:44:34.200 --> 00:44:37.520
+certain type of English your hierarchies
+
+00:44:36.160 --> 00:44:38.960
+are not going to map very well
+
+00:44:37.520 --> 00:44:41.680
+conceptually across cultures and
+
+00:44:38.960 --> 00:44:43.319
+languages so that's one consideration to
+
+00:44:41.680 --> 00:44:45.520
+be made about like grouping words in
+
+00:44:43.319 --> 00:44:45.520
+this
+
+00:44:46.040 --> 00:44:51.640
+way uh one really important uh theory
+
+00:44:49.440 --> 00:44:54.920
+that comes from semantics is the
+
+00:44:51.640 --> 00:44:57.200
+distributional hypothesis by um Zellig
+
+00:44:54.920 --> 00:44:59.760
+Harris who fun fact was actually
+
+00:44:57.200 --> 00:45:02.920
+Chomsky's PhD adviser uh but they
+
+00:44:59.760 --> 00:45:04.559
+birthed two completely different like uh
+
+00:45:02.920 --> 00:45:07.400
+schools of thought in linguistics that
+
+00:45:04.559 --> 00:45:10.079
+are often at odds um and this hypothesis
+
+00:45:07.400 --> 00:45:12.839
+was that um linguistic items that have
+
+00:45:10.079 --> 00:45:16.400
+similar distributions in use uh will end
+
+00:45:12.839 --> 00:45:18.440
+up having similar meanings uh a very uh
+
+00:45:16.400 --> 00:45:20.720
+famous quotation uh that kind of
+
+00:45:18.440 --> 00:45:22.119
+rephrases this is you shall know a word
+
+00:45:20.720 --> 00:45:25.000
+by the company it
+
+00:45:22.119 --> 00:45:26.720
+keeps um and this idea is the uh
+
+00:45:25.000 --> 00:45:29.400
+foundation for lots of statistical
+
+00:45:26.720 --> 00:45:31.599
+approaches to semantics both lexical
+
+00:45:29.400 --> 00:45:31.599
+and
+
+00:45:32.079 --> 00:45:36.640
+otherwise so I think you guys have
+
+00:45:34.200 --> 00:45:38.680
+already talked about this but we can uh
+
+00:45:36.640 --> 00:45:41.040
+given a large corpus form vector
+
+00:45:38.680 --> 00:45:43.680
+representations of words based on their
+
+00:45:41.040 --> 00:45:46.640
+relationships and where they appear uh
+
+00:45:43.680 --> 00:45:48.839
+in context to other words uh with these
+
+00:45:46.640 --> 00:45:51.319
+vector representations we can show sense
+
+00:45:48.839 --> 00:45:53.440
+relations um with cosine similarity so if
+
+00:45:51.319 --> 00:45:56.359
+they're synonyms they'll probably end up
+
+00:45:53.440 --> 00:45:59.079
+very close in space so if we take
+
+00:45:56.359 --> 00:46:00.599
+their uh cosine similarity it will be very
+
+00:45:59.079 --> 00:46:03.040
+high and then if we do vector
+
+00:46:00.599 --> 00:46:05.119
+arithmetic operations we can see like
+
+00:46:03.040 --> 00:46:10.000
+analogical relationships so one
+
+00:46:05.119 --> 00:46:12.800
+example was uh king is to queen uh
+
+00:46:10.000 --> 00:46:16.720
+like man is to woman where the resulting
+
+00:46:12.800 --> 00:46:19.760
+difference vectors are uh very similar
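[Editor's note: a toy illustration of the vector arithmetic just described. The 3-dimensional vectors below are fabricated for the example; a real experiment would load trained word2vec- or GloVe-style embeddings.]

```python
# Cosine similarity and analogy arithmetic over toy word vectors.
import numpy as np

vec = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

def cosine(a, b):
    # Cosine of the angle between two vectors; near 1.0 means very similar.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The analogy: king - man + woman should land nearest to queen.
target = vec["king"] - vec["man"] + vec["woman"]
best = max(vec, key=lambda w: cosine(vec[w], target))
print(best)  # queen
```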
+00:46:16.720 --> 00:46:22.119
+uh we have in modern NLP I guess two main
+
+00:46:19.760 --> 00:46:24.240
+types of word embeddings dense static
+
+00:46:22.119 --> 00:46:26.640
+embeddings like word2vec and GloVe and
+
+00:46:24.240 --> 00:46:27.800
+then contextual embeddings like ELMo
+
+00:46:26.640 --> 00:46:29.720
+and BERT
+
+00:46:27.800 --> 00:46:31.359
+um the drawback from using things like
+
+00:46:29.720 --> 00:46:36.119
+static embeddings is that they won't
+
+00:46:31.359 --> 00:46:38.920
+capture polysemy um and homonymy so it
+
+00:46:36.119 --> 00:46:42.960
+would just be one representation for all
+
+00:46:38.920 --> 00:46:44.280
+instances of that uh written word
+
+00:46:42.960 --> 00:46:46.839
+whereas with contextual embeddings we
+
+00:46:44.280 --> 00:46:46.839
+can make that
+
+00:46:47.720 --> 00:46:51.839
+distinction uh another very very
+
+00:46:50.160 --> 00:46:54.079
+important concept in semantics and I
+
+00:46:51.839 --> 00:46:55.440
+guess in linguistics in general is this
+
+00:46:54.079 --> 00:46:58.000
+idea of
+
+00:46:55.440 --> 00:46:59.200
+compositionality um so from all
+
+00:46:58.000 --> 00:47:01.520
+the other slides that I've shown
+
+00:46:59.200 --> 00:47:03.800
+especially with morphology and syntax it
+
+00:47:01.520 --> 00:47:06.280
+seems that a lot of natural language is
+
+00:47:03.800 --> 00:47:07.640
+basically built by like taking smaller units
+
+00:47:06.280 --> 00:47:11.520
+and putting them together and then we can
+
+00:47:07.640 --> 00:47:13.200
+kind of see how as a whole we have um a
+
+00:47:11.520 --> 00:47:15.280
+certain structure that conveys a certain
+
+00:47:13.200 --> 00:47:17.760
+meaning that is derived from the meaning
+
+00:47:15.280 --> 00:47:19.960
+of individual parts um so this was very
+
+00:47:17.760 --> 00:47:22.640
+clear in things like morphology and a
+
+00:47:19.960 --> 00:47:25.960
+bit less clear but still there in
+
+00:47:22.640 --> 00:47:27.760
+syntax um so in sentences uh for example
+
+00:47:25.960 --> 00:47:30.160
+we can combine the meaning of individual
+
+00:47:27.760 --> 00:47:31.240
+lexical items and phrases and then maybe
+
+00:47:30.160 --> 00:47:33.960
+even other
+
+00:47:31.240 --> 00:47:35.640
+constructions um a very important thing
+
+00:47:33.960 --> 00:47:38.000
+as well is that we can create novel
+
+00:47:35.640 --> 00:47:40.319
+sentences and structures systematically
+
+00:47:38.000 --> 00:47:41.800
+through compositionality um and
+
+00:47:40.319 --> 00:47:45.440
+similarly we can determine the meaning
+
+00:47:41.800 --> 00:47:47.400
+of novel sentences and structures um and
+
+00:47:45.440 --> 00:47:50.800
+this is still an open question as to
+
+00:47:47.400 --> 00:47:53.319
+like whether modern models can do this
+
+00:47:50.800 --> 00:47:54.559
+and to what extent if they do um there
+
+00:47:53.319 --> 00:47:57.160
+are a couple of compositionality
+
+00:47:54.559 --> 00:48:00.079
+benchmarks one of them is COGS uh which
+
+00:47:57.160 --> 00:48:02.720
+uses like semantic representations um
+
+00:48:00.079 --> 00:48:05.319
+there's also another paper like kind of
+
+00:48:02.720 --> 00:48:08.200
+a survey paper on like
+
+00:48:05.319 --> 00:48:09.760
+compositionality in NLP and like how
+
+00:48:08.200 --> 00:48:12.319
+people have evaluated these types of
+
+00:48:09.760 --> 00:48:14.520
+things and uh I guess they pressed for
+
+00:48:12.319 --> 00:48:17.680
+more evaluations that occur on natural
+
+00:48:14.520 --> 00:48:20.079
+data as opposed to like artificial data
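[Editor's note: returning to the static-versus-contextual point made a moment ago, the claim that contextual models give "bank" different vectors in different contexts can be checked empirically. A hypothetical sketch using HuggingFace transformers; the model choice is illustrative, not a course recommendation.]

```python
# Contextual embedding sketch (assumes: pip install torch transformers).
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def vector_of(word, sentence):
    # Return the contextual vector of `word` within `sentence`.
    idx = tok.tokenize(sentence).index(word) + 1  # +1 skips the [CLS] token
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        return model(**enc).last_hidden_state[0, idx]

money1 = vector_of("bank", "i deposited the money at the bank")
money2 = vector_of("bank", "the bank approved my loan")
river = vector_of("bank", "we sat on the bank of the river")

cos = torch.nn.functional.cosine_similarity
# Same surface form, different vectors; the same-sense pair should
# typically score higher than the cross-sense pair.
print(cos(money1, money2, dim=0).item(), cos(money1, river, dim=0).item())
```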
+00:48:17.680 --> 00:48:22.280
+um but there are also exceptions to
+
+00:48:20.079 --> 00:48:24.760
+compositionality uh such as idioms and
+
+00:48:22.280 --> 00:48:27.839
+figurative language so this is a really
+
+00:48:24.760 --> 00:48:29.800
+funny viral tweet on um the Chinese
+
+00:48:27.839 --> 00:48:32.240
+McDonald's menu translations so like
+
+00:48:29.800 --> 00:48:35.319
+this one says unsuspecting tyrant double-decker
+
+00:48:32.240 --> 00:48:37.319
+beef fort um so like if you've
+
+00:48:35.319 --> 00:48:39.119
+ever looked at a Chinese menu and been
+
+00:48:37.319 --> 00:48:41.640
+like wow that is a really strange dish
+
+00:48:39.119 --> 00:48:43.800
+name it's because like they're very
+
+00:48:41.640 --> 00:48:45.960
+figurative dish names and they don't
+
+00:48:43.800 --> 00:48:47.280
+translate very well when you do like a
+
+00:48:45.960 --> 00:48:49.280
+word by word
+
+00:48:47.280 --> 00:48:51.160
+translation so obviously this is a
+
+00:48:49.280 --> 00:48:53.640
+challenge for applications like machine
+
+00:48:51.160 --> 00:48:55.160
+translation but also like um more
+
+00:48:53.640 --> 00:48:58.720
+culturally sensitive language
+
+00:48:55.160 --> 00:48:58.720
+technologies as well
+
+00:49:00.000 --> 00:49:06.000
+so um kind of switching gears uh one
+
+00:49:03.920 --> 00:49:08.240
+aspect uh of an expression's meaning
+
+00:49:06.000 --> 00:49:10.680
+although it doesn't encompass everything
+
+00:49:08.240 --> 00:49:12.799
+um is the truth conditions or the
+
+00:49:10.680 --> 00:49:15.880
+conditions under which the expression
+
+00:49:12.799 --> 00:49:18.559
+would be considered to be true so for
+
+00:49:15.880 --> 00:49:21.359
+example if I say it rained in Pittsburgh
+
+00:49:18.559 --> 00:49:22.520
+yesterday this would be true only if it
+
+00:49:21.359 --> 00:49:24.880
+actually rained here yesterday and
+
+00:49:22.520 --> 00:49:28.079
+because it did this sentence is true
+
+00:49:24.880 --> 00:49:32.000
+pretty straightforward
+
+00:49:28.079 --> 00:49:34.200
+um but we can also have uh relationships
+
+00:49:32.000 --> 00:49:37.440
+between expressions that uh consider
+
+00:49:34.200 --> 00:49:40.920
+truth conditions so uh the definition of
+
+00:49:37.440 --> 00:49:44.160
+an entailment is that if A entails B
+
+00:49:40.920 --> 00:49:46.520
+then B must be true if A is true um in
+
+00:49:44.160 --> 00:49:49.400
+other words we can say that B is a truth
+
+00:49:46.520 --> 00:49:52.160
+condition of A um so if I say something
+
+00:49:49.400 --> 00:49:55.359
+like Emmy is my adorable little orange
+
+00:49:52.160 --> 00:49:59.480
+cat which is 100% true um this entails
+
+00:49:55.359 --> 00:49:59.480
+that Emmy is a cat and she is indeed a
+
+00:50:00.359 --> 00:50:07.000
+cat um so entailment is uh something
+
+00:50:04.160 --> 00:50:09.280
+that's uh an object of study in a task
+
+00:50:07.000 --> 00:50:11.920
+called natural language inference um
+
+00:50:09.280 --> 00:50:15.160
+this is an NLP task where given some
+
+00:50:11.920 --> 00:50:17.559
+premise sentence or text determine if a
+
+00:50:15.160 --> 00:50:21.000
+hypothesis is entailed, contradicted,
+
+00:50:17.559 --> 00:50:22.680
+or neutral um given that premise um so
+
+00:50:21.000 --> 00:50:26.839
+there are lots of data sets that deal
+
+00:50:22.680 --> 00:50:28.280
+with this like SNLI, MultiNLI, SciTail, etc.
+
+00:50:26.839 --> 00:50:30.119
+um here are just some examples I don't
+
+00:50:28.280 --> 00:50:33.200
+know if you guys can read but it's like
+
+00:50:30.119 --> 00:50:35.200
+one of the text premises is a man
+
+00:50:33.200 --> 00:50:37.200
+inspects the uniform of a figure in some
+
+00:50:35.200 --> 00:50:39.040
+East Asian country um these are all
+
+00:50:37.200 --> 00:50:40.839
+annotations of images by the way if
+
+00:50:39.040 --> 00:50:43.079
+they're a bit strange um and then the
+
+00:50:40.839 --> 00:50:45.119
+hypothesis given that text is the man is
+
+00:50:43.079 --> 00:50:48.680
+sleeping well since he cannot be
+
+00:50:45.119 --> 00:50:51.280
+sleeping if uh he's inspecting uniforms
+
+00:50:48.680 --> 00:50:55.359
+um this is a contradiction given the
+
+00:50:51.280 --> 00:50:57.240
+premise so um NLI isn't really a super
+
+00:50:55.359 --> 00:50:59.640
+popular task these days
+
+00:50:57.240 --> 00:51:01.359
+um but it's still pretty useful so we
+
+00:50:59.640 --> 00:51:03.760
+can use entailment models for things
+
+00:51:01.359 --> 00:51:06.319
+like factuality checking of generated
+
+00:51:03.760 --> 00:51:08.280
+text or uh seeing if two sources agree
+
+00:51:06.319 --> 00:51:10.960
+in like something like fake news
+
+00:51:08.280 --> 00:51:12.799
+detection um so especially for like
+
+00:51:10.960 --> 00:51:14.440
+maybe retrieval augmented systems we can
+
+00:51:12.799 --> 00:51:17.040
+check if the generated answer is
+
+00:51:14.440 --> 00:51:19.400
+entailed by some like retrieved
+
+00:51:17.040 --> 00:51:19.400
+source
+
+00:51:20.040 --> 00:51:25.599
+text okay so I think I'm actually going
+
+00:51:23.280 --> 00:51:27.760
+a lot faster than I thought I would but
+
+00:51:25.599 --> 00:51:30.079
+um pragmatics is an area where you can
+
+00:51:27.760 --> 00:51:31.799
+ask a lot of questions and there's
+
+00:51:30.079 --> 00:51:35.799
+also a lot of ripe things that we can
+
+00:51:31.799 --> 00:51:38.280
+look at in NLP uh so as opposed to
+
+00:51:35.799 --> 00:51:41.040
+semantics um pragmatics deals more with
+
+00:51:38.280 --> 00:51:43.520
+language use in context um so how is
+
+00:51:41.040 --> 00:51:45.880
+language used in social interactions how
+
+00:51:43.520 --> 00:51:48.680
+does context linguistic or otherwise
+
+00:51:45.880 --> 00:51:50.319
+actually influence how we say things um
+
+00:51:48.680 --> 00:51:52.440
+what do we actually intend to mean when
+
+00:51:50.319 --> 00:51:54.559
+we say something and how does this
+
+00:51:52.440 --> 00:51:57.960
+influence the interpretation by the
+
+00:51:54.559 --> 00:51:59.400
+listener um a very uh prominent theory
+
+00:51:57.960 --> 00:52:02.000
+in pragmatics is something called
+
+00:51:59.400 --> 00:52:04.480
+speech act theory um which says that the
+
+00:52:02.000 --> 00:52:06.559
+meaning of like something that you say
+
+00:52:04.480 --> 00:52:09.359
+is not just composed of the statement
+
+00:52:06.559 --> 00:52:12.280
+itself but also of the intended effect
+
+00:52:09.359 --> 00:52:15.040
+that you meant to have on the listener
+
+00:52:12.280 --> 00:52:18.599
+um so a very simple example of this is
+
+00:52:15.040 --> 00:52:21.280
+asking like can you pass me the salt um
+
+00:52:18.599 --> 00:52:23.559
+like I'm not asking if you can literally
+
+00:52:21.280 --> 00:52:26.559
+physically pass me the salt I'm
+
+00:52:23.559 --> 00:52:28.880
+requesting you to pass me the salt um
+
+00:52:26.559 --> 00:52:32.280
+another funny one in English is like the
+
+00:52:28.880 --> 00:52:34.240
+do you mind type of construction um
+
+00:52:32.280 --> 00:52:37.640
+where like if I say do you mind if I sit
+
+00:52:34.240 --> 00:52:39.280
+next to you and you say
yes like if
+00:52:37.640 --> 00:52:40.960
+you're answering literally you're saying
+
+00:52:39.280 --> 00:52:42.640
+that you mind so you would rather me not
+
+00:52:40.960 --> 00:52:44.480
+sit there but most people when they just
+
+00:52:42.640 --> 00:52:47.240
+say yes in isolation they actually mean
+
+00:52:44.480 --> 00:52:48.520
+go ahead um but if they say no it's a
+
+00:52:47.240 --> 00:52:51.280
+bit weird and they have to follow up
+
+00:52:48.520 --> 00:52:52.599
+with I don't mind um so this is a
+
+00:52:51.280 --> 00:52:54.760
+difference between like what you
+
+00:52:52.599 --> 00:52:57.400
+literally say versus what you actually
+
+00:52:54.760 --> 00:52:59.200
+mean
+
+00:52:57.400 --> 00:53:00.799
+and of course like given this like
+
+00:52:59.200 --> 00:53:02.319
+difference between what's literally
+
+00:53:00.799 --> 00:53:04.280
+written and what's actually meant to be
+
+00:53:02.319 --> 00:53:07.040
+said you can see where a lot of uh
+
+00:53:04.280 --> 00:53:10.240
+problems and like ripe research areas
+
+00:53:07.040 --> 00:53:14.000
+might uh be in
+
+00:53:10.240 --> 00:53:16.440
+NLP um so one uh important thing to
+
+00:53:14.000 --> 00:53:18.720
+remember in pragmatics is the idea of
+
+00:53:16.440 --> 00:53:21.400
+presuppositions so in discourse we kind
+
+00:53:18.720 --> 00:53:23.799
+of have like an agreement on what we
+
+00:53:21.400 --> 00:53:25.760
+know and like what we're talking about
+
+00:53:23.799 --> 00:53:28.160
+at the moment and we have implicit
+
+00:53:25.760 --> 00:53:31.839
+assumptions about the world uh that we
+
+00:53:28.160 --> 00:53:34.480
+act on so I already told you guys and
+
+00:53:31.839 --> 00:53:36.079
+showed you a photo of my adorable cat so
+
+00:53:34.480 --> 00:53:38.799
+if I say something like everyone thinks
+
+00:53:36.079 --> 00:53:40.839
+my cat is cute which is true this
+
+00:53:38.799 --> 00:53:42.480
+presupposes that I have a cat um it
+
+00:53:40.839 --> 00:53:44.839
+would be super strange for me to say
+
+00:53:42.480 --> 00:53:49.720
+something like this if I didn't have a
+
+00:53:44.839 --> 00:53:52.319
+cat and also if no one knew about my
+
+00:53:49.720 --> 00:53:54.359
+cat um so presuppositions can be
+
+00:53:52.319 --> 00:53:56.559
+triggered by certain lexical items or
+
+00:53:54.359 --> 00:53:59.079
+constructions so some examples of this
+
+00:53:56.559 --> 00:54:01.160
+are definite descriptions like the
+
+00:53:59.079 --> 00:54:02.720
+current king of France current is
+
+00:54:01.160 --> 00:54:06.319
+missing here but if I said the current
+
+00:54:02.720 --> 00:54:08.280
+king of France um the "the" kind of
+
+00:54:06.319 --> 00:54:10.359
+presupposes that I'm referring to one
+
+00:54:08.280 --> 00:54:12.799
+thing and that that one thing actually
+
+00:54:10.359 --> 00:54:15.240
+exists but France does not have a king
+
+00:54:12.799 --> 00:54:18.079
+at the moment so saying the current king
+
+00:54:15.240 --> 00:54:19.799
+of France kind of has a false
+
+00:54:18.079 --> 00:54:21.920
+presupposition uh another thing is
+
+00:54:19.799 --> 00:54:23.520
+factives which kind of when you use
+
+00:54:21.920 --> 00:54:26.040
+certain verbs you're kind of relaying
+
+00:54:23.520 --> 00:54:29.559
+the subsequent information as things
+
+00:54:26.040 --> 00:54:31.920
+that are necessarily facts so if I say
+
+00:54:29.559 --> 00:54:34.000
+something like which is also 100%
+
+00:54:31.920 --> 00:54:36.280
+true I regret drinking the Vietnamese
+00:54:34.000 --> 00:54:38.440
+cold brew from Red Hawk I couldn't sleep
+
+00:54:36.280 --> 00:54:40.280
+um this presupposes that I did in fact
+
+00:54:38.440 --> 00:54:43.160
+drink cold brew from Red Hawk and that
+
+00:54:40.280 --> 00:54:45.599
+did happen um but if I didn't drink cold
+
+00:54:43.160 --> 00:54:46.839
+brew from Red Hawk then this would also
+
+00:54:45.599 --> 00:54:48.920
+have a false
+
+00:54:46.839 --> 00:54:51.200
+presupposition um this also happens in
+
+00:54:48.920 --> 00:54:53.280
+questions like which linguist invented
+
+00:54:51.200 --> 00:54:56.440
+the light bulb presupposes that some
+
+00:54:53.280 --> 00:54:57.799
+linguist invented the light bulb um but
+
+00:54:56.440 --> 00:54:58.960
+there is no linguist that invented the
+
+00:54:57.799 --> 00:55:02.440
+light bulb and this is actually the
+
+00:54:58.960 --> 00:55:03.640
+title of a paper uh by Najoung Kim um
+
+00:55:02.440 --> 00:55:05.240
+which looked at a bunch of question
+
+00:55:03.640 --> 00:55:08.079
+answering data sets and they found that
+
+00:55:05.240 --> 00:55:10.520
+certain questions are just unanswerable
+
+00:55:08.079 --> 00:55:12.280
+uh so it makes no sense to evaluate um
+
+00:55:10.520 --> 00:55:14.040
+NLP systems on these questions because
+
+00:55:12.280 --> 00:55:15.200
+they have false presuppositions
+
+00:55:14.040 --> 00:55:17.280
+there's no way you can answer them
+
+00:55:15.200 --> 00:55:18.319
+factually um and there's a lot of other
+
+00:55:17.280 --> 00:55:22.160
+different types of
+
+00:55:18.319 --> 00:55:22.160
+triggers um but these are just a
+
+00:55:22.480 --> 00:55:28.359
+few um another uh concept is implicature
+
+00:55:26.920 --> 00:55:29.920
+so in semantics we talked about
+
+00:55:28.359 --> 00:55:32.160
+entailment like something that must
+
+00:55:29.920 --> 00:55:34.680
+necessarily be true if I said something
+
+00:55:32.160 --> 00:55:36.319
+before that um but implicatures are slightly
+
+00:55:34.680 --> 00:55:38.480
+different these are things that are
+
+00:55:36.319 --> 00:55:40.920
+suggested but they're not
+
+00:55:38.480 --> 00:55:42.640
+necessarily literally expressed so I can
+
+00:55:40.920 --> 00:55:45.119
+say something like if it's lightly
+
+00:55:42.640 --> 00:55:48.200
+raining outside cloudy and kind of gross
+
+00:55:45.119 --> 00:55:50.039
+um like oh today's weather is the worst
+
+00:55:48.200 --> 00:55:52.520
+um I don't actually mean that it's
+
+00:55:50.039 --> 00:55:56.359
+literally the worst um like it could be
+
+00:55:52.520 --> 00:55:59.000
+a lot worse um but I'm basically
+
+00:55:56.359 --> 00:56:01.440
+implying that it's bad and I don't like
+
+00:55:59.000 --> 00:56:03.680
+it I have a distaste for it um but
+
+00:56:01.440 --> 00:56:06.720
+that's not actually what's present in
+
+00:56:03.680 --> 00:56:06.720
+the thing I said
+
+00:56:11.359 --> 00:56:15.440
+yes [audience question] isn't this like bad to have in an
+
+00:56:14.039 --> 00:56:17.000
+evaluation data set because wouldn't
+
+00:56:15.440 --> 00:56:19.359
+this be a good indication of whether
+
+00:56:17.000 --> 00:56:21.280
+your model can identify the presuppositions
+
+00:56:19.359 --> 00:56:23.440
+and then identify whether that presupposition is
+
+00:56:21.280 --> 00:56:26.400
+true yeah I think the issue was that
+
+00:56:23.440 --> 00:56:28.280
+um like
+
+00:56:26.400 --> 00:56:30.520
+with question answering evaluation data
+
+00:56:28.280 --> 00:56:32.280
+sets that wasn't the thing that they
+
+00:56:30.520 --> 00:56:37.119
+were
looking for they were looking for
+00:56:32.280 --> 00:56:38.680
+like some string um so I think
+
+00:56:37.119 --> 00:56:40.559
+investigating whether question answering
+
+00:56:38.680 --> 00:56:43.200
+models can detect such false
+
+00:56:40.559 --> 00:56:44.920
+presuppositions is like a good thing um
+
+00:56:43.200 --> 00:56:47.119
+but if you don't actually identify that
+
+00:56:44.920 --> 00:56:50.079
+as a problem in your data to begin with
+
+00:56:47.119 --> 00:56:51.599
+um then like
+
+00:56:50.079 --> 00:56:53.799
+if you're just doing like a raw question
+
+00:56:51.599 --> 00:56:56.440
+answering evaluation it will mess with
+
+00:56:53.799 --> 00:56:58.440
+your end results so it was more of a
+
+00:56:56.440 --> 00:57:02.359
+mismatch between issues in the data and
+
+00:56:58.440 --> 00:57:04.760
+what people were looking for
+
+00:57:02.359 --> 00:57:09.680
+yeah but that's a good
+
+00:57:04.760 --> 00:57:12.400
+question um right so another example um
+
+00:57:09.680 --> 00:57:13.960
+which is a bit more salient um like
+
+00:57:12.400 --> 00:57:15.799
+let's say I ask you in conversation are
+
+00:57:13.960 --> 00:57:19.559
+you going to see the eclipse on Monday
+
+00:57:15.799 --> 00:57:23.760
+and you respond I have work um what
+
+00:57:19.559 --> 00:57:26.720
+is being implied by I have work like
+
+00:57:23.760 --> 00:57:28.760
+they can't go no right like you know I have to go
+
+00:57:26.720 --> 00:57:31.039
+to work therefore like you know I can't
+
+00:57:28.760 --> 00:57:35.480
+go out and skip whatever to see the
+
+00:57:31.039 --> 00:57:38.640
+eclipse um but you could also say I have
+
+00:57:35.480 --> 00:57:41.440
+work but I'm going to go anyways um so
+
+00:57:38.640 --> 00:57:45.559
+unlike entailments which are necessarily
+
+00:57:41.440 --> 00:57:47.559
+true if the premise is true implicatures
+
+00:57:45.559 --> 00:57:50.559
+are defeasible if we add additional
+
+00:57:47.559 --> 00:57:54.160
+context be it linguistic like in more
+
+00:57:50.559 --> 00:57:56.880
+words or like in just a social context
+
+00:57:54.160 --> 00:57:58.839
+we can uh change
+
+00:57:56.880 --> 00:58:00.760
+uh the thing that you're
+
+00:57:58.839 --> 00:58:03.760
+actually going to imply uh so one
+
+00:58:00.760 --> 00:58:05.960
+example is like let's say um you're a
+
+00:58:03.760 --> 00:58:10.520
+real estate agent and you're showing
+
+00:58:05.960 --> 00:58:12.960
+people houses uh you can say um oh this
+
+00:58:10.520 --> 00:58:14.599
+house has two bedrooms and what you're
+
+00:58:12.960 --> 00:58:17.319
+actually saying is like this house has
+
+00:58:14.599 --> 00:58:20.160
+exactly two bedrooms um but let's say
+
+00:58:17.319 --> 00:58:23.559
+you're hosting uh some guests at your
+
+00:58:20.160 --> 00:58:25.359
+house and you have exactly five bedrooms
+
+00:58:23.559 --> 00:58:26.720
+um but they only need two bedrooms to
+
+00:58:25.359 --> 00:58:28.359
+stay at your house
+
+00:58:26.720 --> 00:58:29.680
+um they ask you like oh do you have
+
+00:58:28.359 --> 00:58:31.240
+space for us how many bedrooms do you
+
+00:58:29.680 --> 00:58:33.559
+have and you could be like oh well I
+
+00:58:31.240 --> 00:58:35.480
+have two bedrooms like it doesn't have
+
+00:58:33.559 --> 00:58:37.520
+to be true that you have exactly two but
+
+00:58:35.480 --> 00:58:39.400
+you have two available bedrooms for them
+
+00:58:37.520 --> 00:58:41.880
+so this is an example of where like
extra-linguistic contexts can change
+00:58:41.880 --> 00:58:47.480
+what you're implying in your statement
+
+00:58:44.240 --> 00:58:49.760
+yeah and I also want to point out that
+
+00:58:47.480 --> 00:58:51.799
+um this is a super good example of how
+
+00:58:49.760 --> 00:58:54.200
+different varieties of linguistics
+
+00:58:51.799 --> 00:58:56.760
+interact with each other because Lindia
+
+00:58:54.200 --> 00:58:59.559
+changed the way she said I have two
+
+00:58:56.760 --> 00:59:01.200
+bedrooms in those two cases right and
+
+00:58:59.559 --> 00:59:03.839
+like the tone of your voice and
+
+00:59:01.200 --> 00:59:06.280
+stuff like that that's called prosody and
+
+00:59:03.839 --> 00:59:09.240
+like you can change what you mean you
+
+00:59:06.280 --> 00:59:10.799
+can change uh semantics through prosody and
+
+00:59:09.240 --> 00:59:11.640
+like I just thought this was too good of
+
+00:59:10.799 --> 00:59:15.039
+an
+
+00:59:11.640 --> 00:59:17.640
+example because like you can't really
+
+00:59:15.039 --> 00:59:19.799
+say yeah you'll change the way you
+
+00:59:17.640 --> 00:59:22.640
+say something if you want to like make
+
+00:59:19.799 --> 00:59:25.039
+it clear that the nuance is different
+
+00:59:22.640 --> 00:59:26.720
+yeah we have more examples of this later
+
+00:59:25.039 --> 00:59:28.240
+sorry no but like
+
+00:59:26.720 --> 00:59:31.000
+it's a different type of example but
+
+00:59:28.240 --> 00:59:36.039
+like it's good to have that pointed out
+
+00:59:31.000 --> 00:59:38.160
+um yeah so this kind of thing relates to
+
+00:59:36.039 --> 00:59:40.039
+uh the overall question of how do
+
+00:59:38.160 --> 00:59:41.680
+people actually conduct conversations
+
+00:59:40.039 --> 00:59:43.200
+and how do they achieve efficient
+
+00:59:41.680 --> 00:59:45.400
+communication like it's kind of a
+
+00:59:43.200 --> 00:59:47.480
+miracle that like as people we produce
+
+00:59:45.400 --> 00:59:49.520
+like a bunch of funky sounds and we know
+
+00:59:47.480 --> 00:59:52.160
+exactly or kind of approximately what
+
+00:59:49.520 --> 00:59:53.960
+those funky sounds mean right super cool
+
+00:59:52.160 --> 00:59:57.039
+um but how does this actually
+
+00:59:53.960 --> 00:59:59.359
+work um so there's a guy named Paul
+
+00:59:57.039 --> 01:00:01.680
+Grice and he asked this question and he
+
+00:59:59.359 --> 01:00:04.359
+came up with the idea that people are
+
+01:00:01.680 --> 01:00:06.760
+generally rational speakers I would hope
+
+01:00:04.359 --> 01:00:09.000
+um and to be a rational speaker you kind
+
+01:00:06.760 --> 01:00:10.839
+of expect and follow certain uh
+
+01:00:09.000 --> 01:00:13.480
+conventions in conversations which we
+
+01:00:10.839 --> 01:00:16.240
+refer to as maxims uh so the first is
+
+01:00:13.480 --> 01:00:18.599
+quantity uh which is to not undershare or
+
+01:00:16.240 --> 01:00:20.079
+overshare like if I ask you um what did
+
+01:00:18.599 --> 01:00:22.319
+you do yesterday you're not going to
+
+01:00:20.079 --> 01:00:24.280
+relay like minute-by-minute playthroughs
+
+01:00:22.319 --> 01:00:26.079
+of what you did from when you woke up to
+
+01:00:24.280 --> 01:00:27.440
+when you went to bed right but at the
+
+01:00:26.079 --> 01:00:30.880
+same time you're not going to undershare
+
+01:00:27.440 --> 01:00:32.559
+and be like I lived like you know
+
+01:00:30.880 --> 01:00:34.720
+like we kind of already know that you
+
+01:00:32.559 --> 01:00:36.839
+lived like tell me a bit more um
+01:00:34.720 --> 01:00:38.960
+the second is quality uh very straightforward don't lie
+
+01:00:36.839 --> 01:00:41.160
+or at least like don't relay information
+
+01:00:38.960 --> 01:00:44.000
+that you know to not be factually
+
+01:00:41.160 --> 01:00:45.880
+correct um the third is relation be
+
+01:00:44.000 --> 01:00:47.160
+relevant so same question like if I
+
+01:00:45.880 --> 01:00:49.319
+asked you what you did yesterday you're
+
+01:00:47.160 --> 01:00:51.079
+not going to go back to your diary flip
+
+01:00:49.319 --> 01:00:54.319
+back one year and be like this is what I
+
+01:00:51.079 --> 01:00:56.839
+did one year ago um and then finally
+
+01:00:54.319 --> 01:00:58.799
+manner be clear which is something I'm
+
+01:00:56.839 --> 01:01:01.039
+hoping I'm doing in this lecture but you
+
+01:00:58.799 --> 01:01:03.559
+know don't say things in such a way that
+
+01:01:01.039 --> 01:01:06.680
+it's going to be uh super difficult for
+
+01:01:03.559 --> 01:01:08.480
+your listener to parse um now it would
+
+01:01:06.680 --> 01:01:10.680
+be great if people followed these
+
+01:01:08.480 --> 01:01:11.799
+conventions all the time because then we
+
+01:01:10.680 --> 01:01:14.319
+would know exactly what people are
+
+01:01:11.799 --> 01:01:17.680
+trying to convey at every given moment
+
+01:01:14.319 --> 01:01:20.680
+but this is not always the case um so
+
+01:01:17.680 --> 01:01:23.680
+people can intentionally uh violate or
+
+01:01:20.680 --> 01:01:26.280
+flout um we're doing flouting first uh
+
+01:01:23.680 --> 01:01:28.319
+flout one of these maxims to convey like
+
+01:01:26.280 --> 01:01:30.160
+another layer of meaning but usually
+
+01:01:28.319 --> 01:01:31.520
+with the intention that the listener
+
+01:01:30.160 --> 01:01:34.160
+will actually understand what they're
+
+01:01:31.520 --> 01:01:37.760
+trying to convey so a very good example
+
+01:01:34.160 --> 01:01:40.799
+of this is sarcasm like
+
+01:01:37.760 --> 01:01:43.640
+um I can't come up with one on the fly
+
+01:01:40.799 --> 01:01:46.319
+um I'm totally not blanking right now
+
+01:01:43.640 --> 01:01:49.280
+that's like you know flouting um uh but
+
+01:01:46.319 --> 01:01:52.520
+you can also break maxims covertly like
+
+01:01:49.280 --> 01:01:55.520
+um outright lying this is done when like
+
+01:01:52.520 --> 01:01:57.400
+you are violating one of the maxims and
+
+01:01:55.520 --> 01:01:59.599
+you do not want the listener to know
+
+01:01:57.400 --> 01:02:01.680
+that you're violating a maxim so other
+
+01:01:59.599 --> 01:02:03.880
+things like half-truths which relate to
+
+01:02:01.680 --> 01:02:06.359
+quantity when you're undersharing so
+
+01:02:03.880 --> 01:02:08.359
+that like people don't uh
+
+01:02:06.359 --> 01:02:10.079
+come to a certain conclusion or
+
+01:02:08.359 --> 01:02:12.480
+with manner overcomplicating making
+
+01:02:10.079 --> 01:02:13.960
+your syntax really hard to parse uh like
+
+01:02:12.480 --> 01:02:15.720
+in a court proceeding when you're trying
+
+01:02:13.960 --> 01:02:19.200
+to convince the judge by overloading
+
+01:02:15.720 --> 01:02:19.200
+them with information or something like
+
+01:02:19.559 --> 01:02:27.039
+that um so in relation to like the
+
+01:02:24.559 --> 01:02:29.400
+conventions we have in conversation
+
+01:02:27.039 --> 01:02:31.839
+there's oftentimes multiple ways of
+
+01:02:29.400 --> 01:02:33.640
+saying the thing that we want to convey
+
+01:02:31.839 --> 01:02:35.480
+um but how do we actually choose which
+01:02:33.640 --> 01:02:38.279
+one of these options is the best one to
+
+01:02:35.480 --> 01:02:40.520
+pick um so for example we can choose
+
+01:02:38.279 --> 01:02:42.279
+between different grammatical structures
+
+01:02:40.520 --> 01:02:43.720
+uh like in writing certain fields may
+
+01:02:42.279 --> 01:02:46.079
+want you to construct everything in the
+
+01:02:43.720 --> 01:02:49.960
+passive voice like this experiment was
+
+01:02:46.079 --> 01:02:52.680
+conducted or uh the bacteria were uh
+
+01:02:49.960 --> 01:02:54.559
+incubated for x amount of time versus
+
+01:02:52.680 --> 01:02:57.839
+like in CS we use active voice
+
+01:02:54.559 --> 01:03:00.520
+constructions like um we uh modeled or
+
+01:02:57.839 --> 01:03:02.279
+we trained etc uh we can also vary
+
+01:03:00.520 --> 01:03:03.640
+intonation and stress patterns like what
+
+01:03:02.279 --> 01:03:06.119
+Graham brought up with the like two
+
+01:03:03.640 --> 01:03:07.559
+bedrooms example um and then we can also
+
+01:03:06.119 --> 01:03:09.359
+choose different vocabulary and
+
+01:03:07.559 --> 01:03:11.079
+constructions like I can say certain
+
+01:03:09.359 --> 01:03:13.359
+words that I think are more simple and
+
+01:03:11.079 --> 01:03:16.400
+easy to digest if someone is unfamiliar
+
+01:03:13.359 --> 01:03:18.240
+with a topic um so we have all of these
+
+01:03:16.400 --> 01:03:20.359
+different things to pick from how do we
+
+01:03:18.240 --> 01:03:23.319
+actually choose
+
+01:03:20.359 --> 01:03:25.440
+them um so this in large part comes
+
+01:03:23.319 --> 01:03:27.720
+from three different things first a
+
+01:03:25.440 --> 01:03:29.839
+speaker's knowledge of common ground uh
+
+01:03:27.720 --> 01:03:32.599
+like what are the implicit assumptions
+
+01:03:29.839 --> 01:03:35.119
+they are making about like what they
+
+01:03:32.599 --> 01:03:36.520
+and the listener like interact with what
+
+01:03:35.119 --> 01:03:39.359
+they have in their environment
+
+01:03:36.520 --> 01:03:41.200
+uh second
+
+01:03:39.359 --> 01:03:44.279
+what is their communicative goal like is
+
+01:03:41.200 --> 01:03:46.559
+your goal to acquire something is it to
+
+01:03:44.279 --> 01:03:48.520
+answer a question and then finally
+
+01:03:46.559 --> 01:03:50.160
+related to like the question what is
+
+01:03:48.520 --> 01:03:54.079
+actually desired by the
+
+01:03:50.160 --> 01:03:55.880
+listener um so for common ground uh as
+
+01:03:54.079 --> 01:03:57.680
+an example I heard this in a meeting the
+
+01:03:55.880 --> 01:04:00.920
+other day we can launch a bunch of small
+
+01:03:57.680 --> 01:04:03.839
+llamas like if I said this in a random
+
+01:04:00.920 --> 01:04:05.400
+Pittsburgh restaurant and like
+
+01:04:03.839 --> 01:04:07.079
+someone was eavesdropping and they have
+
+01:04:05.400 --> 01:04:09.599
+no idea about the current state of NLP
+
+01:04:07.079 --> 01:04:12.319
+they'll be like what are you doing why
+
+01:04:09.599 --> 01:04:14.839
+are you doing that um so this is a very
+
+01:04:12.319 --> 01:04:17.880
+illustrative example of like them not
+
+01:04:14.839 --> 01:04:20.240
+having like a common uh knowledge about
+
+01:04:17.880 --> 01:04:22.079
+what you're talking about um another
+
+01:04:20.240 --> 01:04:23.720
+example is like I've recently been
+
+01:04:22.079 --> 01:04:24.960
+watching The Bear and if you guys are
+
+01:04:23.720 --> 01:04:27.319
+familiar with that show it's like about
+
+01:04:24.960 --> 01:04:29.480
+this chef and he's
like in really high
+01:04:27.319 --> 01:04:32.000
+stakes high stress situations so he'll
+
+01:04:29.480 --> 01:04:34.240
+just like yell and if he really wants
+
+01:04:32.000 --> 01:04:36.359
+salt he'll be like salt versus like if
+
+01:04:34.240 --> 01:04:37.920
+I'm at dinner and I want someone to pass
+
+01:04:36.359 --> 01:04:39.119
+me salt I'll be really polite and be
+
+01:04:37.920 --> 01:04:41.520
+like hey could you please pass me the
+
+01:04:39.119 --> 01:04:44.359
+salt so the first one is a very urgent
+
+01:04:41.520 --> 01:04:46.839
+command um like their goal is to get
+
+01:04:44.359 --> 01:04:48.720
+salt as quickly as possible versus mine
+
+01:04:46.839 --> 01:04:51.000
+is like well I'm kind of chilling with
+
+01:04:48.720 --> 01:04:53.440
+my dinner I don't need salt immediately
+
+01:04:51.000 --> 01:04:57.400
+so I can make this very polite
+
+01:04:53.440 --> 01:04:59.920
+request um and then finally when
+
+01:04:57.400 --> 01:05:01.640
+a listener desires certain information
+
+01:04:59.920 --> 01:05:05.279
+we can change what we focus on in our
+
+01:05:01.640 --> 01:05:07.520
+answer um so if someone asks like who
+
+01:05:05.279 --> 01:05:09.760
+trains llamas I can be like I train
+
+01:05:07.520 --> 01:05:11.559
+llamas but if someone asks you like what
+
+01:05:09.760 --> 01:05:13.720
+do you do with llamas I can say I train
+
+01:05:11.559 --> 01:05:17.119
+llamas and if someone asks what do I
+
+01:05:13.720 --> 01:05:19.160
+train I can say I train llamas so like
+
+01:05:17.119 --> 01:05:22.319
+surface form they're all the same but
+
+01:05:19.160 --> 01:05:24.039
+how I say it changes to um
+
+01:05:22.319 --> 01:05:27.640
+like put focus on different parts of my
+
+01:05:24.039 --> 01:05:27.640
+answer depending on what the listener
+
+01:05:28.119 --> 01:05:33.079
+wants okay um we're going to go to
+
+01:05:30.599 --> 01:05:36.559
+something super computational um and
+
+01:05:33.079 --> 01:05:38.720
+this is rational speech acts um so Grice
+
+01:05:36.559 --> 01:05:41.240
+in his maxims says that like okay we
+
+01:05:38.720 --> 01:05:43.680
+kind of follow these conventions um
+
+01:05:41.240 --> 01:05:45.160
+but they're just very like nebulous
+
+01:05:43.680 --> 01:05:47.000
+conventions they don't really tell us
+
+01:05:45.160 --> 01:05:49.119
+about how to operationalize them in any
+
+01:05:47.000 --> 01:05:51.559
+computational setting um this is a
+
+01:05:49.119 --> 01:05:54.000
+computational theory uh for how
+
+01:05:51.559 --> 01:05:57.599
+communication may work um and it's a
+
+01:05:54.000 --> 01:05:59.160
+Bayesian model uh so I'd also like to
+
+01:05:57.599 --> 01:06:00.880
+mention there are other competing models
+
+01:05:59.160 --> 01:06:03.279
+and not everyone believes this but it is
+
+01:06:00.880 --> 01:06:07.920
+pretty useful um in modeling
+
+01:06:03.279 --> 01:06:10.119
+settings so uh RSA views communication
+
+01:06:07.920 --> 01:06:13.200
+um as a recursive reasoning process
+
+01:06:10.119 --> 01:06:15.480
+between a speaker and a listener so it's
+
+01:06:13.200 --> 01:06:18.480
+like am I thinking what you're thinking
+
+01:06:15.480 --> 01:06:21.520
+I'm thinking that you're thinking I'm
+
+01:06:18.480 --> 01:06:23.880
+thinking etc. etc. um and this is
+
+01:06:21.520 --> 01:06:27.760
+closely tied uh to another concept from
+
+01:06:23.880 --> 01:06:30.839
+psychology does anyone have a guess as to
+
+01:06:27.760 --> 01:06:30.839
+what that concept may
+01:06:31.960 --> 01:06:35.520
+be you know like what am I thinking
+
+01:06:34.160 --> 01:06:37.319
+you're thinking or like what are you
+
+01:06:35.520 --> 01:06:40.960
+thinking I'm
+
+01:06:37.319 --> 01:06:43.480
+thinking yeah yeah theory of mind um so
+
+01:06:40.960 --> 01:06:45.319
+yeah these like pragmatics and concepts
+
+01:06:43.480 --> 01:06:46.520
+like theory of mind do overlap um
+
+01:06:45.319 --> 01:06:48.000
+there's still like lots of debate on
+
+01:06:46.520 --> 01:06:50.039
+like how they actually interact and how
+
+01:06:48.000 --> 01:06:51.599
+they are actually operationalized like
+
+01:06:50.039 --> 01:06:54.520
+what are the psychological realities of
+
+01:06:51.599 --> 01:06:56.720
+these two things um but yeah they do
+
+01:06:54.520 --> 01:06:59.160
+overlap
+
+01:06:56.720 --> 01:07:02.039
+um so for simplicity we can consider a
+
+01:06:59.160 --> 01:07:05.039
+setting of a reference game so let's say
+
+01:07:02.039 --> 01:07:08.359
+like uh me and some other person we have
+
+01:07:05.039 --> 01:07:10.480
+like a basket of colored balls um with
+
+01:07:08.359 --> 01:07:11.960
+different properties I'm thinking of a
+
+01:07:10.480 --> 01:07:13.960
+ball I want to communicate that I'm
+
+01:07:11.960 --> 01:07:15.920
+thinking of a certain ball um what
+
+01:07:13.960 --> 01:07:18.079
+should I say so that the listener will
+
+01:07:15.920 --> 01:07:21.400
+pick the same ball this is a reference
+
+01:07:18.079 --> 01:07:23.640
+game um so as I said this is a
+
+01:07:21.400 --> 01:07:25.799
+recursive model and the base case for
+
+01:07:23.640 --> 01:07:29.720
+this recursion is a literal listener so
+
+01:07:25.799 --> 01:07:32.440
+they will select um something in their
+
+01:07:29.720 --> 01:07:34.440
+like set of references only considering
+
+01:07:32.440 --> 01:07:35.920
+what that person has said literally so
+
+01:07:34.440 --> 01:07:38.039
+like in this example the literal
+
+01:07:35.920 --> 01:07:39.319
+listener is like the innermost bubble
+
+01:07:38.039 --> 01:07:41.680
+there's three different items they can
+
+01:07:39.319 --> 01:07:43.880
+select from a smiley face with nothing
+
+01:07:41.680 --> 01:07:47.279
+on it a smiley face with glasses and a
+
+01:07:43.880 --> 01:07:50.520
+smiley face with glasses and a hat
+
+01:07:47.279 --> 01:07:52.799
+um they take as input something that the
+
+01:07:50.520 --> 01:07:55.640
+speaker said so the speaker said my
+
+01:07:52.799 --> 01:07:58.799
+friend has glasses the literal listener
+
+01:07:55.640 --> 01:08:00.880
+will pick the subset of items with
+
+01:07:58.799 --> 01:08:05.079
+glasses um so these are two out of the
+
+01:08:00.880 --> 01:08:08.160
+three items um so very basic um one
+
+01:08:05.079 --> 01:08:10.240
+level up is the speaker um the speaker
+
+01:08:08.160 --> 01:08:12.039
+at the lowest level will reason about
+
+01:08:10.240 --> 01:08:14.039
+potential interpretations by the
+
+01:08:12.039 --> 01:08:16.520
+listener with the base case being this
+
+01:08:14.039 --> 01:08:18.880
+literal listener and choose from a
+
+01:08:16.520 --> 01:08:20.679
+selection of utterances such that the
+
+01:08:18.880 --> 01:08:25.199
+listener is most likely to pick the
+
+01:08:20.679 --> 01:08:27.319
+correct option
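[Editor's note: one step of this recursion for the three-face game is only a few lines of arithmetic. A toy sketch assuming a uniform prior and speaker rationality alpha = 1; fuller RSA treatments add an alpha temperature and utterance costs.]

```python
# Toy Rational Speech Acts sketch for the reference game described above.
import numpy as np

utterances = ["smiling", "glasses", "hat"]
# Literal truth values: rows = utterances, cols = referents
# (plain face, face with glasses, face with glasses and a hat).
truth = np.array([
    [1., 1., 1.],   # "my friend is smiling" is true of all three
    [0., 1., 1.],   # "my friend has glasses"
    [0., 0., 1.],   # "my friend has a hat"
])
prior = np.ones(3) / 3

# Literal listener L0: condition the prior on the utterance being true.
L0 = truth * prior
L0 /= L0.sum(axis=1, keepdims=True)

# Pragmatic speaker S1: choose utterances in proportion to how well L0
# would recover the intended referent (alpha = 1, no utterance costs).
S1 = L0 / L0.sum(axis=0, keepdims=True)

# Pragmatic listener L1: Bayesian reasoning about S1's choice.
L1 = S1 * prior
L1 /= L1.sum(axis=1, keepdims=True)

# Hearing "glasses", L1 favors the glasses-only face (roughly 0.69 vs 0.31),
# since a speaker who meant the hat-wearer would have said "hat".
print(L1[utterances.index("glasses")])
```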
+01:08:25.199 --> 01:08:29.520
+so if the speaker was thinking about
+
+01:08:27.319 --> 01:08:31.120
+okay how can I maximally identify the
+
+01:08:29.520 --> 01:08:33.400
+smiley face with glasses they're not
+
+01:08:31.120 --> 01:08:35.480
+going to say my friend is smiling
+
+01:08:33.400 --> 01:08:38.759
+because all of them are smiling so that
+gives no additional information
+
+01:08:35.480 --> 01:08:40.640
+um and then the other
+
+01:08:38.759 --> 01:08:42.000
+option is to say okay my friend has a
+
+01:08:40.640 --> 01:08:43.239
+hat but that's not actually what they
+
+01:08:42.000 --> 01:08:46.440
+want to refer to so they're going to
+
+01:08:43.239 --> 01:08:47.640
+throw that option out um so from all the
+
+01:08:46.440 --> 01:08:50.359
+options that they have they're like okay
+
+01:08:47.640 --> 01:08:52.400
+glasses might be the best one one level
+
+01:08:50.359 --> 01:08:55.799
+up from that is a listener who is
+
+01:08:52.400 --> 01:08:58.719
+reasoning about potential states of the
+
+01:08:55.799 --> 01:09:00.880
+speaker who is reasoning about the base
+
+01:08:58.719 --> 01:09:02.440
+listener and thinking okay the speaker
+
+01:09:00.880 --> 01:09:04.159
+is attempting to be maximally
+
+01:09:02.440 --> 01:09:06.640
+informative out of all the things that
+
+01:09:04.159 --> 01:09:08.000
+they could tell me um why would they
+
+01:09:06.640 --> 01:09:09.920
+pick this one to be maximally
+
+01:09:08.000 --> 01:09:12.480
+informative and then we can iterate over
+
+01:09:09.920 --> 01:09:14.239
+this process as many times as we want so
+
+01:09:12.480 --> 01:09:15.440
+we go from base listener to speaker to
+
+01:09:14.239 --> 01:09:16.839
+listener to speaker to listener to
+
+01:09:15.440 --> 01:09:20.120
+speaker ad
+
+01:09:16.839 --> 01:09:22.319
+infinitum um this is actually used I think
+
+01:09:20.120 --> 01:09:25.640
+in some of Daniel Fried's work on like
+
+01:09:22.319 --> 01:09:29.359
+vision QA or uh visual
+
+01:09:25.640 --> 01:09:31.480
+captioning I believe um so it is pretty
+
+01:09:29.359 --> 01:09:35.000
+interesting and like is an information
+
+01:09:31.480 --> 01:09:35.000
+theory based perspective on
+
+01:09:35.199 --> 01:09:40.239
+pragmatics um so yeah these are some
+
+01:09:38.279 --> 01:09:41.759
+interesting things that I did not have
+
+01:09:40.239 --> 01:09:44.319
+time for and lots of things that remain
+
+01:09:41.759 --> 01:09:47.080
+to be studied so as I mentioned earlier
+
+01:09:44.319 --> 01:09:48.920
+in like the intro slides there's um
+
+01:09:47.080 --> 01:09:50.199
+other fields like neurolinguistics
+
+01:09:48.920 --> 01:09:51.880
+psycholinguistics and
+
+01:09:50.199 --> 01:09:54.320
+sociolinguistics as well as linguistic
+
+01:09:51.880 --> 01:09:56.679
+typology um and I think recently
+
+01:09:54.320 --> 01:09:58.199
+especially there's a lot of overlap between
+
+01:09:56.679 --> 01:09:59.679
+questions in some of these more applied
+
+01:09:58.199 --> 01:10:02.120
+fields and current
+
+01:09:59.679 --> 01:10:04.040
+NLP um so here are some example
+
+01:10:02.120 --> 01:10:06.960
+questions that uh I think would be
+
+01:10:04.040 --> 01:10:09.000
+interesting to explore um first is that
+
+01:10:06.960 --> 01:10:11.199
+humans seem to be really data efficient
+
+01:10:09.000 --> 01:10:12.960
+in terms of their linguistic input um
+
+01:10:11.199 --> 01:10:14.600
+Chomsky even had a hypothesis for this
+
+01:10:12.960 --> 01:10:17.239
+called poverty of the stimulus he's like
+
+01:10:14.600 --> 01:10:20.239
+we must have grammar in our brains
+
+01:10:17.239 --> 01:10:21.920
+imbued at birth because there are so
+
+01:10:20.239 --> 01:10:24.400
+few negative examples that we get
+
+01:10:21.920 --> 01:10:26.000
+like how do
+[01:10:24] Whether or not you believe that, it is quite true that literal linguistic input is a lot more sparse for humans than it is for a language model trained on trillions and trillions of tokens, which is just not reasonable as a model of acquisition. So how can we imbue this type of data efficiency in models, and can we learn something from human processes and human learning to do that? One more specific question from this is: how do we learn to generalize from linguistic exemplars? Like, if you think about past-tense inflection in English, there are some that are irregular — "I talk" versus "I talked," but if I say "I go," it's not "I goed," it's "I went." How do we figure out when to create a rule, given that there are exceptions? Do we create a rule at all, and how many exemplars do we need to create a rule?
+
+[01:11:23] Another one is very broad: how can we make NLP systems that work better for everyone, including people who speak non-standard dialects and marginalized languages? In sociolinguistics this is something that's studied — the type of variation that you would have within communities and across communities. When we have characterizations of why certain speakers would say things in a different way, or why that change may occur, we can have a more informed data collection strategy for NLP systems, and we can also talk about why certain systems might not work well for some speakers by pointing to the actual linguistic variations that occur, as opposed to just saying, oh, they're different.
+
+[01:12:08] And then finally, one thing that I kind of touched upon: nowadays people are making a lot of comparisons — oh, language models can do this, humans can do this; are language models superhuman?
+[01:12:23] But a lot of these are questions around evaluation: how do we actually make fair comparisons between human and model language competence? How do we test for this type of linguistic knowledge? I think this is a very ripe, active field, and it would be great if you were interested in exploring it more. Yeah, I think that is actually all my content for today — so I made it.
+
+[01:12:50] So yeah, we actually have a little bit of time for questions about any of this stuff here, if anybody has any. We had a few questions along the way, but there's a lot of — yeah, anybody have things you wanted to ask? Okay, we can also take questions up front if you'd like to ask privately. Oh yeah, got one there.
+
+[01:13:26] (Question) These slides are actually different?
+
+[01:13:28] Oh yeah — I might have uploaded an old version, so I'll upload the ones here, and we'll also upload all the references that she referred to.
+
+[01:13:41] (Question) So how do we take the language models we have today — doing data preparation, or what kind of model do you need to adapt in order to accommodate those kinds of linguistic ideas?
+
+[01:14:00] Yeah — I'm going to talk about multilingual NLP, where we cover more of this, next time, although I'm not going to talk about it a whole lot. If you have any comments about that, you can.
+
+[01:14:15] Yeah, I think the point Graham made about tokenization — that's a really big one. Some people have shifted to byte-based models for multilingual settings, but also if you have scripts that are just not Roman scripts, or are very uncommon in your training data. I think it's a very broad question; it really depends on what you want your model to be a model of.
+[01:14:48] Like, if you want your model to be a model of human language and cognition, then you would want to make sure your data scale is similar and that your inputs are similar — like people who train on child-directed speech, for example, to study whether language models can acquire linguistic capabilities similar to children at a certain age. If you want an NLP system that's more culturally aware, or that can handle non-standard dialects and non-standard ways of saying things, then I think that would be more on the data collection side: figuring out what the appropriate balance of data is, what the sources were, and also the ethical considerations of where you're sourcing that data from. I don't know if you had any specific tasks or examples in mind.
+
+[01:15:35] Yeah, I can also follow up a little bit. When you saw the tour of large language models class that I gave, one thing you might have noticed is that almost nobody is actually messing around with the architecture of language models anymore — they're all very, very similar. I think the reason why is that people are training on more and more data, and you can mess around with architectures, but the differences between architectures grow smaller as you train on more data with larger architectures. Also, Transformers scale well, and all the GPU-based tooling is built around them, and stuff like that. So because of that, what do people mess around with? They mess around with data. What do they look at when they mess around with data? They look at evaluations. And how do people make evaluations? Well, a lot of evaluations were designed based on linguistic principles.
+
+[01:16:32] So I feel like, compared to 20 years ago, the connection — well, 20 years ago, for example, there were people working on things where you would actually parse the input,
+and based on a parse tree of the input you would extract semantic structure, and then you would manipulate that to do translation or something like that. I feel like we're not doing that anymore, because we have a lot of end-to-end systems, and so I think a lot of this goes into guiding evaluation instead. The other thing is that it's still really important for multilingual work, where we don't have a lot of data in the other languages. If you speak any language that's not English or Chinese — or actually, if you speak any language that's not English — and you use many of the open-source language models, you'll notice they're not even good at syntax: they still mess up syntax in non-English languages. In English they mostly don't, but sometimes do. And if you go up to the really big models like GPT-4 and such, they still mess up syntax in lower-resource languages. So if there are ways we can go in and enforce syntax — and semantics is even harder, because the dependencies are longer and more complex — I think there are still modeling things to be done there too.
+
+[01:17:52] (Question) My question is kind of related to what you just said, about how architectures are not really frozen, but they're kind of set at this point. Why is scalability the only reason that more drastic experimentation with architecture isn't happening?
+
+[01:18:09] So, I don't think they're entirely set — Mamba is a good example of that, and RWKV and other things like this — so there is still some innovation going on in architectures. I think we don't have a good enough pipeline from experiments on smaller models to larger models. Like, what we would really like to be able to do — because training a state-of-the-art LM costs like 10 to a hundred million dollars, and you don't want to run that over and over again to try different architectures.
+So what we really need is some way to do cheap experimentation with new model architectures that are better in some way, and then gradually scale it up.
+
+[01:18:58] And — sorry, very quickly — I just went to an open-source generative AI workshop, and there's an architecture called RWKV; it's not from academia, it's from the open-source community. They had a really interesting presentation — it's also on YouTube now if you want to see it — where they basically say they have the whole community experimenting on 500-million-parameter models; then once they have a nice-looking 500-million model, they press a button on a larger experiment and run it at 1.3 billion, and then seven billion, and they gradually funnel down to the ones that they can actually run at the biggest parameter sizes. I think not many people can do that effectively right now, and I think that's a big reason why all of the really competitive models look exactly like LLaMA 2, basically.
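+A rough sketch of that kind of funnel, with made-up sizes, survivor counts, and a stubbed-out training function (none of this is the RWKV community's actual tooling):
+
+def train_and_eval(variant: str, n_params: float) -> float:
+    # Hypothetical stand-in for a real training + evaluation pipeline;
+    # returns a validation score (higher is better).
+    raise NotImplementedError
+
+def funnel(variants, sizes=(0.5e9, 1.3e9, 7e9), keep=(8, 2, 1)):
+    # Train every surviving variant at each scale, keep only the top-k,
+    # and promote the survivors to the next parameter size.
+    survivors = list(variants)
+    for n_params, k in zip(sizes, keep):
+        scored = {v: train_and_eval(v, n_params) for v in survivors}
+        survivors = sorted(scored, key=scored.get, reverse=True)[:k]
+    return survivors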
+[01:19:49] (Question) So I guess I was asking: at what point is it something novel we're doing with the architecture, versus something we're doing with the input — is it just buying performance with parameters?
+
+[01:20:00] Yeah — you can buy some performance with parameters, but you can buy more performance, more cheaply, with better data, better architectures, and stuff like that. There's actually a bet between Sasha Rush and Jonathan Frankle — Sasha Rush being at Cornell and Hugging Face, and Jonathan Frankle now being at Databricks — where Jonathan Frankle said Transformers (like "attention is all you need") are all you need, and Sasha Rush said they aren't, and in three years we're going to see which ones are on top of the leaderboard. So I'm really looking forward to the result of that bet: in three years, are Transformers going to be up top, or something else? We'll see. I don't want to keep everybody for too long, but thank you for the question.
diff --git a/CMU Advanced NLP 2024 (23) Multilingual NLP/CMU Advanced NLP 2024 (23) Multilingual NLP.mp4 b/CMU Advanced NLP 2024 (23) Multilingual NLP/CMU Advanced NLP 2024 (23) Multilingual NLP.mp4
new file mode 100644
index 0000000000000000000000000000000000000000..6b02602ea1c68bee4209290a854c26699f088c60
--- /dev/null
+++ b/CMU Advanced NLP 2024 (23) Multilingual NLP/CMU Advanced NLP 2024 (23) Multilingual NLP.mp4
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d5feef605da23edea5ef5b8f7cda06e62b653f4a00c030873ad4fdff0d54034
+size 77477775
diff --git a/CMU Advanced NLP 2024 (23) Multilingual NLP/metadata.json b/CMU Advanced NLP 2024 (23) Multilingual NLP/metadata.json
new file mode 100644
index 0000000000000000000000000000000000000000..9488ea82f09de562a97a67f8c6595d84d19d2495
--- /dev/null
+++ b/CMU Advanced NLP 2024 (23) Multilingual NLP/metadata.json
@@ -0,0 +1,4 @@
+{
+  "url": "https://www.youtube.com/watch?v=1Y9qermdf8I",
+  "title": "CMU Advanced NLP 2024 (23) Multilingual NLP"
+}
\ No newline at end of file
diff --git a/CMU Advanced NLP 2024 (23) Multilingual NLP/transcript.srt b/CMU Advanced NLP 2024 (23) Multilingual NLP/transcript.srt
new file mode 100644
index 0000000000000000000000000000000000000000..1a77c4f25afb78f14ce44e156a371dd7edd27faf
--- /dev/null
+++ b/CMU Advanced NLP 2024 (23) Multilingual NLP/transcript.srt
@@ -0,0 +1,6907 @@
+[00:00:00] Okay. I'll talk about multilingual NLP, and multilingual NLP is NLP in many different languages. There are specifically two varieties of multilingual NLP. The first one is monolingual NLP in multiple languages, and what I mean by this is basically that any task you could do in English, you could do in languages that are not English — so this would be question answering, sentiment analysis, chatbots, code generation, whatever else. The other one is some variety of crosslingual NLP, and this is specifically tasks that handle more than one language at once, so that would be things like machine translation, crosslingual QA, etc. Crosslingual QA is, for example, answering questions where the source material is in a different language: if I ask a question in Japanese, it can go find some information in English and answer the question in Japanese.
+[00:01:09] So, right now, many of our systems are trained using large datasets, and probably by far the biggest challenge in multilingual NLP is this paucity of data in many of the languages that we care about. This particular example is of Wikipedia articles — how many Wikipedia articles there are in different languages. You can see that it drops off very quickly: number one is English, of course, and after the first 20 to 30 languages there are just very few articles in any language that you want to use. It looks similar for general text on the internet, but it's not quite as sharp — there is still this very long-tail distribution, but it's not quite as bad as in Wikipedia. One other thing to note is that there's even less annotated data, of course, because annotated data is a subset of monolingual data, and so that means we have less data for machine translation, for sequence labeling, dialogue, question answering, instruction following, and other things like that.
+
+[00:02:24] Another thing that makes multilingual NLP difficult — we just had a lecture on linguistics — is that not all languages are the same. I would say that this is the smaller problem, but it's still a problem, and it can cause issues when you're trying to process something that is not English with models that were mostly trained on English. To give some examples: morphology is one of them. We talked about how we can have things like infix morphology, where you change the inner letters of a word.
+In English we mostly don't have that — we have goose/geese and other things like this, but it's very rare for us to morphologically change the middle letters of a word. In other languages you have that all the time, and that breaks SentencePiece, for example, so that's one issue.
+
+[00:03:26] Another thing is accents and diacritics. An accent or a diacritic is basically where you have another mark on top of a character to indicate its pronunciation or tone. Does anybody speak a language that uses lots of accents or diacritics? Yeah — Spanish? Okay, yeah, that's a good one. Any other ones? Yeah, French has a little bit. There are some that are even richer: to give one example, Yoruba, which is spoken widely in Nigeria, has lots of diacritics, but very often the language is written without them, so that adds a lot of ambiguity and lexical diversity to the language, and stuff like this as well. Pinyin for Chinese kind of has them too — it has a number at the end of the syllable that indicates the tone, so it's kind of similar.
+
+[00:04:30] Other things are different scripts, such as the CJK — Chinese, Japanese, Korean — scripts. Chinese script: they also use Roman characters, but they have lots of ideographs, where the characters mean things as opposed to just indicating the pronunciation. Japanese has both ideographs and regular characters that have pronunciations. Korean is all pronunciation, but they stick three pronunciations together in a single character.
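+You can see that "three letters stuck together" structure directly with Unicode normalization in Python — NFD decomposes each precomposed Hangul syllable block back into its individual jamo:
+
+import unicodedata
+
+word = "한국"  # "Korea": two syllable blocks
+jamo = unicodedata.normalize("NFD", word)  # decompose blocks into jamo
+print(len(word), "->", len(jamo))  # 2 characters -> 6 jamo
+print([unicodedata.name(c) for c in jamo])
+# HANGUL CHOSEONG HIEUH, HANGUL JUNGSEONG A, HANGUL JONGSEONG NIEUN, ...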
down there does + +122 +00:05:27,400 --> 00:05:37,319 +anyone know Korean yeah + +123 +00:05:30,520 --> 00:05:37,319 +there's is there a line no okay cool + +124 +00:05:39,600 --> 00:05:50,960 +what oh I this is Korea right + +125 +00:05:43,840 --> 00:05:50,960 +yeah it has a okay okay yeah so so + +126 +00:05:53,639 --> 00:06:00,520 +that's and then this is + +127 +00:05:57,120 --> 00:06:03,600 +the yeah okay so so this is is um this + +128 +00:06:00,520 --> 00:06:06,840 +is Korea H + +129 +00:06:03,600 --> 00:06:12,280 +could okay thank you and then this + +130 +00:06:06,840 --> 00:06:14,960 +is Yeah so basically this is like h a n + +131 +00:06:12,280 --> 00:06:21,840 +and then kind of like K + +132 +00:06:14,960 --> 00:06:23,360 +kg i u k g um and so like there's + +133 +00:06:21,840 --> 00:06:25,120 +actually kind of three characters in + +134 +00:06:23,360 --> 00:06:26,479 +each one of these characters so it's + +135 +00:06:25,120 --> 00:06:29,800 +kind of like an alphabet but they're all + +136 +00:06:26,479 --> 00:06:31,240 +stuck together and if you um if you deal + +137 +00:06:29,800 --> 00:06:33,080 +with this on a computer and you're not + +138 +00:06:31,240 --> 00:06:35,199 +very smart about it basically it will + +139 +00:06:33,080 --> 00:06:37,360 +just look like a Chinese character and + +140 +00:06:35,199 --> 00:06:41,120 +you it will be segmented with SE + +141 +00:06:37,360 --> 00:06:43,520 +sentence piece in a you know uh in uh + +142 +00:06:41,120 --> 00:06:48,160 +weird ways if you use bites or or + +143 +00:06:43,520 --> 00:06:49,160 +whatever so um each one has their own uh + +144 +00:06:48,160 --> 00:06:51,880 +their own + +145 +00:06:49,160 --> 00:06:54,319 +peculiarities um another thing is + +146 +00:06:51,880 --> 00:06:56,039 +dialectal language so sometimes people + +147 +00:06:54,319 --> 00:06:59,160 +speak different dialects and that can + +148 +00:06:56,039 --> 00:07:01,879 +throw things off and also lack of uh + +149 +00:06:59,160 --> 00:07:04,759 +form writing system so for a lot of + +150 +00:07:01,879 --> 00:07:07,280 +languages um there isn't really + +151 +00:07:04,759 --> 00:07:08,800 +standardized writing and people you know + +152 +00:07:07,280 --> 00:07:11,560 +sometimes write in the native script + +153 +00:07:08,800 --> 00:07:14,800 +sometimes write in Roman script um and + +154 +00:07:11,560 --> 00:07:17,800 +other things like that so these English + +155 +00:07:14,800 --> 00:07:21,759 +is relatively standardized + +156 +00:07:17,800 --> 00:07:25,599 +relatively um you know relatively poor + +157 +00:07:21,759 --> 00:07:27,039 +morphology or simple morphology um + +158 +00:07:25,599 --> 00:07:28,400 +doesn't have a whole lot of characters + +159 +00:07:27,039 --> 00:07:30,319 +and stuff like that so it has a lot of + +160 +00:07:28,400 --> 00:07:33,599 +simplifying things that you you uh you + +161 +00:07:30,319 --> 00:07:35,280 +have to deal with when you work in other + +162 +00:07:33,599 --> 00:07:37,479 +languages + +163 +00:07:35,280 --> 00:07:39,639 +so how do we start attacking + +164 +00:07:37,479 --> 00:07:42,840 +multilingual problems so like one really + +165 +00:07:39,639 --> 00:07:46,639 +huge it um thing over the past five + +166 +00:07:42,840 --> 00:07:48,599 +years or seven years or so is that um we + +167 +00:07:46,639 --> 00:07:50,080 +can learn models that process multiple + +168 +00:07:48,599 --> 00:07:51,319 +languages and all the languages can + +169 +00:07:50,080 --> 00:07:52,960 +learn from each other and that really + 
+170 +00:07:51,319 --> 00:07:58,360 +pulls up the languages that don't have a + +171 +00:07:52,960 --> 00:08:00,120 +lot of data and um so this is a trans a + +172 +00:07:58,360 --> 00:08:01,840 +variety of transfer learning + +173 +00:08:00,120 --> 00:08:03,360 +and this allows you to improve accuracy + +174 +00:08:01,840 --> 00:08:05,360 +on Lower resource Languages by + +175 +00:08:03,360 --> 00:08:07,759 +leveraging data in higher resource + +176 +00:08:05,360 --> 00:08:10,360 +languages um another really big + +177 +00:08:07,759 --> 00:08:11,479 +advantage of uh multilingual learning + +178 +00:08:10,360 --> 00:08:14,720 +and learning models that work in + +179 +00:08:11,479 --> 00:08:18,080 +multiple languages is practical which is + +180 +00:08:14,720 --> 00:08:21,240 +before Google translate would deploy a + +181 +00:08:18,080 --> 00:08:23,520 +100 models or maybe even 200 models + +182 +00:08:21,240 --> 00:08:25,720 +because if they were translating into + +183 +00:08:23,520 --> 00:08:28,319 +English and then out of English they + +184 +00:08:25,720 --> 00:08:31,360 +would have one model for like English to + +185 +00:08:28,319 --> 00:08:33,719 +Chinese English to Japanese English to + +186 +00:08:31,360 --> 00:08:36,519 +French English to Spanish and then + +187 +00:08:33,719 --> 00:08:37,880 +Spanish to English French to English and + +188 +00:08:36,519 --> 00:08:39,479 +so they would have to deal with like + +189 +00:08:37,880 --> 00:08:40,919 +deploying all of these models having + +190 +00:08:39,479 --> 00:08:43,320 +different servers that served all of + +191 +00:08:40,919 --> 00:08:45,560 +them and stuff like this um now you can + +192 +00:08:43,320 --> 00:08:47,800 +just have one big model uh that handles + +193 +00:08:45,560 --> 00:08:49,680 +all of the languages at once and deploy + +194 +00:08:47,800 --> 00:08:51,640 +it which allows you to make that model + +195 +00:08:49,680 --> 00:08:56,120 +bigger itself because you need to deploy + +196 +00:08:51,640 --> 00:08:58,920 +it fewer times and um also uh you know + +197 +00:08:56,120 --> 00:09:00,040 +it can benefit from transfer learning so + +198 +00:08:58,920 --> 00:09:01,839 +because of the + +199 +00:09:00,040 --> 00:09:04,519 +uh a lot of places that handle + +200 +00:09:01,839 --> 00:09:07,240 +multilingual stuff are are trans like + +201 +00:09:04,519 --> 00:09:07,240 +changing to this + +202 +00:09:08,320 --> 00:09:13,000 +Paradigm + +203 +00:09:09,839 --> 00:09:15,200 +um in terms of like let's say you want + +204 +00:09:13,000 --> 00:09:18,959 +to handle a different language other + +205 +00:09:15,200 --> 00:09:20,800 +than English um this is a highle uh + +206 +00:09:18,959 --> 00:09:23,560 +multilingual learning + +207 +00:09:20,800 --> 00:09:25,760 +flowchart that you can kind of follow to + +208 +00:09:23,560 --> 00:09:28,959 +decide which methodology you could be + +209 +00:09:25,760 --> 00:09:31,079 +using and this is from the point of view + +210 +00:09:28,959 --> 00:09:35,720 +of wanting to get the best possible + +211 +00:09:31,079 --> 00:09:37,360 +model um so first is there sufficient + +212 +00:09:35,720 --> 00:09:39,240 +labeled data in the target language and + +213 +00:09:37,360 --> 00:09:41,160 +when I say sufficient you know obviously + +214 +00:09:39,240 --> 00:09:44,320 +more is always better but you know a + +215 +00:09:41,160 --> 00:09:46,320 +reasonably large amount uh from the + +216 +00:09:44,320 --> 00:09:48,160 +point of view of you being able to train + +217 +00:09:46,320 --> 
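+A small illustration of the diacritics point from above (e.g., Yoruba often being written without them): stripping combining marks after Unicode decomposition collapses distinct words onto the same surface form. The example words here are just convenient stand-ins:
+
+import unicodedata
+
+def strip_diacritics(text: str) -> str:
+    # Decompose into base characters + combining marks, then drop the marks.
+    decomposed = unicodedata.normalize("NFD", text)
+    return "".join(c for c in decomposed if not unicodedata.combining(c))
+
+print(strip_diacritics("résumé"))  # -> "resume"
+print(strip_diacritics("año"))     # -> "ano" (a different Spanish word!)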
+[00:07:35] So how do we start attacking multilingual problems? One really huge thing over the past five to seven years or so is that we can learn models that process multiple languages, and all the languages can learn from each other, and that really pulls up the languages that don't have a lot of data. This is a variety of transfer learning, and it allows you to improve accuracy on lower-resource languages by leveraging data in higher-resource languages.
+
+[00:08:07] Another really big advantage of multilingual learning — of models that work in multiple languages — is practical. Before, Google Translate would deploy 100 models, or maybe even 200 models, because if they were translating into English and then out of English, they would have one model each for English to Chinese, English to Japanese, English to French, English to Spanish, and then Spanish to English, French to English, and so on — and they would have to deal with deploying all of these models, having different servers that served all of them, and stuff like this. Now you can just have one big model that handles all of the languages at once and deploy it, which allows you to make that model itself bigger, because you need to deploy it fewer times — and also it can benefit from transfer learning. Because of this, a lot of places that handle multilingual stuff are changing to this paradigm.
+
+[00:09:09] In terms of — let's say you want to handle a different language other than English — this is a high-level multilingual learning flowchart that you can follow to decide which methodology you could be using, from the point of view of wanting to get the best possible model. First: is there sufficient labeled data in the target language? When I say sufficient — obviously more is always better, but a reasonably large amount from the point of view of being able to train a good system for the task you're interested in. For machine translation that's something like at least a million sentences; for something like classification it might only be a thousand sentences or so. The second question is: must you serve many languages with strict memory constraints? If the answer is yes, you can do multilingual models; if the answer is no, you could still do a multilingual model, but you could also adapt to the specific language you're interested in processing, and do better by doing that adaptation. Then, if you don't have sufficient labeled data in the target language: if you have access to people who can provide that data for you, and you're serious about building a model for that language, you can just ask people to annotate things. There's a lot of work on zero-shot adaptation, which is essentially trying to get models to work well on new languages with no annotated data, but in reality, if you're ever going to be deploying a model to users, you might as well label a thousand examples of whatever task you want to solve and train on those examples — that's far easier than doing zero-shot adaptation. One caveat: if you're trying to show somebody the possibility of something working — for example, you have a nice speech recognition system that works in many different languages, and you want to show people in a new country that it could possibly work — then applying it zero-shot, without using any training data, might be a good way to convince them that they should work with you. But in the end, if you care enough about it, you'll be annotating data. So this is my general flow for building a usable system.
+
+[00:11:31] Cool — any questions so far?
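+The flowchart logic might be sketched like this — the thresholds and option strings are illustrative, not taken from the slides:
+
+def choose_strategy(n_labeled: int, task: str, strict_memory: bool,
+                    can_annotate: bool) -> str:
+    # "Sufficient" is task-dependent: roughly 1M sentence pairs for MT,
+    # maybe ~1k labeled examples for classification.
+    sufficient = n_labeled >= (1_000_000 if task == "mt" else 1_000)
+    if sufficient:
+        if strict_memory:
+            return "serve one multilingual model for all languages"
+        return "multilingual model, then adapt it to the target language"
+    if can_annotate:
+        return "annotate ~1k examples and train (easier than zero-shot)"
+    return "zero-shot adaptation (mainly to demonstrate feasibility)"
+
+print(choose_strategy(5_000, "classification", strict_memory=False,
+                      can_annotate=True))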
+[00:11:36] Okay, so let's go into multilingual language modeling. Multilingual language modeling, in the very simplest sense, is just: train a language model on lots of data. If you're just training GPT or something like that, you just throw all of the data in there, you train your subword vocabularies over all of the data, and something will happen. You might want to do more than that, though, if you really care about performance.
+
+[00:12:11] Anyway, if we're doing multilingual modeling, there are two varieties. The first one is if you have multilingual inputs, and if you have multilingual inputs you really don't need to do anything to get it to work at least somewhat, as long as your base model can handle multiple languages. To give an example: let's say I want to translate into English. I can just throw in these sentences without saying anything about their language and say "please translate this into English," and GPT will do a reasonably good job of translating into English for me — I don't even say this is a French sentence or this is a Japanese sentence. However — this may be very obvious — if you have multilingual output, at the very least you need to tell the model which language it should be generating in. There are different ways to do this, but basically you can add a tag or a prompt about the target language for generative tasks. Originally this is what was done in Google Translate — it's probably what they're doing right now: they basically added a single tag to the beginning that said French or Japanese, then the sentence they wanted to translate, and it gave them the output.
+
+[00:13:30] However, there are a few difficulties in multilingual learning. The first one is the curse of multilinguality. Actually, if you look at a lot of the open-source models, most of them are only trained seriously on English — this includes things like LLaMA and most of the language models that I talked about — and I think the reason why is this curse of multilinguality: given a fixed-size model, the per-language capacity decreases as we increase the number of languages. This is an older example from the XLM-R paper, which was a masked language model that was used on a bunch of different languages, but what you can see is that as they increase the number of languages, the scores go down for the high-resource languages as you get up to like 100 languages. For the low-resource languages, the scores momentarily go up, because you're now benefiting from transfer learning from other languages, but then they start to go down again as the model capacity runs out, and you essentially do worse. This is an older paper, and it shows it more convincingly, but there are also some other examples: there was a very big effort by Hugging Face called BLOOM, which was a model trained to be very multilingual — they trained a 175-billion-parameter model to try to make it very strong in a lot of different languages — and it just ended up not being very good on English and being even worse on the other languages, because they kind of overreached; language modeling abilities at the time were just not good enough. I think things have gotten significantly better now, but still, you'll notice that if you put more emphasis on multilingual data, you decrease the number of tokens that you see in English, for example, and that could cause accuracy to go down — it comes down to the number of tokens and the capacity available.
+
+[00:15:38] (Question) So does that happen because we see transfer learning stop, or is there another underlying reason?
+
+[00:15:52] I think there are two reasons. The first reason is that if you have a fixed compute budget, then you're fundamentally going to be limited in the number of things that you can see. And actually there's a nice paper by my student Patrick — I'm allowed to call it nice because I wasn't involved in it; I should have put this on the slides, actually, or maybe I did. Basically — sorry, I'd have to go back and take some time to look at the figures to explain them accurately, but one of the findings in this paper is that the amount of compute you spend on any individual language kind of corresponds to how well you do on that language, so if you're spending more time on one language, you do better on that language. And there's the idea of scaling laws that I talked about before, where your effective capacity — how good the model becomes — is a function of the parameter size and the amount of compute you spend, and if you're doing multiple languages, then you need to split your parameters between languages to some extent. One of the interesting findings from this paper is that you would expect more benefit from sharing than you actually get — you actually get relatively little benefit from sharing if you have enough data to train on. In many of the cases when we're training models, we're still bottlenecked by data for the low-resource languages, but for high-resource languages we're usually bottlenecked by compute, so if you allocate more compute to other languages, then you're going to allocate less compute to English, for example — and a lot of the benchmarks are in English, for better or worse. So I think probably one good strategy here is: we have some people focusing on building really good English models; we have some people focusing on building really good models for the top 10 or so languages, where we can afford to build a model for each of them; and then we have some people working on really good multilingual models that can handle a whole bunch of languages — and we kind of spread our efforts and our compute across those models.
+
+[00:19:00] (Question) I didn't get what sharing means in this context — is it a shared representation?
+
+[00:19:03] Yeah — basically, usually what we do is train a single model, and all of the parameters are shared between all of the languages. The only things that are not shared are the word embeddings — or the subword embeddings — and I'll talk about that in a little bit. Cool.
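+As a toy illustration of the compute-splitting point above — this is a hypothetical power-law form with made-up constants, not the actual model or numbers from the paper:
+
+def loss(compute: float, a: float = 300.0, alpha: float = 0.1) -> float:
+    # Hypothetical scaling-law shape: loss falls as a power of compute.
+    return a * compute ** -alpha
+
+budget = 1e21  # total training FLOPs, illustrative
+for n_langs in (1, 10, 100):
+    per_lang = budget / n_langs  # naive split, ignoring sharing entirely
+    print(n_langs, "languages ->", round(loss(per_lang), 2), "per-language loss")
+# Cross-lingual transfer offsets some of this, but — per the finding above —
+# less than you might expect when there is plenty of data per language.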
+[00:19:24] So there are a number of ways to mitigate this curse of multilinguality — I kind of got ahead of myself in talking about the other paper, but there are a couple of things we can do to improve this. The first one is the tokenization disparity. On the subword embeddings: we share all of the parameters in the body of the model, but fundamentally the words in the different languages are different — sometimes the scripts are even different — so what we usually do is have a shared tokenizer that's used across all of the different languages. If it's something like English and French, with lots of shared words, then many of the embeddings will actually be shared between the languages. That helps transfer, but it's not absolutely essential for transfer — there has been some work demonstrating that even with no shared vocabulary you can still benefit from transfer to some extent.
+
+[00:20:30] Anyway, with respect to the tokenization disparity: I tried tokenizing English using the OpenAI GPT-3.5/4 tokenizer, and you can see that this content gave me 58 tokens. Then I tried to translate it into Burmese using Google Translate — I don't know Burmese myself, but I tried, so this should be the same content, at least to the extent that Google Translate into Burmese is accurate — and I got 617 tokens for the same content. The reason why is that, as you can see here, it's still got "OpenAI" — and interestingly, I didn't realize this, "OpenAI" isn't a single token in GPT's model, it's two tokens — but all of the other things are basically converted into byte-level tokens. Each byte is a token, and because Burmese uses a different script, each one of these characters is like three bytes — so what you can see is that it's a lot more: 10.6 times the total amount.
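+You can reproduce this kind of comparison yourself with the tiktoken library, which exposes the cl100k_base encoding used by GPT-3.5/4. The Burmese string below is just a short greeting, not the passage from the slide:
+
+import tiktoken  # pip install tiktoken
+
+enc = tiktoken.get_encoding("cl100k_base")  # GPT-3.5/4 encoding
+for text in ("Hello, nice to meet you.", "မင်္ဂလာပါ"):
+    n_tokens = len(enc.encode(text))
+    print(f"{len(text)} chars -> {n_tokens} tokens")
+# The Burmese text falls back to byte pieces, so its tokens-per-character
+# ratio is far higher than for English.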
+[00:21:47] And what does that mean? Number one: if you're processing Burmese with GPT-4, it's 10 times more expensive for the same content, because they charge by the number of tokens. Number two: because you're splitting it up into bytes like this, it's slow — generation becomes 10 times slower, because the model generates tokens at a particular speed. And it also gets worse, because the reason we do subword tokens in the first place is that they allow us to clump semantic units together; if the semantic units aren't clumped together, the model has to use its capacity on combining the tokens together in its layers, and that is prone to error — it uses up the model's capacity, and it does less well at modeling. So this is a pretty big problem for a number of reasons: cost, efficiency, accuracy.
+
+[00:22:41] One way to fix this, both from the point of view of training models and of learning tokenizers, is through heuristic sampling of data, and the most common way to do this is something called temperature sampling. The way temperature sampling works is that you group the data into groups — usually, when we say groups, we mean grouping into languages — and you sample the data according to its frequency, where we exponentiate the frequency by one divided by a temperature and then renormalize, and then sample batches from each language according to that. So the probability of sampling a batch from language L becomes its frequency raised to the power 1/T, normalized: p(L) = freq(L)^(1/T) / Σ_L' freq(L')^(1/T).
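+In code, the sampling distribution is a few lines (the corpus shares below are made up):
+
+def sampling_probs(freqs, temperature):
+    # p(L) is proportional to freq(L)^(1/T), renormalized over languages.
+    weights = {lang: f ** (1.0 / temperature) for lang, f in freqs.items()}
+    total = sum(weights.values())
+    return {lang: w / total for lang, w in weights.items()}
+
+freqs = {"en": 0.70, "fr": 0.25, "my": 0.05}  # share of the training data
+for t in (1, 5, 100):
+    print(t, sampling_probs(freqs, t))
+# T=1 keeps the raw distribution, T=5 flattens it, T=100 is near-uniform.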
+558
+00:24:10,159 --> 00:24:16,600
+um and so what that does is essentially
+
+559
+00:24:13,880 --> 00:24:18,120
+if we have a data distribution
+
+560
+00:24:16,600 --> 00:24:21,279
+from each language which is kind of like
+
+561
+00:24:18,120 --> 00:24:23,480
+a long-tail distribution like this
+
+562
+00:24:21,279 --> 00:24:26,000
+normally you would sample this language
+
+563
+00:24:23,480 --> 00:24:27,399
+a lot more if you take the
+
+564
+00:24:26,000 --> 00:24:29,360
+temperature and turn it into something
+
+565
+00:24:27,399 --> 00:24:31,679
+like five that will flatten the
+
+566
+00:24:29,360 --> 00:24:33,320
+distribution so you'll sample the less
+
+567
+00:24:31,679 --> 00:24:35,520
+frequent languages more and the more
+
+568
+00:24:33,320 --> 00:24:37,559
+frequent languages less and if you take
+
+569
+00:24:35,520 --> 00:24:42,600
+a very large number like 100 this will
+
+570
+00:24:37,559 --> 00:24:46,000
+just flatten and say you sample
+
+571
+00:24:42,600 --> 00:24:49,640
+like this uh from each language
+
+572
+00:24:46,000 --> 00:24:52,039
+uniformly and this sampling can
+
+573
+00:24:49,640 --> 00:24:55,840
+be done both at model training time and
+
+574
+00:24:52,039 --> 00:24:57,559
+at vocabulary construction time and when
+
+575
+00:24:55,840 --> 00:24:59,080
+you do it at vocabulary construction
+
+576
+00:24:57,559 --> 00:25:01,360
+time basically what that means is it
+
+577
+00:24:59,080 --> 00:25:04,039
+will downsample English and upsample
+
+578
+00:25:01,360 --> 00:25:05,799
+Burmese or any of the other low resource
+
+579
+00:25:04,039 --> 00:25:08,240
+languages so the low resource languages
+
+580
+00:25:05,799 --> 00:25:10,320
+get more weight in creating the
+
+581
+00:25:08,240 --> 00:25:13,440
+vocabulary so they won't be
+
+582
+00:25:10,320 --> 00:25:16,559
+split up as much
+
+583
+00:25:13,440 --> 00:25:20,240
+so I don't know if GPT does this
+
+584
+00:25:16,559 --> 00:25:22,039
+but XLM-R does this um and Qwen does this
+
+585
+00:25:20,240 --> 00:25:25,480
+so there are definitely language models
+
+586
+00:25:22,039 --> 00:25:28,640
+that are attempting to be you know more
+
+587
+00:25:25,480 --> 00:25:30,320
+fair across different languages
+
+588
+00:25:28,640 --> 00:25:32,000
+another thing that Qwen does I talked
+
+589
+00:25:30,320 --> 00:25:33,559
+about this previously in the tour of
+
+590
+00:25:32,000 --> 00:25:36,840
+large language models but it also makes
+
+591
+00:25:33,559 --> 00:25:39,880
+a much larger vocabulary um so normally
+
+592
+00:25:36,840 --> 00:25:42,080
+LLaMA is like 32k and I think Qwen was
+
+593
+00:25:39,880 --> 00:25:44,919
+like something like
+
+594
+00:25:42,080 --> 00:25:46,880
+215k or something so they intentionally
+
+595
+00:25:44,919 --> 00:25:49,279
+made the vocabulary larger so the
+
+596
+00:25:46,880 --> 00:25:51,559
+distribution uh would be better across
+
+597
+00:25:49,279 --> 00:25:51,559
+different
+
+598
+00:25:51,720 --> 00:25:56,799
+languages yeah so how do you construct a
+
+599
+00:25:54,600 --> 00:25:58,440
+batch for like machine translation like
+
+600
+00:25:56,799 --> 00:26:01,159
+would one batch be assigned to English
+
+601
+00:25:58,440 --> 00:26:04,919
+French and then another yeah that's a
+
+602
+00:26:01,159 --> 00:26:06,840
+good question um you can do both uh so
+
+603
+00:26:04,919 --> 00:26:08,640
+uh sorry just to repeat the question how
+
+604
+00:26:06,840 --> 00:26:10,159
+do you construct batches for like
+
+605
+00:26:08,640 --> 00:26:12,000
+machine translation would you create a
+
+606
+00:26:10,159 --> 00:26:14,080
+batch of English-French or would you
+
+607
+00:26:12,000 --> 00:26:19,000
+create uh a batch with like lots of
+
+608
+00:26:14,080 --> 00:26:21,960
+different languages in it at once um I
+
+609
+00:26:19,000 --> 00:26:24,080
+personally what I would do and I think
+
+610
+00:26:21,960 --> 00:26:27,000
+what most people do is they don't
+
+611
+00:26:24,080 --> 00:26:28,640
+actually sample batches uh that are
+
+612
+00:26:27,000 --> 00:26:30,480
+uniform in a particular language
+
+613
+00:26:28,640 --> 00:26:32,120
+and the reason why is because SGD if you
+
+614
+00:26:30,480 --> 00:26:34,640
+have lots of variance in your gradients
+
+615
+00:26:32,120 --> 00:26:36,480
+it makes it less stable and so if you do
+
+616
+00:26:34,640 --> 00:26:38,120
+all English-French in a single batch at
+
+617
+00:26:36,480 --> 00:26:39,520
+one time that'll make it less stable
+
+618
+00:26:38,120 --> 00:26:40,799
+because it will move all in the French
+
+619
+00:26:39,520 --> 00:26:43,240
+direction then all in the German
+
+620
+00:26:40,799 --> 00:26:44,600
+direction so um it's generally better
+
+621
+00:26:43,240 --> 00:26:47,600
+practice to do this sort of upweighting
+
+622
+00:26:44,600 --> 00:26:49,240
+or sampling before you form your batches
+
+623
+00:26:47,600 --> 00:26:52,360
+um and another thing that you can do is
+
+624
+00:26:49,240 --> 00:26:55,279
+you can do a bunch of this sampling at
+
+625
+00:26:52,360 --> 00:26:57,480
+once form a bunch of batches and then
+
+626
+00:26:55,279 --> 00:26:59,120
+run through all of them and then when
+
+627
+00:26:57,480 --> 00:27:01,200
+you get near the end form some more
+
+628
+00:26:59,120 --> 00:27:02,559
+batches and then throw them in there so
+
+629
+00:27:01,200 --> 00:27:08,120
+you can write data loaders to do that
+
+630
+00:27:02,559 --> 00:27:12,440
+sort of stuff yeah cool um nice
+
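+A sketch of that batching advice — sample a language per example so every batch stays mixed, rather than making whole batches monolingual; all names here are illustrative:
+
+```python
+import random
+
+def mixed_batches(datasets, probs, batch_size, n_batches):
+    """datasets: language -> list of examples; probs: language -> sampling
+    probability (e.g. from temperature sampling). Yields mixed-language batches."""
+    langs = list(probs)
+    weights = [probs[l] for l in langs]
+    for _ in range(n_batches):
+        batch = []
+        for _ in range(batch_size):
+            lang = random.choices(langs, weights=weights)[0]  # pick a language
+            batch.append(random.choice(datasets[lang]))       # then an example
+        yield batch
+```
+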
+631
+00:27:08,120 --> 00:27:14,760
+so there's also work on learning
+
+632
+00:27:12,440 --> 00:27:18,399
+how to balance data and this is a paper
+
+633
+00:27:14,760 --> 00:27:21,480
+by uh my former student Cindy um and
+
+634
+00:27:18,399 --> 00:27:23,520
+it's kind of interesting I don't know um
+
+635
+00:27:21,480 --> 00:27:26,240
+I like it because it shows the
+
+636
+00:27:23,520 --> 00:27:28,799
+possibility of what could be done with
+
+637
+00:27:26,240 --> 00:27:30,760
+respect to things here um but it's a
+
+638
+00:27:28,799 --> 00:27:32,799
+little bit
+
+639
+00:27:30,760 --> 00:27:34,840
+expensive to run so it might not be uh
+
+640
+00:27:32,799 --> 00:27:38,080
+well suited for large scale pre-training
+
+641
+00:27:34,840 --> 00:27:41,840
+for example but the basic
+
+642
+00:27:38,080 --> 00:27:44,480
+idea is um we have several training sets
+
+643
+00:27:41,840 --> 00:27:47,279
+where the training sets are uh composed
+
+644
+00:27:44,480 --> 00:27:49,200
+of different languages and then we have
+
+645
+00:27:47,279 --> 00:27:51,559
+several development sets and these
+
+646
+00:27:49,200 --> 00:27:54,679
+development sets are uh ones that we
+
+647
+00:27:51,559 --> 00:27:57,679
+want to be good at like you know
+
+648
+00:27:54,679 --> 00:28:02,760
+uh processing uh be it translation or QA
+
+649
+00:27:57,679 --> 00:28:07,000
+or whatever else and what we can do is
+
+650
+00:28:02,760 --> 00:28:09,519
+we calculate gradients according to um
+
+651
+00:28:07,000 --> 00:28:12,640
+these various training sets we also
+
+652
+00:28:09,519 --> 00:28:15,640
+calculate the gradient on the dev set
+
+653
+00:28:12,640 --> 00:28:17,440
+and we calculate the alignment between
+
+654
+00:28:15,640 --> 00:28:19,600
+the train gradient and the development
+
+655
+00:28:17,440 --> 00:28:21,279
+gradient and see how closely they align
+
+656
+00:28:19,600 --> 00:28:23,279
+if they align very well basically what
+
+657
+00:28:21,279 --> 00:28:26,919
+it's saying is this training set is
+
+658
+00:28:23,279 --> 00:28:28,200
+moving us in a good direction for um
+
+659
+00:28:26,919 --> 00:28:31,679
+optimizing the performance on the
+
+660
+00:28:28,200 --> 00:28:33,480
+dev set um and if uh they don't align
+
+661
+00:28:31,679 --> 00:28:35,799
+well it's basically like harmful for
+
+662
+00:28:33,480 --> 00:28:38,320
+optimizing performance on the dev set
+
+663
+00:28:35,799 --> 00:28:41,000
+and then we upweight or downweight the
+
+664
+00:28:38,320 --> 00:28:43,159
+kind of mixing factor of these uh data
+
+665
+00:28:41,000 --> 00:28:45,360
+sets uh according to how well the
+
+666
+00:28:43,159 --> 00:28:46,799
+gradients align and so why is this
+
+667
+00:28:45,360 --> 00:28:48,360
+interesting conceptually this is
+
+668
+00:28:46,799 --> 00:28:51,679
+interesting conceptually because you can
+
+669
+00:28:48,360 --> 00:28:54,960
+also do it um as you continue training
+
+670
+00:28:51,679 --> 00:28:56,640
+the model and one of the problems with
+
+671
+00:28:54,960 --> 00:28:58,960
+the heuristic
+
+672
+00:28:56,640 --> 00:29:01,559
+sampling is
+
+673
+00:28:58,960 --> 00:29:05,279
+this number is very
+
+674
+00:29:01,559 --> 00:29:06,880
+fiddly um like this
+
+675
+00:29:05,279 --> 00:29:10,840
+temperature number is very fiddly it
+
+676
+00:29:06,880 --> 00:29:13,760
+differs from data set to data set and
+
+677
+00:29:10,840 --> 00:29:16,799
+um it doesn't fully give you the full
+
+678
+00:29:13,760 --> 00:29:19,080
+spectrum of uh you know which data you
+
+679
+00:29:16,799 --> 00:29:20,559
+should be upsampling and this can allow
+
+680
+00:29:19,080 --> 00:29:22,799
+you to learn it
+
+681
+00:29:20,559 --> 00:29:25,080
+automatically and just to give an
+
+682
+00:29:22,799 --> 00:29:27,000
+example of something it can learn it can
+
+683
+00:29:25,080 --> 00:29:28,799
+learn that at the beginning of training
+
+684
+00:29:27,000 --> 00:29:31,159
+you should be maybe upweighting the low
+
+685
+00:29:28,799 --> 00:29:33,760
+resource data sets but then you start
+
+686
+00:29:31,159 --> 00:29:35,559
+overfitting to the relatively small data
+
+687
+00:29:33,760 --> 00:29:37,120
+you have for Burmese or relatively small
+
+688
+00:29:35,559 --> 00:29:39,760
+data you have for some under-resourced
+
+689
+00:29:37,120 --> 00:29:41,960
+languages and so it starts actively like
+
+690
+00:29:39,760 --> 00:29:44,320
+harming your model and when it starts
+
+691
+00:29:41,960 --> 00:29:46,720
+actively harming your model then it will
+
+692
+00:29:44,320 --> 00:29:49,760
+be automatically downweighted uh so it
+
+693
+00:29:46,720 --> 00:29:51,200
+stops harming your model and um then
+
+694
+00:29:49,760 --> 00:29:54,320
+it'll be upweighted again when it makes
+
+695
+00:29:51,200 --> 00:29:56,480
+sense to use it again so um this allows
+
+696
+00:29:54,320 --> 00:29:59,480
+you to learn more nuanced strategies
+
+697
+00:29:56,480 --> 00:30:02,600
+than this
+
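+Conceptually the method looks something like this sketch (a simplification, not the paper's actual code): score each training set by the cosine similarity between its gradient and the dev-set gradient, then renormalize the mixing weights:
+
+```python
+import torch
+import torch.nn.functional as F
+
+def alignment(train_grad: torch.Tensor, dev_grad: torch.Tensor) -> float:
+    # cosine similarity between flattened gradient vectors
+    return F.cosine_similarity(train_grad.flatten(), dev_grad.flatten(), dim=0).item()
+
+def reweight(weights: dict, alignments: dict, lr: float = 0.1) -> dict:
+    # multiplicative update: upweight sets whose gradients align with the dev set
+    new = {k: w * float(torch.exp(torch.tensor(lr * alignments[k])))
+           for k, w in weights.items()}
+    z = sum(new.values())
+    return {k: v / z for k, v in new.items()}
+```
+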
+698
+00:29:59,480 --> 00:30:04,000
+um in some papers I think this was
+
+699
+00:30:02,600 --> 00:30:05,840
+the Qwen paper but it might have been
+
+700
+00:30:04,000 --> 00:30:07,840
+another multilingual uh large language
+
+701
+00:30:05,840 --> 00:30:10,360
+modeling paper they did something that
+
+702
+00:30:07,840 --> 00:30:12,519
+was a little bit like this no sorry it
+
+703
+00:30:10,360 --> 00:30:14,399
+was um the NLLB paper that I'm going to
+
+704
+00:30:12,519 --> 00:30:15,679
+talk about in a second so they did
+
+705
+00:30:14,399 --> 00:30:17,240
+something a little bit like this which
+
+706
+00:30:15,679 --> 00:30:19,440
+is that they started out training on
+
+707
+00:30:17,240 --> 00:30:21,440
+higher resource languages and then
+
+708
+00:30:19,440 --> 00:30:23,960
+gradually added in the lower resource
+
+709
+00:30:21,440 --> 00:30:26,200
+languages later uh because the lower
+
+710
+00:30:23,960 --> 00:30:27,799
+resource languages in some
+
+711
+00:30:26,200 --> 00:30:29,399
+cases have very little data and you
+
+712
+00:30:27,799 --> 00:30:30,559
+don't want to overfit to that data
+
+713
+00:30:29,399 --> 00:30:33,559
+before the end of training when the
+
+714
+00:30:30,559 --> 00:30:35,200
+model is capable of training on them um
+
+715
+00:30:33,559 --> 00:30:36,640
+but again that's just a heuristic and
+
+716
+00:30:35,200 --> 00:30:38,440
+there's probably better strategies and
+
+717
+00:30:36,640 --> 00:30:40,919
+hopefully doing something automatically
+
+718
+00:30:38,440 --> 00:30:43,919
+would allow you to learn
+
+719
+00:30:40,919 --> 00:30:43,919
+that
+
+720
+00:30:44,200 --> 00:30:50,559
+cool
+
+721
+00:30:46,880 --> 00:30:52,799
+um any questions for that yeah sure I
+
+722
+00:30:50,559 --> 00:30:54,880
+guess going back to the one where
+
+723
+00:30:52,799 --> 00:30:58,559
+the model had like
+
+724
+00:30:54,880 --> 00:31:02,600
+220k yeah do you not have the softmax
+
+725
+00:30:58,559 --> 00:31:02,600
+issues and other issues you
+
+726
+00:31:03,039 --> 00:31:08,360
+have uh so
+
+727
+00:31:06,360 --> 00:31:10,600
+215k I guess
+
+728
+00:31:08,360 --> 00:31:15,440
+I mean it definitely makes it
+
+729
+00:31:10,600 --> 00:31:17,919
+slower yeah for sure um uh the thing is
+
+730
+00:31:15,440 --> 00:31:19,960
+it only makes calculating the softmax
+
+731
+00:31:17,919 --> 00:31:22,919
+and the word embedding slower but if it
+
+732
+00:31:19,960 --> 00:31:24,559
+makes the sequence length shorter then
+
+733
+00:31:22,919 --> 00:31:26,320
+you also benefit from a shorter sequence
+
+734
+00:31:24,559 --> 00:31:28,159
+length right so it's kind of a trade-off
+
+735
+00:31:26,320 --> 00:31:29,960
+and I think especially if you're
+
+736
+00:31:28,159 --> 00:31:33,120
+processing lots of multilingual data
+
+737
+00:31:29,960 --> 00:31:35,279
+like 32k for English turning into 215k
+
+738
+00:31:33,120 --> 00:31:36,760
+for 100 languages doesn't seem like a
+
+739
+00:31:35,279 --> 00:31:41,600
+bad idea
+
+740
+00:31:36,760 --> 00:31:43,399
+right cool um yeah so then going to
+
+741
+00:31:41,600 --> 00:31:46,240
+machine translation machine translation
+
+742
+00:31:43,399 --> 00:31:48,679
+is like um probably still the most
+
+743
+00:31:46,240 --> 00:31:52,159
+important multilingual uh inherently
+
+744
+00:31:48,679 --> 00:31:53,840
+multilingual task that we handle um
+
+745
+00:31:52,159 --> 00:31:57,120
+as we know translation is basically
+
+746
+00:31:53,840 --> 00:31:59,039
+translating uh from one language
+
+747
+00:31:57,120 --> 00:32:00,760
+to another
+
+748
+00:31:59,039 --> 00:32:04,320
+and if we look at why it's difficult to
+
+749
+00:32:00,760 --> 00:32:06,360
+translate there's basically two reasons
+
+750
+00:32:04,320 --> 00:32:08,919
+um the first reason is that there's
+
+751
+00:32:06,360 --> 00:32:10,960
+syntactic divergences between languages
+
+752
+00:32:08,919 --> 00:32:15,480
+um so we don't use the same
+
+753
+00:32:10,960 --> 00:32:16,880
+syntax um so I put in this sentence uh
+
+754
+00:32:15,480 --> 00:32:20,320
+the development of artificial
+
+755
+00:32:16,880 --> 00:32:22,240
+intelligence is a really big deal and
+
+756
+00:32:20,320 --> 00:32:23,679
+maybe everybody can if you know another
+
+757
+00:32:22,240 --> 00:32:25,159
+language you can take a moment to think
+
+758
+00:32:23,679 --> 00:32:28,480
+about how you would translate that into
+
+759
+00:32:25,159 --> 00:32:28,480
+another language
+
+760
+00:32:32,200 --> 00:32:37,600
+and keep it in mind and then um I
+
+761
+00:32:35,039 --> 00:32:39,039
+actually have uh Spanish so hopefully I
+
+762
+00:32:37,600 --> 00:32:41,320
+didn't mess it up because I have two
+
+763
+00:32:39,039 --> 00:32:44,399
+actual Spanish speakers in here my high
+
+764
+00:32:41,320 --> 00:32:50,320
+school Spanish is is that
+
+765
+00:32:44,399 --> 00:32:51,960
+okay okay good thanks um so what
+
+766
+00:32:50,320 --> 00:32:54,320
+we could see here is that there's some
+
+767
+00:32:51,960 --> 00:32:57,519
+divergence in syntax between English and
+
+768
+00:32:54,320 --> 00:33:01,880
+Spanish um these ones are in the
+
+769
+00:32:57,519 --> 00:33:04,320
+same order um the first divergence is
+
+770
+00:33:01,880 --> 00:33:06,519
+that you use an article here in Spanish
+
+771
+00:33:04,320 --> 00:33:08,600
+and you use no article in English so
+
+772
+00:33:06,519 --> 00:33:10,519
+in English it's not the
+
+773
+00:33:08,600 --> 00:33:12,760
+artificial intelligence it's
+
+774
+00:33:10,519 --> 00:33:15,240
+just artificial intelligence the second
+
+775
+00:33:12,760 --> 00:33:18,200
+one is swapping the order of nouns and
+
+776
+00:33:15,240 --> 00:33:20,919
+adjectives which is a famous thing like
+
+777
+00:33:18,200 --> 00:33:23,600
+actually English is the unusual language
+
+778
+00:33:20,919 --> 00:33:25,080
+here um most languages with similar word
+
+779
+00:33:23,600 --> 00:33:27,200
+order to English put the adjectives
+
+780
+00:33:25,080 --> 00:33:28,840
+after the nouns more often than they put
+
+781
+00:33:27,200 --> 00:33:31,480
+them before the noun
+
+782
+00:33:28,840 --> 00:33:33,679
+and then um there's also really big deal
+
+783
+00:33:31,480 --> 00:33:35,960
+I put in kind of like an idiomatic
+
+784
+00:33:33,679 --> 00:33:38,840
+expression here like big deal and big
+
+785
+00:33:35,960 --> 00:33:40,919
+deal uh won't be translated consistently
+
+786
+00:33:38,840 --> 00:33:43,399
+into big and deal in other languages so
+
+787
+00:33:40,919 --> 00:33:45,440
+you can see that it turned into
+
+788
+00:33:43,399 --> 00:33:48,039
+something uh something
+
+789
+00:33:45,440 --> 00:33:51,559
+else um also interestingly this kind of
+
+790
+00:33:48,039 --> 00:33:53,799
+went in the middle of big deal um so I
+
+791
+00:33:51,559 --> 00:33:56,039
+also did it in Japanese and Japanese has
+
+792
+00:33:53,799 --> 00:33:58,360
+very different syntax from English um
+
+793
+00:33:56,039 --> 00:34:00,120
+it's a subject-object-verb
+
+794
+00:33:58,360 --> 00:34:02,360
+uh order language so the verb comes at
+
+795
+00:34:00,120 --> 00:34:03,639
+the end instead of the middle and you
+
+796
+00:34:02,360 --> 00:34:07,360
+can see that it's a little bit more
+
+797
+00:34:03,639 --> 00:34:09,399
+crazy actually this doesn't convey the
+
+798
+00:34:07,360 --> 00:34:11,159
+full extent of how crazy the reorderings
+
+799
+00:34:09,399 --> 00:34:14,440
+are between English and Japanese it's
+
+800
+00:34:11,159 --> 00:34:16,040
+just in a very very different order um
+
+801
+00:34:14,440 --> 00:34:18,720
+but you can see that actually in some
+
+802
+00:34:16,040 --> 00:34:22,399
+cases it's more similar to English than
+
+803
+00:34:18,720 --> 00:34:24,079
+Spanish is because uh
+
+804
+00:34:22,399 --> 00:34:25,800
+artificial intelligence is in the same
+
+805
+00:34:24,079 --> 00:34:28,320
+order so there are places where it's
+
+806
+00:34:25,800 --> 00:34:30,040
+similar
+
+807
+00:34:28,320 --> 00:34:32,399
+um another reason why it's difficult to
+
+808
+00:34:30,040 --> 00:34:35,800
+translate is lexical ambiguities so this
+
+809
+00:34:32,399 --> 00:34:41,040
+is an example from English and French um
+
+810
+00:34:35,800 --> 00:34:43,240
+where different
+
+811
+00:34:41,040 --> 00:34:45,800
+words in different languages
+
+812
+00:34:43,240 --> 00:34:49,599
+kind
+
+813
+00:34:45,800 --> 00:34:53,280
+of are translated differently so like
+
+814
+00:34:49,599 --> 00:34:55,159
+a leg of a journey a leg of an animal a
+
+815
+00:34:53,280 --> 00:34:58,640
+leg of a chair and a leg of a human are
+
+816
+00:34:55,159 --> 00:35:00,280
+all translated differently um and so I'm
+
+817
+00:34:58,640 --> 00:35:02,839
+sure you can come up with other examples
+
+818
+00:35:00,280 --> 00:35:05,720
+here my favorite one is run like run a
+
+819
+00:35:02,839 --> 00:35:07,720
+marathon run a program run a company a
+
+820
+00:35:05,720 --> 00:35:10,960
+run in a stocking like all of these are
+
+821
+00:35:07,720 --> 00:35:15,560
+different in most languages in the
+
+822
+00:35:10,960 --> 00:35:17,440
+world um so this is really hard um
+
+823
+00:35:15,560 --> 00:35:19,839
+there's also some other difficulties
+
+824
+00:35:17,440 --> 00:35:23,280
+like uh that are language specific like
+
+825
+00:35:19,839 --> 00:35:24,400
+in Japanese they almost never say the uh
+
+826
+00:35:23,280 --> 00:35:26,920
+who did
+
+827
+00:35:24,400 --> 00:35:30,960
+something uh in conversational Japanese
+
+828
+00:35:26,920 --> 00:35:34,880
+for example so if I say um
+
+829
+00:35:30,960 --> 00:35:37,800
+tabeta which means to eat uh or
+
+830
+00:35:34,880 --> 00:35:42,160
+ate it could mean I ate you ate he ate
+
+831
+00:35:37,800 --> 00:35:43,839
+she ate the dog ate you know um it ate
+
+832
+00:35:42,160 --> 00:35:45,920
+any of those things and you don't know
+
+833
+00:35:43,839 --> 00:35:48,400
+outside the context and language models
+
+834
+00:35:45,920 --> 00:35:49,880
+are pretty bad at or uh translation
+
+835
+00:35:48,400 --> 00:35:52,359
+systems are pretty bad at figuring out
+
+836
+00:35:49,880 --> 00:35:55,359
+that context so um often in
+
+837
+00:35:52,359 --> 00:35:59,480
+conversational translation um it should
+
+838
+00:35:55,359 --> 00:36:04,000
+be like a question uh like did you eat
+
+839
+00:35:59,480 --> 00:36:07,480
+uh so like tabeta in that intonation means
+
+840
+00:36:04,000 --> 00:36:09,000
+did you eat but it says I ate instead
+
+841
+00:36:07,480 --> 00:36:11,520
+because that's the default and it's
+
+842
+00:36:09,000 --> 00:36:13,560
+really confusing so there's all kinds of
+
+843
+00:36:11,520 --> 00:36:18,760
+peculiarities like that as
+
+844
+00:36:13,560 --> 00:36:21,079
+well um translation tasks um there's uh
+
+845
+00:36:18,760 --> 00:36:22,599
+WMT which stands for the conference on
+
+846
+00:36:21,079 --> 00:36:24,599
+machine
+
+847
+00:36:22,599 --> 00:36:26,800
+translation uh which might seem a little
+
+848
+00:36:24,599 --> 00:36:29,880
+bit strange to you because conference
+
+849
+00:36:26,800 --> 00:36:31,599
+doesn't start with a W but um uh there
+
+850
+00:36:29,880 --> 00:36:33,480
+are these shared tasks that are run
+
+851
+00:36:31,599 --> 00:36:35,480
+every year for translation and
+
+852
+00:36:33,480 --> 00:36:38,480
+evaluation
+
+853
+00:36:35,480 --> 00:36:40,160
+um one interesting thing about this uh
+
+854
+00:36:38,480 --> 00:36:42,160
+which I might have mentioned briefly
+
+855
+00:36:40,160 --> 00:36:43,960
+before but I'll mention again is that
+
+856
+00:36:42,160 --> 00:36:46,200
+the translation systems and the
+
+857
+00:36:43,960 --> 00:36:48,920
+evaluation systems co-evolve so
+
+858
+00:36:46,200 --> 00:36:51,319
+basically every year they have a let's
+
+859
+00:36:48,920 --> 00:36:54,280
+try to maximize translation accuracy
+
+860
+00:36:51,319 --> 00:36:57,240
+task and they have a let's try to
+
+861
+00:36:54,280 --> 00:36:59,680
+maximize evaluation accuracy task and
+
+862
+00:36:57,240 --> 00:37:02,520
+always the evaluation accuracy task uses
+
+863
+00:36:59,680 --> 00:37:05,240
+the systems from the translation
+
+864
+00:37:02,520 --> 00:37:07,760
+accuracy task and so every time they're
+
+865
+00:37:05,240 --> 00:37:10,920
+trying to improve automatic evaluation
+
+866
+00:37:07,760 --> 00:37:12,280
+on the like best systems of that year um
+
+867
+00:37:10,920 --> 00:37:14,359
+which makes it really challenging to
+
+868
+00:37:12,280 --> 00:37:15,800
+build good evaluation systems as the
+
+869
+00:37:14,359 --> 00:37:17,440
+translation systems get better and
+
+870
+00:37:15,800 --> 00:37:19,160
+better so if you're interested in
+
+871
+00:37:17,440 --> 00:37:22,160
+evaluation I'd definitely take a look at
+
+872
+00:37:19,160 --> 00:37:24,760
+this it's a good gold standard for
+
+873
+00:37:22,160 --> 00:37:27,040
+that um another really good resource
+
+874
+00:37:24,760 --> 00:37:28,319
+which is not a shared task it's called
+
+875
+00:37:27,040 --> 00:37:31,680
+FLORES
+
+876
+00:37:28,319 --> 00:37:33,880
+um and it is a data set of 200 languages
+
+877
+00:37:31,680 --> 00:37:37,560
+translated from English
+
+878
+00:37:33,880 --> 00:37:39,240
+Wikipedia um and so it's uh data in lots
+
+879
+00:37:37,560 --> 00:37:42,240
+of different languages I like this for
+
+880
+00:37:39,240 --> 00:37:44,720
+two reasons um the first reason is I
+
+881
+00:37:42,240 --> 00:37:47,200
+believe Wikipedia is a really important
+
+882
+00:37:44,720 --> 00:37:49,560
+domain to be able to translate because
+
+883
+00:37:47,200 --> 00:37:51,680
+it has so much knowledge that's very
+
+884
+00:37:49,560 --> 00:37:54,079
+useful and if we could convey that
+
+885
+00:37:51,680 --> 00:37:56,200
+knowledge to people in you know many
+
+886
+00:37:54,079 --> 00:37:57,760
+different languages it would be you know
+
+887
+00:37:56,200 --> 00:38:00,200
+very beneficial to the people of the
+
+888
+00:37:57,760 --> 00:38:01,760
+world um another reason why I like this
+
+889
+00:38:00,200 --> 00:38:04,359
+is because it's really hard to get
+
+890
+00:38:01,760 --> 00:38:06,800
+high-quality translations in 200
+
+891
+00:38:04,359 --> 00:38:09,040
+languages um just because it's hard to
+
+892
+00:38:06,800 --> 00:38:11,200
+hire that many good translators and do
+
+893
+00:38:09,040 --> 00:38:12,920
+quality control and stuff um and this
+
+894
+00:38:11,200 --> 00:38:15,040
+was a data set created by Meta and I
+
+895
+00:38:12,920 --> 00:38:16,800
+definitely like commend them for their
+
+896
+00:38:15,040 --> 00:38:18,880
+ability in doing that so this is kind of
+
+897
+00:38:16,800 --> 00:38:21,560
+a standard for low resource language
+
+898
+00:38:18,880 --> 00:38:23,440
+translation um and then IWSLT there are
+
+899
+00:38:21,560 --> 00:38:25,400
+tasks on speech translation so if you're
+
+900
+00:38:23,440 --> 00:38:28,720
+interested in speech that's one you can
+
+901
+00:38:25,400 --> 00:38:30,960
+take a look at
+
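+If you want to poke at FLORES-200 yourself, it is distributed through the Hugging Face hub; a sketch, with the dataset id, config name, and field names assumed from memory (check the dataset card):
+
+```python
+from datasets import load_dataset
+
+# one config per language pair, using FLORES-200 language codes
+flores = load_dataset("facebook/flores", "eng_Latn-swh_Latn")
+
+example = flores["dev"][0]          # splits are "dev" and "devtest"
+print(example["sentence_eng_Latn"]) # English side
+print(example["sentence_swh_Latn"]) # Swahili side
+```
+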
+902
+00:38:28,720 --> 00:38:32,640
+um I'm not going to this isn't a class
+
+903
+00:38:30,960 --> 00:38:33,880
+only on machine translation so I'm only
+
+904
+00:38:32,640 --> 00:38:37,040
+going to go through it briefly but there
+
+905
+00:38:33,880 --> 00:38:39,760
+is one model that is worth uh taking a
+
+906
+00:38:37,040 --> 00:38:42,880
+careful look at it's the NLLB translation
+
+907
+00:38:39,760 --> 00:38:46,240
+model um and it's an example of building
+
+908
+00:38:42,880 --> 00:38:47,680
+a strong MT model uh it's open-source
+
+909
+00:38:46,240 --> 00:38:51,200
+and they describe all of the stuff they
+
+910
+00:38:47,680 --> 00:38:54,160
+do in doing it and basically to
+
+911
+00:38:51,200 --> 00:38:58,160
+summarize uh they start out with um
+
+912
+00:38:54,160 --> 00:38:59,880
+public bitext a small uh seed of
+
+913
+00:38:58,160 --> 00:39:03,040
+uh bilingual data and lots of
+
+914
+00:38:59,880 --> 00:39:05,760
+monolingual data they then train a
+
+915
+00:39:03,040 --> 00:39:07,480
+multilingual embedding model where the
+
+916
+00:39:05,760 --> 00:39:09,520
+goal of the multilingual embedding model
+
+917
+00:39:07,480 --> 00:39:11,760
+is to identify things that are good
+
+918
+00:39:09,520 --> 00:39:13,560
+translations of each other and then
+
+919
+00:39:11,760 --> 00:39:15,400
+based on this they run this multilingual
+
+920
+00:39:13,560 --> 00:39:17,079
+embedding model over all the monolingual
+
+921
+00:39:15,400 --> 00:39:20,000
+data that they have from multiple
+
+922
+00:39:17,079 --> 00:39:23,440
+languages and they try to extract mined
+
+923
+00:39:20,000 --> 00:39:27,000
+bitext where the mined bitext has a kind
+
+924
+00:39:23,440 --> 00:39:27,880
+of like confidence score of how good the
+
+925
+00:39:27,000 --> 00:39:31,839
+uh
+
+926
+00:39:27,880 --> 00:39:33,920
+the mined data is um oh yeah and uh
+
+927
+00:39:31,839 --> 00:39:36,000
+another thing that I forgot is language
+
+928
+00:39:33,920 --> 00:39:38,800
+identification language identification
+
+929
+00:39:36,000 --> 00:39:40,800
+is actually very hard um trying uh
+
+930
+00:39:38,800 --> 00:39:42,800
+trying to figure out uh which language
+
+931
+00:39:40,800 --> 00:39:44,839
+something is written in especially once
+
+932
+00:39:42,800 --> 00:39:46,839
+you start talking about 200 languages
+
+933
+00:39:44,839 --> 00:39:49,839
+they can be pretty similar and
+
+934
+00:39:46,839 --> 00:39:49,839
+especially
+
+935
+00:39:53,200 --> 00:40:01,520
+um there's this uh amazing
+
+936
+00:39:58,200 --> 00:40:04,079
+paper uh that I can show you
+
+937
+00:40:01,520 --> 00:40:06,200
+I don't have it in the slides but it's
+
+938
+00:40:04,079 --> 00:40:08,280
+Language ID in the Wild Unexpected
+
+939
+00:40:06,200 --> 00:40:09,880
+Challenges on the Path to a Thousand-
+
+940
+00:40:08,280 --> 00:40:13,880
+Language Web Text
+
+941
+00:40:09,880 --> 00:40:16,119
+Corpus and just look at this uh figure
+
+942
+00:40:13,880 --> 00:40:16,119
+in
+
+943
+00:40:17,880 --> 00:40:24,040
+here so
+
+944
+00:40:20,119 --> 00:40:24,920
+here the predicted uh language of this
+
+945
+00:40:24,040 --> 00:40:28,000
+is
+
+946
+00:40:24,920 --> 00:40:30,359
+Manipuri um I don't know
+
+947
+00:40:28,000 --> 00:40:34,079
+I don't know why it's clearly all emojis
+
+948
+00:40:30,359 --> 00:40:37,280
+but apparently the uh the language ID
+
+949
+00:40:34,079 --> 00:40:40,280
+system had to do
+
+950
+00:40:37,280 --> 00:40:42,440
+that also uh why you lying why you
+
+951
+00:40:40,280 --> 00:40:44,480
+always lying and like all these little
+
+952
+00:40:42,440 --> 00:40:46,800
+characters got predicted as
+
+953
+00:40:44,480 --> 00:40:52,280
+Twi
+
+954
+00:40:46,800 --> 00:40:56,119
+um this is a misrendered PDF uh is of
+
+955
+00:40:52,280 --> 00:40:59,240
+aradi um is Amara um this is a
+
+956
+00:40:56,119 --> 00:40:59,240
+non-Unicode font
+
+957
+00:40:59,800 --> 00:41:05,640
+um yeah more uh more things like this
+
+958
+00:41:03,040 --> 00:41:08,280
+creative use of Unicode
+
+959
+00:41:05,640 --> 00:41:11,240
+this one is
+
+960
+00:41:08,280 --> 00:41:13,520
+pul so there's just like lots of
+
+961
+00:41:11,240 --> 00:41:16,800
+examples where you know like actual web
+
+962
+00:41:13,520 --> 00:41:18,319
+text can make language ID pretty hard um
+
+963
+00:41:16,800 --> 00:41:22,280
+uh
+
+964
+00:41:18,319 --> 00:41:24,920
+fortunately uh there are ways around
+
+965
+00:41:22,280 --> 00:41:26,599
+this by having like very well curated uh
+
+966
+00:41:24,920 --> 00:41:28,480
+training data for language ID systems
+
+967
+00:41:26,599 --> 00:41:32,440
+and also having confidence metrics for
+
+968
+00:41:28,480 --> 00:41:36,000
+language ID systems but it's definitely
+
+969
+00:41:32,440 --> 00:41:39,319
+non-trivial
+
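+For reference, a common baseline for language ID is fastText's released LID model; a minimal sketch (the model file is assumed downloaded from the fastText website first):
+
+```python
+import fasttext
+
+# lid.176.bin covers 176 languages; labels look like "__label__en"
+model = fasttext.load_model("lid.176.bin")
+
+labels, probs = model.predict("which language is this sentence in", k=3)
+print(labels, probs)  # top-3 predicted languages with confidence scores
+
+# a confidence threshold helps filter noisy web text like the examples above
+if probs[0] < 0.5:
+    print("low-confidence prediction; maybe discard this line")
+```
+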
+970
+00:41:36,000 --> 00:41:41,800
+um so after they did the language ID they get this model and
+
+971
+00:41:39,319 --> 00:41:44,000
+um in terms of modeling techniques they
+
+972
+00:41:41,800 --> 00:41:47,079
+use some interesting modeling
+
+973
+00:41:44,000 --> 00:41:51,640
+techniques the first one is mixture of
+
+974
+00:41:47,079 --> 00:41:53,319
+experts um so mixture of experts is uh
+
+975
+00:41:51,640 --> 00:41:57,800
+I've already talked about it before in
+
+976
+00:41:53,319 --> 00:42:00,520
+the context of the Mixtral model um and
+
+977
+00:41:57,800 --> 00:42:03,400
+you know GPT models are also allegedly
+
+978
+00:42:00,520 --> 00:42:05,040
+using a mixture of experts but um here
+
+979
+00:42:03,400 --> 00:42:06,760
+it's particularly important because
+
+980
+00:42:05,040 --> 00:42:08,640
+they're doing many different languages
+
+981
+00:42:06,760 --> 00:42:10,200
+and so if you're doing many different
+
+982
+00:42:08,640 --> 00:42:12,440
+languages you can use particular
+
+983
+00:42:10,200 --> 00:42:14,640
+parameters for some languages and other
+
+984
+00:42:12,440 --> 00:42:17,400
+parameters for other languages so it's
+
+985
+00:42:14,640 --> 00:42:19,040
+pretty helpful in that case um another
+
+986
+00:42:17,400 --> 00:42:21,160
+thing is curriculum learning like I just
+
+987
+00:42:19,040 --> 00:42:23,040
+mentioned before they uh start training
+
+988
+00:42:21,160 --> 00:42:26,520
+on the lower resource
+
+989
+00:42:23,040 --> 00:42:28,839
+languages later um self-supervised
+
+990
+00:42:26,520 --> 00:42:31,440
+training so basically um the way this
+
+991
+00:42:28,839 --> 00:42:34,119
+works is by having a denoising objective
+
+992
+00:42:31,440 --> 00:42:35,760
+where they add noise to the monolingual
+
+993
+00:42:34,119 --> 00:42:39,599
+data and then try to reproduce the
+
+994
+00:42:35,760 --> 00:42:43,640
+original monolingual data so um they use
+
+995
+00:42:39,599 --> 00:42:45,839
+that as well also another uh important
+
+996
+00:42:43,640 --> 00:42:48,960
+technique that is very widely used in
+
+997
+00:42:45,839 --> 00:42:51,520
+machine translation now is uh something
+
+998
+00:42:48,960 --> 00:42:53,240
+called back translation and the way back
+
+999
+00:42:51,520 --> 00:42:59,040
+translation works is like let's say we
+
+1000
+00:42:53,240 --> 00:42:59,040
+want to train an English to Swahili
+
+1001
+00:43:00,599 --> 00:43:05,280
+um English to
+
+1002
+00:43:02,720 --> 00:43:07,000
+Swahili uh translation model and we
+
+1003
+00:43:05,280 --> 00:43:08,839
+don't have a lot of bitext for English and
+
+1004
+00:43:07,000 --> 00:43:11,480
+Swahili but we have lots of
+
+1005
+00:43:08,839 --> 00:43:13,839
+Swahili basically what we do is we train
+
+1006
+00:43:11,480 --> 00:43:18,520
+a Swahili to English
+
+1007
+00:43:13,839 --> 00:43:21,200
+model and we generate a lot of
+
+1008
+00:43:18,520 --> 00:43:23,119
+like pseudo-translated
+
+1009
+00:43:21,200 --> 00:43:26,640
+English here um and then use that as
+
+1010
+00:43:23,119 --> 00:43:30,200
+parallel data um this is really widely
+
+1011
+00:43:26,640 --> 00:43:32,400
+used in machine translation
+
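+A sketch of that recipe using an off-the-shelf Swahili-to-English model; the Hugging Face NLLB checkpoint name and its language-code arguments are assumptions from memory, so check the model card:
+
+```python
+from transformers import pipeline
+
+# reverse-direction model: Swahili -> English
+sw_to_en = pipeline("translation",
+                    model="facebook/nllb-200-distilled-600M",
+                    src_lang="swh_Latn", tgt_lang="eng_Latn")
+
+monolingual_sw = ["Habari ya asubuhi."]  # stand-in for lots of monolingual Swahili
+pseudo_parallel = [(sw_to_en(s)[0]["translation_text"], s) for s in monolingual_sw]
+# then train the English->Swahili model on (pseudo English, real Swahili) pairs
+```
+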
+1012
+00:43:30,200 --> 00:43:35,400
+um there is a caveat that this English is not
+
+1013
+00:43:32,400 --> 00:43:37,480
+natural um but that's actually kind of
+
+1014
+00:43:35,400 --> 00:43:41,520
+okay for a couple reasons um if the
+
+1015
+00:43:37,480 --> 00:43:43,119
+Swahili is actually natural then um at
+
+1016
+00:43:41,520 --> 00:43:44,920
+least the output is natural so you're
+
+1017
+00:43:43,119 --> 00:43:47,359
+generating natural output even if the
+
+1018
+00:43:44,920 --> 00:43:49,400
+input is a little bit unnatural the
+
+1019
+00:43:47,359 --> 00:43:51,400
+other thing is actually as models get
+
+1020
+00:43:49,400 --> 00:43:54,079
+better there's a lot of really bad
+
+1021
+00:43:51,400 --> 00:43:55,920
+translations online um like for example
+
+1022
+00:43:54,079 --> 00:43:59,119
+translations from not very good
+
+1023
+00:43:55,920 --> 00:44:01,119
+translators or translations from uh
+
+1024
+00:43:59,119 --> 00:44:03,960
+older versions of Google Translate for
+
+1025
+00:44:01,119 --> 00:44:06,760
+example a lot of the like data online is
+
+1026
+00:44:03,960 --> 00:44:08,319
+like old machine translation data and so
+
+1027
+00:44:06,760 --> 00:44:09,680
+because of that if you have a good
+
+1028
+00:44:08,319 --> 00:44:11,240
+back translation model it might be
+
+1029
+00:44:09,680 --> 00:44:14,319
+actually better than your original data
+
+1030
+00:44:11,240 --> 00:44:16,280
+in the first place so um because of
+
+1031
+00:44:14,319 --> 00:44:17,280
+that back translation can be
+
+1032
+00:44:16,280 --> 00:44:20,079
+pretty
+
+1033
+00:44:17,280 --> 00:44:23,480
+good and they also incorporated the seed
+
+1034
+00:44:20,079 --> 00:44:26,520
+data that they created to seed
+
+1035
+00:44:23,480 --> 00:44:29,040
+the model so this is a pretty good model
+
+1036
+00:44:26,520 --> 00:44:29,040
+um we
+
+1037
+00:44:29,200 --> 00:44:35,319
+did um some evaluation of this from the
+
+1038
+00:44:32,400 --> 00:44:35,319
+language modeling
+
+1039
+00:44:38,720 --> 00:44:44,319
+perspective and basically what we found
+
+1040
+00:44:41,880 --> 00:44:48,880
+was that it
+
+1041
+00:44:44,319 --> 00:44:50,920
+is quite effective uh the NLLB model
+
+1042
+00:44:48,880 --> 00:44:53,440
+is quite competitive even with respect
+
+1043
+00:44:50,920 --> 00:44:55,960
+to the GPT models for uh lower resource
+
+1044
+00:44:53,440 --> 00:44:59,040
+languages under like the top 40 it's not
+
+1045
+00:44:55,960 --> 00:45:01,040
+as good for the top 40 languages and if
+
+1046
+00:44:59,040 --> 00:45:04,040
+you compare Google Translate and Chat-
+
+1047
+00:45:01,040 --> 00:45:06,200
+GPT on the top uh kind of like the
+
+1048
+00:45:04,040 --> 00:45:08,720
+higher resource languages actually um
+
+1049
+00:45:06,200 --> 00:45:11,319
+GPT-4 can beat Google Translate on some
+
+1050
+00:45:08,720 --> 00:45:14,160
+languages like Romanian and uh other
+
+1051
+00:45:11,319 --> 00:45:16,280
+things like that so um but anyway the
+
+1052
+00:45:14,160 --> 00:45:17,760
+NLLB model is quite good so if you want
+
+1053
+00:45:16,280 --> 00:45:20,000
+to start out with a model you can use
+
+1054
+00:45:17,760 --> 00:45:22,000
+this there's also another more recent
+
+1055
+00:45:20,000 --> 00:45:25,040
+model called Seamless M4T that also
+
+1056
+00:45:22,000 --> 00:45:27,559
+allows you to do speech translation as
+
+1057
+00:45:25,040 --> 00:45:30,280
+well um and if you uh you want to show
+
+1058
+00:45:27,559 --> 00:45:33,359
+your CMU pride there's also LegoMT from
+
+1059
+00:45:30,280 --> 00:45:36,359
+Lei Li's group that you can use for
+
+1060
+00:45:33,359 --> 00:45:36,359
+this
+
+1061
+00:45:36,440 --> 00:45:41,040
+cool okay um I'd like to move on to
+
+1062
+00:45:39,400 --> 00:45:42,760
+multilingual pre-trained models are there
+
+1063
+00:45:41,040 --> 00:45:45,040
+any questions about what I talked about
+
+1064
+00:45:42,760 --> 00:45:45,040
+so
+
+1065
+00:45:45,559 --> 00:45:52,240
+far okay cool so I want to talk about
+
+1066
+00:45:48,920 --> 00:45:55,359
+multilingual pre-trained models um
+
+1067
+00:45:52,240 --> 00:45:58,280
+closed LLMs such as GPT-4 are typically
+
+1068
+00:45:55,359 --> 00:46:02,559
+kind of like incidentally multilingual
+
+1069
+00:45:58,280 --> 00:46:05,920
+um due to large training data um open
+
+1070
+00:46:02,559 --> 00:46:07,480
+LLMs often uh do data filtering to allow
+
+1071
+00:46:05,920 --> 00:46:09,760
+for good performance on English like I
+
+1072
+00:46:07,480 --> 00:46:13,000
+mentioned before and so there aren't a
+
+1073
+00:46:09,760 --> 00:46:15,520
+whole lot of really good uh open options
+
+1074
+00:46:13,000 --> 00:46:18,319
+for standard left-to-right
+
+1075
+00:46:15,520 --> 00:46:20,200
+autoregressive models I would say probably
+
+1076
+00:46:18,319 --> 00:46:24,200
+at the moment Qwen is the best one and I
+
+1077
+00:46:20,200 --> 00:46:25,800
+already covered that uh
+
+1078
+00:46:24,200 --> 00:46:28,359
+before in the previous class so I'm not
+
+1079
+00:46:25,800 --> 00:46:30,680
+going to do it again so but what I would
+
+1080
+00:46:28,359 --> 00:46:33,599
+like to talk about is multilingual um
+
+1081
+00:46:30,680 --> 00:46:37,720
+representation learning models and also
+
+1082
+00:46:33,599 --> 00:46:39,880
+um encoder decoder models because
+
+1083
+00:46:37,720 --> 00:46:42,720
+unlike in English where I would say
+
+1084
+00:46:39,880 --> 00:46:45,480
+all the good models are autoregressive
+
+1085
+00:46:42,720 --> 00:46:47,000
+um the encoder decoder and um masked
+
+1086
+00:46:45,480 --> 00:46:49,040
+language models are actually pretty
+
+1087
+00:46:47,000 --> 00:46:52,240
+competitive for multilingual
+
+1088
+00:46:49,040 --> 00:46:55,079
+tasks so um language model pre-training
+
+1089
+00:46:52,240 --> 00:46:57,599
+uh such as BERT has been shown to be uh
+
+1090
+00:46:55,079 --> 00:46:59,240
+effective and it uses masked language
+
+1091
+00:46:57,599 --> 00:47:02,200
+modeling uh
+
+1092
+00:46:59,240 --> 00:47:05,079
+objectives um and models such as mBERT
+
+1093
+00:47:02,200 --> 00:47:06,680
+and XLM and XLM-R extend BERT-style
+
+1094
+00:47:05,079 --> 00:47:08,680
+training for multilingual
+
+1095
+00:47:06,680 --> 00:47:11,480
+pre-training um before I get into
+
+1096
+00:47:08,680 --> 00:47:13,359
+exactly how they do this um uh I'd like
+
+1097
+00:47:11,480 --> 00:47:16,000
+to talk a little bit about multilingual
+
+1098
+00:47:13,359 --> 00:47:17,760
+uh representation evaluation and there's
+
+1099
+00:47:16,000 --> 00:47:19,839
+a few standard benchmarks that people
+
+1100
+00:47:17,760 --> 00:47:23,040
+use for kind of just general purpose
+
+1101
+00:47:19,839 --> 00:47:25,760
+skills of multilingual models um the
+
+1102
+00:47:23,040 --> 00:47:28,880
+first one is XTREME and the follow-up
+
+1103
+00:47:25,760 --> 00:47:31,240
+XTREME-R and basically uh the way they
+
+1104
+00:47:28,880 --> 00:47:33,040
+work is you have sentence classification
+
+1105
+00:47:31,240 --> 00:47:35,680
+structured prediction sentence retrieval
+
+1106
+00:47:33,040 --> 00:47:37,440
+and question answering tasks across a
+
+1107
+00:47:35,680 --> 00:47:39,760
+pretty wide variety of typologically
+
+1108
+00:47:37,440 --> 00:47:41,480
+diverse languages maybe uh something on
+
+1109
+00:47:39,760 --> 00:47:44,520
+the order of 40 different
+
+1110
+00:47:41,480 --> 00:47:46,599
+languages um then there's also XGLUE
+
+1111
+00:47:44,520 --> 00:47:49,200
+which is less typologically diverse but
+
+1112
+00:47:46,599 --> 00:47:52,079
+also contains generation style
+
+1113
+00:47:49,200 --> 00:47:54,359
+tasks um and yeah XTREME-R is a harder
+
+1114
+00:47:52,079 --> 00:47:58,200
+version based on XTREME
+
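+These benchmarks are easy to pull down for evaluation; a sketch assuming the "xtreme" dataset id and config names on the Hugging Face hub:
+
+```python
+from datasets import load_dataset
+
+xnli = load_dataset("xtreme", "XNLI")      # sentence classification
+ner = load_dataset("xtreme", "PAN-X.sw")   # structured prediction (Swahili NER)
+print(ner["train"][0])                     # peek at one tagged example
+```
+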
+1115
+00:47:54,359 --> 00:48:00,760
+um so the way that people do
+
+1116
+00:47:58,200 --> 00:48:03,079
+multilingual masked language modeling
+
+1117
+00:48:00,760 --> 00:48:05,520
+style objectives is unlike masked language
+
+1118
+00:48:03,079 --> 00:48:08,079
+modeling where you mask out the input
+
+1119
+00:48:05,520 --> 00:48:10,440
+and you try to predict the output um
+
+1120
+00:48:08,079 --> 00:48:14,359
+what they can do is they feed in one
+
+1121
+00:48:10,440 --> 00:48:16,319
+sentence that's in um in one language and
+
+1122
+00:48:14,359 --> 00:48:19,839
+another sentence that's in another
+
+1123
+00:48:16,319 --> 00:48:23,400
+language uh ideally parallel sentences
+
+1124
+00:48:19,839 --> 00:48:25,720
+and then mask things out so the training
+
+1125
+00:48:23,400 --> 00:48:28,920
+code can be entirely the same or almost
+
+1126
+00:48:25,720 --> 00:48:31,400
+entirely the same minus adding uh kind
+
+1127
+00:48:28,920 --> 00:48:35,839
+of language embeddings here but they
+
+1128
+00:48:31,400 --> 00:48:37,760
+train the model um so that it can
+
+1129
+00:48:35,839 --> 00:48:40,160
+kind of predict across the languages
+
+1130
+00:48:37,760 --> 00:48:42,079
+which aligns the representations across
+
+1131
+00:48:40,160 --> 00:48:44,079
+languages better and people have
+
+1132
+00:48:42,079 --> 00:48:46,480
+demonstrated that this is uh good for
+
+1133
+00:48:44,079 --> 00:48:46,480
+learning
+
+1134
+00:48:47,119 --> 00:48:50,119
+representations
+
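+A toy sketch of that objective (XLM's "translation language modeling" idea): concatenate a parallel sentence pair and mask tokens on both sides, so the model can attend across languages to fill in the blanks:
+
+```python
+import random
+
+def tlm_example(src_tokens, tgt_tokens, mask_prob=0.15, mask="[MASK]"):
+    """Build one masked training example from a parallel sentence pair."""
+    pair = src_tokens + ["[SEP]"] + tgt_tokens
+    inputs, labels = [], []
+    for tok in pair:
+        if tok != "[SEP]" and random.random() < mask_prob:
+            inputs.append(mask)   # hide the token in the input
+            labels.append(tok)    # ...and predict it as the target
+        else:
+            inputs.append(tok)
+            labels.append(None)   # no loss at unmasked positions
+    return inputs, labels
+
+x, y = tlm_example("the cat sat".split(), "le chat s'est assis".split())
+```
+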
+1135
+00:48:50,520 --> 00:48:56,359
+um actually
+
+1136
+00:48:53,880 --> 00:48:58,520
+there's also methods that can
+
+1137
+00:48:56,359 --> 00:49:00,240
+explicitly use alignments to improve the
+
+1138
+00:48:58,520 --> 00:49:02,440
+alignment between representations in
+
+1139
+00:49:00,240 --> 00:49:04,400
+different languages but in the interest
+
+1140
+00:49:02,440 --> 00:49:09,079
+of time I'll skip that and maybe get
+
+1141
+00:49:04,400 --> 00:49:11,680
+back to it if we have time um a
+
+1142
+00:49:09,079 --> 00:49:14,119
+very good model to know about because I
+
+1143
+00:49:11,680 --> 00:49:18,400
+think this is still one of the best
+
+1144
+00:49:14,119 --> 00:49:20,760
+models for um being used for
+
+1145
+00:49:18,400 --> 00:49:24,200
+multilingual processing is
+
+1146
+00:49:20,760 --> 00:49:26,000
+mT5 and mT5 is based on the T5
+
+1147
+00:49:24,200 --> 00:49:28,599
+architecture which is basically an
+
+1148
+00:49:26,000 --> 00:49:33,240
+encoder decoder architecture with a
+
+1149
+00:49:28,599 --> 00:49:36,119
+masked uh a masked reconstruction
+
+1150
+00:49:33,240 --> 00:49:40,640
+objective and uh the way that works in
+
+1151
+00:49:36,119 --> 00:49:43,559
+case um uh you know we haven't talked
+
+1152
+00:49:40,640 --> 00:49:45,880
+about this in a while so in case uh we
+
+1153
+00:49:43,559 --> 00:49:48,520
+need a refresher the way it works
+
+1154
+00:49:45,880 --> 00:49:52,920
+is basically um you have an encoder
+
+1155
+00:49:48,520 --> 00:49:55,799
+decoder model uh that takes in an input
+
+1156
+00:49:52,920 --> 00:49:58,960
+and you do like perturbations of the
+
+1157
+00:49:55,799 --> 00:50:01,040
+input so you do things like
+
+1158
+00:49:58,960 --> 00:50:05,160
+dropping words from the input
+
+1159
+00:50:01,040 --> 00:50:07,359
+and you do things like
+
+1160
+00:50:05,160 --> 00:50:09,079
+um reordering words in the input
+
+1161
+00:50:07,359 --> 00:50:11,119
+and you
+
+1162
+00:50:09,079 --> 00:50:15,240
+try to get the model to reconstruct the
+
+1163
+00:50:11,119 --> 00:50:17,960
+original uh text and so basically they
+
+1164
+00:50:15,240 --> 00:50:21,160
+train this on many different languages
+
+1165
+00:50:17,960 --> 00:50:23,520
+and uh this gives pretty high
+
+1166
+00:50:21,160 --> 00:50:27,280
+performance um overall for a lot of
+
+1167
+00:50:23,520 --> 00:50:29,240
+different tasks and in particular
+
+1168
+00:50:27,280 --> 00:50:32,079
+this model was trained explicitly to be
+
+1169
+00:50:29,240 --> 00:50:35,160
+multilingual so it's
+
+1170
+00:50:32,079 --> 00:50:36,960
+um it essentially has better performance
+
+1171
+00:50:35,160 --> 00:50:38,520
+at kind of like the long-tail tasks than
+
+1172
+00:50:36,960 --> 00:50:41,799
+a lot of the standard language models
+
+1173
+00:50:38,520 --> 00:50:48,359
+that we have um there's also other
+
+1174
+00:50:41,799 --> 00:50:52,119
+versions uh like mT0 and um ByT5 mT0 was
+
+1175
+00:50:48,359 --> 00:50:54,920
+instruction-tuned further um ByT5 was
+
+1176
+00:50:52,119 --> 00:50:58,119
+trained with no tokenization just byte
+
+1177
+00:50:54,920 --> 00:50:59,480
+level um uh byte-level modeling and
+
+1178
+00:50:58,119 --> 00:51:01,440
+because it's byte-level modeling it
+
+1179
+00:50:59,480 --> 00:51:03,640
+allows it to model any script you won't
+
+1180
+00:51:01,440 --> 00:51:06,480
+have troubles with Unicode and other
+
+1181
+00:51:03,640 --> 00:51:09,319
+stuff like that um I have personally
+
+1182
+00:51:06,480 --> 00:51:11,000
+found that mT5 performs better than ByT5
+
+1183
+00:51:09,319 --> 00:51:14,119
+um or at least it's easier to get it to
+
+1184
+00:51:11,000 --> 00:51:15,839
+work so um if you want a single one to
+
+1185
+00:51:14,119 --> 00:51:18,599
+start out with I'd say mT5 is pretty
+
+1186
+00:51:15,839 --> 00:51:20,880
+good um but there's also other uh
+
+1187
+00:51:18,599 --> 00:51:24,000
+options and I should also mention
+
+1188
+00:51:20,880 --> 00:51:27,480
+actually there was um a fine-tuned
+
+1189
+00:51:24,000 --> 00:51:31,150
+version of mT5
+
+1190
+00:51:27,480 --> 00:51:34,249
+recently uh which was
+
+1191
+00:51:31,150 --> 00:51:34,249
+called
+
+1192
+00:51:38,160 --> 00:51:45,200
+so um there was this model um
+
+1193
+00:51:42,319 --> 00:51:48,119
+based on a large amount of kind of
+
+1194
+00:51:45,200 --> 00:51:48,119
+instruction tuning
+
+1195
+00:51:48,880 --> 00:51:54,880
+data trying to find their
+
+1196
+00:51:51,280 --> 00:51:54,880
+modeling
+
+1197
+00:51:55,280 --> 00:52:01,240
+page yes so it's based on mT5 um but
+
+1198
+00:51:58,960 --> 00:52:02,880
+they trained it on a whole bunch of uh
+
+1199
+00:52:01,240 --> 00:52:06,200
+kind of instruction tuning data so this
+
+1200
+00:52:02,880 --> 00:52:08,240
+is kind of like a more modern version of
+
+1201
+00:52:06,200 --> 00:52:10,799
+uh mT0 I guess and I haven't played
+
+1202
+00:52:08,240 --> 00:52:13,240
+around with it myself a lot but it's uh
+
+1203
+00:52:10,799 --> 00:52:16,440
+uh allegedly pretty good at instruction
+
+1204
+00:52:13,240 --> 00:52:16,440
+following and other stuff like
+
+1205
+00:52:20,680 --> 00:52:24,839
+this
+
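+For reference, loading mT5 with Hugging Face transformers looks like this sketch (checkpoint name assumed; note that raw mT5 is only pretrained, so you would fine-tune it on your task before expecting useful generations):
+
+```python
+from transformers import AutoTokenizer, MT5ForConditionalGeneration
+
+tok = AutoTokenizer.from_pretrained("google/mt5-small")
+model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
+
+# after fine-tuning on (input text, target text) pairs for your task:
+batch = tok("question: ... context: ...", return_tensors="pt")  # placeholder input
+out = model.generate(**batch, max_new_tokens=32)
+print(tok.decode(out[0], skip_special_tokens=True))
+```
+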
+1206
+00:52:22,559 --> 00:52:27,960
+cool um I'd like to talk a little bit
+
+1207
+00:52:24,839 --> 00:52:30,359
+about uh some more advanced modeling
+
+1208
+00:52:27,960 --> 00:52:32,760
+strategies um and so for cross-lingual
+
+1209
+00:52:30,359 --> 00:52:35,839
+transfer learning um it leverages data
+
+1210
+00:52:32,760 --> 00:52:39,319
+from one or more high resource uh source
+
+1211
+00:52:35,839 --> 00:52:42,200
+languages um another thing that we often
+
+1212
+00:52:39,319 --> 00:52:44,119
+do is uh pre-training and fine-tuning
+
+1213
+00:52:42,200 --> 00:52:45,799
+and pre-training and fine-tuning is good
+
+1214
+00:52:44,119 --> 00:52:47,359
+if you have a specific language that you
+
+1215
+00:52:45,799 --> 00:52:48,839
+want your model to be good at because it
+
+1216
+00:52:47,359 --> 00:52:50,640
+allows you to specialize to that
+
+1217
+00:52:48,839 --> 00:52:52,200
+language and as I mentioned with the
+
+1218
+00:52:50,640 --> 00:52:53,400
+curse of multilinguality it's hard to
+
+1219
+00:52:52,200 --> 00:52:54,079
+get a model that's really good at all
+
+1220
+00:52:53,400 --> 00:52:56,559
+the
+
+1221
+00:52:54,079 --> 00:53:00,079
+languages um there's also zero-shot
+
+1222
+00:52:56,559 --> 00:53:02,520
+transfer um and finally something called
+
+1223
+00:53:00,079 --> 00:53:06,160
+annotation projection or it's also
+
+1224
+00:53:02,520 --> 00:53:08,960
+called translate-train in some
+
+1225
+00:53:06,160 --> 00:53:10,480
+literature so pre-train and fine-tune um
+
+1226
+00:53:08,960 --> 00:53:13,599
+basically the way it works is you train
+
+1227
+00:53:10,480 --> 00:53:15,160
+on lots and lots of data from
+
+1228
+00:53:13,599 --> 00:53:17,400
+lots of different languages and then you
+
+1229
+00:53:15,160 --> 00:53:20,359
+fine-tune on data in another
+
+1230
+00:53:17,400 --> 00:53:23,240
+language um
+
+1231
+00:53:20,359 --> 00:53:25,760
+and so this tends to work pretty
+
+1232
+00:53:23,240 --> 00:53:28,799
+well it particularly works well if there
+
+1233
+00:53:25,760 --> 00:53:30,319
+was at least some uh data from the
+
+1234
+00:53:28,799 --> 00:53:32,559
+language that you want to train on in
+
+1235
+00:53:30,319 --> 00:53:34,400
+the original training data set if there
+
+1236
+00:53:32,559 --> 00:53:35,799
+was no data from the language you want
+
+1237
+00:53:34,400 --> 00:53:37,640
+to train on in the original training
+
+1238
+00:53:35,799 --> 00:53:39,400
+data set then it's pretty tough to get
+
+1239
+00:53:37,640 --> 00:53:41,880
+this to work because the model doesn't
+
+1240
+00:53:39,400 --> 00:53:43,799
+like already have knowledge of
+
+1241
+00:53:41,880 --> 00:53:47,040
+the
+
+1242
+00:53:43,799 --> 00:53:50,880
+language um another thing that you can
+
+1243
+00:53:47,040 --> 00:53:52,720
+do is uh so one of the problems
+
+1244
+00:53:50,880 --> 00:53:54,720
+with adapting to low resource languages
+
+1245
+00:53:52,720 --> 00:53:57,400
+is obviously like lack of data
+
+1246
+00:53:54,720 --> 00:53:59,319
+especially lack of supervised data
+
+1247
+00:53:57,400 --> 00:54:02,200
+so another thing that you can do is you
+
+1248
+00:53:59,319 --> 00:54:05,319
+can train on that language itself and
+
+1249
+00:54:02,200 --> 00:54:09,280
+then a few other very highly related
+
+1250
+00:54:05,319 --> 00:54:11,240
+languages and so if you know um like if
+
+1251
+00:54:09,280 --> 00:54:14,640
+you want to train for a
+
+1252
+00:54:11,240 --> 00:54:17,359
+particular uh you know language from
+
+1253
+00:54:14,640 --> 00:54:19,359
+India or something like this very often
+
+1254
+00:54:17,359 --> 00:54:20,680
+there's a few other languages from India
+
+1255
+00:54:19,359 --> 00:54:24,240
+and sometimes some of them might be
+
+1256
+00:54:20,680 --> 00:54:27,440
+higher resource so maybe like uh
+
+1257
+00:54:24,240 --> 00:54:29,000
+Hindi is related um and that's higher
+
+1258
+00:54:27,440 --> 00:54:31,680
+resource than the language that you're
+
+1259
+00:54:29,000 --> 00:54:33,200
+interested in training on um you can also
+
+1260
+00:54:31,680 --> 00:54:37,760
+do it I mean obviously you can do this
+
+1261
+00:54:33,200 --> 00:54:37,760
+for any language um but for some
+
+1262
+00:54:36,119 --> 00:54:39,599
+languages there's just no higher
+
+1263
+00:54:37,760 --> 00:54:43,280
+resource counterpart which makes it a
+
+1264
+00:54:39,599 --> 00:54:46,839
+little bit tricky um but uh when there
+
+1265
+00:54:43,280 --> 00:54:46,839
+is you can take advantage of
+
+1266
+00:54:46,880 --> 00:54:52,400
+that this is also another example of
+
+1267
+00:54:49,680 --> 00:54:54,200
+something that you know is a little bit
+
+1268
+00:54:52,400 --> 00:54:55,760
+tricky to implement because it requires
+
+1269
+00:54:54,200 --> 00:54:58,960
+meta-learning and calculation of
+
+1270
+00:54:55,760 --> 00:55:01,720
+gradients and stuff like this but there's
+
+1271
+00:54:58,960 --> 00:55:03,680
+this really nice work that essentially
+
+1272
+00:55:01,720 --> 00:55:06,920
+tries to learn a model that's good for
+
+1273
+00:55:03,680 --> 00:55:09,839
+fine-tuning into different languages and
+
+1274
+00:55:06,920 --> 00:55:12,240
+so the way they do this is unlike
+
+1275
+00:55:09,839 --> 00:55:15,200
+standard multilingual learning where you
+
+1276
+00:55:12,240 --> 00:55:17,160
+learn a model that is good um at
+
+1277
+00:55:15,200 --> 00:55:19,200
+processing multiple languages and then
+
+1278
+00:55:17,160 --> 00:55:22,200
+try to fine-tune it to a lower resource
+
+1279
+00:55:19,200 --> 00:55:27,559
+language what you do is you try to learn
+
+1280
+00:55:22,200 --> 00:55:29,280
+a model that is good at adapting to low
+
+1281
+00:55:27,559 --> 00:55:30,559
+resource languages and the way you do
+
+1282
+00:55:29,280 --> 00:55:32,760
+this is through a method called meta-
+
+1283
+00:55:30,559 --> 00:55:35,200
+learning um I'm not going to cover meta-
+
+1284
+00:55:32,760 --> 00:55:36,520
+learning given the limited amount of
+
+1285
+00:55:35,200 --> 00:55:39,280
+time if you've heard about it before you
+
+1286
+00:55:36,520 --> 00:55:41,440
+know basically what it is um if you
+
+1287
+00:55:39,280 --> 00:55:43,760
+haven't heard about it um just to give a
+
+1288
+00:55:41,440 --> 00:55:46,240
+general idea what you do is you have a
+
+1289
+00:55:43,760 --> 00:55:48,839
+held-out dev uh development set like
+
+1290
+00:55:46,240 --> 00:55:52,119
+I did for the uh data balancing thing
+
+1291
+00:55:48,839 --> 00:55:53,839
+and you try to learn models so that the
+
+1292
+00:55:52,119 --> 00:55:55,520
+gradients that you derive from the
+
+1293
+00:55:53,839 --> 00:55:58,720
+updates here align well with the
+
+1294
+00:55:55,520 --> 00:56:00,760
+gradients on this data um and so you're
+
+1295
+00:55:58,720 --> 00:56:02,760
+trying to learn um you're trying to
+
+1296
+00:56:00,760 --> 00:56:04,880
+learn things where you know you update
+
+1297
+00:56:02,760 --> 00:56:07,119
+in a direction that is uh good for
+
+1298
+00:56:04,880 --> 00:56:10,440
+updating towards uh the low resource
+
+1299
+00:56:07,119 --> 00:56:10,440
+language when you start training on
+
+1300
+00:56:11,720 --> 00:56:17,880
+it so these are all fine-tuning related
+
+1301
+00:56:15,000 --> 00:56:19,920
+things um there's a lot
+
+1302
+00:56:17,880 --> 00:56:22,480
+more to talk about uh with respect to
+
+1303
+00:56:19,920 --> 00:56:25,880
+this um like how do you choose languages
+
+1304
+00:56:22,480 --> 00:56:29,319
+and other things like this um another
+
+1305
+00:56:25,880 --> 00:56:31,079
+big paradigm is zero-shot transfer um
+
+1306
+00:56:29,319 --> 00:56:33,200
+for pre-trained
+
+1307
+00:56:31,079 --> 00:56:35,760
+representations and the way that this
+
+1308
+00:56:33,200 --> 00:56:37,880
+works is um you pre-train a large
+
+1309
+00:56:35,760 --> 00:56:40,520
+language model using monolingual data
+
+1310
+00:56:37,880 --> 00:56:42,480
+from many different languages and then
+
+1311
+00:56:40,520 --> 00:56:45,319
+you fine-tune using annotated data in a
+
+1312
+00:56:42,480 --> 00:56:48,280
+given language like English and then you
+
+1313
+00:56:45,319 --> 00:56:50,440
+test the model on a different
+
+1314
+00:56:48,280 --> 00:56:51,160
+language uh from the fine-tuned language
+
+1315
+00:56:50,440 --> 00:56:54,079
+like
+
+1316
+00:56:51,160 --> 00:56:55,599
+French and uh we benefit from this
+
+1317
+00:56:54,079 --> 00:56:56,880
+because multilingual pre-training can
+
+1318
+00:56:55,599 --> 00:56:58,520
+learn something
+
+1319
+00:56:56,880 --> 00:57:00,480
+I shouldn't say a universal
+
+1320
+00:56:58,520 --> 00:57:02,200
+representation but at least a
+
+1321
+00:57:00,480 --> 00:57:04,559
+representation that is conducive for
+
+1322
+00:57:02,200 --> 00:57:07,400
+transfer across
+
+1323
+00:57:04,559 --> 00:57:11,359
+languages um there's a lot of work on
+
+1324
+00:57:07,400 --> 00:57:13,599
+this um I am not going to cover it in a
+
+1325
+00:57:11,359 --> 00:57:15,520
+lot of detail number one in the interest
+
+1326
+00:57:13,599 --> 00:57:18,280
+of time but also number two because I do
+
+1327
+00:57:15,520 --> 00:57:20,440
+actually kind of uh strongly believe
+
+1328
+00:57:18,280 --> 00:57:23,799
+that there are other uh reasonable
+
+1329
+00:57:20,440 --> 00:57:26,599
+options that outperform this uh a pretty
+
+1330
+00:57:23,799 --> 00:57:28,599
+fair amount of the time and one of them
+
+1331
+00:57:26,599 --> 00:57:33,280
+is something called uh annotation
+
+1332
+00:57:28,599 --> 00:57:39,960
+projection um or
+
+1333
+00:57:33,280 --> 00:57:39,960
+translate-train and the way this works is very
+
+1334
+00:57:42,920 --> 00:57:47,680
+similar to what I talked
+
+1335
+00:57:45,079 --> 00:57:47,680
+about with back
+
+1336
+00:57:48,000 --> 00:57:52,160
+translation but there's two varieties of
+
+1337
+00:57:50,480 --> 00:57:54,400
+annotation projection the first one
+
+1338
+00:57:52,160 --> 00:57:57,680
+translate-train
+
+1339
+00:57:54,400 --> 00:58:00,480
+is you have um annotated training
+
+1340
+00:57:57,680 --> 00:58:02,960
+data in English and you translate it to
+
+1341
+00:58:00,480 --> 00:58:04,839
+the language you want to process like Swahili
+
+1342
+00:58:02,960 --> 00:58:06,400
+and so this is relatively easy for like
+
+1343
+00:58:04,839 --> 00:58:07,920
+question answering or something like
+
+1344
+00:58:06,400 --> 00:58:10,119
+this so what you do is you just take
+
+1345
+00:58:07,920 --> 00:58:13,280
+question answering data you translate
+
+1346
+00:58:10,119 --> 00:58:15,039
+the question you translate the answer
+
+1347
+00:58:13,280 --> 00:58:16,640
+and now you have Swahili question
+
+1348
+00:58:15,039 --> 00:58:18,599
+answering data and sure there may be
+
+1349
+00:58:16,640 --> 00:58:20,760
+like translation errors or something
+
+1350
+00:58:18,599 --> 00:58:22,640
+like this but having some training data
+
+1351
+00:58:20,760 --> 00:58:25,000
+is better than having no training data
+
+1352
+00:58:22,640 --> 00:58:26,799
+and also machine translation systems are
+
+1353
+00:58:25,000 --> 00:58:28,839
+reasonably good nowadays so you can
+
+1354
+00:58:26,799 --> 00:58:32,880
+actually get reasonably high-quality data
+
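+The simple variety is a few lines of glue code; a sketch where `translate` stands in for any MT system (such as the NLLB pipeline shown earlier):
+
+```python
+def translate_train_qa(english_examples, translate):
+    """Project English QA training data into the target language by translating
+    both fields; the output may contain MT noise, but some data beats none."""
+    return [
+        {"question": translate(ex["question"]),
+         "answer": translate(ex["answer"])}
+        for ex in english_examples
+    ]
+```
+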
um there's a lot of work on + +1324 +00:57:07,400 --> 00:57:13,599 +this um I am not going to cover it in a + +1325 +00:57:11,359 --> 00:57:15,520 +lot of detail number one in the interest + +1326 +00:57:13,599 --> 00:57:18,280 +of time but also number two because I do + +1327 +00:57:15,520 --> 00:57:20,440 +actually kind of uh strongly believe + +1328 +00:57:18,280 --> 00:57:23,799 +that there are other uh reasonable + +1329 +00:57:20,440 --> 00:57:26,599 +options that outperform this uh a pretty + +1330 +00:57:23,799 --> 00:57:28,599 +fair amount of the time and one of them + +1331 +00:57:26,599 --> 00:57:33,280 +is something called uh annotation + +1332 +00:57:28,599 --> 00:57:39,960 +projection um or translate uh translate- + +1333 +00:57:33,280 --> 00:57:39,960 +train and the way this works is very + +1334 +00:57:42,920 --> 00:57:47,680 +similar very similar to what I talked + +1335 +00:57:45,079 --> 00:57:47,680 +about with back + +1336 +00:57:48,000 --> 00:57:52,160 +translation but there's two varieties of + +1337 +00:57:50,480 --> 00:57:54,400 +annotation projection the first one + +1338 +00:57:52,160 --> 00:57:57,680 +translate- + +1339 +00:57:54,400 --> 00:58:00,480 +train is you have um annotated training + +1340 +00:57:57,680 --> 00:58:02,960 +data in English and you translate it to + +1341 +00:58:00,480 --> 00:58:04,839 +the language you want to process like Swahili + +1342 +00:58:02,960 --> 00:58:06,400 +and so this is relatively easy for like + +1343 +00:58:04,839 --> 00:58:07,920 +question answering or something like + +1344 +00:58:06,400 --> 00:58:10,119 +this so what you do is you just take + +1345 +00:58:07,920 --> 00:58:13,280 +question answering data you translate + +1346 +00:58:10,119 --> 00:58:15,039 +the question you translate the answer + +1347 +00:58:13,280 --> 00:58:16,640 +and now you have Swahili question + +1348 +00:58:15,039 --> 00:58:18,599 +answering data and sure there may be + +1349 +00:58:16,640 --> 00:58:20,760 +like translation errors or something + +1350 +00:58:18,599 --> 00:58:22,640 +like this but having some training data + +1351 +00:58:20,760 --> 00:58:25,000 +is better than having no training data + +1352 +00:58:22,640 --> 00:58:26,799 +and also machine translation systems are + +1353 +00:58:25,000 --> 00:58:28,839 +reasonably good nowadays so you can + +1354 +00:58:26,799 --> 00:58:32,880 +actually get reasonably high quality + +1355 +00:58:28,839 --> 00:58:35,119 +data
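A sketch of translate-train for QA data; the MT checkpoint name and the english_qa_data variable are assumptions, so substitute whatever English-to-Swahili system and dataset you actually have:

```python
# Translate-train: machine-translate English QA pairs into the target
# language and train on the (noisy but useful) result.
from transformers import pipeline

mt = pipeline("translation", model="Helsinki-NLP/opus-mt-en-sw")  # assumed checkpoint

def to_swahili(example):
    # Translation errors are tolerable: some in-language data beats none.
    return {
        "question": mt(example["question"])[0]["translation_text"],
        "answer": mt(example["answer"])[0]["translation_text"],
    }

swahili_qa_data = [to_swahili(ex) for ex in english_qa_data]  # assumed dataset
```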
um the more complex version of this + +1356 +00:58:32,880 --> 00:58:37,039 +is what if you can't just translate your + +1357 +00:58:35,119 --> 00:58:41,119 +data what if your your translated data + +1358 +00:58:37,039 --> 00:58:44,720 +is not just um text but it's some sort + +1359 +00:58:41,119 --> 00:58:47,960 +of annotations on top of text + +1360 +00:58:44,720 --> 00:58:49,760 +and um to take the hardest possible + +1361 +00:58:47,960 --> 00:58:51,520 +example the hardest possible example is + +1362 +00:58:49,760 --> 00:58:54,520 +something where you have like tags on + +1363 +00:58:51,520 --> 00:58:57,119 +every word and + +1364 +00:58:54,520 --> 00:58:59,760 +so this could be like for example for + +1365 +00:58:57,119 --> 00:59:03,079 +part of speech tagging and if you have + +1366 +00:58:59,760 --> 00:59:03,079 +part of speech tagging what you can + +1367 +00:59:03,359 --> 00:59:10,319 +do is um you can have English Swahili + +1368 +00:59:06,480 --> 00:59:14,599 +data maybe this is even um you know like + +1369 +00:59:10,319 --> 00:59:16,240 +already translated data um it's already + +1370 +00:59:14,599 --> 00:59:18,599 +translated data but you have part of + +1371 +00:59:16,240 --> 00:59:20,520 +speech tags either manual or automatic + +1372 +00:59:18,599 --> 00:59:23,079 +for the English data and then you + +1373 +00:59:20,520 --> 00:59:26,280 +basically project the part of speech + +1374 +00:59:23,079 --> 00:59:27,599 +tags to the other language and it + +1375 +00:59:26,280 --> 00:59:29,440 +doesn't have to be part of speech tags + +1376 +00:59:27,599 --> 00:59:31,680 +it could be like named entity labels it + +1377 +00:59:29,440 --> 00:59:34,119 +could be you know any other variety of + +1378 +00:59:31,680 --> 00:59:36,720 +things like this + +1379 +00:59:34,119 --> 00:59:38,720 +and this gets tricky because basically + +1380 +00:59:36,720 --> 00:59:40,400 +you need to find alignments between the + +1381 +00:59:38,720 --> 00:59:43,559 +words in the + +1382 +00:59:40,400 --> 00:59:44,880 +languages and um and then project the + +1383 +00:59:43,559 --> 00:59:47,960 +labels and you need to think about + +1384 +00:59:44,880 --> 00:59:49,880 +things like okay well if this is a noun + +1385 +00:59:47,960 --> 00:59:51,359 +uh how do I turn it into two nouns do I + +1386 +00:59:49,880 --> 00:59:54,079 +treat this as a + +1387 +00:59:51,359 --> 00:59:55,680 +determiner um like what sorts of rules + +1388 +00:59:54,079 --> 00:59:58,599 +do I use to solve these problems and + +1389 +00:59:55,680 --> 01:00:00,640 +stuff like that so um you can + +1390 +00:59:58,599 --> 01:00:02,280 +either just translate the data if you + +1391 +01:00:00,640 --> 01:00:03,520 +don't have these sorts of annotations or + +1392 +01:00:02,280 --> 01:00:06,839 +if you have these annotations you need + +1393 +01:00:03,520 --> 01:00:08,440 +to do some more tricky stuff + +1394 +01:00:06,839 --> 01:00:12,119 +basically + +1395 +01:00:08,440 --> 01:00:13,520 +um actually I'm I'm sorry I forgot to + +1396 +01:00:12,119 --> 01:00:15,160 +talk about word alignment and this is + +1397 +01:00:13,520 --> 01:00:18,039 +kind of important so I'll just talk + +1398 +01:00:15,160 --> 01:00:20,520 +about this um uh + +1399 +01:00:18,039 --> 01:00:22,680 +briefly so word alignment basically what + +1400 +01:00:20,520 --> 01:00:22,680 +it + +1401 +01:00:22,880 --> 01:00:27,880 +does is uh going back + +1402 +01:00:27,960 --> 01:00:30,920 +to the example + +1403 +01:00:33,760 --> 01:00:38,000 +here word alignment is basically getting + +1404 +01:00:36,200 --> 01:00:40,520 +these alignments between the individual + +1405 +01:00:38,000 --> 01:00:41,680 +words uh in the sentence and so the + +1406 +01:00:40,520 --> 01:00:45,920 +input + +1407 +01:00:41,680 --> 01:00:47,440 +is um a sentence in one language and a + +1408 +01:00:45,920 --> 01:00:51,640 +sentence in another language and the + +1409 +01:00:47,440 --> 01:00:54,720 +output is like 0-0 1-1 + +1410 +01:00:51,640 --> 01:00:57,920 +2-2 3-4 + +1411 +01:00:54,720 --> 01:00:59,559 +4-3 um and kind of like the matching + +1412 +01:00:57,920 --> 01:01:02,319 +indices between the + +1413 +01:00:59,559 --> 01:01:05,000 +languages and there's two ways to do + +1414 +01:01:02,319 --> 01:01:07,200 +this um one way is + +1415 +01:01:05,000 --> 01:01:10,119 +unsupervised uh and unsupervised word + +1416 +01:01:07,200 --> 01:01:13,520 +alignment uses co-occurrence + +1417 +01:01:10,119 --> 01:01:18,200 +statistics between different languages + +1418 +01:01:13,520 --> 01:01:20,079 +um and the most famous method for this + +1419 +01:01:18,200 --> 01:01:22,920 +sorry I should have a slide about this I + +1420 +01:01:20,079 --> 01:01:24,720 +just uh realize now that I I did not + +1421 +01:01:22,920 --> 01:01:28,079 +prepare a + +1422 +01:01:24,720 --> 01:01:30,880 +slide so so the kind + +1423 +01:01:28,079 --> 01:01:33,039 +of older uh older version of this that + +1424 +01:01:30,880 --> 01:01:36,240 +was used forever um is something called + +1425 +01:01:33,039 --> 01:01:40,039 +GIZA++ it uses um co-occurrence + +1426 +01:01:36,240 --> 01:01:42,880 +statistics over a large corpus uh to do + +1427 +01:01:40,039 --> 01:01:46,319 +alignment
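A toy version of that co-occurrence idea is IBM Model 1, the core of what GIZA++ implements (real toolkits add null words, distortion, fertility, and much more):

```python
# Toy IBM Model 1 EM: learn word translation probabilities t(f|e) from
# sentence-aligned data, then align each target word to its best source word.
from collections import defaultdict

def ibm_model1(parallel_corpus, iterations=10):
    # parallel_corpus: list of (source_tokens, target_tokens) pairs
    t = defaultdict(lambda: 1e-3)  # near-uniform initialization
    for _ in range(iterations):
        count, total = defaultdict(float), defaultdict(float)
        for src, tgt in parallel_corpus:          # E-step: expected counts
            for f in tgt:
                norm = sum(t[(f, e)] for e in src)
                for e in src:
                    c = t[(f, e)] / norm
                    count[(f, e)] += c
                    total[e] += c
        for (f, e), c in count.items():           # M-step: renormalize
            t[(f, e)] = c / total[e]
    return t  # align each target word f to argmax over e of t(f|e)
```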
um a more modern version of + +1428 +01:01:42,880 --> 01:01:49,599 +this is something called + +1429 +01:01:46,319 --> 01:01:53,480 +SimAlign and the way this works is you + +1430 +01:01:49,599 --> 01:01:55,160 +basically do um multilingual uh BERT + +1431 +01:01:53,480 --> 01:01:57,000 +between these different languages and + +1432 +01:01:55,160 --> 01:01:59,720 +you find + +1433 +01:01:57,000 --> 01:02:01,279 +the um the representations that are the + +1434 +01:01:59,720 --> 01:02:02,920 +most similar between the languages and + +1435 +01:02:01,279 --> 01:02:04,559 +you treat those as alignment links + +1436 +01:02:02,920 --> 01:02:07,599 +between the words and the + +1437 +01:02:04,559 --> 01:02:09,720 +languages
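A rough sketch of that embedding-similarity idea (the real SimAlign pools subwords into words and offers several matching strategies; this simplified version links subword tokens by mutual nearest neighbors):

```python
# SimAlign-style alignment: embed both sentences with multilingual BERT and
# keep token pairs that are each other's nearest neighbors.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
enc = AutoModel.from_pretrained("bert-base-multilingual-cased")

def embed(sentence):
    batch = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        h = enc(**batch).last_hidden_state[0, 1:-1]  # drop [CLS]/[SEP]
    return h / h.norm(dim=-1, keepdim=True)

def align(src_sent, tgt_sent):
    sim = embed(src_sent) @ embed(tgt_sent).T        # cosine similarities
    fwd, bwd = sim.argmax(dim=1), sim.argmax(dim=0)
    # a mutual argmax counts as an alignment link
    return [(i, j.item()) for i, j in enumerate(fwd) if bwd[j] == i]
```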
and then there's also um + +1438 +01:02:07,599 --> 01:02:11,680 +supervised alignment and supervised + +1439 +01:02:09,720 --> 01:02:14,200 +alignment you have a very small amount + +1440 +01:02:11,680 --> 01:02:17,720 +of data and you try to train a model so + +1441 +01:02:14,200 --> 01:02:20,000 +that the alignment links um uh match + +1442 +01:02:17,720 --> 01:02:20,000 +with the + +1443 +01:02:21,160 --> 01:02:28,520 +supervised um the supervised ones and um + +1444 +01:02:26,240 --> 01:02:30,640 +this is uh this was created by us but I + +1445 +01:02:28,520 --> 01:02:32,559 +do think it's the best option from the + +1446 +01:02:30,640 --> 01:02:33,920 +point of view of supervised alignment + +1447 +01:02:32,559 --> 01:02:37,000 +and basically the way it works is you + +1448 +01:02:33,920 --> 01:02:40,960 +have a multilingual BERT model and uh + +1449 +01:02:37,000 --> 01:02:43,000 +based on this you uh try to find um like + +1450 +01:02:40,960 --> 01:02:45,160 +which links match together but it's also + +1451 +01:02:43,000 --> 01:02:47,480 +trained using a contrastive objective + +1452 +01:02:45,160 --> 01:02:49,160 +where you try to upweight the uh correct + +1453 +01:02:47,480 --> 01:02:51,599 +links and downweight the incorrect links + +1454 +01:02:49,160 --> 01:02:53,760 +on supervised data so it tends to be + +1455 +01:02:51,599 --> 01:02:56,359 +quite a bit more accurate than the + +1456 +01:02:53,760 --> 01:02:58,119 +alternatives um another option if you + +1457 +01:02:56,359 --> 01:03:02,799 +want is to ask + +1458 +01:02:58,119 --> 01:03:05,359 +GPT-4 and you you could ask GPT-4 to do that + +1459 +01:03:02,799 --> 01:03:07,359 +but it's expensive and not a whole lot + +1460 +01:03:05,359 --> 01:03:09,440 +better than using one of these like + +1461 +01:03:07,359 --> 01:03:11,039 +trained alignment tools so I would + +1462 +01:03:09,440 --> 01:03:13,640 +suggest probably using this if you want + +1463 +01:03:11,039 --> 01:03:15,079 +to find alignments between words and + +1464 +01:03:13,640 --> 01:03:17,640 +this can be useful for a lot of things + +1465 +01:03:15,079 --> 01:03:20,799 +it can also be useful for um you know + +1466 +01:03:17,640 --> 01:03:24,279 +visualization or a better understanding + +1467 +01:03:20,799 --> 01:03:26,079 +of uh you know like how cross-lingual + +1468 +01:03:24,279 --> 01:03:28,640 +models are working and stuff like that + +1469 +01:03:26,079 --> 01:03:30,920 +so um it comes in handy for a lot of + +1470 +01:03:28,640 --> 01:03:30,920 +different + +1471 +01:03:31,839 --> 01:03:39,319 +things cool um any questions here yeah + +1472 +01:03:36,359 --> 01:03:40,960 +so just looking at the alignment um it + +1473 +01:03:39,319 --> 01:03:43,359 +seems to be pretty crazy for like + +1474 +01:03:40,960 --> 01:03:45,480 +English and Japanese yeah is there any + +1475 +01:03:43,359 --> 01:03:47,680 +work where you try to hop between + +1476 +01:03:45,480 --> 01:03:49,599 +languages like from Japanese to Spanish + +1477 +01:03:47,680 --> 01:03:52,440 +that's where you try to pivot between + +1478 +01:03:49,599 --> 01:03:55,359 +languages yeah so it's called pivot uh + +1479 +01:03:52,440 --> 01:03:59,440 +pivoting and there's a fair amount of + +1480 +01:03:55,359 --> 01:03:59,440 +work on this um + +1481 +01:04:00,240 --> 01:04:04,119 +the there there's a bunch of different + +1482 +01:04:02,400 --> 01:04:05,960 +ways you can pivot you can pivot for + +1483 +01:04:04,119 --> 01:04:07,599 +word alignment and pivoting is + +1484 +01:04:05,960 --> 01:04:09,520 +particularly useful when you have a low + +1485 +01:04:07,599 --> 01:04:10,920 +resource language that's very similar to + +1486 +01:04:09,520 --> 01:04:15,480 +a high resource language where you have + +1487 +01:04:10,920 --> 01:04:19,160 +lots of data um the other thing is like + +1488 +01:04:15,480 --> 01:04:20,039 +pivot translation is a thing um and so + +1489 +01:04:19,160 --> 01:04:22,640 +for + +1490 +01:04:20,039 --> 01:04:24,720 +example I don't know if Google does this + +1491 +01:04:22,640 --> 01:04:27,200 +anymore but for a long time Google would + +1492 +01:04:24,720 --> 01:04:29,599 +actually be translating through English + +1493 +01:04:27,200 --> 01:04:31,160 +to get into other languages and it was + +1494 +01:04:29,599 --> 01:04:32,640 +all a black box but you could tell they + +1495 +01:04:31,160 --> 01:04:34,240 +were doing it because you would suddenly + +1496 +01:04:32,640 --> 01:04:36,359 +get English words when you translate + +1497 +01:04:34,240 --> 01:04:40,680 +from Chinese to Arabic or something like + +1498 +01:04:36,359 --> 01:04:42,359 +this um and so like uh that's also done + +1499 +01:04:40,680 --> 01:04:45,039 +for translation in other multilingual + +1500 +01:04:42,359 --> 01:04:47,559 +tasks too um another thing that I should + +1501 +01:04:45,039 --> 01:04:51,559 +mention is um I talked about translate- + +1502 +01:04:47,559 --> 01:04:54,680 +train there's also translate- + +1503 +01:04:51,559 --> 01:04:57,359 +test um so translate-train basically you + +1504 +01:04:54,680 --> 01:05:01,720 +translate your training + +1505 +01:04:57,359 --> 01:05:03,599 +data um and translate-test you translate um + +1506 +01:05:01,720 --> 01:05:05,200 +at test time so basically like let's say + +1507 +01:05:03,599 --> 01:05:07,920 +you want to answer questions that were + +1508 +01:05:05,200 --> 01:05:09,480 +posed in Japanese or something um you + +1509 +01:05:07,920 --> 01:05:11,000 +translate the Japanese questions into + +1510 +01:05:09,480 --> 01:05:13,200 +English and answer the question using an + +1511 +01:05:11,000 --> 01:05:16,119 +English QA system and then translate the + +1512 +01:05:13,200 --> 01:05:18,480 +answer back into Japanese
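Translate-test is simple to wire up; here is a sketch where the MT checkpoint names and the English QA function are placeholder assumptions:

```python
# Translate-test: translate the query into English, answer with an
# English-only system, then translate the answer back.
from transformers import pipeline

ja_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-ja-en")  # assumed
en_to_ja = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ja")  # assumed

def answer_japanese_question(question_ja, english_qa_system):
    question_en = ja_to_en(question_ja)[0]["translation_text"]
    answer_en = english_qa_system(question_en)   # any English QA system
    return en_to_ja(answer_en)[0]["translation_text"]
```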
um and that's + +1513 +01:05:16,119 --> 01:05:20,359 +good to an extent like it's usually + +1514 +01:05:18,480 --> 01:05:21,839 +better than a bad multilingual system + +1515 +01:05:20,359 --> 01:05:23,319 +but worse than a good multilingual + +1516 +01:05:21,839 --> 01:05:24,760 +system if you put like a lot of effort + +1517 +01:05:23,319 --> 01:05:27,640 +into building a strong multilingual + +1518 +01:05:24,760 --> 01:05:27,640 +system so + +1519 +01:05:27,839 --> 01:05:33,240 +although um maybe for some of the really + +1520 +01:05:31,000 --> 01:05:34,640 +like difficult tasks like reasoning and + +1521 +01:05:33,240 --> 01:05:36,640 +stuff like that it's better to reason in + +1522 +01:05:34,640 --> 01:05:38,720 +English like I talked about multilingual + +1523 +01:05:36,640 --> 01:05:41,000 +um chain-of-thought + +1524 +01:05:38,720 --> 01:05:45,400 +reasoning um + +1525 +01:05:41,000 --> 01:05:47,680 +cool so yeah another thing is um if + +1526 +01:05:45,400 --> 01:05:50,319 +you're translating from if you're + +1527 +01:05:47,680 --> 01:05:52,359 +transferring from another language um + +1528 +01:05:50,319 --> 01:05:54,440 +which language to use as I mentioned it + +1529 +01:05:52,359 --> 01:05:56,559 +should be similar to the target + +1530 +01:05:54,440 --> 01:05:59,480 +language and a data-rich language + +1531 +01:05:56,559 --> 01:06:01,319 +um we actually have a study uh where we + +1532 +01:05:59,480 --> 01:06:05,839 +tried to figure out what variety of + +1533 +01:06:01,319 --> 01:06:08,160 +similarity is the best for trans- uh + +1534 +01:06:05,839 --> 01:06:11,319 +transferring so like let's say you want + +1535 +01:06:08,160 --> 01:06:14,000 +to train a good model for um you + +1536 +01:06:11,319 --> 01:06:15,839 +know Hindi it's hard to come up with + +1537 +01:06:14,000 --> 01:06:17,359 +which language you should be using one + +1538 +01:06:15,839 --> 01:06:20,319 +of the interesting things we found in + +1539 +01:06:17,359 --> 01:06:22,319 +this paper is um we we actually trained + +1540 +01:06:20,319 --> 01:06:24,319 +a model to try to predict which language + +1541 +01:06:22,319 --> 01:06:27,240 +would be the best to transfer from but + +1542 +01:06:24,319 --> 01:06:31,160 +the most useful feature overall was how + +1543 +01:06:27,240 --> 01:06:32,839 +close uh the languages are on the globe + +1544 +01:06:31,160 --> 01:06:34,559 +um which is kind of weird right just + +1545 +01:06:32,839 --> 01:06:35,920 +because languages are close on the globe + +1546 +01:06:34,559 --> 01:06:39,160 +doesn't mean they're similar like you + +1547 +01:06:35,920 --> 01:06:43,880 +can come up with um Basque and Spanish which + +1548 +01:06:39,160 --> 01:06:46,119 +are very very different in every way um + +1549 +01:06:43,880 --> 01:06:47,880 +but languages that are close on the + +1550 +01:06:46,119 --> 01:06:50,079 +globe tend to be similar with respect to + +1551 +01:06:47,880 --> 01:06:51,359 +both vocabulary and syntax on average + +1552 +01:06:50,079 --> 01:06:53,240 +and so because of that it's a pretty + +1553 +01:06:51,359 --> 01:06:57,319 +good indicator that a language would be + +1554 +01:06:53,240 --> 01:06:57,319 +a good transfer language + +1555 +01:06:58,720 --> 01:07:01,880 +um if languages don't share the same + +1556 +01:07:00,240 --> 01:07:04,880 +script actually I have an example of + +1557 +01:07:01,880 --> 01:07:08,400 +pivoting here uh where we can pivot uh + +1558 +01:07:04,880 --> 01:07:10,760 +from uh Marathi into Hindi and then into + +1559 +01:07:08,400 --> 01:07:14,000 +another language for linking across + +1560 +01:07:10,760 --> 01:07:16,240 +entities and so um we demonstrated that + +1561 +01:07:14,000 --> 01:07:19,079 +you could pivot um another thing that + +1562 +01:07:16,240 --> 01:07:20,960 +you can do like um as we mentioned in + +1563 +01:07:19,079 --> 01:07:24,520 +the last class or Lindia mentioned in + +1564 +01:07:20,960 --> 01:07:26,559 +the last class there's the idea of IPA + +1565 +01:07:24,520 --> 01:07:28,920 +um the International Phonetic Alphabet + +1566 +01:07:26,559 --> 01:07:32,319 +which kind of gives you an idea of how + +1567 +01:07:28,920 --> 01:07:34,640 +things are pronounced and in some cases + +1568 +01:07:32,319 --> 01:07:36,880 +languages might have a different script + +1569 +01:07:34,640 --> 01:07:37,920 +but if you normalize them into IPA you + +1570 +01:07:36,880 --> 01:07:39,359 +could normalize them into the + +1571 +01:07:37,920 --> 01:07:41,640 +pronunciation and actually things are + +1572 +01:07:39,359 --> 01:07:42,520 +pronounced rather similarly in a lot of + +1573 +01:07:41,640 --> 01:07:44,640 +related + +1574 +01:07:42,520 --> 01:07:46,079 +languages one thing you need to be + +1575 +01:07:44,640 --> 01:07:48,039 +careful about though is we actually + +1576 +01:07:46,079 --> 01:07:50,359 +found this hurt accuracy in a lot of + +1577 +01:07:48,039 --> 01:07:51,240 +languages uh to give an example English + +1578 +01:07:50,359 --> 01:07:55,039 +and + +1579 +01:07:51,240 --> 01:07:55,960 +French so if anybody has studied French + +1580 +01:07:55,039 --> 01:07:58,640 +you know + +1581 +01:07:55,960 --> 01:08:00,599 +that if anybody has studied French as a + +1582 +01:07:58,640 --> 01:08:02,520 +second language speaker after studying + +1583 +01:08:00,599 --> 01:08:04,160 +English you know that even though you + +1584 +01:08:02,520 --> 01:08:05,359 +can read the characters you have no idea + +1585 +01:08:04,160 --> 01:08:07,160 +how they're pronounced if you're not a + +1586 +01:08:05,359 --> 01:08:09,960 +very good French speaker and so + +1587 +01:08:07,160 --> 01:08:11,440 +basically um the the way it's written is + +1588 +01:08:09,960 --> 01:08:13,119 +closer than the way it's pronounced so + +1589 +01:08:11,440 --> 01:08:15,880 +you can't just normalize everything into + +1590 +01:08:13,119 --> 01:08:20,080 +pronunciation and just hope it works
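One way to try this IPA normalization in practice is the Epitran library (assuming the language modes below are available in your installation), for example to put related languages written in different or shared scripts into one phonetic space:

```python
# Normalize orthography into IPA with Epitran; useful when related languages
# use different scripts, but validate it helps (see the English/French caveat).
import epitran

hindi = epitran.Epitran("hin-Deva")     # Hindi in Devanagari (assumed mode)
marathi = epitran.Epitran("mar-Deva")   # Marathi in Devanagari (assumed mode)

print(hindi.transliterate("नमस्ते"))     # -> IPA string
print(marathi.transliterate("नमस्कार"))  # -> IPA string
```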
so + +1591 +01:08:15,880 --> 01:08:22,400 +um that's another technique that you can + +1592 +01:08:20,080 --> 01:08:24,239 +use um I'd also like to talk a little + +1593 +01:08:22,400 --> 01:08:26,920 +bit about how we share parameters + +1594 +01:08:24,239 --> 01:08:29,400 +between languages + +1595 +01:08:26,920 --> 01:08:31,960 +and um there is a bunch of different + +1596 +01:08:29,400 --> 01:08:34,880 +ways we can do this um one is sharing + +1597 +01:08:31,960 --> 01:08:36,719 +all parameters so uh just have a single + +1598 +01:08:34,880 --> 01:08:37,759 +model where all of the parameters + +1599 +01:08:36,719 --> 01:08:39,319 +are the + +1600 +01:08:37,759 --> 01:08:42,080 +same + +1601 +01:08:39,319 --> 01:08:44,279 +um previously there were methods that + +1602 +01:08:42,080 --> 01:08:45,359 +shared only like an encoder or an + +1603 +01:08:44,279 --> 01:08:48,239 +attention + +1604 +01:08:45,359 --> 01:08:49,719 +mechanism um also sharing some matrices + +1605 +01:08:48,239 --> 01:08:53,120 +of the Transformer + +1606 +01:08:49,719 --> 01:08:56,480 +model um using a parameter generator to + +1607 +01:08:53,120 --> 01:08:58,719 +generate parameters per language this is + +1608 +01:08:56,480 --> 01:09:00,239 +I I like this paper uh it's one of my + +1609 +01:08:58,719 --> 01:09:02,080 +papers I like this paper but it's not + +1610 +01:09:00,239 --> 01:09:03,520 +super practical but basically we used a + +1611 +01:09:02,080 --> 01:09:05,880 +neural network to generate the + +1612 +01:09:03,520 --> 01:09:07,600 +parameters of the multilingual model um + +1613 +01:09:05,880 --> 01:09:09,359 +and we fed in things like information + +1614 +01:09:07,600 --> 01:09:11,960 +about what type of language it was and + +1615 +01:09:09,359 --> 01:09:14,719 +stuff like that um so kind of ambitious + +1616 +01:09:11,960 --> 01:09:16,400 +but not uh you know it requires a lot of + +1617 +01:09:14,719 --> 01:09:18,719 +parameters so it's not super practical + +1618 +01:09:16,400 --> 01:09:21,319 +but the more um common thing that people + +1619 +01:09:18,719 --> 01:09:23,880 +are using now are uh things like + +1620 +01:09:21,319 --> 01:09:26,159 +language experts or + +1621 +01:09:23,880 --> 01:09:29,120 +adapters and so + +1622 +01:09:26,159 --> 01:09:32,640 +the idea here about language experts is + +1623 +01:09:29,120 --> 01:09:37,440 +basically um it's a layer that you + +1624 +01:09:32,640 --> 01:09:42,120 +insert into a particular part of the uh + +1625 +01:09:37,440 --> 01:09:44,640 +into a particular part of the model and + +1626 +01:09:42,120 --> 01:09:47,199 +this is a kind of adapter-style + +1627 +01:09:44,640 --> 01:09:51,279 +parameter-efficient training layer where + +1628 +01:09:47,199 --> 01:09:54,840 +you uh downweight and upweight um uh + +1629 +01:09:51,279 --> 01:09:57,239 +sorry down uh downsample and upsample + +1630 +01:09:54,840 --> 01:09:58,960 +so it's kind of like LoRA or an adapter + +1631 +01:09:57,239 --> 01:10:01,120 +or something like that so few parameters + +1632 +01:09:58,960 --> 01:10:03,360 +for the language and then they also have + +1633 +01:10:01,120 --> 01:10:05,360 +a task-based adapter so based on the + +1634 +01:10:03,360 --> 01:10:06,520 +task that you're solving they add an + +1635 +01:10:05,360 --> 01:10:08,960 +adapter + +1636 +01:10:06,520 --> 01:10:11,080 +here um and they also demonstrated that + +1637 +01:10:08,960 --> 01:10:15,000 +you can pre-train models with language- + +1638 +01:10:11,080 --> 01:10:17,679 +specific parameters included um in them
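The adapter layer being described is small; a minimal sketch of one such language-specific block (bottleneck sizes and placement vary from paper to paper):

```python
# A language adapter: down-project, nonlinearity, up-project, plus a residual
# connection. One module per language (and optionally per task) is inserted
# into each Transformer layer while the shared body stays frozen.
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # down-sample
        self.up = nn.Linear(bottleneck, hidden_size)    # up-sample
        self.act = nn.ReLU()

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# e.g. language_adapters = nn.ModuleDict({"sw": Adapter(), "yo": Adapter()})
```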
+1639 +01:10:15,000 --> 01:10:20,480 +uh also from the point of view of an + +1640 +01:10:17,679 --> 01:10:22,320 +adapter um we have done a similar thing + +1641 +01:10:20,480 --> 01:10:26,199 +for summarization where we compared + +1642 +01:10:22,320 --> 01:10:29,199 +prefix tuning and um prefix tuning and + +1643 +01:10:26,199 --> 01:10:30,719 +LoRA and uh we found that you could do + +1644 +01:10:29,199 --> 01:10:32,440 +a similar thing where you train a single + +1645 +01:10:30,719 --> 01:10:35,239 +model but each language has its own + +1646 +01:10:32,440 --> 01:10:37,040 +prefix tuning parameters or own LoRA + +1647 +01:10:35,239 --> 01:10:38,640 +parameters and that can be pretty + +1648 +01:10:37,040 --> 01:10:40,760 +effective at improving the capacity of + +1649 +01:10:38,640 --> 01:10:40,760 +the + +1650 +01:10:41,280 --> 01:10:44,280 +model + +1651 +01:10:44,880 --> 01:10:51,000 +um yeah I have very little time to cover + +1652 +01:10:47,880 --> 01:10:53,239 +the last slides I guess so um I I'll + +1653 +01:10:51,000 --> 01:10:56,440 +just very quickly mention um what I was + +1654 +01:10:53,239 --> 01:10:58,800 +going to mention so um uh another thing + +1655 +01:10:56,440 --> 01:11:00,440 +you can do is create new data um one way + +1656 +01:10:58,800 --> 01:11:02,760 +you can create new data is just ask + +1657 +01:11:00,440 --> 01:11:04,440 +people to annotate data for you um but + +1658 +01:11:02,760 --> 01:11:06,199 +the problem is uh for low resource + +1659 +01:11:04,440 --> 01:11:09,080 +languages it's often hard to get lots of + +1660 +01:11:06,199 --> 01:11:11,640 +data so one thing we do is leverage + +1661 +01:11:09,080 --> 01:11:13,719 +something called active learning um the + +1662 +01:11:11,640 --> 01:11:16,840 +basic idea behind active + +1663 +01:11:13,719 --> 01:11:19,800 +learning is that you use labeled data um + +1664 +01:11:16,840 --> 01:11:21,360 +you do some training you get a model um + +1665 +01:11:19,800 --> 01:11:24,719 +then you apply that model to lots of + +1666 +01:11:21,360 --> 01:11:27,480 +unlabeled data and select some data uh + +1667 +01:11:24,719 --> 01:11:29,760 +that the model is highly uncertain about + +1668 +01:11:27,480 --> 01:11:31,679 +and then throw that to annotation and so + +1669 +01:11:29,760 --> 01:11:32,960 +basically what this does is this um + +1670 +01:11:31,679 --> 01:11:34,719 +allows you to select data where the + +1671 +01:11:32,960 --> 01:11:37,159 +current model is not very confident or + +1672 +01:11:34,719 --> 01:11:38,560 +not very good or something like this and + +1673 +01:11:37,159 --> 01:11:40,159 +this can be really helpful it's not + +1674 +01:11:38,560 --> 01:11:40,920 +limited to multilingual learning but + +1675 +01:11:40,159 --> 01:11:43,520 +it's + +1676 +01:11:40,920 --> 01:11:45,679 +specifically uh quite helpful in cases + +1677 +01:11:43,520 --> 01:11:46,960 +where you uh can annotate only a small + +1678 +01:11:45,679 --> 01:11:51,040 +amount of + +1679 +01:11:46,960 --> 01:11:53,880 +data
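The loop being described looks roughly like this; train, predict_proba, and annotate are placeholders for your own training, scoring, and annotation steps:

```python
# One round of uncertainty-based active learning: train, score the unlabeled
# pool by model confidence, and send the least confident items to annotators.
import numpy as np

def active_learning_round(labeled, unlabeled, budget,
                          train, predict_proba, annotate):
    model = train(labeled)
    probs = predict_proba(model, unlabeled)   # shape (n_examples, n_classes)
    confidence = probs.max(axis=1)            # probability of the top class
    most_uncertain = np.argsort(confidence)[:budget]
    new_labels = annotate([unlabeled[i] for i in most_uncertain])
    return labeled + new_labels
```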
um and so the basic idea is um + +1680 +01:11:51,040 --> 01:11:56,120 +illustrated here for for binary + +1681 +01:11:53,880 --> 01:11:58,840 +classification where if you only + +1682 +01:11:56,120 --> 01:12:01,320 +annotate um data randomly you might end + +1683 +01:11:58,840 --> 01:12:03,520 +up getting data that doesn't tell you uh + +1684 +01:12:01,320 --> 01:12:06,000 +very well about the specific decision + +1685 +01:12:03,520 --> 01:12:08,480 +boundary and as a result a model trained + +1686 +01:12:06,000 --> 01:12:10,719 +on the few data points that you randomly + +1687 +01:12:08,480 --> 01:12:13,679 +select could be inaccurate whereas + +1688 +01:12:10,719 --> 01:12:15,800 +active learning um kind of finds data + +1689 +01:12:13,679 --> 01:12:18,040 +directly on the decision boundary here + +1690 +01:12:15,800 --> 01:12:21,320 +and that allows you to find more uh you + +1691 +01:12:18,040 --> 01:12:24,159 +know um effective + +1692 +01:12:21,320 --> 01:12:27,040 +samples there's two fundamental ideas + +1693 +01:12:24,159 --> 01:12:29,120 +uncertainty and representativeness and + +1694 +01:12:27,040 --> 01:12:31,280 +you want to come up with um you + +1695 +01:12:29,120 --> 01:12:34,760 +want to come up with a method that selects + +1696 +01:12:31,280 --> 01:12:37,400 +data where the model is uncertain but + +1697 +01:12:34,760 --> 01:12:39,840 +representative and so actually you can + +1698 +01:12:37,400 --> 01:12:42,159 +select data only for representativeness + +1699 +01:12:39,840 --> 01:12:44,360 +and it's relatively useful + +1700 +01:12:42,159 --> 01:12:46,360 +so you can select only data that has + +1701 +01:12:44,360 --> 01:12:48,880 +lots of high frequency phrases for + +1702 +01:12:46,360 --> 01:12:50,120 +example uh for machine translation and + +1703 +01:12:48,880 --> 01:12:51,960 +that will allow you to get better + +1704 +01:12:50,120 --> 01:12:53,600 +coverage of high frequency + +1705 +01:12:51,960 --> 01:12:57,199 +phrases + +1706 +01:12:53,600 --> 01:12:58,560 +um but uncertainty is also good + +1707 +01:12:57,199 --> 01:13:01,120 +because it helps you find like the + +1708 +01:12:58,560 --> 01:13:02,840 +model's current blind spots the problem + +1709 +01:13:01,120 --> 01:13:04,880 +with only uncertainty is it gives you + +1710 +01:13:02,840 --> 01:13:06,760 +lots of garbage it'll get like for + +1711 +01:13:04,880 --> 01:13:09,560 +example for machine translation it will + +1712 +01:13:06,760 --> 01:13:11,960 +select things with only emojis or + +1713 +01:13:09,560 --> 01:13:14,239 +something like that and uh you know + +1714 +01:13:11,960 --> 01:13:17,679 +that's not very useful to train your + +1715 +01:13:14,239 --> 01:13:20,800 +model so um I have more examples in the + +1716 +01:13:17,679 --> 01:13:22,679 +slides I'm going to finish that up um + +1717 +01:13:20,800 --> 01:13:25,280 +but uncertainty you can use model + +1718 +01:13:22,679 --> 01:13:26,600 +confidence representativeness basically + +1719 +01:13:25,280 --> 01:13:29,400 +what you do is you try to get good + +1720 +01:13:26,600 --> 01:13:31,280 +coverage of the embedding space of uh + +1721 +01:13:29,400 --> 01:13:34,440 +all of the embeddings that you have for + +1722 +01:13:31,280 --> 01:13:36,400 +models and um you can also do this + +1723 +01:13:34,440 --> 01:13:38,440 +multilingually combined together with + +1724 +01:13:36,400 --> 01:13:40,360 +cross-lingual transfer and we have some + +1725 +01:13:38,440 --> 01:13:42,400 +examples of how you I have a few + +1726 +01:13:40,360 --> 01:13:47,000 +examples of how you can do that in uh in + +1727 +01:13:42,400 --> 01:13:47,000 +the slides there so um \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (23) Multilingual NLP/transcript.vtt b/CMU Advanced NLP 2024 (23) Multilingual
NLP/transcript.vtt @@ -0,0 +1,5182 @@ +WEBVTT + +00:00:00.960 --> 00:00:09.240 +okay um I'll talk about multilingual + +00:00:03.960 --> 00:00:10.400 +NLP and um multilingual NLP is uh NLP in + +00:00:09.240 --> 00:00:13.839 +many different + +00:00:10.400 --> 00:00:16.359 +languages um there is specifically two + +00:00:13.839 --> 00:00:18.480 +varieties of multilingual NLP um the + +00:00:16.359 --> 00:00:20.600 +first one is monolingual NLP in multiple + +00:00:18.480 --> 00:00:22.880 +languages and what I mean by this is + +00:00:20.600 --> 00:00:25.960 +basically any task that you could do in + +00:00:22.880 --> 00:00:28.880 +English um you could do it in languages + +00:00:25.960 --> 00:00:30.720 +that are not English and so uh this + +00:00:28.880 --> 00:00:32.800 +would be question answering sentiment + +00:00:30.720 --> 00:00:36.320 +analysis chatbots code generation + +00:00:32.800 --> 00:00:40.120 +whatever else and then the other one is + +00:00:36.320 --> 00:00:43.440 +some variety of cross uh crosslingual + +00:00:40.120 --> 00:00:45.440 +NLP and this is specifically tasks that + +00:00:43.440 --> 00:00:46.840 +handle more than one language at once + +00:00:45.440 --> 00:00:51.160 +and so that would be things like machine + +00:00:46.840 --> 00:00:53.680 +translation crosslingual QA um etc etc + +00:00:51.160 --> 00:00:55.600 +um crosslingual QA is uh just like for + +00:00:53.680 --> 00:00:56.960 +example answering questions where the + +00:00:55.600 --> 00:00:59.039 +source material is in a different + +00:00:56.960 --> 00:01:01.039 +language so if I ask a question in + +00:00:59.039 --> 00:01:04.640 +Japanese it can go find some information + +00:01:01.039 --> 00:01:04.640 +in English and answer the question in + +00:01:04.960 --> 00:01:12.840 +Japanese so um right now so many of our + +00:01:09.400 --> 00:01:16.280 +systems are trained by you know using + +00:01:12.840 --> 00:01:18.119 +large data sets and probably by far the + +00:01:16.280 --> 00:01:21.079 +biggest challenge in multilingual NLP + +00:01:18.119 --> 00:01:23.720 +is this uh paucity of data in uh many of + +00:01:21.079 --> 00:01:27.640 +the languages that we care about this + +00:01:23.720 --> 00:01:29.600 +particular example is uh of Wikipedia + +00:01:27.640 --> 00:01:30.920 +articles and how many Wikipedia articles + +00:01:29.600 --> 00:01:34.079 +there are in different + +00:01:30.920 --> 00:01:35.759 +languages so uh you can see that it + +00:01:34.079 --> 00:01:39.720 +drops off very quickly number one is + +00:01:35.759 --> 00:01:41.720 +English of course and um after the first + +00:01:39.720 --> 00:01:43.520 +20 to 30 languages there are just very + +00:01:41.720 --> 00:01:48.640 +few articles in any language that you + +00:01:43.520 --> 00:01:50.680 +want to use um it looks similar for just + +00:01:48.640 --> 00:01:53.920 +general text on the internet but it's + +00:01:50.680 --> 00:01:55.560 +not quite as sharp in general text on + +00:01:53.920 --> 00:01:57.560 +the internet so there is still this very + +00:01:55.560 --> 00:02:01.320 +long-tail distribution but it's not quite + +00:01:57.560 --> 00:02:03.119 +as uh bad as in Wikipedia + +00:02:01.320 --> 00:02:05.039 +um one other thing to note is that + +00:02:03.119 --> 00:02:07.479 +there's even less annotated data of + +00:02:05.039 --> 00:02:11.440 +course because uh the annotated data is + +00:02:07.479 --> 00:02:13.680 +a subset of monolingual data and so that + +00:02:11.440 --> 00:02:15.640 +means we have less data for machine +
+00:02:13.680 --> 00:02:17.840 +translation we have less data for + +00:02:15.640 --> 00:02:19.160 +sequence labeling dialogue question + +00:02:17.840 --> 00:02:22.280 +answering other stuff like that + +00:02:19.160 --> 00:02:22.280 +instruction following and + +00:02:22.480 --> 00:02:26.800 +things another thing that makes + +00:02:24.519 --> 00:02:30.000 +multilingual NLP difficult is we just + +00:02:26.800 --> 00:02:33.200 +had a a lecture on linguistics + +00:02:30.000 --> 00:02:34.760 +and not all languages are the same and I + +00:02:33.200 --> 00:02:37.519 +would say that this is the smaller + +00:02:34.760 --> 00:02:39.480 +problem but it's still a problem um and + +00:02:37.519 --> 00:02:41.720 +it can cause uh issues when you're + +00:02:39.480 --> 00:02:44.879 +trying to process something that is not + +00:02:41.720 --> 00:02:47.560 +English with models that were mostly uh + +00:02:44.879 --> 00:02:50.760 +trained in English and to give some + +00:02:47.560 --> 00:02:52.519 +examples um morphology is one of them so + +00:02:50.760 --> 00:02:55.319 +we talked about how morphology we can + +00:02:52.519 --> 00:02:57.200 +have things like um you know infix + +00:02:55.319 --> 00:03:00.680 +morphology where you change the inner + +00:02:57.200 --> 00:03:03.120 +letters of a um + +00:03:00.680 --> 00:03:07.000 +of a word and in English we mostly don't + +00:03:03.120 --> 00:03:09.519 +have that we have like you know um goose + +00:03:07.000 --> 00:03:11.040 +geese uh and other things like this but + +00:03:09.519 --> 00:03:13.120 +it's very rare for us to change the + +00:03:11.040 --> 00:03:15.959 +middle uh to morphologically change the + +00:03:13.120 --> 00:03:17.799 +middle letters of a word + +00:03:15.959 --> 00:03:19.560 +but in other languages we have that + +00:03:17.799 --> 00:03:23.239 +all the time and that breaks SentencePiece + +00:03:19.560 --> 00:03:26.159 +for example so that's one issue um + +00:03:23.239 --> 00:03:30.640 +another thing is accents and diacritics + +00:03:26.159 --> 00:03:34.120 +so um an accent or a diacritic is + +00:03:30.640 --> 00:03:35.280 +uh basically where you have a um another + +00:03:34.120 --> 00:03:37.959 +thing on top of the character to + +00:03:35.280 --> 00:03:39.840 +indicate its tone um does anybody speak + +00:03:37.959 --> 00:03:43.000 +a language that uses lots of accents or + +00:03:39.840 --> 00:03:47.120 +diacritics yeah Spanish Spanish okay + +00:03:43.000 --> 00:03:49.200 +yeah um so yeah that that's a good one + +00:03:47.120 --> 00:03:50.760 +um any any other + +00:03:49.200 --> 00:03:53.959 +ones + +00:03:50.760 --> 00:03:56.360 +yeah yeah French has a little bit um + +00:03:53.959 --> 00:04:00.799 +there there are some that are even more + +00:03:56.360 --> 00:04:03.599 +uh kind of uh rich or uh to give one + +00:04:00.799 --> 00:04:07.079 +example Yoruba which is spoken widely in + +00:04:03.599 --> 00:04:08.879 +Nigeria has lots of diacritics but very + +00:04:07.079 --> 00:04:11.000 +often it's written the language is + +00:04:08.879 --> 00:04:13.439 +written without them so it adds a lot of + +00:04:11.000 --> 00:04:16.199 +ambiguity and like lexical diversity to + +00:04:13.439 --> 00:04:20.040 +the language and stuff like this as well + +00:04:16.199 --> 00:04:21.880 +um pinyin kind of has them so it has uh + +00:04:20.040 --> 00:04:23.759 +like pinyin for Chinese kind of has them + +00:04:21.880 --> 00:04:25.520 +it has like a number at the end of the + +00:04:23.759 --> 00:04:27.600 +syllable that indicates the tone so it's + +00:04:25.520 --> 00:04:30.120 +kind of similar as +well
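The mechanics of that ambiguity are easy to demonstrate: tone marks are combining characters, so the common practice of writing without diacritics is equivalent to stripping them, which merges otherwise distinct words (the Yoruba strings below are illustrative spellings distinguished only by their marks):

```python
# Stripping diacritics: decompose to NFD and drop combining marks. Distinct
# words that differ only in tone marks collapse to the same string.
import unicodedata

def strip_diacritics(text):
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

for word in ["ọkọ", "ọkọ̀", "ọkọ́"]:           # distinct Yoruba spellings
    print(word, "->", strip_diacritics(word))  # all become "oko"
```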
um other things are different + +00:04:30.120 --> 00:04:40.800 +scripts such as CJK um Chinese Japanese + +00:04:34.440 --> 00:04:44.160 +Korean scripts um so Chinese script uh + +00:04:40.800 --> 00:04:46.600 +is uh you know they they also use Roman + +00:04:44.160 --> 00:04:48.880 +characters but they have lots of + +00:04:46.600 --> 00:04:51.600 +ideographs where the characters mean + +00:04:48.880 --> 00:04:55.840 +things um as opposed to indicating + +00:04:51.600 --> 00:04:59.240 +the pronunciation Japanese has both uh + +00:04:55.840 --> 00:05:00.160 +um both ideographs and uh regular + +00:04:59.240 --> 00:05:02.520 +characters + +00:05:00.160 --> 00:05:05.680 +that have pronunciations Korean is all + +00:05:02.520 --> 00:05:10.039 +pronunciation but they stick three + +00:05:05.680 --> 00:05:10.039 +pronunciations together in a single + +00:05:11.120 --> 00:05:14.520 +character + +00:05:12.919 --> 00:05:17.919 +um + +00:05:14.520 --> 00:05:20.560 +so like I don't I don't know much Korean + +00:05:17.919 --> 00:05:22.199 +but I maybe know enough to write this + +00:05:20.560 --> 00:05:25.080 +properly so this + +00:05:22.199 --> 00:05:27.400 +is + +00:05:25.080 --> 00:05:30.520 +uh is there a line down there does + +00:05:27.400 --> 00:05:37.319 +anyone know Korean yeah + +00:05:30.520 --> 00:05:37.319 +there's is there a line no okay cool + +00:05:39.600 --> 00:05:50.960 +what oh this is Korea right + +00:05:43.840 --> 00:05:50.960 +yeah it has a okay okay yeah so so + +00:05:53.639 --> 00:06:00.520 +that's and then this is + +00:05:57.120 --> 00:06:03.600 +the yeah okay so so this is um this + +00:06:00.520 --> 00:06:06.840 +is Korea + +00:06:03.600 --> 00:06:12.280 +okay thank you and then this + +00:06:06.840 --> 00:06:14.960 +is yeah so basically this is like h a n + +00:06:12.280 --> 00:06:21.840 +and then kind of like + +00:06:14.960 --> 00:06:23.360 +g u k um and so like there's + +00:06:21.840 --> 00:06:25.120 +actually kind of three characters in + +00:06:23.360 --> 00:06:26.479 +each one of these characters so it's + +00:06:25.120 --> 00:06:29.800 +kind of like an alphabet but they're all + +00:06:26.479 --> 00:06:31.240 +stuck together and if you um if you deal + +00:06:29.800 --> 00:06:33.080 +with this on a computer and you're not + +00:06:31.240 --> 00:06:35.199 +very smart about it basically it will + +00:06:33.080 --> 00:06:37.360 +just look like a Chinese character and + +00:06:35.199 --> 00:06:41.120 +it will be segmented with + +00:06:37.360 --> 00:06:43.520 +SentencePiece in a you know uh in uh + +00:06:41.120 --> 00:06:48.160 +weird ways if you use bytes or or + +00:06:43.520 --> 00:06:49.160 +whatever so um each one has their own uh + +00:06:48.160 --> 00:06:51.880 +their own + +00:06:49.160 --> 00:06:54.319 +peculiarities
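You can see this three-letters-in-one-block structure directly in Unicode, where NFD decomposition splits each Hangul syllable into its jamo:

```python
# Each Hangul syllable block decomposes into its alphabet letters (jamo).
import unicodedata

for syllable in "한국":  # "hanguk" (Korea), two syllable blocks
    jamo = unicodedata.normalize("NFD", syllable)
    print(syllable, "->", [unicodedata.name(j) for j in jamo])
# 한 -> HANGUL CHOSEONG HIEUH / JUNGSEONG A / JONGSEONG NIEUN   (h + a + n)
# 국 -> HANGUL CHOSEONG KIYEOK / JUNGSEONG U / JONGSEONG KIYEOK (g + u + k)
```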
um another thing is + +00:06:51.880 --> 00:06:56.039 +dialectal language so sometimes people + +00:06:54.319 --> 00:06:59.160 +speak different dialects and that can + +00:06:56.039 --> 00:07:01.879 +throw things off and also lack of a uh + +00:06:59.160 --> 00:07:04.759 +formal writing system so for a lot of + +00:07:01.879 --> 00:07:07.280 +languages um there isn't really + +00:07:04.759 --> 00:07:08.800 +standardized writing and people you know + +00:07:07.280 --> 00:07:11.560 +sometimes write in the native script + +00:07:08.800 --> 00:07:14.800 +sometimes write in Roman script um and + +00:07:11.560 --> 00:07:17.800 +other things like that so English + +00:07:14.800 --> 00:07:21.759 +is relatively standardized + +00:07:17.800 --> 00:07:25.599 +relatively um you know relatively poor + +00:07:21.759 --> 00:07:27.039 +morphology or simple morphology um + +00:07:25.599 --> 00:07:28.400 +doesn't have a whole lot of characters + +00:07:27.039 --> 00:07:30.319 +and stuff like that so it has a lot of + +00:07:28.400 --> 00:07:33.599 +simplifying things compared to what you + +00:07:30.319 --> 00:07:35.280 +have to deal with when you work in other + +00:07:33.599 --> 00:07:37.479 +languages + +00:07:35.280 --> 00:07:39.639 +so how do we start attacking + +00:07:37.479 --> 00:07:42.840 +multilingual problems so like one really + +00:07:39.639 --> 00:07:46.639 +huge um thing over the past five + +00:07:42.840 --> 00:07:48.599 +years or seven years or so is that um we + +00:07:46.639 --> 00:07:50.080 +can learn models that process multiple + +00:07:48.599 --> 00:07:51.319 +languages and all the languages can + +00:07:50.080 --> 00:07:52.960 +learn from each other and that really + +00:07:51.319 --> 00:07:58.360 +pulls up the languages that don't have a + +00:07:52.960 --> 00:08:00.120 +lot of data and um so this is a + +00:07:58.360 --> 00:08:01.840 +variety of transfer learning + +00:08:00.120 --> 00:08:03.360 +and this allows you to improve accuracy + +00:08:01.840 --> 00:08:05.360 +on lower resource languages by + +00:08:03.360 --> 00:08:07.759 +leveraging data in higher resource + +00:08:05.360 --> 00:08:10.360 +languages um another really big + +00:08:07.759 --> 00:08:11.479 +advantage of uh multilingual learning + +00:08:10.360 --> 00:08:14.720 +and learning models that work in + +00:08:11.479 --> 00:08:18.080 +multiple languages is practical which is + +00:08:14.720 --> 00:08:21.240 +before Google Translate would deploy a + +00:08:18.080 --> 00:08:23.520 +100 models or maybe even 200 models + +00:08:21.240 --> 00:08:25.720 +because if they were translating into + +00:08:23.520 --> 00:08:28.319 +English and then out of English they + +00:08:25.720 --> 00:08:31.360 +would have one model for like English to + +00:08:28.319 --> 00:08:33.719 +Chinese English to Japanese English to + +00:08:31.360 --> 00:08:36.519 +French English to Spanish and then + +00:08:33.719 --> 00:08:37.880 +Spanish to English French to English and + +00:08:36.519 --> 00:08:39.479 +so they would have to deal with like + +00:08:37.880 --> 00:08:40.919 +deploying all of these models having + +00:08:39.479 --> 00:08:43.320 +different servers that served all of + +00:08:40.919 --> 00:08:45.560 +them and stuff like this um now you can + +00:08:43.320 --> 00:08:47.800 +just have one big model uh that handles + +00:08:45.560 --> 00:08:49.680 +all of the languages at once and deploy + +00:08:47.800 --> 00:08:51.640 +it which allows you to make that model + +00:08:49.680 --> 00:08:56.120 +bigger itself because you need to deploy + +00:08:51.640 --> 00:08:58.920 +it fewer times and um also uh you know + +00:08:56.120 --> 00:09:00.040 +it can benefit from transfer learning so + +00:08:58.920 --> 00:09:01.839 +because of the + +00:09:00.040 --> 00:09:04.519 +uh a lot of places that handle + +00:09:01.839 --> 00:09:07.240 +multilingual stuff are uh transitioning + +00:09:04.519 --> 00:09:07.240 +to this + +00:09:08.320 --> 00:09:13.000 +paradigm + +00:09:09.839 --> 00:09:15.200 +um in terms of like let's say you want + +00:09:13.000 --> 00:09:18.959 +to handle a different language other +
+00:09:15.200 --> 00:09:20.800 +than English um this is a high-level uh + +00:09:18.959 --> 00:09:23.560 +multilingual learning + +00:09:20.800 --> 00:09:25.760 +flowchart that you can kind of follow to + +00:09:23.560 --> 00:09:28.959 +decide which methodology you could be + +00:09:25.760 --> 00:09:31.079 +using and this is from the point of view + +00:09:28.959 --> 00:09:35.720 +of wanting to get the best possible + +00:09:31.079 --> 00:09:37.360 +model um so first is there sufficient + +00:09:35.720 --> 00:09:39.240 +labeled data in the target language and + +00:09:37.360 --> 00:09:41.160 +when I say sufficient you know obviously + +00:09:39.240 --> 00:09:44.320 +more is always better but you know a + +00:09:41.160 --> 00:09:46.320 +reasonably large amount uh from the + +00:09:44.320 --> 00:09:48.160 +point of view of you being able to train + +00:09:46.320 --> 00:09:50.399 +a good system for the task you're + +00:09:48.160 --> 00:09:52.120 +interested in for machine translation + +00:09:50.399 --> 00:09:53.680 +that's something like at least a million + +00:09:52.120 --> 00:09:55.360 +sentences for something like + +00:09:53.680 --> 00:09:57.440 +classification it might only be a + +00:09:55.360 --> 00:10:01.240 +thousand sentences or something like + +00:09:57.440 --> 00:10:03.839 +this um then the second question is uh + +00:10:01.240 --> 00:10:06.240 +must you serve many languages uh with + +00:10:03.839 --> 00:10:08.440 +strict memory constraints if the answer + +00:10:06.240 --> 00:10:10.279 +is yes you can do multilingual models + +00:10:08.440 --> 00:10:11.920 +if the answer is no you could still do a + +00:10:10.279 --> 00:10:13.680 +multilingual model but you could also + +00:10:11.920 --> 00:10:15.200 +adapt to the specific language that + +00:10:13.680 --> 00:10:17.480 +you're interested in processing and do + +00:10:15.200 --> 00:10:19.760 +better by doing that + +00:10:17.480 --> 00:10:21.560 +adaptation uh then if you don't have + +00:10:19.760 --> 00:10:24.360 +sufficient labeled data in the target + +00:10:21.560 --> 00:10:25.959 +language um if you have access to people + +00:10:24.360 --> 00:10:27.440 +who can provide that data for you and + +00:10:25.959 --> 00:10:29.600 +you're serious about building a model + +00:10:27.440 --> 00:10:31.920 +for that language you can just ask + +00:10:29.600 --> 00:10:33.920 +people to annotate things so there's a + +00:10:31.920 --> 00:10:35.519 +lot of work on zero-shot adaptation + +00:10:33.920 --> 00:10:36.959 +which is essentially trying to get you + +00:10:35.519 --> 00:10:39.760 +know models to work well on new + +00:10:36.959 --> 00:10:42.160 +languages with no annotated data but in + +00:10:39.760 --> 00:10:46.279 +reality like if you're ever going to be + +00:10:42.160 --> 00:10:49.120 +deploying a model to users um you might + +00:10:46.279 --> 00:10:50.480 +as well label a thousand examples of + +00:10:49.120 --> 00:10:52.720 +whatever task you want to solve and + +00:10:50.480 --> 00:10:55.279 +train on those examples and that's far + +00:10:52.720 --> 00:10:58.279 +easier than doing zero-shot adaptation + +00:10:55.279 --> 00:11:01.600 +um one caveat is um if you're trying to + +00:10:58.279 --> 00:11:04.560 +show somebody like the possibility of + +00:11:01.600 --> 00:11:06.959 +something working so like for example um + +00:11:04.560 --> 00:11:08.480 +you have a nice speech recognition + +00:11:06.959 --> 00:11:10.760 +system that works in many different + +00:11:08.480 --> 00:11:12.399 +languages and you want
to show people in + +00:11:10.760 --> 00:11:15.519 +a new country that it could possibly + +00:11:12.399 --> 00:11:18.120 +work then applying it zero-shot uh and + +00:11:15.519 --> 00:11:19.440 +not using any uh training data might be + +00:11:18.120 --> 00:11:21.760 +a good way to convince them that they + +00:11:19.440 --> 00:11:23.519 +should work with you but in the end you + +00:11:21.760 --> 00:11:25.639 +know like if you care enough about it + +00:11:23.519 --> 00:11:27.920 +you'll be annotating data so this is my + +00:11:25.639 --> 00:11:30.920 +general like flow for building a usable + +00:11:27.920 --> 00:11:30.920 +system + +00:11:31.040 --> 00:11:34.399 +cool any questions so + +00:11:34.480 --> 00:11:39.320 +far + +00:11:36.000 --> 00:11:42.399 +okay um so let's go into multilingual + +00:11:39.320 --> 00:11:44.920 +language modeling um so multilingual + +00:11:42.399 --> 00:11:47.800 +language modeling + +00:11:44.920 --> 00:11:50.920 +um in the very simplest sense is just + +00:11:47.800 --> 00:11:55.160 +like train a language model on lots of + +00:11:50.920 --> 00:11:57.399 +data um and so you you know if you're + +00:11:55.160 --> 00:11:58.959 +just training like GPT or something like + +00:11:57.399 --> 00:11:59.839 +that you just throw all of the data in + +00:11:58.959 --> 00:12:01.680 +there + +00:11:59.839 --> 00:12:03.639 +um you train your subword vocabularies + +00:12:01.680 --> 00:12:06.680 +over all of the data and something will + +00:12:03.639 --> 00:12:07.880 +happen um you might want to do more than + +00:12:06.680 --> 00:12:11.200 +that though if you really care about + +00:12:07.880 --> 00:12:14.800 +performance um so anyway uh if we're + +00:12:11.200 --> 00:12:17.519 +doing multilingual modeling um there's + +00:12:14.800 --> 00:12:19.440 +two varieties of multilingual modeling + +00:12:17.519 --> 00:12:22.079 +um the first one is if you have + +00:12:19.440 --> 00:12:23.519 +multilingual inputs and if you have + +00:12:22.079 --> 00:12:26.560 +multilingual inputs you really don't + +00:12:23.519 --> 00:12:28.440 +need to do anything um in order to get + +00:12:26.560 --> 00:12:31.560 +it to work at least somewhat as long as + +00:12:28.440 --> 00:12:34.440 +your base model can uh can learn uh can + +00:12:31.560 --> 00:12:35.959 +handle multiple languages so to give an + +00:12:34.440 --> 00:12:38.440 +example like let's say I want to + +00:12:35.959 --> 00:12:41.199 +translate into English if I want to + +00:12:38.440 --> 00:12:43.480 +translate into English um then I can + +00:12:41.199 --> 00:12:45.120 +just throw in these sentences and not + +00:12:43.480 --> 00:12:47.680 +say anything and say please translate + +00:12:45.120 --> 00:12:49.240 +this into English and GPT will do a + +00:12:47.680 --> 00:12:50.720 +reasonably good job of translating it + +00:12:49.240 --> 00:12:52.279 +into English for me I don't even say + +00:12:50.720 --> 00:12:54.760 +this is a French sentence or this is a + +00:12:52.279 --> 00:12:58.079 +Japanese sentence or something like + +00:12:54.760 --> 00:13:00.199 +that however this may be very obvious + +00:12:58.079 --> 00:13:01.480 +but like if you have multilingual output at the + +00:13:00.199 --> 00:13:04.240 +very least you need to tell it which + +00:13:01.480 --> 00:13:05.880 +language it should be generating in so + +00:13:04.240 --> 00:13:07.760 +um there's different ways to do this but + +00:13:05.880 --> 00:13:09.880 +basically you can add a tag or prompt + +00:13:07.760 --> 00:13:12.920 +about the target language
um for + +00:13:09.880 --> 00:13:14.959 +generative tasks and originally this is + +00:13:12.920 --> 00:13:16.399 +what was done in uh Google Translate + +00:13:14.959 --> 00:13:18.519 +it's probably what they're doing right + +00:13:16.399 --> 00:13:20.920 +now and they basically added a single + +00:13:18.519 --> 00:13:23.440 +tag to the beginning that said French or + +00:13:20.920 --> 00:13:25.079 +Japanese and then uh the sentence they + +00:13:23.440 --> 00:13:28.360 +wanted to translate and then it gave + +00:13:25.079 --> 00:13:28.360 +them the output
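The tagging itself is trivial to implement; a sketch (the `<2xx>` tag format below is one common convention, not a fixed standard):

```python
# Target-language tagging for multilingual generation: prepend a tag telling
# the model which language to produce.
def make_training_example(source_text, target_text, target_lang):
    return {"input": f"<2{target_lang}> {source_text}", "output": target_text}

print(make_training_example("How are you?", "Comment ça va ?", "fr"))
# {'input': '<2fr> How are you?', 'output': 'Comment ça va ?'}
```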
+00:13:30.519 --> 00:13:34.399 +however um there are a few difficulties + +00:13:32.399 --> 00:13:37.320 +in multilingual learning so the first + +00:13:34.399 --> 00:13:39.680 +one um is the curse of + +00:13:37.320 --> 00:13:42.160 +multilinguality and so actually if you + +00:13:39.680 --> 00:13:44.839 +look at a lot of the open source models + +00:13:42.160 --> 00:13:46.920 +at least um most of them are only + +00:13:44.839 --> 00:13:49.360 +trained seriously on English this + +00:13:46.920 --> 00:13:52.639 +includes things like + +00:13:49.360 --> 00:13:53.920 +LLaMA um most of the language models + +00:13:52.639 --> 00:13:57.519 +that I talked about and I think the + +00:13:53.920 --> 00:14:00.680 +reason why is uh this curse of + +00:13:57.519 --> 00:14:03.839 +multilinguality and given a fixed-size model + +00:14:00.680 --> 00:14:06.680 +um the per-language capacity decreases + +00:14:03.839 --> 00:14:09.880 +uh as we increase the number of + +00:14:06.680 --> 00:14:14.279 +languages and this is an uh an older + +00:14:09.880 --> 00:14:15.959 +example um from the XLM-R paper which was + +00:14:14.279 --> 00:14:18.759 +kind of a masked language model that was + +00:14:15.959 --> 00:14:22.519 +used in a bunch of different languages + +00:14:18.759 --> 00:14:25.800 +but what you can see is as they increase + +00:14:22.519 --> 00:14:28.480 +the number of languages the scores go + +00:14:25.800 --> 00:14:31.000 +down for the high resource languages as + +00:14:28.480 --> 00:14:32.880 +you get up to like 100 languages for the + +00:14:31.000 --> 00:14:36.079 +low resource languages the scores + +00:14:32.880 --> 00:14:38.000 +momentarily go up because you're now + +00:14:36.079 --> 00:14:40.000 +benefiting from transfer learning from + +00:14:38.000 --> 00:14:42.639 +other languages but then they start to + +00:14:40.000 --> 00:14:45.279 +go down again as the model capacity runs + +00:14:42.639 --> 00:14:47.279 +out and you essentially do worse and so + +00:14:45.279 --> 00:14:49.800 +this is an older paper it shows it more + +00:14:47.279 --> 00:14:51.480 +convincingly but there's also some other + +00:14:49.800 --> 00:14:54.320 +examples like there was a very big + +00:14:51.480 --> 00:14:56.440 +effort by Hugging Face called BLOOM uh + +00:14:54.320 --> 00:15:00.600 +which was a model that was trained to be + +very multilingual and um and they were + +00:15:00.600 --> 00:15:05.399 +they trained a 175 billion parameter + +00:15:02.880 --> 00:15:06.839 +model to try to make it you know very + +00:15:05.399 --> 00:15:08.560 +strong in a lot of different languages + +00:15:06.839 --> 00:15:10.320 +and it just ended up not being very good + +00:15:08.560 --> 00:15:13.000 +on English and being even worse on the + +00:15:10.320 --> 00:15:16.480 +other languages um because they kind of + +00:15:13.000 --> 00:15:18.360 +overreached their language you know + +00:15:16.480 --> 00:15:19.880 +language modeling abilities at the time + +00:15:18.360 --> 00:15:21.600 +were just not good enough I think things + +00:15:19.880 --> 00:15:24.720 +have gotten significantly better now but + +00:15:21.600 --> 00:15:28.199 +still you'll notice that if you um + +00:15:24.720 --> 00:15:29.920 +emphasize more on multilingual data you + +00:15:28.199 --> 00:15:32.800 +do decrease the number of tokens that + +00:15:29.920 --> 00:15:35.399 +you see in like English for example and + +00:15:32.800 --> 00:15:38.680 +that could cause accuracy to go down + +00:15:35.399 --> 00:15:41.279 +number of tokens and capacity + +00:15:38.680 --> 00:15:44.720 +available yeah so does that happen + +00:15:41.279 --> 00:15:47.920 +because we see a transfer learning stop or + +00:15:44.720 --> 00:15:49.920 +is it like another underlying reason do + +00:15:47.920 --> 00:15:52.199 +does it happen because we see transfer + +00:15:49.920 --> 00:15:55.399 +learning stop or for another underlying + +00:15:52.199 --> 00:15:57.000 +reason I I think there's two reasons um + +00:15:55.399 --> 00:16:01.000 +the first reason is like if you have a + +00:15:57.000 --> 00:16:03.519 +fixed compute budget then you're + +00:16:01.000 --> 00:16:06.240 +fundamentally going to be limited in the + +00:16:03.519 --> 00:16:09.440 +number of uh things that you can see and + +00:16:06.240 --> 00:16:10.519 +actually there's a nice paper by um my + +00:16:09.440 --> 00:16:12.360 +student + +00:16:10.519 --> 00:16:14.720 +Patrick + +00:16:12.360 --> 00:16:18.040 +um I'm allowed to call it nice because + +00:16:14.720 --> 00:16:20.480 +it's not my paper but I wasn't involved + +00:16:18.040 --> 00:16:20.480 +in it + +00:16:25.480 --> 00:16:29.639 +but I should have put this on the slides + +00:16:27.800 --> 00:16:31.800 +actually but there there's the this um + +00:16:29.639 --> 00:16:31.800 +this + +00:16:32.639 --> 00:16:40.959 +paper actually may maybe I I did um but + +00:16:36.720 --> 00:16:46.480 +basically what they find + +00:16:40.959 --> 00:16:50.399 +is the amount of compute that you + +00:16:46.480 --> 00:16:55.639 +spend trying to find the main + +00:16:50.399 --> 00:17:00.800 +thing so essentially the amount of uh + +00:16:55.639 --> 00:17:00.800 +weight that you spend on any particular + +00:17:02.160 --> 00:17:06.760 +um sorry I I uh I'd have to go back and + +00:17:05.480 --> 00:17:08.799 +and take some time to look at the figures + +00:17:06.760 --> 00:17:11.880 +to explain them accurately but one of + +00:17:08.799 --> 00:17:13.559 +the findings in this paper is the amount + +00:17:11.880 --> 00:17:15.439 +of time that you spend on any individual + +00:17:13.559 --> 00:17:17.439 +language kind of corresponds to how well + +00:17:15.439 --> 00:17:21.000 +you do on that language and so if you're + +00:17:17.439 --> 00:17:22.679 +spending more time on one language um + +00:17:21.000 --> 00:17:24.799 +you do better on that language and + +00:17:22.679 --> 00:17:29.799 +there's the idea of scaling laws that I + +00:17:24.799 --> 00:17:32.559 +talked about before where your effective + +00:17:29.799 --> 00:17:34.640 +your effective capacity or like how good + +00:17:32.559 --> 00:17:36.039 +the model becomes is a function of like + +00:17:34.640 --> 00:17:38.520 +the parameter size and the amount of + +00:17:36.039 --> 00:17:40.160 +compute you spend and if you're doing + +00:17:38.520 --> 00:17:42.559 +multiple languages then you need to kind + +00:17:40.160 --> 00:17:44.400 +of to some extent split your parameters + +00:17:42.559 --> 00:17:46.960
+between languages and one of the + +00:17:44.400 --> 00:17:50.000 +interesting findings from this paper is + +00:17:46.960 --> 00:17:51.919 +that actually you would expect more + +00:17:50.000 --> 00:17:56.039 +benefit from sharing than you actually + +00:17:51.919 --> 00:17:57.960 +get like you you actually get relatively + +00:17:56.039 --> 00:18:00.559 +little benefit from sharing if you have + +00:17:57.960 --> 00:18:03.200 +enough data to train on and because in + +00:18:00.559 --> 00:18:06.080 +many of the cases when we're training + +00:18:03.200 --> 00:18:08.480 +models we're less bottlenecked by data for + +00:18:06.080 --> 00:18:11.320 +the low resource languages + +00:18:08.480 --> 00:18:12.880 +um uh or we're still bottlenecked by data + +00:18:11.320 --> 00:18:14.320 +for the low resource languages but for + +00:18:12.880 --> 00:18:17.080 +high resource languages we're usually + +00:18:14.320 --> 00:18:19.400 +bottlenecked by compute so if you allocate + +00:18:17.080 --> 00:18:21.679 +more compute to other languages then + +00:18:19.400 --> 00:18:23.440 +you're going to allocate less compute to + +00:18:21.679 --> 00:18:24.840 +um English for example and a lot of + +00:18:23.440 --> 00:18:27.360 +people like a lot of the benchmarks are + +00:18:24.840 --> 00:18:29.520 +in English for better or worse so I + +00:18:27.360 --> 00:18:32.360 +think probably one + +00:18:29.520 --> 00:18:34.280 +um like one good strategy here is yes we + +00:18:32.360 --> 00:18:35.799 +have some people focusing on building + +00:18:34.280 --> 00:18:37.520 +really good English models we have some + +00:18:35.799 --> 00:18:39.960 +people focusing on building really good + +00:18:37.520 --> 00:18:41.440 +models for the top like 10 languages + +00:18:39.960 --> 00:18:42.679 +where we can afford to build a model for + +00:18:41.440 --> 00:18:45.039 +each of them and then we have some + +00:18:42.679 --> 00:18:46.520 +people working on you know really good + +00:18:45.039 --> 00:18:48.240 +multilingual models that can handle a + +00:18:46.520 --> 00:18:49.840 +whole bunch of languages and kind of + +00:18:48.240 --> 00:18:51.840 +just spread our efforts out and our + +00:18:49.840 --> 00:18:53.520 +compute for each of those models out + +00:18:51.840 --> 00:18:57.480 +over + +00:18:53.520 --> 00:19:00.240 +there um cool any other questions about + +00:18:57.480 --> 00:19:02.559 +this yeah + +00:19:00.240 --> 00:19:03.799 +I didn't get what sharing means in this context + +00:19:02.559 --> 00:19:06.039 +is it a shared + +00:19:03.799 --> 00:19:08.480 +representation yeah so basically usually + +00:19:06.039 --> 00:19:10.320 +what we do is we just train um we just + +00:19:08.480 --> 00:19:11.919 +train a single model and all of the + +00:19:10.320 --> 00:19:13.919 +parameters are shared between all of the + +00:19:11.919 --> 00:19:16.480 +languages the only things that are not + +00:19:13.919 --> 00:19:18.280 +shared are the um the word embeddings + +00:19:16.480 --> 00:19:20.840 +and I'll I'll or the subword embeddings + +00:19:18.280 --> 00:19:22.600 +and I'll talk about that in a little + +00:19:20.840 --> 00:19:24.760 +bit + +00:19:22.600 --> 00:19:28.360 +cool + +00:19:24.760 --> 00:19:29.559 +um so there's a number of ways to + +00:19:28.360 --> 00:19:32.480 +mitigate + +00:19:29.559 --> 00:19:34.480 +um this curse of uh multilinguality I + +00:19:32.480 --> 00:19:37.640 +kind of got ahead of myself in talking + +00:19:34.480 --> 00:19:40.240 +about the other paper but um there's a + +00:19:37.640 -->
So there are a number of ways to mitigate this curse of multilinguality — I got a bit ahead of myself in talking about the other paper — but there are a couple of things we can do to improve it. The first concerns the tokenization disparity, which relates to the subword embeddings I just mentioned. We share all of the parameters in the body of the model, but fundamentally the words in different languages are different, and sometimes even the scripts are different. So what we usually do is have a single shared tokenizer that's used across all of the different languages, and if it's something like English and French, with lots of shared words, then many of the embeddings will actually be shared between the languages. That helps transfer, but it's not absolutely essential for transfer: there has been some work demonstrating that even with no shared vocabulary you can still benefit from transfer to some extent.

Anyway, with respect to the tokenization disparity: I tried tokenizing an English passage using the OpenAI GPT-3.5/GPT-4 tokenizer, and this content gave me 58 tokens. Then I tried translating it into Burmese using Google Translate — I don't know Burmese myself, but this should be the same content, at least to the extent that Google Translate into Burmese is accurate — and I got 617 tokens for the same content. The reason is that you can see the string "OpenAI" survives (interestingly, I hadn't realized this, but "OpenAI" isn't a single token in GPT's vocabulary — it's two tokens), while all of the rest is converted into byte-level tokens, one token per byte, and because Burmese uses a different script, each character is around three bytes. So you end up with 10.6 times the total token count. What does that mean in practice?
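If you want to measure this kind of disparity yourself, here's a minimal sketch using the tiktoken library (cl100k_base is the encoding used by the GPT-3.5/GPT-4 chat models); the example strings are placeholders, so substitute your own paragraph and its translation.

```python
# Measure tokenization disparity across languages with tiktoken.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-3.5 / GPT-4 encoding

texts = {
    "English": "OpenAI's models are used all over the world.",  # placeholder
    "Burmese": "...",  # paste the Burmese translation of the same content
}
counts = {lang: len(enc.encode(t)) for lang, t in texts.items()}
print(counts)
print("ratio:", counts["Burmese"] / counts["English"])
```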
Number one: if you're processing Burmese with GPT-4, it's ten times more expensive for the same content, because they charge by the number of tokens. Number two: because the text is being split up into bytes like this, it's slow — generation becomes ten times slower, since the model generates tokens at a particular speed. And it also gets worse at the task, because the reason we use subword tokens in the first place is that they let us clump semantic units together; if the semantic units aren't clumped together, the model has to use its capacity to combine the tokens in its layers, which is prone to error, eats the model's capacity, and makes it do less well at modeling. So this is a pretty big problem for a number of reasons: cost, efficiency, and accuracy.

One way to fix this, both from the point of view of training models and of learning tokenizers, is through heuristic sampling of the data, and the most common way to do this is something called temperature sampling. The way temperature sampling works is that you group the data — and usually when we say groups, we mean grouping into languages — and you sample from each group according to its frequency exponentiated by one divided by a temperature, then renormalize, and then sample batches from each language according to that. So the probability of sampling a batch from language L becomes

p(L) = freq(L)^(1/T) / sum over L' of freq(L')^(1/T)
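Here's a minimal sketch of that formula in code, with invented per-language frequencies; it previews exactly the T = 1, 5, 100 behavior described next.

```python
# Temperature sampling: p(L) = freq(L)**(1/T) / sum_L' freq(L')**(1/T).
import numpy as np

freq = np.array([0.70, 0.20, 0.07, 0.03])  # invented per-language data shares

def sampling_probs(freq, T):
    w = freq ** (1.0 / T)
    return w / w.sum()

for T in (1, 5, 100):
    print(f"T={T:<3}", np.round(sampling_probs(freq, T), 3))
# T=1 reproduces the long-tailed data distribution; T=5 flattens it;
# T=100 is nearly uniform across the four languages.
```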
What that does, essentially: if the data distribution across languages is a long-tailed distribution, then normally you would sample the head language a lot more. If you take the temperature and turn it into something like 5, that will flatten the distribution, so you'll sample the less frequent languages more and the more frequent languages less; and if you take a very large number like 100, it will flatten out almost completely, and you'll sample from each language roughly uniformly. This sampling can be done both at model training time and at vocabulary construction time. When you do it at vocabulary construction time, basically what that means is that it will downsample English and upsample Burmese, or any of the other low-resource languages, so the low-resource languages get more weight in creating the vocabulary and their words won't be split up as much. I don't know if GPT does this, but XLM-R does, and Qwen does, so there are definitely language models attempting to be fairer across different languages.

Another thing Qwen does — I talked about this previously in the tour of large language models — is use a much larger vocabulary. Normally Llama's is around 32k, and I think Qwen's was something like 215k, so they intentionally made the vocabulary larger so that the distribution would be better across different languages.

Someone asked: "How do you construct a batch for something like machine translation — would one batch be assigned to English-French, and then another to another pair?" That's a good question. Just to repeat it: how do you construct batches for machine translation — would you create a batch that's all English-French, or a batch with lots of different language pairs in it at once? You can do both, but what I personally would do, and what I think most people do, is not sample batches that are uniform in a particular language. The reason is SGD: if you have a lot of variance in your gradients, it makes training less stable, and if you do all English-French in a single batch, the model will move entirely in the French direction, then entirely in the German direction, and so on. So it's generally better practice to do this sort of upweighting or sampling before you form your batches. Another thing you can do is run a bunch of this sampling at once, form a bunch of batches, run through all of them, and then when you get near the end, form some more batches and throw them in; you can write data loaders to do that sort of thing.
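Here's a minimal sketch of a data loader in that spirit: it samples each example's language independently by per-language weights, so every batch mixes languages. The names and the refill strategy are illustrative assumptions, not anyone's published recipe.

```python
# Mixed-language batch formation: sample each example's language first,
# so no batch is uniform in one language and gradients stay stable.
import random

def make_batches(datasets, probs, n_batches, batch_size):
    langs = list(datasets)
    weights = [probs[lang] for lang in langs]
    batches = []
    for _ in range(n_batches):
        batch = []
        for _ in range(batch_size):
            lang = random.choices(langs, weights=weights)[0]
            batch.append(random.choice(datasets[lang]))
        batches.append(batch)
    return batches

# Usage sketch: pre-form a chunk of batches, train through them, then call
# make_batches again (possibly with updated weights) when you run low.
```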
There's also work on learning how to balance the data; this is a paper by my former student Cindy. I like it because it shows the possibility of what can be done here, although it's a little bit expensive to run, so it might not be well suited to large-scale pre-training, for example. The basic idea is this: we have several training sets, where the training sets are composed of different languages, and we have several development sets for the things we want to be good at processing, be it translation or QA or whatever else. What we do is calculate gradients on the various training sets, also calculate the gradient on the dev set, and measure the alignment between the training gradient and the development gradient — how closely they align. If they align very well, that basically says this training set is moving us in a good direction for optimizing performance on the dev set; if they don't align well, the training set is basically harmful for optimizing dev-set performance. We then upweight or downweight the mixing factor of these datasets according to how well the gradients align.

Why is this conceptually interesting? Because you can keep doing it as you continue training the model. One of the problems with the heuristic sampling is that the temperature is very fiddly: it differs from dataset to dataset, and it doesn't give you the full spectrum of which data you should be upsampling; this approach lets you learn that automatically. To give an example of something it can learn: at the beginning of training you should maybe be upweighting the low-resource datasets, but then you start overfitting to the relatively small data you have for Burmese, or for whichever under-resourced languages, and that data starts actively harming your model. When it starts actively harming your model, it gets automatically downweighted so it stops doing harm, and it will be upweighted again when it makes sense to use it again. So this allows you to learn more nuanced strategies than the heuristic.
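Here's a rough sketch of that gradient-alignment idea — my paraphrase, not the paper's actual code: score each training set by the cosine similarity between its gradient and the dev-set gradient, and nudge the mixture weights accordingly. The multiplicative update and the step size are assumptions for illustration.

```python
# Gradient-alignment data balancing (sketch). Each loss is assumed to come
# from its own forward pass so its graph can be consumed independently.
import torch
import torch.nn.functional as F

def flat_grad(loss, params):
    grads = torch.autograd.grad(loss, params, allow_unused=True)
    return torch.cat([(g if g is not None else torch.zeros_like(p)).reshape(-1)
                      for g, p in zip(grads, params)])

def reweight(model, train_losses, dev_loss, weights, step=0.1):
    params = [p for p in model.parameters() if p.requires_grad]
    g_dev = flat_grad(dev_loss, params)
    for name, tr_loss in train_losses.items():
        align = F.cosine_similarity(flat_grad(tr_loss, params), g_dev, dim=0)
        weights[name] *= float(torch.exp(step * align))  # up/down-weight
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```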
In some papers — actually, it was the NLLB paper, which I'm going to talk about in a second — they did something a little bit like this: they started out training on the higher-resource languages and then gradually added in the lower-resource languages later, because the lower-resource languages in some cases have very little data, and you don't want to overfit to that data before the end of training, when the model is capable of making use of it. But again, that's just a heuristic; there are probably better strategies, and hopefully doing something automatic would allow you to learn them.

Cool. Any questions on that? Someone asked: "Going back to the model that had the 215k-ish vocabulary — don't you then get softmax issues, among other issues?" So, 215k. I mean, it definitely makes things slower, for sure. But the thing is, it only makes calculating the softmax and the word embeddings slower, and if it makes the sequence length shorter, then you also benefit from the shorter sequence length. So it's kind of a trade-off, and I think especially if you're processing lots of multilingual data, going from 32k for English to 215k for a hundred languages doesn't seem like a bad idea.

Cool. So, moving on to machine translation: machine translation is probably still the most important inherently multilingual task that we handle. As we know, translation is basically translating from one language to another, and if we look at why it's difficult to translate, there are basically two reasons. The first is that there are syntactic divergences between languages — we don't use the same syntax. I put up this sentence, "the development of artificial intelligence is a really big deal," and if you know another language, you can take a moment to think about how you would translate it.

Keep that in mind. I actually have the Spanish here, so hopefully I didn't mess it up, because there are two actual Spanish speakers in the room and mine is high-school Spanish — is it okay? Okay, good, thanks. So what we can see here is that there's some divergence in syntax between English and Spanish, even though these are mostly in the same order. The first divergence is that Spanish uses an article where English uses none: it's not "the artificial intelligence" in English, just "artificial intelligence." The second is swapping the order of nouns and adjectives, which is the famous one — actually, English is the unusual language here, since most languages with word order otherwise similar to English put adjectives after the noun more often than before it. And then in "really big deal" I deliberately included an idiomatic expression: "big deal" won't be translated consistently into "big" plus "deal" in other languages, so you can see it turned into something else — and interestingly, the modifier ended up in the middle of the expression. I also did it in Japanese, which has very different syntax from English: it's a subject-object-verb language, so the verb comes at the end instead of the middle, and you can see the result is a bit wilder. Actually, this example doesn't convey the full extent of how dramatic the reorderings between English and Japanese are; it's just a very, very different order. But you can see that in some places it's actually more similar to English than the Spanish is — "artificial intelligence" stays in the same order — so there are places where it's similar.
Another reason translation is difficult is lexical ambiguity. Here's an example from English and French, where the same English word is translated differently depending on what it describes: a leg of a journey, the leg of an animal, the leg of a chair, and the leg of a human are all translated differently. I'm sure you can come up with other examples; my favorite is "run" — run a marathon, run a program, run a company, a run in a stocking — all of these are different in most of the world's languages. So this is really hard. There are also some language-specific difficulties. For example, conversational Japanese almost never states who did something. So if I say "tabeta," which means "ate," it could mean I ate, you ate, he ate, she ate, the dog ate — any of those — and you can't tell without the context, and translation systems are pretty bad at figuring out that context. Often in conversational translation it should be a question — "tabeta?" with that intonation means "did you eat?" — but the system says "I ate" instead, because that's the default, and it's really confusing. So there are all kinds of peculiarities like that as well.

On translation tasks: there's WMT, which stands for the Conference on Machine Translation — which might seem a little strange, because "conference" doesn't start with a W — and they run shared tasks every year for translation and for evaluation. One interesting thing about this, which I might have mentioned briefly before: the translation systems and the evaluation systems co-evolve. Basically, every year they have a "maximize translation accuracy" task and a "maximize evaluation accuracy" task, and the evaluation task always uses the systems from that year's translation task. So every time, people are trying to improve automatic evaluation on the best systems of that year, which makes it really challenging to build good evaluation systems as the translation systems get better and better. If you're interested in evaluation, I'd definitely take a look at this; it's a good gold standard for that.

Another really good resource, which is not a shared task, is called Flores. It's a dataset of 200 languages translated from English Wikipedia, so it's data in lots of different languages. I like it for two reasons. The first is that I believe Wikipedia is a really important domain to be able to translate: it has so much knowledge that's very useful, and if we could convey that knowledge to people in many different languages, it would be very beneficial to the people of the world. The other reason is that it's really hard to get high-quality translations in 200 languages, just because it's hard to hire that many good translators and do quality control; this was a dataset created by Meta, and I definitely commend them for their ability in doing that. So it's kind of a standard for low-resource-language translation. And then IWSLT has tasks on speech translation, so if you're interested in speech, that's one to take a look at.

This isn't a class only on machine translation, so I'm only going to go through it briefly, but there is one model worth taking a careful look at: the NLLB translation model. It's an example of building a strong MT model; it's open source, and they describe all of the things they did to build it. To summarize, they start out with public bitext — a small seed of bilingual data — and lots of monolingual data.
They then train a multilingual embedding model, where the goal of the multilingual embedding model is to identify things that are good translations of each other. Based on this, they run the embedding model over all of the monolingual data they have in the various languages and try to extract mined bitext, where the mined bitext comes with a kind of confidence score for how good each mined pair is.

Oh, and another thing I forgot: language identification. Language ID — trying to figure out which language something is written in — is actually very hard, especially once you start talking about 200 languages, because they can be pretty similar. There's this amazing paper that I can show you — I don't have it in the slides — called "Language ID in the Wild: Unexpected Challenges on the Path to a Thousand-Language Web Text Corpus"; just look at the examples figure in it. Here, the predicted language of one example is Manipuri; I don't know why — it's clearly all emojis — but apparently that's what the language ID system had to do with it. "Why you lying, why you always lying," written with all these little decorative characters, got predicted as Twi. There's a mis-rendered PDF predicted as one language, a non-Unicode font predicted as another, and more things like this — creative uses of Unicode assigned to yet other languages. So there are just lots of examples where actual web text can make language ID pretty hard. Fortunately, there are ways around this, like having very well curated training data for your language ID system, and having confidence metrics for its predictions, but it's definitely non-trivial.
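As a concrete example of the "confidence metrics" point, here's a minimal sketch using fastText's publicly released LID model (lid.176.bin from fasttext.cc); the 0.9 threshold is an arbitrary illustrative choice, not anything from the NLLB pipeline.

```python
# Confidence-thresholded language ID with fastText's public LID model.
import fasttext

model = fasttext.load_model("lid.176.bin")  # download from fasttext.cc

def identify(text, threshold=0.9):
    # fastText's predict() expects a single line of text
    labels, probs = model.predict(text.replace("\n", " "), k=1)
    lang = labels[0].replace("__label__", "")
    conf = float(probs[0])
    # Emoji runs, mis-rendered PDFs, and non-Unicode fonts like the
    # examples above tend to come back low-confidence; drop them.
    return lang if conf >= threshold else None

print(identify("Why you always lying?"))
```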
So after they did that — the language ID and the mining — they train the model, and in terms of modeling techniques they use some interesting ones. The first is mixture of experts. I've already talked about mixture of experts before, in the context of the Mixtral model, and the GPT models are also allegedly using mixture of experts; here it's particularly important because they're covering many different languages, and with many languages the model can use particular parameters for some languages and other parameters for other languages, so it's pretty helpful in that case. Another thing is curriculum learning, like I just mentioned: they start training on the lower-resource languages later. There's also self-supervised training: basically, the way this works is by having a denoising objective, where they add noise to the monolingual data and then try to reproduce the original monolingual data, so they use that as well.

Another important technique, very widely used in machine translation now, is something called back translation. The way back translation works: let's say we want to train an English-to-Swahili translation model, and we don't have a lot of bitext for English and Swahili, but we have lots of monolingual Swahili. Basically, what we do is train a Swahili-to-English model, generate a lot of pseudo-translated English with it, and then use that as parallel data. This is really widely used in machine translation.
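Here's a pseudocode-level sketch of that loop; translate_sw_to_en stands in for whatever reverse model you've trained (it's a hypothetical function, not a real library call).

```python
# Back translation: turn monolingual Swahili into synthetic English->Swahili
# parallel data by translating it "backwards" with a Swahili->English model.
def back_translate(monolingual_sw, translate_sw_to_en):
    synthetic_pairs = []
    for sw_sentence in monolingual_sw:
        en_pseudo = translate_sw_to_en(sw_sentence)  # possibly unnatural English
        # Train en->sw on (pseudo source, natural target): the output side
        # stays natural, which is what matters most for fluent generation.
        synthetic_pairs.append((en_pseudo, sw_sentence))
    return synthetic_pairs
```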
There is a caveat: that generated English is not natural. But that's actually kind of okay, for a couple of reasons. First, if the Swahili is actually natural, then at least the output side is natural, so you're generating natural output even if the input is a little bit unnatural. The other thing is that, as models get better — well, there are a lot of really bad translations online, for example translations by not-very-good translators, or translations from older versions of Google Translate; a lot of the data online is old machine-translation output. Because of that, if you have a good back-translation model, its output might actually be better than your original data in the first place, so back translation can be pretty good. They also incorporated the seed data they had created to seed the model. So this is a pretty good model overall.

We did some evaluation of this from the language-modeling perspective, and basically what we found was that it's quite effective: the NLLB model is quite competitive even with respect to the GPT models for lower-resource languages — below roughly the top 40 — though it's not as good for the top 40 languages. And if you compare Google Translate and ChatGPT on the higher-resource languages, GPT-4 can actually beat Google Translate on some languages, like Romanian and others. But anyway, the NLLB model is quite good, so if you want a model to start out with, you can use it. There's also another, more recent model called Seamless M4T that also lets you do speech translation, and if you want to show your CMU pride, there's also LegoMT from Lei Li's group that you can use for this.

Cool. Okay, I'd like to move on to multilingual pre-trained models — are there any questions about what I've talked about so far?

Okay, cool. So I want to talk about multilingual pre-trained models. Closed LLMs such as GPT-4 are typically kind of incidentally multilingual, due to their large training data. Open LLMs often do data filtering to allow for good performance on English, like I mentioned before, and so there aren't a whole lot of really good open options for standard left-to-right autoregressive models. I would say at the moment Qwen is probably the best one, and I already covered it in the previous class, so I'm not going to do it again. What I would like to talk about is multilingual representation-learning models and also encoder-decoder models, because unlike in English, where I would say all the good models are autoregressive, encoder-decoder and masked language models are actually pretty competitive for multilingual tasks.

Language-model pre-training such as BERT has been shown to be effective, and it uses masked language modeling objectives; models such as mBERT, XLM, and XLM-R extend BERT-style training to multilingual pre-training. Before I get into exactly how they do this, I'd like to talk a little bit about multilingual representation evaluation. There are a few standard benchmarks people use for the general-purpose skills of multilingual models. The first is XTREME, and its follow-up XTREME-R. Basically, the way they work is you have sentence classification, structured prediction, sentence retrieval, and question answering tasks across a pretty wide variety of typologically diverse languages — something on the order of 40 different languages. There's also XGLUE, which is less typologically diverse but also contains generation-style tasks; and, as noted, XTREME-R is a harder version based on XTREME.

The way people do multilingual masked-language-modeling-style objectives is this: unlike plain masked language modeling, where you mask out part of the input and try to predict it, what they can do is feed in one sentence in one language together with another sentence in another language — ideally parallel sentences — and then mask things out. The training code can be entirely the same, or almost entirely the same, aside from adding language embeddings, but training the model this way means it can predict across the languages, which aligns the representations across languages better, and people have demonstrated that this is good for learning multilingual representations.
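Here's a minimal sketch of that kind of input construction, in the spirit of XLM's translation language modeling objective; the whitespace split, the [MASK] string, and the separator token are stand-ins for a real subword tokenizer's conventions.

```python
# TLM-style masking: concatenate a parallel sentence pair and mask tokens
# on both sides, so one language's context can explain the other's tokens.
import random

def tlm_example(src, tgt, mask_prob=0.15):
    tokens = src.split() + ["</s>"] + tgt.split()
    inputs, labels = [], []
    for tok in tokens:
        if tok != "</s>" and random.random() < mask_prob:
            inputs.append("[MASK]")
            labels.append(tok)    # predict the original token here
        else:
            inputs.append(tok)
            labels.append(None)   # position ignored by the loss
    return inputs, labels

print(tlm_example("the cat sat down", "le chat s'est assis"))
```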
There are also methods that explicitly use alignments to improve the alignment between representations in different languages, but in the interest of time I'll skip that and maybe get back to it if we have time. A very good model to know about — because I think this is still one of the best models for multilingual processing — is mT5. mT5 is based on the T5 architecture, which is basically an encoder-decoder architecture with a masked reconstruction objective. The way that works — in case we need a refresher, since we haven't talked about this in a while — is that you have an encoder-decoder model that takes in an input, and you apply perturbations to the text: you do things like dropping words and reordering words, and you try to get the model to reconstruct the original text. Basically, they train this on many different languages, and it gives pretty high performance overall for a lot of different tasks. In particular, this model was trained explicitly to be multilingual, so it essentially has better performance on the long-tail tasks than a lot of the standard language models that we have.
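To make the reconstruction objective concrete, here's a simplified sketch of a T5-style denoising pair: drop a span, replace it with a sentinel, and ask the decoder for the missing material. Real T5/mT5 span sampling is more involved than this; the <extra_id_N> sentinel names follow T5's convention.

```python
# Simplified T5-style span corruption: build (corrupted input, target) pairs.
import random

def corrupt(tokens, span_len=2):
    start = random.randrange(len(tokens) - span_len)
    inp = tokens[:start] + ["<extra_id_0>"] + tokens[start + span_len:]
    out = ["<extra_id_0>"] + tokens[start:start + span_len]
    return inp, out

sentence = "the development of artificial intelligence is a big deal".split()
print(corrupt(sentence))
# e.g. (['the', 'development', '<extra_id_0>', 'intelligence', ...],
#       ['<extra_id_0>', 'of', 'artificial'])
```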
There are also other versions, like mT0 and ByT5. mT0 was instruction-tuned further. ByT5 was trained with no tokenization, just byte-level modeling, and because it's byte-level it can model any script, so you won't have trouble with Unicode and other stuff like that. I have personally found that mT5 performs better than ByT5 — or at least it's easier to get it to work — so if you want a single one to start out with, I'd say mT5 is pretty good, but there are also other options. I should also mention that there was a recently released fine-tuned version of mT5 — let me find their model page... yes — it's based on mT5, but they trained it on a whole bunch of instruction-tuning data, so it's kind of a more modern version of mT0, I guess. I haven't played around with it a lot myself, but it's allegedly pretty good at instruction following and other things like that.

Cool. I'd like to talk a little bit about some more advanced modeling strategies. Cross-lingual transfer learning leverages data from one or more high-resource source languages. One thing we often do is pre-training and fine-tuning, which is good if you have a specific language that you want your model to be good at, because it allows you to specialize to that language — and as I mentioned with the curse of multilinguality, it's hard to get a model that's really good at all the languages. There's also zero-shot transfer, and finally something called annotation projection, which is also called translate-train in some of the literature.

Pre-train and fine-tune basically works like this: you train on lots and lots of data from lots of different languages, and then you fine-tune on data in the target language. This tends to work pretty well, and it works particularly well if there was at least some data from the language you care about in the original pre-training set. If there was no data from that language at all, it's pretty tough to get this to work, because the model doesn't already have knowledge of the language.

Another thing you can do — one of the problems with adapting to low-resource languages being, obviously, lack of data, especially supervised data — is train on the language itself plus a few other very closely related languages. If you want to train for a particular language of India, for example, there are very often a few other related languages of India, and sometimes some of them are higher-resource: maybe Hindi is related and is higher-resource than the language you're interested in training on. You can obviously try this for any language, though for some languages there's just no higher-resource counterpart, which makes it a little bit tricky; but when there is one, you can take advantage of it.

This next one is another example of something that's a little bit tricky to implement, because it requires meta-learning and calculation of gradients and things like that, but there's some really nice work that essentially tries to learn a model that is good for fine-tuning into different languages.
--> 00:55:17.160 +learn a model that is good um at + +00:55:15.200 --> 00:55:19.200 +processing multiple languages and then + +00:55:17.160 --> 00:55:22.200 +try to fine-tune it to a lower resource + +00:55:19.200 --> 00:55:27.559 +language what you do is you try to learn + +00:55:22.200 --> 00:55:29.280 +a model that is good at adap in to low + +00:55:27.559 --> 00:55:30.559 +resource languages and the way you do + +00:55:29.280 --> 00:55:32.760 +this is through a method called metal + +00:55:30.559 --> 00:55:35.200 +learning um I'm not going to cover metal + +00:55:32.760 --> 00:55:36.520 +learning given the limited amount of + +00:55:35.200 --> 00:55:39.280 +time if you've heard about it before you + +00:55:36.520 --> 00:55:41.440 +know basically what it is um if you + +00:55:39.280 --> 00:55:43.760 +haven't heard about it um just to give a + +00:55:41.440 --> 00:55:46.240 +general idea what you do is you have a + +00:55:43.760 --> 00:55:48.839 +heldout dead Deb uh development set like + +00:55:46.240 --> 00:55:52.119 +I did for the uh data balancing thing + +00:55:48.839 --> 00:55:53.839 +and you try to learn models so that the + +00:55:52.119 --> 00:55:55.520 +gradients that you derive from the + +00:55:53.839 --> 00:55:58.720 +updates here align well with the + +00:55:55.520 --> 00:56:00.760 +gradients on this data um and so you're + +00:55:58.720 --> 00:56:02.760 +trying to learn um you're trying to + +00:56:00.760 --> 00:56:04.880 +learn things where you know you update + +00:56:02.760 --> 00:56:07.119 +in a direction that is uh good for + +00:56:04.880 --> 00:56:10.440 +updating towards uh the low resource + +00:56:07.119 --> 00:56:10.440 +language when you start training on + +00:56:11.720 --> 00:56:17.880 +it so these are all fine-tuning related + +00:56:15.000 --> 00:56:19.920 +things um there's other there's a lot + +00:56:17.880 --> 00:56:22.480 +more to talk about uh with respect to + +00:56:19.920 --> 00:56:25.880 +this um like how do you choose languages + +00:56:22.480 --> 00:56:29.319 +and other things like this um another + +00:56:25.880 --> 00:56:31.079 +big Paradigm is zero shot transfer um + +00:56:29.319 --> 00:56:33.200 +for pre-trained + +00:56:31.079 --> 00:56:35.760 +representations and the way that this + +00:56:33.200 --> 00:56:37.880 +works is um you pre-train a large + +00:56:35.760 --> 00:56:40.520 +language model using monolingual data + +00:56:37.880 --> 00:56:42.480 +from many different languages and then + +00:56:40.520 --> 00:56:45.319 +you fine tune using annotated data in a + +00:56:42.480 --> 00:56:48.280 +given language like English and then you + +00:56:45.319 --> 00:56:50.440 +test it on a a model on a different + +00:56:48.280 --> 00:56:51.160 +language uh from the fine tune language + +00:56:50.440 --> 00:56:54.079 +like + +00:56:51.160 --> 00:56:55.599 +French and uh we benefit from this + +00:56:54.079 --> 00:56:56.880 +because multilingual pre-training can + +00:56:55.599 --> 00:56:58.520 +learn something + +00:56:56.880 --> 00:57:00.480 +I shouldn't say a universal + +00:56:58.520 --> 00:57:02.200 +representation but at least a + +00:57:00.480 --> 00:57:04.559 +representation that is conducive for + +00:57:02.200 --> 00:57:07.400 +transfer across + +00:57:04.559 --> 00:57:11.359 +languages um there's a lot of work on + +00:57:07.400 --> 00:57:13.599 +this um I am not going to cover it in a + +00:57:11.359 --> 00:57:15.520 +lot of detail number one in the interest + +00:57:13.599 --> 00:57:18.280 +of time but also number two because I do + 
um there's a lot of work on + +00:57:07.400 --> 00:57:13.599 +this um I am not going to cover it in a + +00:57:11.359 --> 00:57:15.520 +lot of detail number one in the interest + +00:57:13.599 --> 00:57:18.280 +of time but also number two because I do + +00:57:15.520 --> 00:57:20.440 +actually kind of uh strongly believe + +00:57:18.280 --> 00:57:23.799 +that there are other uh reasonable + +00:57:20.440 --> 00:57:26.599 +options that outperform this uh a pretty + +00:57:23.799 --> 00:57:28.599 +fair amount of the time and one of them + +00:57:26.599 --> 00:57:33.280 +is something called uh annotation + +00:57:28.599 --> 00:57:39.960 +projection um or translate uh + +00:57:33.280 --> 00:57:39.960 +translate-train and the way this works is very + +00:57:42.920 --> 00:57:47.680 +similar very similar to what I talked + +00:57:45.079 --> 00:57:47.680 +about with back + +00:57:48.000 --> 00:57:52.160 +translation but there's two varieties of + +00:57:50.480 --> 00:57:54.400 +annotation projection the first one + +00:57:52.160 --> 00:57:57.680 +translate- + +00:57:54.400 --> 00:58:00.480 +train is you have um annotated training + +00:57:57.680 --> 00:58:02.960 +data in English and you translate it to + +00:58:00.480 --> 00:58:04.839 +the language you want to process like + +00:58:02.960 --> 00:58:06.400 +Swahili and so this is relatively easy for like + +00:58:04.839 --> 00:58:07.920 +question answering or something like + +00:58:06.400 --> 00:58:10.119 +this so what you do is you just take + +00:58:07.920 --> 00:58:13.280 +question answering data you translate + +00:58:10.119 --> 00:58:15.039 +the question you translate the answer + +00:58:13.280 --> 00:58:16.640 +and now you have Swahili question + +00:58:15.039 --> 00:58:18.599 +answering data and sure there may be + +00:58:16.640 --> 00:58:20.760 +like translation errors or something + +00:58:18.599 --> 00:58:22.640 +like this but having some training data + +00:58:20.760 --> 00:58:25.000 +is better than having no training data + +00:58:22.640 --> 00:58:26.799 +and also machine translation systems are + +00:58:25.000 --> 00:58:28.839 +reasonably good nowadays so you can + +00:58:26.799 --> 00:58:32.880 +actually get reasonably high quality + +00:58:28.839 --> 00:58:35.119 +data
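A minimal translate-train sketch; `translate()` here is a hypothetical placeholder standing in for whatever MT model or API you have available, and the one-example dataset is purely illustrative:

```python
def translate(text, src="en", tgt="sw"):
    # Placeholder (identity) -- swap in any reasonably good MT system here.
    return text

english_qa = [{"question": "Where is CMU located?", "answer": "Pittsburgh"}]

# Translate-train: translate both fields of the supervised English data,
# then fine-tune a QA model on the resulting (somewhat noisy) Swahili data.
swahili_qa = [{"question": translate(ex["question"]),
               "answer": translate(ex["answer"])}
              for ex in english_qa]
```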
um the more complex version of this + +00:58:32.880 --> 00:58:37.039 +is what if you can't just translate your + +00:58:35.119 --> 00:58:41.119 +data what if the data you want to translate + +00:58:37.039 --> 00:58:44.720 +is not just um text but it's some sort + +00:58:41.119 --> 00:58:47.960 +of annotations on top of text + +00:58:44.720 --> 00:58:49.760 +and um to take the hardest possible + +00:58:47.960 --> 00:58:51.520 +example the hardest possible example is + +00:58:49.760 --> 00:58:54.520 +something where you have like tags on + +00:58:51.520 --> 00:58:57.119 +every word and + +00:58:54.520 --> 00:58:59.760 +so this could be like for example for + +00:58:57.119 --> 00:59:03.079 +part of speech tagging and if you have + +00:58:59.760 --> 00:59:03.079 +part of speech tagging what you can + +00:59:03.359 --> 00:59:10.319 +do is um you can have English-Swahili + +00:59:06.480 --> 00:59:14.599 +data maybe this is even um you know like + +00:59:10.319 --> 00:59:16.240 +already translated data um it's already + +00:59:14.599 --> 00:59:18.599 +translated data but you have part of + +00:59:16.240 --> 00:59:20.520 +speech tags either manual or automatic + +00:59:18.599 --> 00:59:23.079 +for the English data and then you + +00:59:20.520 --> 00:59:26.280 +basically project the part of speech + +00:59:23.079 --> 00:59:27.599 +tags to the other language and it + +00:59:26.280 --> 00:59:29.440 +doesn't have to be part of speech tags + +00:59:27.599 --> 00:59:31.680 +it could be like named entity labels it + +00:59:29.440 --> 00:59:34.119 +could be you know any other variety of + +00:59:31.680 --> 00:59:36.720 +things like this + +00:59:34.119 --> 00:59:38.720 +and this gets tricky because basically + +00:59:36.720 --> 00:59:40.400 +you need to find alignments between the + +00:59:38.720 --> 00:59:43.559 +words in the + +00:59:40.400 --> 00:59:44.880 +languages and um and then project the + +00:59:43.559 --> 00:59:47.960 +labels and you need to think about + +00:59:44.880 --> 00:59:49.880 +things like okay well if this is a noun + +00:59:47.960 --> 00:59:51.359 +uh how do I turn it into two nouns do I + +00:59:49.880 --> 00:59:54.079 +treat this as a + +00:59:51.359 --> 00:59:55.680 +determiner um like what sorts of rules + +00:59:54.079 --> 00:59:58.599 +do I use to solve these problems and + +00:59:55.680 --> 01:00:00.640 +stuff like that so um you can + +00:59:58.599 --> 01:00:02.280 +either just translate the data if you + +01:00:00.640 --> 01:00:03.520 +don't have these sorts of annotations or + +01:00:02.280 --> 01:00:06.839 +if you have these annotations you need + +01:00:03.520 --> 01:00:08.440 +to do some more tricky stuff + +01:00:06.839 --> 01:00:12.119 +basically
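A tiny sketch of the projection step itself, assuming the word-alignment links (discussed just below) are already given as (source index, target index) pairs; the tag set and the "keep the first tag" rule are illustrative assumptions:

```python
def project_tags(src_tags, alignments):
    """Project token-level tags (POS, NER, ...) from source to target
    through word-alignment links given as (src_index, tgt_index) pairs."""
    tgt_tags = {}
    for s, t in alignments:
        # Crude rule: if several source words align to one target word,
        # keep the first tag; real systems need more careful heuristics
        # (e.g. what to do when one noun becomes two).
        tgt_tags.setdefault(t, src_tags[s])
    return tgt_tags

src_tags = ["PRON", "VERB", "DET", "NOUN"]   # tags for "I ate an apple"
alignments = [(0, 0), (1, 1), (3, 2)]        # the determiner stays unaligned
print(project_tags(src_tags, alignments))    # {0: 'PRON', 1: 'VERB', 2: 'NOUN'}
```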
+01:00:08.440 --> 01:00:13.520 +um actually I'm I'm sorry I forgot to + +01:00:12.119 --> 01:00:15.160 +talk about word alignment and this is + +01:00:13.520 --> 01:00:18.039 +kind of important so I'll just talk + +01:00:15.160 --> 01:00:20.520 +about this um uh + +01:00:18.039 --> 01:00:22.680 +briefly so word alignment basically what + +01:00:20.520 --> 01:00:22.680 +it + +01:00:22.880 --> 01:00:27.880 +does is uh going back + +01:00:27.960 --> 01:00:30.920 +to the example + +01:00:33.760 --> 01:00:38.000 +here word alignment is basically getting + +01:00:36.200 --> 01:00:40.520 +these alignments between the individual + +01:00:38.000 --> 01:00:41.680 +words uh in the sentence and so the + +01:00:40.520 --> 01:00:45.920 +input + +01:00:41.680 --> 01:00:47.440 +is um a sentence in one language and a + +01:00:45.920 --> 01:00:51.640 +sentence in another language and the + +01:00:47.440 --> 01:00:54.720 +output is like 0-0 1-1 + +01:00:51.640 --> 01:00:57.920 +2-2 3-4 + +01:00:54.720 --> 01:00:59.559 +4-3 um kind of like the matching + +01:00:57.920 --> 01:01:02.319 +indices between the + +01:00:59.559 --> 01:01:05.000 +languages and there's two ways to do + +01:01:02.319 --> 01:01:07.200 +this um one way is + +01:01:05.000 --> 01:01:10.119 +unsupervised uh and unsupervised word + +01:01:07.200 --> 01:01:13.520 +alignment just uses co-occurrence + +01:01:10.119 --> 01:01:18.200 +statistics between different languages + +01:01:13.520 --> 01:01:20.079 +um and the most famous method for this + +01:01:18.200 --> 01:01:22.920 +sorry I should have a slide about this I + +01:01:20.079 --> 01:01:24.720 +just uh realized now that I did not + +01:01:22.920 --> 01:01:28.079 +prepare a + +01:01:24.720 --> 01:01:30.880 +slide so the kind + +01:01:28.079 --> 01:01:33.039 +of older uh older version of this that + +01:01:30.880 --> 01:01:36.240 +was used forever um is something called + +01:01:33.039 --> 01:01:40.039 +GIZA++ it uses um co-occurrence + +01:01:36.240 --> 01:01:42.880 +statistics over a large corpus uh to do + +01:01:40.039 --> 01:01:46.319 +alignment um a more modern version of + +01:01:42.880 --> 01:01:49.599 +this is something called + +01:01:46.319 --> 01:01:53.480 +SimAlign and the way this works is you + +01:01:49.599 --> 01:01:55.160 +basically run um multilingual uh BERT + +01:01:53.480 --> 01:01:57.000 +over these different languages and + +01:01:55.160 --> 01:01:59.720 +you find + +01:01:57.000 --> 01:02:01.279 +the um the representations that are the + +01:01:59.720 --> 01:02:02.920 +most similar between the languages and + +01:02:01.279 --> 01:02:04.559 +you treat those as alignment links + +01:02:02.920 --> 01:02:07.599 +between the words and the + +01:02:04.559 --> 01:02:09.720 +languages
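A SimAlign-style sketch under stated assumptions: embed both sentences with multilingual BERT, then keep token pairs that are mutual nearest neighbors by cosine similarity (the "argmax in both directions" rule is one of several matching strategies the real tool supports):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased").eval()

def embed(words):
    # One vector per word: average the hidden states of its subword pieces.
    enc = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    word_ids = enc.word_ids()
    return torch.stack([hidden[[j for j, w in enumerate(word_ids) if w == i]].mean(0)
                        for i in range(len(words))])

def align(src_words, tgt_words):
    s = torch.nn.functional.normalize(embed(src_words), dim=-1)
    t = torch.nn.functional.normalize(embed(tgt_words), dim=-1)
    sim = s @ t.T
    # Keep links that are the best match in both directions.
    return [(i, j)
            for i in range(sim.shape[0]) for j in range(sim.shape[1])
            if sim[i].argmax().item() == j and sim[:, j].argmax().item() == i]

print(align(["the", "cat", "sleeps"], ["le", "chat", "dort"]))
```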
and then there's also um + +01:02:07.599 --> 01:02:11.680 +supervised alignment and supervised + +01:02:09.720 --> 01:02:14.200 +alignment you have a very small amount + +01:02:11.680 --> 01:02:17.720 +of data and you try to train a model so + +01:02:14.200 --> 01:02:20.000 +that the alignment links um uh match + +01:02:17.720 --> 01:02:20.000 +with the + +01:02:21.160 --> 01:02:28.520 +supervised um the supervised ones and um + +01:02:26.240 --> 01:02:30.640 +this is uh this was created by us but I + +01:02:28.520 --> 01:02:32.559 +do think it's the best option from the + +01:02:30.640 --> 01:02:33.920 +point of view of supervised alignment + +01:02:32.559 --> 01:02:37.000 +and basically the way it works is you + +01:02:33.920 --> 01:02:40.960 +have a multilingual BERT model and uh + +01:02:37.000 --> 01:02:43.000 +based on this you uh try to find um like + +01:02:40.960 --> 01:02:45.160 +which links match together but it's also + +01:02:43.000 --> 01:02:47.480 +trained using a contrastive objective + +01:02:45.160 --> 01:02:49.160 +where you try to upweight the uh correct + +01:02:47.480 --> 01:02:51.599 +links and downweight the incorrect links + +01:02:49.160 --> 01:02:53.760 +on supervised data so it tends to be + +01:02:51.599 --> 01:02:56.359 +quite a bit more accurate than the + +01:02:53.760 --> 01:02:58.119 +alternatives um another option if you + +01:02:56.359 --> 01:03:02.799 +want is to ask + +01:02:58.119 --> 01:03:05.359 +GPT-4 and you you could ask GPT-4 to do that + +01:03:02.799 --> 01:03:07.359 +but it's expensive and not a whole lot + +01:03:05.359 --> 01:03:09.440 +better than using one of these like + +01:03:07.359 --> 01:03:11.039 +trained alignment tools so I would + +01:03:09.440 --> 01:03:13.640 +suggest probably using this if you want + +01:03:11.039 --> 01:03:15.079 +to find alignments between words and + +01:03:13.640 --> 01:03:17.640 +this can be useful for a lot of things + +01:03:15.079 --> 01:03:20.799 +it can also be useful for um you know + +01:03:17.640 --> 01:03:24.279 +visualization or a better understanding + +01:03:20.799 --> 01:03:26.079 +of uh you know like how cross-lingual + +01:03:24.279 --> 01:03:28.640 +models are working and stuff like that + +01:03:26.079 --> 01:03:30.920 +so um it comes in handy for a lot of + +01:03:28.640 --> 01:03:30.920 +different + +01:03:31.839 --> 01:03:39.319 +things cool um any questions here yeah + +01:03:36.359 --> 01:03:40.960 +so just looking at the alignment um it + +01:03:39.319 --> 01:03:43.359 +seems to be pretty crazy for like + +01:03:40.960 --> 01:03:45.480 +English and Japanese yeah is there any + +01:03:43.359 --> 01:03:47.680 +work where you try to hop between + +01:03:45.480 --> 01:03:49.599 +languages like from Japanese to Spanish + +01:03:47.680 --> 01:03:52.440 +where you try to pivot between + +01:03:49.599 --> 01:03:55.359 +languages yeah so it's called pivot uh + +01:03:52.440 --> 01:03:59.440 +pivoting and there's a fair amount of + +01:03:55.359 --> 01:03:59.440 +work on this um + +01:04:00.240 --> 01:04:04.119 +there's a bunch of different + +01:04:02.400 --> 01:04:05.960 +ways you can pivot you can pivot for + +01:04:04.119 --> 01:04:07.599 +word alignment and pivoting is + +01:04:05.960 --> 01:04:09.520 +particularly useful when you have a low + +01:04:07.599 --> 01:04:10.920 +resource language that's very similar to + +01:04:09.520 --> 01:04:15.480 +a high resource language where you have + +01:04:10.920 --> 01:04:19.160 +lots of data um the other thing is like + +01:04:15.480 --> 01:04:20.039 +pivot translation is a thing um and so + +01:04:19.160 --> 01:04:22.640 +for + +01:04:20.039 --> 01:04:24.720 +example I don't know if Google does this + +01:04:22.640 --> 01:04:27.200 +anymore but for a long time Google would + +01:04:24.720 --> 01:04:29.599 +actually be translating through English + +01:04:27.200 --> 01:04:31.160 +to get into other languages and it was + +01:04:29.599 --> 01:04:32.640 +all a black box but you could tell they + +01:04:31.160 --> 01:04:34.240 +were doing it because you would suddenly + +01:04:32.640 --> 01:04:36.359 +get English words when you translate + +01:04:34.240 --> 01:04:40.680 +from Chinese to Arabic or something like + +01:04:36.359 --> 01:04:42.359 +this um and so like uh that's also done + +01:04:40.680 --> 01:04:45.039 +for translation in other multilingual + +01:04:42.359 --> 01:04:47.559 +tasks too um another thing that I should + +01:04:45.039 --> 01:04:51.559 +mention is um I talked about translate- + +01:04:47.559 --> 01:04:54.680 +train there's also translate- + +01:04:51.559 --> 01:04:57.359 +test um so translate-train basically you + +01:04:54.680 --> 01:05:01.720 +translate your training data + +01:04:57.359 --> 01:05:03.599 +um and translate-test you translate um + +01:05:01.720 --> 01:05:05.200 +at test time so basically like let's say + +01:05:03.599 --> 01:05:07.920 +you want to answer questions that were + +01:05:05.200 --> 01:05:09.480 +posed in Japanese or something um you + +01:05:07.920 --> 01:05:11.000 +translate the Japanese questions into + +01:05:09.480 --> 01:05:13.200 +English and answer the question using an + +01:05:11.000 --> 01:05:16.119 +English QA system and then translate the + +01:05:13.200 --> 01:05:18.480 +answer back into Japanese um and that's + +01:05:16.119 --> 01:05:20.359 +good to an extent like it's usually + +01:05:18.480 --> 01:05:21.839 +better than a bad multilingual system + +01:05:20.359 --> 01:05:23.319 +but worse than a good multilingual + +01:05:21.839 --> 01:05:24.760 +system if you put like a lot of effort + +01:05:23.319 --> 01:05:27.640 +into building a strong multilingual + +01:05:24.760 --> 01:05:27.640 +system so + +01:05:27.839 --> 01:05:33.240 +although um maybe for some of the really + +01:05:31.000 --> 01:05:34.640 +like difficult tasks like reasoning and + +01:05:33.240 --> 01:05:36.640 +stuff like that it's better to reason in + +01:05:34.640 --> 01:05:38.720 +English like I talked about multilingual + +01:05:36.640 --> 01:05:41.000 +um Chain of Thought + +01:05:38.720 --> 01:05:45.400 +reasoning um
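A hedged sketch of the translate-test pipeline; both helper functions are hypothetical placeholders for whatever MT system and English-only QA model you actually have:

```python
def translate(text, src, tgt):
    # Placeholder -- any MT system.
    return text

def english_qa(question):
    # Placeholder -- any English-only QA model.
    return "Pittsburgh"

def answer_japanese(question_ja):
    # Translate the query in, answer in English, translate the answer out.
    question_en = translate(question_ja, src="ja", tgt="en")
    answer_en = english_qa(question_en)
    return translate(answer_en, src="en", tgt="ja")
```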
+01:05:41.000 --> 01:05:47.680 +cool so yeah another thing is um if + +01:05:45.400 --> 01:05:50.319 +you're translating from if you're + +01:05:47.680 --> 01:05:52.359 +transferring from another language um + +01:05:50.319 --> 01:05:54.440 +which language to use as I mentioned it + +01:05:52.359 --> 01:05:56.559 +should be similar to the target + +01:05:54.440 --> 01:05:59.480 +language and a data-rich language + +01:05:56.559 --> 01:06:01.319 +um we actually have a study uh where we + +01:05:59.480 --> 01:06:05.839 +tried to figure out what variety of + +01:06:01.319 --> 01:06:08.160 +similarity is the best for trans- uh + +01:06:05.839 --> 01:06:11.319 +transferring so like let's say you want + +01:06:08.160 --> 01:06:14.000 +to train a good model for um you + +01:06:11.319 --> 01:06:15.839 +know Hindi it's hard to come up with + +01:06:14.000 --> 01:06:17.359 +which language you should be using one + +01:06:15.839 --> 01:06:20.319 +of the interesting things we found in + +01:06:17.359 --> 01:06:22.319 +this paper is um we we actually trained + +01:06:20.319 --> 01:06:24.319 +a model to try to predict which language + +01:06:22.319 --> 01:06:27.240 +would be the best to transfer from but + +01:06:24.319 --> 01:06:31.160 +the most useful feature overall was how + +01:06:27.240 --> 01:06:32.839 +close uh the languages are on the globe + +01:06:31.160 --> 01:06:34.559 +um which is kind of weird right just + +01:06:32.839 --> 01:06:35.920 +because languages are close on the globe + +01:06:34.559 --> 01:06:39.160 +doesn't mean they're similar like you + +01:06:35.920 --> 01:06:43.880 +can come up with um Basque and Spanish which + +01:06:39.160 --> 01:06:46.119 +are very very different in every way um + +01:06:43.880 --> 01:06:47.880 +but languages that are close on the + +01:06:46.119 --> 01:06:50.079 +globe tend to be similar with respect to + +01:06:47.880 --> 01:06:51.359 +both vocabulary and syntax on average + +01:06:50.079 --> 01:06:53.240 +and so because of that it's a pretty + +01:06:51.359 --> 01:06:57.319 +good indicator that a language would be + +01:06:53.240 --> 01:06:57.319 +a good transfer language
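A toy illustration of the "distance on the globe" feature: rank candidate transfer languages by great-circle distance between centroid coordinates of where each language is spoken. The coordinates are made up for illustration, and as the lecture notes, geographic closeness is only a proxy (Basque sits right next to Spanish but is unrelated), so real systems combine this with other features:

```python
import math

# Assumed (lat, lon) centroids -- illustrative, not real survey data.
COORDS = {"hindi": (25.0, 77.0), "marathi": (19.0, 76.0),
          "spanish": (40.0, -4.0), "basque": (43.0, -2.5)}

def geo_km(a, b):
    # Haversine great-circle distance between two language centroids.
    (la1, lo1), (la2, lo2) = COORDS[a], COORDS[b]
    p1, p2 = math.radians(la1), math.radians(la2)
    dp, dl = math.radians(la2 - la1), math.radians(lo2 - lo1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

target = "marathi"
print(sorted(["hindi", "spanish", "basque"], key=lambda c: geo_km(target, c)))
```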
+01:06:58.720 --> 01:07:01.880 +um if languages don't share the same + +01:07:00.240 --> 01:07:04.880 +script actually I have an example of + +01:07:01.880 --> 01:07:08.400 +pivoting here uh where we can pivot uh + +01:07:04.880 --> 01:07:10.760 +from uh Marathi into Hindi and then into + +01:07:08.400 --> 01:07:14.000 +another language for entity linking across + +01:07:10.760 --> 01:07:16.240 +languages and so um we demonstrated that + +01:07:14.000 --> 01:07:19.079 +you could pivot um another thing that + +01:07:16.240 --> 01:07:20.960 +you can do like um as we mentioned in + +01:07:19.079 --> 01:07:24.520 +the last class or Lindia mentioned in + +01:07:20.960 --> 01:07:26.559 +the last class there's the idea of IPA + +01:07:24.520 --> 01:07:28.920 +um the International Phonetic Alphabet + +01:07:26.559 --> 01:07:32.319 +which kind of gives you an idea of how + +01:07:28.920 --> 01:07:34.640 +things are pronounced and in some cases + +01:07:32.319 --> 01:07:36.880 +languages might have a different script + +01:07:34.640 --> 01:07:37.920 +but if you normalize them into IPA you + +01:07:36.880 --> 01:07:39.359 +could normalize them into the + +01:07:37.920 --> 01:07:41.640 +pronunciation and actually things are + +01:07:39.359 --> 01:07:42.520 +pronounced rather similarly in a lot of + +01:07:41.640 --> 01:07:44.640 +related + +01:07:42.520 --> 01:07:46.079 +languages one thing you need to be + +01:07:44.640 --> 01:07:48.039 +careful about though is we actually + +01:07:46.079 --> 01:07:50.359 +found this hurt accuracy in a lot of + +01:07:48.039 --> 01:07:51.240 +languages uh to give an example English + +01:07:50.359 --> 01:07:55.039 +and + +01:07:51.240 --> 01:07:55.960 +French so if anybody has studied French + +01:07:55.039 --> 01:07:58.640 +as a + +01:07:55.960 --> 01:08:00.599 +second-language speaker after studying + +01:07:58.640 --> 01:08:02.520 +English you know that even though you + +01:08:00.599 --> 01:08:04.160 +can read the characters you have no idea + +01:08:02.520 --> 01:08:05.359 +how they're pronounced if you're not a + +01:08:04.160 --> 01:08:07.160 +very good French speaker and so + +01:08:05.359 --> 01:08:09.960 +basically um the way it's written is + +01:08:07.160 --> 01:08:11.440 +closer than the way it's pronounced so + +01:08:09.960 --> 01:08:13.119 +you can't just normalize everything into + +01:08:11.440 --> 01:08:15.880 +pronunciation and just hope it works so + +01:08:13.119 --> 01:08:20.080 +um that's another technique that you can + +01:08:15.880 --> 01:08:22.400 +use um I'd also like to talk a little + +01:08:20.080 --> 01:08:24.239 +bit about how we share parameters + +01:08:22.400 --> 01:08:26.920 +between languages + +01:08:24.239 --> 01:08:29.400 +and um there is a bunch of different + +01:08:26.920 --> 01:08:31.960 +ways we can do this um one is sharing + +01:08:29.400 --> 01:08:34.880 +all parameters so uh just have a single + +01:08:31.960 --> 01:08:36.719 +model where all of the parameters + +01:08:34.880 --> 01:08:37.759 +are the + +01:08:36.719 --> 01:08:39.319 +same + +01:08:37.759 --> 01:08:42.080 +um previously there were methods that + +01:08:39.319 --> 01:08:44.279 +shared only like an encoder or an + +01:08:42.080 --> 01:08:45.359 +attention + +01:08:44.279 --> 01:08:48.239 +mechanism um also sharing some matrices + +01:08:45.359 --> 01:08:49.719 +of the Transformer + +01:08:48.239 --> 01:08:53.120 +model um using a parameter generator to + +01:08:49.719 --> 01:08:56.480 +generate parameters per language this is + +01:08:53.120 --> 01:08:58.719 +I I like this paper uh it's one of my + +01:08:56.480 --> 01:09:00.239 +papers I like this paper but it's not + +01:08:58.719 --> 01:09:02.080 +super practical but basically we used a + +01:09:00.239 --> 01:09:03.520 +neural network to generate the + +01:09:02.080 --> 01:09:05.880 +parameters of the multilingual model um + +01:09:03.520 --> 01:09:07.600 +and we fed in things like information + +01:09:05.880 --> 01:09:09.359 +about what type of language it was and + +01:09:07.600 --> 01:09:11.960 +stuff like that um so kind of ambitious + +01:09:09.359 --> 01:09:14.719 +but uh you know it requires a lot of + +01:09:11.960 --> 01:09:16.400 +parameters so it's not super practical + +01:09:14.719 --> 01:09:18.719 +but the more um common thing that people + +01:09:16.400 --> 01:09:21.319 +are using now are uh things like + +01:09:18.719 --> 01:09:23.880 +language experts or + +01:09:21.319 --> 01:09:26.159 +adapters and so + +01:09:23.880 --> 01:09:29.120 +the idea here about language experts is + +01:09:26.159 --> 01:09:32.640 +basically um it's a layer that you + +01:09:29.120 --> 01:09:37.440 +insert into a particular part of the uh + +01:09:32.640 --> 01:09:42.120 +into a particular part of the model and + +01:09:37.440 --> 01:09:44.640 +this is a kind of adapter-style + +01:09:42.120 --> 01:09:47.199 +parameter-efficient training layer where + +01:09:44.640 --> 01:09:51.279 +you uh downweight and upweight um uh + +01:09:47.199 --> 01:09:54.840 +sorry downsample and upsample + +01:09:51.279 --> 01:09:57.239 +so it's kind of like LoRA or an adapter + +01:09:54.840 --> 01:09:58.960 +or something like that so few parameters + +01:09:57.239 --> 01:10:01.120 +for the language and then they also have + +01:09:58.960 --> 01:10:03.360 +a task-based adapter so based on the + +01:10:01.120 --> 01:10:05.360 +task that you're solving they add an + +01:10:03.360 --> 01:10:06.520 +adapter here
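A minimal sketch of the bottleneck-adapter idea just described (down-project, nonlinearity, up-project, plus a residual connection); the dimensions and the language set are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: a small per-language set of parameters that can
    be inserted into a layer while the shared Transformer body stays frozen."""
    def __init__(self, d_model=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)  # downsample
        self.up = nn.Linear(bottleneck, d_model)    # upsample

    def forward(self, hidden):
        # Residual connection around the bottleneck transformation.
        return hidden + self.up(torch.relu(self.down(hidden)))

# One adapter per language, routed by the language of the current batch.
adapters = nn.ModuleDict({lang: Adapter() for lang in ["en", "sw", "hi"]})
h = torch.randn(1, 10, 768)   # hidden states from some frozen layer
h = adapters["sw"](h)         # route through the Swahili adapter
```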
+01:10:06.520 --> 01:10:11.080 +um and they also demonstrated that + +01:10:08.960 --> 01:10:15.000 +you can pre-train models with language- + +01:10:11.080 --> 01:10:17.679 +specific parameters included um in them + +01:10:15.000 --> 01:10:20.480 +uh also from the point of view of an + +01:10:17.679 --> 01:10:22.320 +adapter um we have done a similar thing + +01:10:20.480 --> 01:10:26.199 +for summarization where we compared + +01:10:22.320 --> 01:10:29.199 +prefix tuning and um prefix tuning and + +01:10:26.199 --> 01:10:30.719 +LoRA and uh we found that you could do + +01:10:29.199 --> 01:10:32.440 +a similar thing where you train a single + +01:10:30.719 --> 01:10:35.239 +model but each language has its own + +01:10:32.440 --> 01:10:37.040 +prefix tuning parameters or own LoRA + +01:10:35.239 --> 01:10:38.640 +parameters and that can be pretty + +01:10:37.040 --> 01:10:40.760 +effective at improving the capacity of + +01:10:38.640 --> 01:10:40.760 +the + +01:10:41.280 --> 01:10:44.280 +model + +01:10:44.880 --> 01:10:51.000 +um yeah I have very little time to cover + +01:10:47.880 --> 01:10:53.239 +the last slides I guess so um I I'll + +01:10:51.000 --> 01:10:56.440 +just very quickly mention um what I was + +01:10:53.239 --> 01:10:58.800 +going to mention so um uh another thing + +01:10:56.440 --> 01:11:00.440 +you can do is create new data um one way + +01:10:58.800 --> 01:11:02.760 +you can create new data is just ask + +01:11:00.440 --> 01:11:04.440 +people to annotate data for you um but + +01:11:02.760 --> 01:11:06.199 +the problem is uh for low resource + +01:11:04.440 --> 01:11:09.080 +languages it's often hard to get lots of + +01:11:06.199 --> 01:11:11.640 +data so one thing we do is leverage + +01:11:09.080 --> 01:11:13.719 +something called active learning um the + +01:11:11.640 --> 01:11:16.840 +basic idea behind active + +01:11:13.719 --> 01:11:19.800 +learning is that you use labeled data um + +01:11:16.840 --> 01:11:21.360 +you do some training you get a model um + +01:11:19.800 --> 01:11:24.719 +then you apply that model to lots of + +01:11:21.360 --> 01:11:27.480 +unlabeled data and select some data uh + +01:11:24.719 --> 01:11:29.760 +that the model is highly uncertain about + +01:11:27.480 --> 01:11:31.679 +and then throw that to annotation and so + +01:11:29.760 --> 01:11:32.960 +basically what this does is this um + +01:11:31.679 --> 01:11:34.719 +allows you to select data where the + +01:11:32.960 --> 01:11:37.159 +current model is not very confident or + +01:11:34.719 --> 01:11:38.560 +not very good or something like this and + +01:11:37.159 --> 01:11:40.159 +this can be really helpful it's not + +01:11:38.560 --> 01:11:40.920 +limited to multilingual learning but + +01:11:40.159 --> 01:11:43.520 +it's + +01:11:40.920 --> 01:11:45.679 +specifically uh quite helpful in cases + +01:11:43.520 --> 01:11:46.960 +where you uh can annotate only a small + +01:11:45.679 --> 01:11:51.040 +amount of + +01:11:46.960 --> 01:11:53.880 +data
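A minimal uncertainty-sampling sketch of the select-for-annotation step; the toy pool, the model's predicted distributions, and the entropy criterion are illustrative assumptions (other uncertainty measures like margin or least-confidence work similarly):

```python
import math

def entropy(dist):
    return -sum(p * math.log(p) for p in dist if p > 0)

def select_for_annotation(unlabeled, model_probs, k=100):
    """Pick the k unlabeled examples whose predicted distribution has the
    highest entropy, i.e. where the current model is least certain."""
    ranked = sorted(zip(unlabeled, model_probs),
                    key=lambda pair: entropy(pair[1]), reverse=True)
    return [example for example, _ in ranked[:k]]

pool = ["clearly positive review", "ambiguous mixed review"]
probs = [[0.98, 0.02], [0.55, 0.45]]            # model predictions on the pool
print(select_for_annotation(pool, probs, k=1))  # picks the ambiguous one
```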
um and so the basic idea is um + +01:11:51.040 --> 01:11:56.120 +illustrated here for for binary + +01:11:53.880 --> 01:11:58.840 +classification where if you only + +01:11:56.120 --> 01:12:01.320 +annotate um data randomly you might end + +01:11:58.840 --> 01:12:03.520 +up getting data that doesn't tell you uh + +01:12:01.320 --> 01:12:06.000 +very well about the specific decision + +01:12:03.520 --> 01:12:08.480 +boundary and as a result a model trained + +01:12:06.000 --> 01:12:10.719 +on the few data points that you randomly + +01:12:08.480 --> 01:12:13.679 +select could be inaccurate whereas + +01:12:10.719 --> 01:12:15.800 +active learning um kind of finds data + +01:12:13.679 --> 01:12:18.040 +directly on the decision boundary here + +01:12:15.800 --> 01:12:21.320 +and that allows you to find more uh you + +01:12:18.040 --> 01:12:24.159 +know um effective + +01:12:21.320 --> 01:12:27.040 +samples there's two fundamental ideas + +01:12:24.159 --> 01:12:29.120 +uncertainty and representativeness and + +01:12:27.040 --> 01:12:31.280 +you want to come up with um you + +01:12:29.120 --> 01:12:34.760 +want to come up with a method that selects + +01:12:31.280 --> 01:12:37.400 +data where the model is uncertain but + +01:12:34.760 --> 01:12:39.840 +representative and so actually you can + +01:12:37.400 --> 01:12:42.159 +select data only for representativeness + +01:12:39.840 --> 01:12:44.360 +and it's relatively useful + +01:12:42.159 --> 01:12:46.360 +so you can select only data that has + +01:12:44.360 --> 01:12:48.880 +lots of high frequency phrases for + +01:12:46.360 --> 01:12:50.120 +example uh for machine translation and + +01:12:48.880 --> 01:12:51.960 +that will allow you to get better + +01:12:50.120 --> 01:12:53.600 +coverage of high frequency + +01:12:51.960 --> 01:12:57.199 +phrases + +01:12:53.600 --> 01:12:58.560 +um but uncertainty is also good + +01:12:57.199 --> 01:13:01.120 +because it helps you find like the + +01:12:58.560 --> 01:13:02.840 +model's current blind spots the problem + +01:13:01.120 --> 01:13:04.880 +with only uncertainty is it gives you + +01:13:02.840 --> 01:13:06.760 +lots of garbage it'll like for + +01:13:04.880 --> 01:13:09.560 +example for machine translation it will + +01:13:06.760 --> 01:13:11.960 +select things with only emojis or + +01:13:09.560 --> 01:13:14.239 +something like that and uh you know + +01:13:11.960 --> 01:13:17.679 +that's not very useful to train your + +01:13:14.239 --> 01:13:20.800 +model so um I have more examples in the + +01:13:17.679 --> 01:13:22.679 +slides um + +01:13:20.800 --> 01:13:25.280 +but for uncertainty you can use model + +01:13:22.679 --> 01:13:26.600 +confidence for representativeness basically + +01:13:25.280 --> 01:13:29.400 +what you do is you try to get good + +01:13:26.600 --> 01:13:31.280 +coverage of the embedding space of uh + +01:13:29.400 --> 01:13:34.440 +all of the embeddings that you have for + +01:13:31.280 --> 01:13:36.400 +models and um you can also do this + +01:13:34.440 --> 01:13:38.440 +multilingually combined together with + +01:13:36.400 --> 01:13:40.360 +cross-lingual transfer and I have a few + +01:13:38.440 --> 01:13:42.400 +examples of how you can do that in uh in + +01:13:40.360 --> 01:13:47.000 +the slides there so um diff --git a/CMU Advanced NLP 2024 (3) Language Modeling/CMU Advanced NLP 2024 (3) Language Modeling.mp4 b/CMU Advanced NLP 2024 (3) Language Modeling/CMU Advanced NLP 2024 (3) Language Modeling.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..20ca7fd599bc2c29f32e2d5540f6431902f03da0 --- /dev/null +++ b/CMU Advanced NLP 2024 (3) Language Modeling/CMU Advanced NLP 2024 (3) Language Modeling.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac044f8a648713e7d8f0857336d4a0f7ff026ea16003d21d0234d316d3f57301 +size 75139287 diff --git a/CMU Advanced NLP 2024 (3) Language
Modeling/metadata.json b/CMU Advanced NLP 2024 (3) Language Modeling/metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..24159057a18e371c7af2f270ecd5c6e79abeec9e --- /dev/null +++ b/CMU Advanced NLP 2024 (3) Language Modeling/metadata.json @@ -0,0 +1,4 @@ +{ + "url": "https://www.youtube.com/watch?v=69EAJOwV3Es", + "title": "CMU Advanced NLP 2024 (3) Language Modeling" +} \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (3) Language Modeling/transcript.srt b/CMU Advanced NLP 2024 (3) Language Modeling/transcript.srt new file mode 100644 index 0000000000000000000000000000000000000000..844a72b0cb728620160f250cc5d6ef0b60a843c0 --- /dev/null +++ b/CMU Advanced NLP 2024 (3) Language Modeling/transcript.srt @@ -0,0 +1,6987 @@ +1 +00:00:00,399 --> 00:00:04,120 +so this time I'm going to be talking +2 +00:00:02,080 --> 00:00:05,799 +about language modeling uh obviously +3 +00:00:04,120 --> 00:00:07,240 +language modeling is a big topic and I'm +4 +00:00:05,799 --> 00:00:09,880 +not going to be able to cover it all in +5 +00:00:07,240 --> 00:00:11,320 +one class but this is kind of the basics +6 +00:00:09,880 --> 00:00:13,080 +of uh what does it mean to build a +7 +00:00:11,320 --> 00:00:15,320 +language model what is a language model +8 +00:00:13,080 --> 00:00:18,439 +how do we evaluate language models and +9 +00:00:15,320 --> 00:00:19,920 +other stuff like that and around the end +10 +00:00:18,439 --> 00:00:21,320 +I'm going to talk a little bit about +11 +00:00:19,920 --> 00:00:23,039 +efficiently implementing things in +12 +00:00:21,320 --> 00:00:25,080 +neural networks it's not directly +13 +00:00:23,039 --> 00:00:27,760 +related to language models but it's very +14 +00:00:25,080 --> 00:00:31,200 +important to know how to do uh to solve +15 +00:00:27,760 --> 00:00:34,200 +your assignments so I'll cover both +16 +00:00:31,200 --> 00:00:34,200 +of these +17 +00:00:34,239 --> 00:00:38,480 +cool okay so the first thing I'd like to +18 +00:00:36,760 --> 00:00:41,239 +talk about is generative versus +19 +00:00:38,480 --> 00:00:43,000 +discriminative models and the reason why +20 +00:00:41,239 --> 00:00:45,280 +is up until now we've been talking about +21 +00:00:43,000 --> 00:00:47,559 +discriminative models and these are +22 +00:00:45,280 --> 00:00:49,640 +models uh that are mainly designed to +23 +00:00:47,559 --> 00:00:53,800 +calculate the probability of a latent +24 +00:00:49,640 --> 00:00:56,039 +trait uh given the data and so this is +25 +00:00:53,800 --> 00:00:58,800 +uh P of Y given X where Y is the latent +26 +00:00:56,039 --> 00:01:00,800 +trait we want to calculate and X is uh +27 +00:00:58,800 --> 00:01:04,760 +the input data that we're calculating it +28 +00:01:00,800 --> 00:01:07,799 +over so just review from last class what +29 +00:01:04,760 --> 00:01:10,240 +was X from last class from the example +30 +00:01:07,799 --> 00:01:10,240 +in last +31 +00:01:11,360 --> 00:01:15,880 +class +32 +00:01:13,040 --> 00:01:18,280 +anybody yeah some text yeah and then +33 +00:01:15,880 --> 00:01:18,280 +what was +34 +00:01:20,400 --> 00:01:26,119 +Y it shouldn't be too +35 +00:01:23,799 --> 00:01:27,920 +hard yeah it was a category or a +36 +00:01:26,119 --> 00:01:31,680 +sentiment label precisely in the +37 +00:01:27,920 --> 00:01:33,399 +sentiment analysis task so so um a +38 +00:01:31,680 --> 00:01:35,560 +generative model on the other hand is a +39 +00:01:33,399 --> 00:01:38,840 
+model that calculates the probability of +40 +00:01:35,560 --> 00:01:40,880 +data itself and is not specifically +41 +00:01:38,840 --> 00:01:43,439 +conditional and there's a couple of +42 +00:01:40,880 --> 00:01:45,439 +varieties um this isn't like super +43 +00:01:43,439 --> 00:01:48,280 +standard terminology I just uh wrote it +44 +00:01:45,439 --> 00:01:51,520 +myself but here we have a standalone +45 +00:01:48,280 --> 00:01:54,360 +probability of P of X and we can also +46 +00:01:51,520 --> 00:01:58,000 +calculate the joint probability P of X +47 +00:01:54,360 --> 00:01:58,000 +and Y +48 +00:01:58,159 --> 00:02:02,880 +so probabilistic language models +49 +00:02:01,079 --> 00:02:06,640 +basically what they do is they calculate +50 +00:02:02,880 --> 00:02:08,560 +this uh probability usually uh we think +51 +00:02:06,640 --> 00:02:10,360 +of it as a standalone probability of P +52 +00:02:08,560 --> 00:02:11,800 +of X where X is something like a +53 +00:02:10,360 --> 00:02:15,160 +sentence or a +54 +00:02:11,800 --> 00:02:16,920 +document and it's a generative model +55 +00:02:15,160 --> 00:02:19,640 +that calculates the probability of +56 +00:02:16,920 --> 00:02:22,360 +language recently the definition of +57 +00:02:19,640 --> 00:02:23,959 +language model has expanded a little bit +58 +00:02:22,360 --> 00:02:26,160 +so now +59 +00:02:23,959 --> 00:02:28,640 +um people also call things that +60 +00:02:26,160 --> 00:02:31,080 +calculate the probability of text and +61 +00:02:28,640 --> 00:02:35,200 +images like multimodal language +62 +00:02:31,080 --> 00:02:38,160 +models or uh what are some of the other +63 +00:02:35,200 --> 00:02:40,480 +ones yeah I think that's the main +64 +00:02:38,160 --> 00:02:42,840 +exception to this rule usually +65 +00:02:40,480 --> 00:02:45,080 +it's calculating either over text +66 +00:02:42,840 --> 00:02:47,680 +or over text in some multimodal data but +67 +00:02:45,080 --> 00:02:47,680 +for now we're going to +68 +00:02:48,800 --> 00:02:54,200 +consider text +69 +00:02:50,319 --> 00:02:56,440 +um then there's kind of two fundamental +70 +00:02:54,200 --> 00:02:58,159 +operations that we perform with LMs +71 +00:02:56,440 --> 00:03:00,519 +almost everything else we do with LMs +72 +00:02:58,159 --> 00:03:03,640 +can be considered like one of these two +73 +00:03:00,519 --> 00:03:05,319 +types of things the first thing is +74 +00:03:03,640 --> 00:03:06,440 +scoring sentences or calculating the +75 +00:03:05,319 --> 00:03:09,599 +probability of +76 +00:03:06,440 --> 00:03:12,280 +sentences and this +77 +00:03:09,599 --> 00:03:14,720 +is uh for example if we calculate the +78 +00:03:12,280 --> 00:03:16,400 +probability of Jane went to the store uh +79 +00:03:14,720 --> 00:03:19,000 +this would have a high probability +80 +00:03:16,400 --> 00:03:20,879 +ideally um and if we have this kind of +81 +00:03:19,000 --> 00:03:23,400 +word salad like this this would be given +82 +00:03:20,879 --> 00:03:26,080 +a low probability uh according to an +83 +00:03:23,400 --> 00:03:28,000 +English language model if we had a +84 +00:03:26,080 --> 00:03:30,000 +Chinese language model ideally it would +85 +00:03:28,000 --> 00:03:31,319 +also probably give low probability to the first +86 +00:03:30,000 --> 00:03:32,879 +sentence too because it's a language +87 +00:03:31,319 --> 00:03:35,000 +model of natural Chinese and not of +natural English
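A minimal sketch of the scoring operation with an off-the-shelf autoregressive LM; GPT-2 is an arbitrary small choice, and the two test sentences just mirror the lecture's example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_logprob(text):
    # Total log-probability of the text under the language model.
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL over predicted tokens
    return -loss.item() * (ids.shape[1] - 1)

print(sentence_logprob("Jane went to the store."))  # comparatively high
print(sentence_logprob("store to Jane went the."))  # comparatively low
```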
so there's also +88 +00:03:32,879 --> 00:03:36,200 +different types of language models +89 +00:03:35,000 --> 00:03:37,360 +depending on the type of data you train +90 +00:03:36,200 --> 00:03:38,400 +on +91 +00:03:37,360 --> 00:03:41,360 +um another thing you can do is generate +92 +00:03:38,400 --> 00:03:43,599 +sentences and we'll talk more about the +93 +00:03:41,360 --> 00:03:45,239 +different methods for generating +94 +00:03:43,599 --> 00:03:48,280 +sentences but typically they fall into +95 +00:03:45,239 --> 00:03:50,319 +one of two categories one is sampling +96 +00:03:48,280 --> 00:03:51,799 +like this where you try to sample a +97 +00:03:50,319 --> 00:03:53,200 +sentence from the probability +98 +00:03:51,799 --> 00:03:55,480 +distribution of the language model +99 +00:03:53,200 --> 00:03:57,280 +possibly with some modifications to the +100 +00:03:55,480 --> 00:03:58,360 +probability +101 +00:03:57,280 --> 00:04:00,760 +distribution um the other thing which I +102 +00:03:58,360 --> 00:04:03,079 +didn't write on the slide is uh finding +103 +00:04:00,760 --> 00:04:04,760 +the highest scoring sentence according +104 +00:04:03,079 --> 00:04:07,439 +to the language model um and we do both +105 +00:04:04,760 --> 00:04:09,760 +of those +106 +00:04:07,439 --> 00:04:09,760 +too +107 +00:04:10,560 --> 00:04:17,600 +so more concretely how can we apply +108 +00:04:15,199 --> 00:04:21,199 +these these can be applied to answer +109 +00:04:17,600 --> 00:04:23,840 +questions so for example um if we have a +110 +00:04:21,199 --> 00:04:27,240 +multiple choice question we can score +111 +00:04:23,840 --> 00:04:30,639 +possible multiple choice answers and uh +112 +00:04:27,240 --> 00:04:32,880 +the way we do this is we calculate +113 +00:04:30,639 --> 00:04:35,440 +we first +114 +00:04:32,880 --> 00:04:38,440 +take uh like we have +115 +00:04:35,440 --> 00:04:38,440 +like +116 +00:04:38,560 --> 00:04:43,919 +um +117 +00:04:40,960 --> 00:04:46,919 +where is +118 +00:04:43,919 --> 00:04:46,919 +CMU +119 +00:04:47,560 --> 00:04:51,600 +located um +120 +00:04:51,960 --> 00:04:59,560 +that's and actually maybe prepend this +121 +00:04:54,560 --> 00:05:01,360 +all again with an A here and then we say X +122 +00:04:59,560 --> 00:05:05,800 +X1 is equal to +123 +00:05:01,360 --> 00:05:07,520 +this and then we have X2 which is +124 +00:05:05,800 --> 00:05:09,720 +Q +125 +00:05:07,520 --> 00:05:12,479 +where is +126 +00:05:09,720 --> 00:05:14,120 +CMU +127 +00:05:12,479 --> 00:05:18,080 +located +128 +00:05:14,120 --> 00:05:19,720 +A um what's something +129 +00:05:18,080 --> 00:05:21,960 +plausible +130 +00:05:19,720 --> 00:05:24,560 +uh what was +131 +00:05:21,960 --> 00:05:26,319 +it okay now now you're going to make it +132 +00:05:24,560 --> 00:05:27,960 +tricky and make me talk about when we +133 +00:05:26,319 --> 00:05:29,960 +have multiple right answers and how we +134 +00:05:27,960 --> 00:05:31,759 +evaluate and stuff let let's ignore that +135 +00:05:29,960 --> 00:05:35,080 +for now let's say New +136 +00:05:31,759 --> 00:05:37,199 +York it's not located in New York is +137 +00:05:35,080 --> 00:05:40,560 +it +138 +00:05:37,199 --> 00:05:40,560 +okay let's say +139 +00:05:40,960 --> 00:05:45,199 +Birmingham hopefully there's no CMU +140 +00:05:43,199 --> 00:05:47,120 +affiliate in Birmingham I think we're +141 +00:05:45,199 --> 00:05:49,000 +we're pretty safe so um and then you would +142 +00:05:47,120 --> 00:05:53,880 +just calculate the probability of X1 and +143 +00:05:49,000 --> 00:05:56,440 +the probability of X2 X3 X4 etc. and um +144 +00:05:53,880 --> 00:06:01,479 +then pick the highest scoring one
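Reusing the `sentence_logprob()` helper from the earlier sketch, the multiple-choice recipe just described is a few lines; the prompt format is an assumption about how you verbalize the question:

```python
# Reusing sentence_logprob() from the sketch above.
prompt = "Q: where is CMU located? A:"
candidates = ["Pittsburgh", "New York", "Birmingham"]

# Note: raw totals favor shorter answers, so length-normalizing the
# scores is a common tweak when candidate lengths differ a lot.
scores = {c: sentence_logprob(f"{prompt} {c}") for c in candidates}
print(max(scores, key=scores.get))
```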
and +145 +00:05:56,440 --> 00:06:01,479 +actually um there's a famous +146 +00:06:03,199 --> 00:06:07,440 +there's a famous uh leaderboard for +147 +00:06:05,840 --> 00:06:08,759 +language models that probably a lot of +148 +00:06:07,440 --> 00:06:09,759 +people know about it's called the Open +149 +00:06:08,759 --> 00:06:13,120 +LLM +150 +00:06:09,759 --> 00:06:15,639 +Leaderboard and a lot of these tasks +151 +00:06:13,120 --> 00:06:17,319 +here basically correspond to doing +152 +00:06:15,639 --> 00:06:21,000 +something like that like HellaSwag is +153 +00:06:17,319 --> 00:06:22,599 +kind of a multiple choice uh is a +154 +00:06:21,000 --> 00:06:24,160 +multiple choice question answering thing +155 +00:06:22,599 --> 00:06:27,880 +about common sense where they calculate +156 +00:06:24,160 --> 00:06:30,280 +it by scoring uh scoring the +157 +00:06:27,880 --> 00:06:31,880 +outputs so that's a very common way to +158 +00:06:30,280 --> 00:06:35,000 +use language +159 +00:06:31,880 --> 00:06:36,960 +models um another thing is generating a +160 +00:06:35,000 --> 00:06:40,080 +continuation of a question prompt so +161 +00:06:36,960 --> 00:06:42,639 +basically this is when you uh +162 +00:06:40,080 --> 00:06:44,759 +sample and so what you would do is you +163 +00:06:42,639 --> 00:06:48,440 +would prompt the +164 +00:06:44,759 --> 00:06:50,560 +model with this uh X here and then you +165 +00:06:48,440 --> 00:06:53,800 +would ask it to generate either the most +166 +00:06:50,560 --> 00:06:56,400 +likely uh completion or generate um +167 +00:06:53,800 --> 00:06:58,960 +sample multiple completions to get the +168 +00:06:56,400 --> 00:07:00,720 +answer so this is very common uh people +169 +00:06:58,960 --> 00:07:03,759 +are very familiar with this there's lots +170 +00:07:00,720 --> 00:07:07,160 +of other uh things you can do though so +171 +00:07:03,759 --> 00:07:09,400 +um you can classify text and there's a +172 +00:07:07,160 --> 00:07:12,720 +couple ways you can do this uh one way +173 +00:07:09,400 --> 00:07:15,960 +you can do this is um like let's say we +174 +00:07:12,720 --> 00:07:15,960 +have a sentiment sentence +175 +00:07:16,160 --> 00:07:21,520 +here +176 +00:07:17,759 --> 00:07:25,440 +um you can say uh +177 +00:07:21,520 --> 00:07:30,919 +this is +178 +00:07:25,440 --> 00:07:33,919 +great and then you can say um +179 +00:07:30,919 --> 00:07:37,680 +star +180 +00:07:33,919 --> 00:07:38,879 +rating five or something like that and +181 +00:07:37,680 --> 00:07:41,400 +then you could also have star rating +182 +00:07:38,879 --> 00:07:43,680 +four star rating three star rating two +183 +00:07:41,400 --> 00:07:45,080 +star rating one and calculate the +184 +00:07:43,680 --> 00:07:46,639 +probability of all of these and find +185 +00:07:45,080 --> 00:07:50,360 +which one has the highest probability so +186 +00:07:46,639 --> 00:07:51,800 +this is a a common way you can do things +187 +00:07:50,360 --> 00:07:54,319 +another thing you can do which is kind +188 +00:07:51,800 --> 00:07:55,240 +of interesting and um there are papers +189 +00:07:54,319 --> 00:07:58,319 +on this but they're kind of +190 +00:07:55,240 --> 00:08:00,800 +underexplored is you can do like star +191 +00:07:58,319 --> 00:08:04,800 +rating +192 +00:08:00,800 --> 
00:08:04,800 +five and then +193 +00:08:04,879 --> 00:08:13,280 +generate generate the output um and so +194 +00:08:10,319 --> 00:08:15,039 +that basically says okay I I want a +195 +00:08:13,280 --> 00:08:16,680 +positive sentence now I'm going to score +196 +00:08:15,039 --> 00:08:19,120 +the actual review and see whether that +197 +00:08:16,680 --> 00:08:22,319 +matches my like conception of a positive +198 +00:08:19,120 --> 00:08:24,080 +sentence and there's a few uh papers +199 +00:08:22,319 --> 00:08:25,680 +that do +200 +00:08:24,080 --> 00:08:28,240 +this +201 +00:08:25,680 --> 00:08:31,240 +um let +202 +00:08:28,240 --> 00:08:31,240 +me +203 +00:08:34,640 --> 00:08:38,760 +this is a kind of older one and then +204 +00:08:36,240 --> 00:08:42,080 +there's another more recent one by Sewon +205 +00:08:38,760 --> 00:08:43,839 +Min I believe um uh but they demonstrate +206 +00:08:42,080 --> 00:08:45,480 +how you can do both generative and +207 +00:08:43,839 --> 00:08:47,600 +discriminative classification in this +208 +00:08:45,480 --> 00:08:51,760 +way so that's another thing that you can +209 +00:08:47,600 --> 00:08:51,760 +do uh with language +210 +00:08:53,279 --> 00:08:56,839 +models and then the other thing you can +211 +00:08:55,200 --> 00:08:59,000 +do is you can generate the label given a +212 +00:08:56,839 --> 00:09:00,680 +classification prompt so you say this +213 +00:08:59,000 --> 00:09:03,079 +is great star rating and then +214 +00:09:00,680 --> 00:09:05,720 +generate five +215 +00:09:03,079 --> 00:09:09,320 +whatever finally um you can do things +216 +00:09:05,720 --> 00:09:10,920 +like correct grammar so uh for example +217 +00:09:09,320 --> 00:09:12,560 +if you score the probability of each +218 +00:09:10,920 --> 00:09:14,839 +word and you find words that are really +219 +00:09:12,560 --> 00:09:17,760 +low probability then you can uh replace +220 +00:09:14,839 --> 00:09:20,160 +them with higher probability words um or +221 +00:09:17,760 --> 00:09:21,720 +you could ask a model please paraphrase +222 +00:09:20,160 --> 00:09:24,000 +this output and it will paraphrase it +223 +00:09:21,720 --> 00:09:27,640 +into something that gives you uh you +224 +00:09:24,000 --> 00:09:30,720 +know that has better grammar
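A minimal sketch of the grammar-correction idea of flagging low-probability words; GPT-2 and the -8.0 threshold are arbitrary illustrative choices:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def flag_unlikely_tokens(text, threshold=-8.0):
    """Return tokens whose log-probability given the preceding context
    falls below a threshold -- candidates for replacement or paraphrase."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids).logits, dim=-1)
    flagged = []
    for pos in range(1, ids.shape[1]):
        lp = logprobs[0, pos - 1, ids[0, pos]].item()  # P(token | prefix)
        if lp < threshold:
            flagged.append((tok.decode(ids[0, pos]), round(lp, 2)))
    return flagged

print(flag_unlikely_tokens("She go to the store yesterday."))
```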
so basically +225 +00:09:27,640 --> 00:09:33,079 +like as I said language models are very +226 +00:09:30,720 --> 00:09:34,600 +diverse um and they can do a ton of +227 +00:09:33,079 --> 00:09:35,680 +different things but most of them boil +228 +00:09:34,600 --> 00:09:38,440 +down to doing one of these two +229 +00:09:35,680 --> 00:09:42,079 +operations scoring or +230 +00:09:38,440 --> 00:09:42,079 +generating any questions +232 +00:09:44,640 --> 00:09:50,000 +okay so next I I want to talk about a +233 +00:09:47,600 --> 00:09:52,279 +specific type of language models uh +234 +00:09:50,000 --> 00:09:54,240 +autoregressive language models and +235 +00:09:52,279 --> 00:09:56,720 +autoregressive language models are language +236 +00:09:54,240 --> 00:10:00,240 +models that specifically calculate this +237 +00:09:56,720 --> 00:10:02,320 +probability um in a fashion where you +238 +00:10:00,240 --> 00:10:03,680 +calculate the probability of one token +239 +00:10:02,320 --> 00:10:05,519 +and then you calculate the probability +240 +00:10:03,680 --> 00:10:07,680 +of the next token given the previous +241 +00:10:05,519 --> 00:10:10,519 +token the probability of the third token +242 +00:10:07,680 --> 00:10:13,760 +given the previous two tokens almost +243 +00:10:10,519 --> 00:10:18,600 +always this happens left to right um or +244 +00:10:13,760 --> 00:10:20,519 +start to finish um and so this is the +245 +00:10:18,600 --> 00:10:25,000 +next token here this is the context where +246 +00:10:20,519 --> 00:10:28,440 +usually um the context is the previous +247 +00:10:25,000 --> 00:10:29,640 +tokens can anyone think of a time when +248 +00:10:28,440 --> 00:10:32,440 +you might want to do +249 +00:10:29,640 --> 00:10:37,839 +right to left instead of left to +250 +00:10:32,440 --> 00:10:40,399 +right yeah a language that's written from right to +251 +00:10:37,839 --> 00:10:41,680 +left yeah that's actually exactly what I +252 +00:10:40,399 --> 00:10:43,079 +was looking for so if you have a +253 +00:10:41,680 --> 00:10:46,839 +language that's written from right to +254 +00:10:43,079 --> 00:10:49,320 +left actually uh things like uh Arabic +255 +00:10:46,839 --> 00:10:51,360 +and Hebrew are written right to left so +256 +00:10:49,320 --> 00:10:53,720 +um both of those are +257 +00:10:51,360 --> 00:10:56,360 +chronologically like earlier to later +258 +00:10:53,720 --> 00:10:59,399 +because you know if you're thinking +259 +00:10:56,360 --> 00:11:01,079 +about how people speak um the first +260 +00:10:59,399 --> 00:11:02,440 +word that an English speaker speaks is +261 +00:11:01,079 --> 00:11:04,000 +on the left just because that's the way +262 +00:11:02,440 --> 00:11:06,079 +you write it but the first word that an +263 +00:11:04,000 --> 00:11:09,639 +Arabic speaker speaks is on the +264 +00:11:06,079 --> 00:11:12,360 +right because chronologically that's uh +265 +00:11:09,639 --> 00:11:13,519 +that's how it works um there's other +266 +00:11:12,360 --> 00:11:16,320 +reasons why you might want to do right +267 +00:11:13,519 --> 00:11:17,839 +to left but uh it's not really that left +268 +00:11:16,320 --> 00:11:21,720 +to right is important it's that like +269 +00:11:17,839 --> 00:11:24,440 +start to finish is important in spoken +270 +00:11:21,720 --> 00:11:27,880 +language so um one thing I should +271 +00:11:24,440 --> 00:11:30,240 +mention here is that this is just a rule +272 +00:11:27,880 --> 00:11:31,560 +of probability that if you have multiple +273 +00:11:30,240 --> 00:11:33,720 +variables and you're calculating the +274 +00:11:31,560 --> 00:11:35,760 +joint probability of variables the +275 +00:11:33,720 --> 00:11:38,000 +probability of all of the variables +276 +00:11:35,760 --> 00:11:40,240 +together is equal to this probability +277 +00:11:38,000 --> 00:11:41,920 +here so we're not making any +278 +00:11:40,240 --> 00:11:44,399 +approximations we're not making any +279 +00:11:41,920 --> 00:11:46,959 +compromises in order to do this but it +280 +00:11:44,399 --> 00:11:51,639 +all hinges on whether we can predict +281 +00:11:46,959 --> 00:11:53,440 +this probability um accurately
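In symbols, the chain-rule decomposition being described here is an exact identity, not an approximation:

```latex
P(X) = \prod_{t=1}^{|X|} P(x_t \mid x_1, \ldots, x_{t-1})
```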
uh +282 +00:11:51,639 --> 00:11:56,160 +actually another question does anybody +283 +00:11:53,440 --> 00:11:57,800 +know why we do this decomposition why +284 +00:11:56,160 --> 00:12:00,959 +don't we just try to predict the +285 +00:11:57,800 --> 00:12:00,959 +probability of X +286 +00:12:02,120 --> 00:12:05,399 +directly any +287 +00:12:07,680 --> 00:12:12,760 +ideas uh of big X sorry uh why don't we +288 +00:12:11,079 --> 00:12:17,560 +try to calculate the probability of this +289 +00:12:12,760 --> 00:12:21,360 +is great directly without uh breaking it +290 +00:12:17,560 --> 00:12:21,360 +down into the individual tokens +291 +00:12:25,519 --> 00:12:31,560 +it could be word salad if +292 +00:12:27,760 --> 00:12:35,279 +you did it in a in a particular way yes +293 +00:12:31,560 --> 00:12:35,279 +um so that that's a good point +294 +00:12:39,519 --> 00:12:47,000 +yeah yeah so for example we talked about +295 +00:12:43,760 --> 00:12:50,120 +um uh we'll talk about these +296 +00:12:47,000 --> 00:12:51,920 +models um or I I mentioned this briefly +297 +00:12:50,120 --> 00:12:54,000 +last time you can mention it in more +298 +00:12:51,920 --> 00:12:55,639 +detail this time but this is great we +299 +00:12:54,000 --> 00:12:59,880 +probably have never seen this before +300 +00:12:55,639 --> 00:13:01,399 +right so if we predict only things that +301 +00:12:59,880 --> 00:13:03,199 +we've seen before if we only assign a +302 +00:13:01,399 --> 00:13:04,600 +non-zero probability to the things we've +303 +00:13:03,199 --> 00:13:06,000 +seen before there's going to be lots of +304 +00:13:04,600 --> 00:13:07,079 +sentences that we've never seen before +305 +00:13:06,000 --> 00:13:10,000 +it makes it +306 +00:13:07,079 --> 00:13:13,760 +super sparse um that that's basically close +307 +00:13:10,000 --> 00:13:16,399 +to what I wanted to say so um the reason +308 +00:13:13,760 --> 00:13:18,040 +why we don't typically do it with um +309 +00:13:16,399 --> 00:13:21,240 +predicting the whole sentence directly +310 +00:13:18,040 --> 00:13:22,800 +is because if we think about the size of +311 +00:13:21,240 --> 00:13:24,959 +the classification problem we need to +312 +00:13:22,800 --> 00:13:27,880 +solve in order to predict the next word +313 +00:13:24,959 --> 00:13:30,320 +it's V uh where V is the size of the +314 +00:13:27,880 --> 00:13:33,120 +vocabulary but the size of the +315 +00:13:30,320 --> 00:13:35,399 +classification problem that we need to +316 +00:13:33,120 --> 00:13:38,040 +um we need to solve if we predict +317 +00:13:35,399 --> 00:13:40,079 +everything directly is V to the n where +318 +00:13:38,040 --> 00:13:42,240 +n is the length of the sequence and +319 +00:13:40,079 --> 00:13:45,240 +that's just huge the vocabulary is so +320 +00:13:42,240 --> 00:13:48,440 +big that it's hard to kind of uh know +321 +00:13:45,240 --> 00:13:51,000 +how we handle that so basically by doing +322 +00:13:48,440 --> 00:13:53,160 +this sort of decomposition we decompose +323 +00:13:51,000 --> 00:13:56,440 +this into uh +324 +00:13:53,160 --> 00:13:58,120 +n um prediction problems of size V and +325 +00:13:56,440 --> 00:13:59,519 +that's kind of just a lot more +326 +00:13:58,120 --> 00:14:03,079 +manageable from the point of view of +327 +00:13:59,519 --> 00:14:06,000 +how we train uh you know how we train +328 +00:14:03,079 --> 00:14:09,399 +models um that being said there are +329 +00:14:06,000 --> 00:14:11,360 +other alternatives um something very +330 +00:14:09,399 --> 00:14:13,920 +widely known uh very widely used is +331 +00:14:11,360 --> 00:14:16,440 +called a masked language model um a masked +332 +00:14:13,920 --> 00:14:19,480 +language model is something like BERT or +333 +00:14:16,440 --> 00:14:21,680 +DeBERTa or RoBERTa or all of these models +334 +00:14:19,480 --> 00:14:25,000 +that you might have heard of if you've been +335 +00:14:21,680 --> 00:14:28,279 +in NLP for more than two years I guess +
+336 +00:14:25,000 --> 00:14:30,680 +um and basically what they do is they +337 +00:14:28,279 --> 00:14:30,680 +predict +338 +00:14:32,199 --> 00:14:37,480 +uh they like mask out this word and they +339 +00:14:34,839 --> 00:14:39,480 +predict the middle word so they mask out +340 +00:14:37,480 --> 00:14:41,440 +is and then try to predict that given +341 +00:14:39,480 --> 00:14:45,320 +all the other words the problem with +342 +00:14:41,440 --> 00:14:48,959 +these models is uh twofold number one +343 +00:14:45,320 --> 00:14:51,880 +they don't actually give you a uh good +344 +00:14:48,959 --> 00:14:55,399 +probability here uh like a a properly +345 +00:14:51,880 --> 00:14:57,800 +formed probability here +346 +00:14:55,399 --> 00:14:59,160 +because this is true only as long as +347 +00:14:57,800 --> 00:15:01,920 +you're only conditioning on things that +348 +00:14:59,160 --> 00:15:03,480 +you've previously generated so +349 +00:15:01,920 --> 00:15:04,839 +they're not actually true language +350 +00:15:03,480 --> 00:15:06,920 +models from the point of view of being +351 +00:15:04,839 --> 00:15:10,040 +able to easily predict the probability +352 +00:15:06,920 --> 00:15:11,399 +of a sequence um and also it's hard to +353 +00:15:10,040 --> 00:15:13,399 +generate from them because you need to +354 +00:15:11,399 --> 00:15:15,440 +generate in some order and masked language +355 +00:15:13,399 --> 00:15:17,600 +models don't specify a canonical order +356 +00:15:15,440 --> 00:15:19,120 +so they're good for some things like +357 +00:15:17,600 --> 00:15:21,720 +calculating representations of the +358 +00:15:19,120 --> 00:15:22,920 +output but they're not useful uh they're +359 +00:15:21,720 --> 00:15:25,240 +not as useful for +360 +00:15:22,920 --> 00:15:26,880 +generation um there's also energy-based +361 +00:15:25,240 --> 00:15:28,759 +language models which basically create a +362 +00:15:26,880 --> 00:15:30,000 +scoring function that's not necessarily +363 +00:15:28,759 --> 00:15:31,279 +left to right or right to left or +364 +00:15:30,000 --> 00:15:33,120 +anything like that but that's very +365 +00:15:31,279 --> 00:15:34,639 +advanced um if you're interested in them +366 +00:15:33,120 --> 00:15:36,319 +I can talk more about them but we'll +367 +00:15:34,639 --> 00:15:38,920 +skip +368 +00:15:36,319 --> 00:15:41,600 +them and um also all of the language +369 +00:15:38,920 --> 00:15:45,639 +models that you hear about nowadays GPT +370 +00:15:41,600 --> 00:15:48,800 +uh LLaMA whatever else are all autoregressive +371 +00:15:45,639 --> 00:15:52,880 +models cool so I'm going to go into the +372 +00:15:48,800 --> 00:15:52,880 +uh um any questions about that +373 +00:15:57,600 --> 00:16:00,600 +yeah +374 +00:16:00,680 --> 00:16:04,160 +yeah so in masked language models the +375 +00:16:02,680 --> 00:16:06,000 +question was in masked language models +376 +00:16:04,160 --> 00:16:08,360 +couldn't you just mask out the last +377 +00:16:06,000 --> 00:16:10,759 +token and predict that sure you could do +378 +00:16:08,360 --> 00:16:13,079 +that but it's just not trained +379 +00:16:10,759 --> 00:16:14,720 +that way so it won't do a very good job +380 +00:16:13,079 --> 00:16:16,880 +if you always trained it that way it's +381 +00:16:14,720 --> 00:16:18,160 +an autoregressive language model so +382 +00:16:16,880 --> 00:16:22,240 +you're you're back to where you were in +383 +00:16:18,160 --> 00:16:24,800 +the first place
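A quick hedged sketch of the masked-LM prediction being contrasted here, using the Hugging Face fill-mask pipeline with BERT; the example sentence mirrors the lecture's "this is great":

```python
from transformers import pipeline

# Masked LM: BERT predicts a masked-out middle word from both-side
# context -- handy for representations, but there is no canonical
# order for generating whole sequences this way.
fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("this [MASK] great"):
    print(pred["token_str"], round(pred["score"], 3))
```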
um cool so now I'll +384 +00:16:22,240 --> 00:16:26,399 +talk about unigram language models and +385 +00:16:24,800 --> 00:16:29,319 +so the simplest language models are +386 +00:16:26,399 --> 00:16:33,560 +count-based unigram language models and +387 +00:16:29,319 --> 00:16:35,319 +the way they work is um basically we +388 +00:16:33,560 --> 00:16:38,519 +want to calculate this probability +389 +00:16:35,319 --> 00:16:41,240 +conditioned on all the previous ones and +390 +00:16:38,519 --> 00:16:42,360 +the way we do this is we just say +391 +00:16:41,240 --> 00:16:45,680 +actually we're not going to worry about +392 +00:16:42,360 --> 00:16:48,759 +the order at all and we're just going to +393 +00:16:45,680 --> 00:16:52,240 +uh predict the probability of the next +394 +00:16:48,759 --> 00:16:55,279 +word uh independently of all the other +395 +00:16:52,240 --> 00:16:57,519 +words so if you have something like this +396 +00:16:55,279 --> 00:16:59,720 +it's actually extremely easy to predict +397 +00:16:57,519 --> 00:17:02,480 +the probability of this word and the way +398 +00:16:59,720 --> 00:17:04,280 +you do this is you just count up the +399 +00:17:02,480 --> 00:17:08,360 +number of times this word appeared in +400 +00:17:04,280 --> 00:17:10,480 +the training data set and divide by the +401 +00:17:08,360 --> 00:17:12,559 +uh divide by the total number of words +402 +00:17:10,480 --> 00:17:14,240 +in the training data set and now you have a +403 +00:17:12,559 --> 00:17:15,959 +language model this is like language +404 +00:17:14,240 --> 00:17:17,760 +model 101 it's the easiest possible +405 +00:17:15,959 --> 00:17:19,520 +language model you can write in you know +406 +00:17:17,760 --> 00:17:21,120 +three lines of python +407 +00:17:19,520 --> 00:17:25,039 +basically
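Roughly the "three lines of python" in question; the training file name is an assumption, and the log-space scoring helper anticipates two issues the lecture turns to next (unknown words and numerical underflow):

```python
from collections import Counter
import math

# Count-based unigram LM: P(w) = count(w) / total number of tokens.
tokens = open("train.txt").read().split()
counts = Counter(tokens)
prob = {w: c / len(tokens) for w, c in counts.items()}

def score(sentence):
    # Sum log-probabilities rather than multiplying raw probabilities
    # (avoids underflow); note an unseen word raises a KeyError here --
    # the unknown-word problem discussed below.
    return sum(math.log(prob[w]) for w in sentence.split())
```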
+430 +00:18:13,799 --> 00:18:17,640 +couple options the first option is to + +431 +00:18:15,840 --> 00:18:19,440 +segment to characters and subwords and + +432 +00:18:17,640 --> 00:18:21,720 +this is now the preferred option that + +433 +00:18:19,440 --> 00:18:24,360 +most people use nowadays uh just run + +434 +00:18:21,720 --> 00:18:26,840 +sentence piece segment your vocabulary + +435 +00:18:24,360 --> 00:18:28,400 +and you're all set you're you'll now no + +436 +00:18:26,840 --> 00:18:29,679 +longer have any unknown words because + +437 +00:18:28,400 --> 00:18:30,840 +all the unknown words get split into + +438 +00:18:29,679 --> 00:18:33,559 +shorter + +439 +00:18:30,840 --> 00:18:36,240 +units there's also other options that + +440 +00:18:33,559 --> 00:18:37,919 +you can use if you're uh very interested + +441 +00:18:36,240 --> 00:18:41,280 +in or serious about this and want to + +442 +00:18:37,919 --> 00:18:43,720 +handle this like uh as part of a + +443 +00:18:41,280 --> 00:18:45,960 +research project or something like this + +444 +00:18:43,720 --> 00:18:48,520 +and uh the way you can do this is you + +445 +00:18:45,960 --> 00:18:50,120 +can build an unknown word model and an + +446 +00:18:48,520 --> 00:18:52,200 +unknown word model basically what it + +447 +00:18:50,120 --> 00:18:54,520 +does is it uh predicts the probability + +448 +00:18:52,200 --> 00:18:56,200 +of unknown words using characters and + +449 +00:18:54,520 --> 00:18:59,559 +then it models the probability of words + +450 +00:18:56,200 --> 00:19:01,159 +using words and so now you can you have + +451 +00:18:59,559 --> 00:19:02,559 +kind of like a hierarchical model where + +452 +00:19:01,159 --> 00:19:03,919 +you first try to predict words and then + +453 +00:19:02,559 --> 00:19:06,720 +if you can't predict words you predict + +454 +00:19:03,919 --> 00:19:08,960 +unknown words so this isn't us as widely + +455 +00:19:06,720 --> 00:19:11,520 +anymore but it's worth thinking about uh + +456 +00:19:08,960 --> 00:19:11,520 +or knowing + +457 +00:19:11,840 --> 00:19:20,880 +about okay uh so a second detail um a + +458 +00:19:17,200 --> 00:19:22,799 +parameter uh so parameterizing in log + +459 +00:19:20,880 --> 00:19:25,880 +space + +460 +00:19:22,799 --> 00:19:28,400 +so the um multiplication of + +461 +00:19:25,880 --> 00:19:29,840 +probabilities can be reexpressed is the + +462 +00:19:28,400 --> 00:19:31,840 +addition of log + +463 +00:19:29,840 --> 00:19:34,159 +probabilities uh so this is really + +464 +00:19:31,840 --> 00:19:35,720 +important and this is widely used in all + +465 +00:19:34,159 --> 00:19:37,520 +language models whether they're unigram + +466 +00:19:35,720 --> 00:19:39,640 +language models or or neural language + +467 +00:19:37,520 --> 00:19:41,799 +models there's actually a very simple + +468 +00:19:39,640 --> 00:19:45,440 +reason why we why we do it this way does + +469 +00:19:41,799 --> 00:19:45,440 +anybody uh know the + +470 +00:19:46,440 --> 00:19:52,679 +answer what would happen if we + +471 +00:19:48,280 --> 00:19:56,720 +multiplied uh let's say uh 30 30 tokens + +472 +00:19:52,679 --> 00:20:00,360 +worth of probabilities together um + +473 +00:19:56,720 --> 00:20:02,120 +yeah uh yeah too too small um so + +474 +00:20:00,360 --> 00:20:06,120 +basically the problem is numerical + +475 +00:20:02,120 --> 00:20:07,520 +underflow um so modern computers if if + +476 +00:20:06,120 --> 00:20:08,840 +we weren't doing this on a computer and + +477 +00:20:07,520 --> 00:20:11,240 +we were just doing math it 
wouldn't + +478 +00:20:08,840 --> 00:20:14,280 +matter at all um but because we're doing + +479 +00:20:11,240 --> 00:20:17,280 +it on a computer uh we + +480 +00:20:14,280 --> 00:20:17,280 +have + +481 +00:20:20,880 --> 00:20:26,000 +ours we have our + +482 +00:20:23,000 --> 00:20:26,000 +32bit + +483 +00:20:27,159 --> 00:20:30,159 +float + +484 +00:20:32,320 --> 00:20:37,720 +where we have uh the exponent in the the + +485 +00:20:35,799 --> 00:20:40,159 +fraction over here so the largest the + +486 +00:20:37,720 --> 00:20:41,960 +exponent can get is limited by the + +487 +00:20:40,159 --> 00:20:45,880 +number of exponent bits that we have in + +488 +00:20:41,960 --> 00:20:48,039 +a 32-bit float and um if that's the case + +489 +00:20:45,880 --> 00:20:52,480 +I forget exactly how large it is it's + +490 +00:20:48,039 --> 00:20:53,440 +like yeah something like 30 minus 38 is + +491 +00:20:52,480 --> 00:20:56,640 +that + +492 +00:20:53,440 --> 00:20:58,520 +right yeah but anyway like if the number + +493 +00:20:56,640 --> 00:21:00,640 +gets too small you'll underflow it goes + +494 +00:20:58,520 --> 00:21:02,400 +to zero and you'll get a zero + +495 +00:21:00,640 --> 00:21:05,720 +probability despite the fact that it's + +496 +00:21:02,400 --> 00:21:07,640 +not actually zero so um that's usually + +497 +00:21:05,720 --> 00:21:09,440 +why we do this it's also a little bit + +498 +00:21:07,640 --> 00:21:12,960 +easier for people just to look at like + +499 +00:21:09,440 --> 00:21:15,200 +minus 30 instead of looking to something + +500 +00:21:12,960 --> 00:21:19,960 +something time 10 to the minus 30 or + +501 +00:21:15,200 --> 00:21:24,520 +something so uh that is why we normally + +502 +00:21:19,960 --> 00:21:27,159 +go um another thing that you can note is + +503 +00:21:24,520 --> 00:21:28,760 +uh you can treat each of these in a + +504 +00:21:27,159 --> 00:21:31,360 +unigram model you can treat each of + +505 +00:21:28,760 --> 00:21:37,039 +these as parameters so we talked about + +506 +00:21:31,360 --> 00:21:39,640 +parameters of a model uh like a um like + +507 +00:21:37,039 --> 00:21:41,120 +a bag of words model and we can + +508 +00:21:39,640 --> 00:21:44,080 +similarly treat these unigram + +509 +00:21:41,120 --> 00:21:47,760 +probabilities as parameters so um how + +510 +00:21:44,080 --> 00:21:47,760 +many parameters does a unigram model + +511 +00:21:48,080 --> 00:21:51,320 +have any + +512 +00:21:57,039 --> 00:22:02,400 +ideas + +513 +00:21:59,600 --> 00:22:04,440 +yeah yeah exactly parameters equal to + +514 +00:22:02,400 --> 00:22:08,120 +the size of the vocabulary so this one's + +515 +00:22:04,440 --> 00:22:10,880 +easy and then we can go um we can go to + +516 +00:22:08,120 --> 00:22:13,880 +the slightly less easy ones + +517 +00:22:10,880 --> 00:22:16,039 +there so anyway this is a unigram model + +518 +00:22:13,880 --> 00:22:17,960 +uh it's it's not too hard um you + +519 +00:22:16,039 --> 00:22:20,480 +basically count up and divide and then + +520 +00:22:17,960 --> 00:22:22,720 +you add the the probabilities here you + +521 +00:22:20,480 --> 00:22:25,440 +could easily do it in a short Python + +522 +00:22:22,720 --> 00:22:28,400 +program higher order engram models so + +523 +00:22:25,440 --> 00:22:31,600 +higher order engram models um what these + +524 +00:22:28,400 --> 00:22:35,520 +do is they essentially limit the context + +525 +00:22:31,600 --> 00:22:40,240 +length to a length of N and then they + +526 +00:22:35,520 --> 00:22:42,600 +count and divide so the way it works 
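
A quick illustration of the log-space point just made, before the n-gram details. This sketch (variable names are made up) shows a product of 30 small probabilities underflowing a 32-bit float — whose smallest positive normal value is about 1.2 × 10⁻³⁸ — while the equivalent sum of log probabilities stays perfectly representable:

```python
import math
import numpy as np

p = np.float32(1e-4)            # a typical small per-token probability
prod = np.float32(1.0)
for _ in range(30):             # 30 tokens multiplied naively in float32
    prod *= p
print(prod)                     # 0.0 -- 1e-120 underflows float32 (~1.2e-38 minimum)

logprob = 30 * math.log(1e-4)   # the same quantity, computed in log space
print(logprob)                  # about -276.3, no underflow
```
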
+
+527
+00:22:40,240 --> 00:22:45,559
+here maybe this is a little bit uh
+
+528
+00:22:42,600 --> 00:22:47,320
+tricky but I can show an example so what
+
+529
+00:22:45,559 --> 00:22:49,840
+we do is we count up the number of times
+
+530
+00:22:47,320 --> 00:22:51,320
+we've seen this is an example and then
+
+531
+00:22:49,840 --> 00:22:53,480
+we divide by the number of times we've
+
+532
+00:22:51,320 --> 00:22:55,960
+seen this is an and that's the
+
+533
+00:22:53,480 --> 00:22:56,960
+probability of example given the
+
+534
+00:22:55,960 --> 00:22:58,720
+previous
+
+535
+00:22:56,960 --> 00:23:00,559
+context
+
+536
+00:22:58,720 --> 00:23:02,039
+so the problem with this is anytime we
+
+537
+00:23:00,559 --> 00:23:03,400
+get a sequence that we've never seen
+
+538
+00:23:02,039 --> 00:23:04,960
+before like we would like to model
+
+539
+00:23:03,400 --> 00:23:07,200
+longer sequences to make this more
+
+540
+00:23:04,960 --> 00:23:08,600
+accurate but anytime we get a uh we
+
+541
+00:23:07,200 --> 00:23:10,720
+get a sequence that we've never seen
+
+542
+00:23:08,600 --> 00:23:12,919
+before um it will get a probability of
+
+543
+00:23:10,720 --> 00:23:15,919
+zero similarly because this count on top
+
+544
+00:23:12,919 --> 00:23:19,919
+of here will be zero so the way that uh
+
+545
+00:23:15,919 --> 00:23:22,640
+n-gram language models work with this uh
+
+546
+00:23:19,919 --> 00:23:27,320
+handle this is they fall back to
+
+547
+00:23:22,640 --> 00:23:31,840
+shorter uh n-gram models so um this
+
+548
+00:23:27,320 --> 00:23:33,480
+model sorry when I say n-gram uh n is the
+
+549
+00:23:31,840 --> 00:23:35,520
+length of the context so this is a four-
+
+550
+00:23:33,480 --> 00:23:37,679
+gram model here because the top context is
+
+551
+00:23:35,520 --> 00:23:40,520
+four so the four-gram model would
+
+552
+00:23:37,679 --> 00:23:46,640
+calculate this and then interpolate it
+
+553
+00:23:40,520 --> 00:23:48,640
+like this with a um with a trigram model
+
+554
+00:23:46,640 --> 00:23:50,400
+uh and then the trigram model itself
+
+555
+00:23:48,640 --> 00:23:51,720
+would interpolate with the bigram model
+
+556
+00:23:50,400 --> 00:23:53,440
+the bigram model would interpolate with
+
+557
+00:23:51,720 --> 00:23:56,880
+the unigram
+
+558
+00:23:53,440 --> 00:23:59,880
+model oh this one oh
+
+559
+00:23:56,880 --> 00:23:59,880
+okay
+
+560
+00:24:02,159 --> 00:24:05,440
+um one
+
+561
+00:24:07,039 --> 00:24:12,320
+second could you uh help get it from the
+
+562
+00:24:10,000 --> 00:24:12,320
+lock
+
+563
+00:24:26,799 --> 00:24:29,799
+box
+
+564
+00:24:43,640 --> 00:24:50,200
+um okay sorry
+
+565
+00:24:46,880 --> 00:24:53,640
+so getting bad
+
+566
+00:24:50,200 --> 00:24:56,640
+here just
+
+567
+00:24:53,640 --> 00:24:56,640
+actually
+
+568
+00:24:56,760 --> 00:25:02,559
+okay uh oh wow that's a lot
+
+569
+00:25:02,960 --> 00:25:12,080
+better cool okay so
+
+570
+00:25:08,279 --> 00:25:14,159
+um so this is uh how we deal with the
+
+571
+00:25:12,080 --> 00:25:18,799
+fact that models can
+
+572
+00:25:14,159 --> 00:25:23,919
+be um models can be more precise but
+
+573
+00:25:18,799 --> 00:25:26,679
+more sparse and less precise but less
+
+574
+00:25:23,919 --> 00:25:28,720
+sparse this is also another concept that
+
+575
+00:25:26,679 --> 00:25:31,039
+we're going to talk about more later uh
+
+576
+00:25:28,720 --> 00:25:33,240
+in another class but this is a variety
+
+577
+00:25:31,039 --> 00:25:33,240
+of
+
+578
+00:25:33,679 --> 00:25:38,440
+ensembling where we have different + +579 +00:25:35,960 --> 00:25:40,360 +models that are good at different things + +580 +00:25:38,440 --> 00:25:42,279 +and we combine them together so this is + +581 +00:25:40,360 --> 00:25:44,760 +the first instance that you would see of + +582 +00:25:42,279 --> 00:25:46,159 +this there are other instances of this + +583 +00:25:44,760 --> 00:25:50,320 +but the reason why I mentioned that this + +584 +00:25:46,159 --> 00:25:51,840 +is a a variety of ensembling is actually + +585 +00:25:50,320 --> 00:25:55,520 +you're probably not going to be using + +586 +00:25:51,840 --> 00:25:57,840 +engram models super widely unless you + +587 +00:25:55,520 --> 00:26:00,520 +really want to process huge data sets + +588 +00:25:57,840 --> 00:26:02,399 +because that is one advantage of them + +589 +00:26:00,520 --> 00:26:03,960 +but some of these smoothing methods + +590 +00:26:02,399 --> 00:26:05,720 +actually might be interesting even if + +591 +00:26:03,960 --> 00:26:10,520 +you're using other models and ensembling + +592 +00:26:05,720 --> 00:26:10,520 +them together so + +593 +00:26:10,600 --> 00:26:15,679 +the in order to decide this + +594 +00:26:13,679 --> 00:26:19,559 +interpolation coefficient one way we can + +595 +00:26:15,679 --> 00:26:23,440 +do it is just set a fixed um set a fixed + +596 +00:26:19,559 --> 00:26:26,039 +amount of probability that we use for + +597 +00:26:23,440 --> 00:26:29,000 +every um every time so we could say that + +598 +00:26:26,039 --> 00:26:32,000 +we always set this Lambda to 0.8 and + +599 +00:26:29,000 --> 00:26:34,320 +some always set this Lambda 1us Lambda + +600 +00:26:32,000 --> 00:26:36,559 +to 0.2 and interpolate those two + +601 +00:26:34,320 --> 00:26:39,120 +together but actually there's more + +602 +00:26:36,559 --> 00:26:42,240 +sophisticated methods of doing this and + +603 +00:26:39,120 --> 00:26:44,080 +so one way of doing this is uh called + +604 +00:26:42,240 --> 00:26:47,240 +additive + +605 +00:26:44,080 --> 00:26:50,600 +smoothing excuse me and the the way that + +606 +00:26:47,240 --> 00:26:54,039 +additive smoothing works is um basically + +607 +00:26:50,600 --> 00:26:54,919 +we add Alpha to the uh to the top and + +608 +00:26:54,039 --> 00:26:58,000 +the + +609 +00:26:54,919 --> 00:27:02,159 +bottom and the reason why this is slight + +610 +00:26:58,000 --> 00:27:06,279 +different as is as our accounts get + +611 +00:27:02,159 --> 00:27:10,799 +larger we start to approach the true + +612 +00:27:06,279 --> 00:27:10,799 +distribution so just to give an + +613 +00:27:12,080 --> 00:27:19,480 +example let's say we have uh the + +614 +00:27:17,640 --> 00:27:21,640 +box + +615 +00:27:19,480 --> 00:27:26,279 +is + +616 +00:27:21,640 --> 00:27:26,279 +um let's say initially we + +617 +00:27:26,520 --> 00:27:29,520 +have + +618 +00:27:31,159 --> 00:27:37,600 +uh let let's say our Alpha is + +619 +00:27:33,840 --> 00:27:43,559 +one so initially if we have + +620 +00:27:37,600 --> 00:27:47,320 +nothing um if we have no no evidence for + +621 +00:27:43,559 --> 00:27:47,320 +our sorry I I + +622 +00:27:49,720 --> 00:27:54,960 +realize let's say this is + +623 +00:27:52,640 --> 00:27:56,840 +our fallback + +624 +00:27:54,960 --> 00:27:59,240 +distribution um where this is a + +625 +00:27:56,840 --> 00:28:01,880 +probability of Z 0.5 this is a + +626 +00:27:59,240 --> 00:28:03,360 +probability of 0.3 and this is a + +627 +00:28:01,880 --> 00:28:06,559 +probability of + +628 +00:28:03,360 --> 00:28:09,919 +0.2 so now let's 
talk about our bigram
+00:28:06,559 --> 00:28:13,399
+model um and our bigram
+
+630
+00:28:09,919 --> 00:28:18,000
+model has counts which are the
+
+631
+00:28:13,399 --> 00:28:18,000
+the box and the
+
+632
+00:28:19,039 --> 00:28:24,480
+is so if we do something like this then
+
+633
+00:28:22,720 --> 00:28:26,720
+um initially we have no counts like
+
+634
+00:28:24,480 --> 00:28:28,159
+let's say we have no data uh about
+
+635
+00:28:26,720 --> 00:28:30,760
+this distribution
+
+636
+00:28:28,159 --> 00:28:33,200
+um our counts would be zero and our
+
+637
+00:28:30,760 --> 00:28:35,919
+Alpha would be
+
+638
+00:28:33,200 --> 00:28:37,840
+one and so we would just fall back to
+
+639
+00:28:35,919 --> 00:28:40,960
+this distribution we just have like one
+
+640
+00:28:37,840 --> 00:28:43,320
+times uh one times this distribution
+
+641
+00:28:40,960 --> 00:28:45,679
+let's say we then we have one piece of
+
+642
+00:28:43,320 --> 00:28:48,640
+evidence and once we have one piece of
+
+643
+00:28:45,679 --> 00:28:52,279
+evidence now this would be
+
+644
+00:28:48,640 --> 00:28:53,960
+0.33 um and this would uh be Alpha equal
+
+645
+00:28:52,279 --> 00:28:56,399
+to 1 so we'd have
+
+646
+00:28:53,960 --> 00:28:58,679
+0.5 *
+
+647
+00:28:56,399 --> 00:29:00,399
+0.33
+
+648
+00:28:58,679 --> 00:29:04,039
+uh and
+
+649
+00:29:00,399 --> 00:29:07,720
+0.5 times
+
+650
+00:29:04,039 --> 00:29:10,840
+0.3 uh is the probability of the box
+
+651
+00:29:07,720 --> 00:29:12,840
+because um basically we we have one
+
+652
+00:29:10,840 --> 00:29:14,720
+piece of evidence and we are adding a
+
+653
+00:29:12,840 --> 00:29:17,080
+count of one to the lower order
+
+654
+00:29:14,720 --> 00:29:18,320
+distribution then if we increase our
+
+655
+00:29:17,080 --> 00:29:24,159
+count
+
+656
+00:29:18,320 --> 00:29:24,159
+here um now we rely more
+
+657
+00:29:24,880 --> 00:29:30,960
+strongly sorry that that would be wrong
+
+658
+00:29:27,720 --> 00:29:32,399
+so so now we rely more strongly on the
+
+659
+00:29:30,960 --> 00:29:33,880
+higher order distribution because we
+
+660
+00:29:32,399 --> 00:29:37,039
+have more evidence for the higher order
+
+661
+00:29:33,880 --> 00:29:39,610
+distribution so basically in this case
+
+662
+00:29:37,039 --> 00:29:41,240
+um the probability
+
+663
+00:29:39,610 --> 00:29:44,559
+[Music]
+
+664
+00:29:41,240 --> 00:29:48,200
+of Lambda which I showed
+
+665
+00:29:44,559 --> 00:29:52,000
+before is equal to the sum of the
+
+666
+00:29:48,200 --> 00:29:54,200
+counts um the sum of the counts
+
+667
+00:29:52,000 --> 00:29:56,480
+over the sum of the counts plus
+
+668
+00:29:54,200 --> 00:29:58,159
+alpha so as the sum of the counts gets
+
+669
+00:29:56,480 --> 00:30:00,240
+larger you rely more on the higher order
+
+670
+00:29:58,159 --> 00:30:01,640
+distribution and if the sum of the counts
+
+671
+00:30:00,240 --> 00:30:02,760
+is smaller you
+
+672
+00:30:01,640 --> 00:30:04,320
+rely more on the lower order
+
+673
+00:30:02,760 --> 00:30:06,720
+distribution so the more evidence you
+
+674
+00:30:04,320 --> 00:30:11,640
+have the more you rely on it so that's the
+
+675
+00:30:06,720 --> 00:30:11,640
+basic idea behind these smoothing things
+
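
The interpolation just walked through can be written as a short function. This is a sketch under the lecture's setup — lambda computed from the counts as c(h) / (c(h) + alpha), falling back to a lower-order distribution — with all names (`interp_prob`, the toy probabilities) invented for illustration:

```python
def interp_prob(word, context, bigram_counts, context_counts, unigram_probs, alpha=1.0):
    """P(w|h) = lam * c(h,w)/c(h) + (1-lam) * P_unigram(w),
    with lam = c(h) / (c(h) + alpha): more evidence for the context
    means more trust in the higher-order counts."""
    c_h = context_counts.get(context, 0)
    lam = c_h / (c_h + alpha)
    p_high = bigram_counts.get((context, word), 0) / c_h if c_h else 0.0
    return lam * p_high + (1 - lam) * unigram_probs.get(word, 0.0)

unigram_probs = {"box": 0.5, "is": 0.3, "the": 0.2}
# no evidence yet: lam = 0, so we fall back entirely to the unigram distribution
print(interp_prob("box", "the", {}, {}, unigram_probs))                      # 0.5
# one observation of ("the", "box"): lam = 0.5, so 0.5*1.0 + 0.5*0.5 = 0.75
print(interp_prob("box", "the", {("the", "box"): 1}, {"the": 1}, unigram_probs))
```
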
+676
+00:30:11,679 --> 00:30:16,679
+um there's also a number of other
+
+677
+00:30:14,519 --> 00:30:18,760
+varieties called uh
+
+678
+00:30:16,679 --> 00:30:20,799
+discounting so uh the discount
+
+679
+00:30:18,760 --> 00:30:23,679
+hyperparameter basically you subtract
+
+680
+00:30:20,799 --> 00:30:26,080
+this off um uh you subtract this from
+
+681
+00:30:23,679 --> 00:30:27,840
+the count so you would subtract like 0.5
+
+682
+00:30:26,080 --> 00:30:32,679
+from each of the counts that you have it's
+
+683
+00:30:27,840 --> 00:30:36,279
+just empirically this is a better match
+
+684
+00:30:32,679 --> 00:30:38,600
+for the fact that um natural language
+
+685
+00:30:36,279 --> 00:30:40,039
+has a very long-tailed distribution um
+
+686
+00:30:38,600 --> 00:30:41,600
+you can kind of do the math and show
+
+687
+00:30:40,039 --> 00:30:43,720
+that that works and that's actually in
+
+688
+00:30:41,600 --> 00:30:46,080
+this um in this paper if you're
+
+689
+00:30:43,720 --> 00:30:49,880
+interested in looking at more details of
+
+690
+00:30:46,080 --> 00:30:51,519
+that um and then kind of the
+
+691
+00:30:49,880 --> 00:30:53,440
+state of the art in language modeling
+
+692
+00:30:51,519 --> 00:30:56,600
+before neural language models came out
+
+693
+00:30:53,440 --> 00:30:59,919
+was this Kneser-Ney smoothing and what it
+
+694
+00:30:56,600 --> 00:31:02,440
+does is it discounts but it also
+
+695
+00:30:59,919 --> 00:31:04,480
+modifies the lower order distribution so
+
+696
+00:31:02,440 --> 00:31:07,200
+in the lower order distribution you
+
+697
+00:31:04,480 --> 00:31:09,039
+basically um modify the counts with
+
+698
+00:31:07,200 --> 00:31:11,919
+respect to how many times that word has
+
+699
+00:31:09,039 --> 00:31:13,519
+appeared in new contexts with the
+
+700
+00:31:11,919 --> 00:31:16,360
+idea being that you only use the lower
+
+701
+00:31:13,519 --> 00:31:18,880
+order distribution when you have uh new
+
+702
+00:31:16,360 --> 00:31:21,200
+contexts um and so you can kind of be
+
+703
+00:31:18,880 --> 00:31:23,600
+clever
+
+704
+00:31:21,200 --> 00:31:25,399
+about uh you can be clever about how you
+
+705
+00:31:23,600 --> 00:31:27,639
+build this distribution based on the
+
+706
+00:31:25,399 --> 00:31:29,360
+fact that you're only using it in the
+
+707
+00:31:27,639 --> 00:31:31,320
+case when this distribution is not very
+
+708
+00:31:29,360 --> 00:31:33,960
+reliable
+
+709
+00:31:31,320 --> 00:31:36,080
+so I I would spend a lot more time
+
+710
+00:31:33,960 --> 00:31:37,960
+teaching this when uh n-gram models were
+
+711
+00:31:36,080 --> 00:31:39,840
+kind of the thing uh that people were
+
+712
+00:31:37,960 --> 00:31:41,960
+using but now I'm going to go over them
+
+713
+00:31:39,840 --> 00:31:43,600
+very quickly so you know don't worry if
+
+714
+00:31:41,960 --> 00:31:46,559
+you weren't able to follow all the
+
+715
+00:31:43,600 --> 00:31:47,960
+details but the basic um the basic thing
+
+716
+00:31:46,559 --> 00:31:49,279
+to take away from this is number one these
+
+717
+00:31:47,960 --> 00:31:51,639
+are the methods that people use for
+
+718
+00:31:49,279 --> 00:31:53,440
+n-gram language models number two if
+
+719
+00:31:51,639 --> 00:31:55,720
+you're thinking about combining language
+
+720
+00:31:53,440 --> 00:31:57,519
+models together in some way through you
+
+721
+00:31:55,720 --> 00:31:59,279
+know ensembling their probability or
+
+722
+00:31:57,519 --> 00:32:00,480
+something like this this is something
+
+723
+00:31:59,279 --> 00:32:02,279
+that you should think about a little bit
+
+724
+00:32:00,480 --> 00:32:03,679
+more carefully because like some
+
+725
+00:32:02,279 --> 00:32:05,240
+language models might be good in some
+
+726
+00:32:03,679 --> 00:32:07,440
+context other language models might be
+
+727
+00:32:05,240 --> 00:32:09,440
+good in other contexts so you would need
+
+728
+00:32:07,440 --> 00:32:11,799
+to think about that when you're doing um
+
+729
+00:32:09,440 --> 00:32:18,200
+when you're combining the models
+
+730
+00:32:11,799 --> 00:32:18,200
+like that cool um any any questions about
+
+731
+00:32:19,080 --> 00:32:24,840
+this Okay
+
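
The discounting idea described above is also easy to sketch. This is an illustration of plain absolute discounting under made-up names, not the lecture's code; full Kneser-Ney would additionally replace `unigram_probs` with the continuation-count distribution just discussed:

```python
def discounted_prob(word, context, bigram_counts, context_counts, unigram_probs, d=0.5):
    """Absolute discounting: subtract d from every observed count and
    hand the freed probability mass to the lower-order distribution."""
    c_h = context_counts.get(context, 0)
    if c_h == 0:
        return unigram_probs.get(word, 0.0)
    c_hw = bigram_counts.get((context, word), 0)
    # number of distinct words ever observed after this context
    n_types = sum(1 for (h, _w) in bigram_counts if h == context)
    lam = d * n_types / c_h            # probability mass freed by discounting
    p_high = max(c_hw - d, 0) / c_h
    return p_high + lam * unigram_probs.get(word, 0.0)
```

The discounted terms plus the `lam`-weighted fallback still sum to one over the vocabulary, which is what makes this a valid probability distribution.
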
+732
+00:32:21,159 --> 00:32:27,840
+cool so there's a lot of problems that
+
+733
+00:32:24,840 --> 00:32:30,760
+we have to deal with um when we're
+
+734
+00:32:27,840 --> 00:32:32,600
+creating n-gram models and that actually
+
+735
+00:32:30,760 --> 00:32:35,279
+kind of motivated the reason why we
+
+736
+00:32:32,600 --> 00:32:36,639
+moved to neural language models the
+
+737
+00:32:35,279 --> 00:32:38,720
+first one is similar to what I talked
+
+738
+00:32:36,639 --> 00:32:40,519
+about last time with text classification
+
+739
+00:32:38,720 --> 00:32:42,600
+um that they can't share strength among
+
+740
+00:32:40,519 --> 00:32:45,159
+similar words like bought and
+
+741
+00:32:42,600 --> 00:32:46,919
+purchased um another thing is that they
+
+742
+00:32:45,159 --> 00:32:49,440
+can't easily condition on context with
+
+743
+00:32:46,919 --> 00:32:51,240
+intervening words so n-gram models if
+
+744
+00:32:49,440 --> 00:32:52,799
+you have a rare word in your context
+
+745
+00:32:51,240 --> 00:32:54,320
+immediately start falling back to the
+
+746
+00:32:52,799 --> 00:32:56,799
+unigram distribution and they end up
+
+747
+00:32:54,320 --> 00:32:58,720
+being very bad so uh that was another
+
+748
+00:32:56,799 --> 00:33:01,000
+issue
+
+749
+00:32:58,720 --> 00:33:04,760
+and they couldn't handle long distance
+
+750
+00:33:01,000 --> 00:33:09,080
+um dependencies so if this was beyond
+
+751
+00:33:04,760 --> 00:33:10,559
+the n-gram context that they would uh be
+
+752
+00:33:09,080 --> 00:33:14,320
+handling then you wouldn't be able to
+
+753
+00:33:10,559 --> 00:33:15,840
+manage this so actually before neural
+
+754
+00:33:14,320 --> 00:33:18,000
+language models became a really big
+
+755
+00:33:15,840 --> 00:33:19,960
+thing uh people came up with a bunch of
+
+756
+00:33:18,000 --> 00:33:22,760
+individual solutions for this in order
+
+757
+00:33:19,960 --> 00:33:24,440
+to solve the problems but actually it
+
+758
+00:33:22,760 --> 00:33:26,679
+wasn't that these solutions didn't work
+
+759
+00:33:24,440 --> 00:33:29,159
+at all it was just that engineering all
+
+760
+00:33:26,679 --> 00:33:30,519
+of them together was so hard that nobody
+
+761
+00:33:29,159 --> 00:33:32,120
+actually ever did that and so they
+
+762
+00:33:30,519 --> 00:33:35,120
+relied on just n-gram models out of the
+
+763
+00:33:32,120 --> 00:33:37,600
+box and that wasn't scalable so it's
+
+764
+00:33:35,120 --> 00:33:39,279
+kind of a funny example of how like
+
+765
+00:33:37,600 --> 00:33:42,000
+actually neural networks despite all the
+
+766
+00:33:39,279 --> 00:33:43,559
+pain that they cause in some areas are a
+
+767
+00:33:42,000 --> 00:33:47,120
+much better engineering solution to
+
+768
+00:33:43,559 --> 00:33:51,279
+solve all the issues that previous
+
+769
+00:33:47,120 --> 00:33:53,159
+methods had cool um so compared to n-
+
+770
+00:33:51,279 --> 00:33:54,799
+gram models neural language models
+
+771
+00:33:53,159 --> 00:33:56,559
+achieve better performance but n-gram
+
+772
+00:33:54,799 --> 00:33:58,440
+models are very very fast to estimate
+
+773
+00:33:56,559 --> 00:33:59,880
+and apply you can even
estimate them
+completely in
+
+774
+00:33:59,880 --> 00:34:07,720
+parallel um n-gram models also I I don't
+
+775
+00:34:04,399 --> 00:34:10,399
+know if this is necessarily
+
+776
+00:34:07,720 --> 00:34:13,200
+a thing that
+
+777
+00:34:10,399 --> 00:34:15,079
+gives you a reason to use n-gram language
+
+778
+00:34:13,200 --> 00:34:17,720
+models but it is a reason to think a
+
+779
+00:34:15,079 --> 00:34:20,320
+little bit critically about uh neural
+
+780
+00:34:17,720 --> 00:34:22,720
+language models which is neural language
+
+781
+00:34:20,320 --> 00:34:24,320
+models actually can be worse than n-gram
+
+782
+00:34:22,720 --> 00:34:26,679
+language models at modeling very low
+
+783
+00:34:24,320 --> 00:34:28,480
+frequency phenomena so n-gram language
+
+784
+00:34:26,679 --> 00:34:29,960
+models can learn from a single example
+
+785
+00:34:28,480 --> 00:34:32,119
+they only need a single example of
+
+786
+00:34:29,960 --> 00:34:36,879
+anything before the probability of that
+
+787
+00:34:32,119 --> 00:34:38,639
+continuation goes up very high um and uh
+
+788
+00:34:36,879 --> 00:34:41,359
+but neural language models actually can
+
+789
+00:34:38,639 --> 00:34:43,599
+forget or not memorize uh appropriately
+
+790
+00:34:41,359 --> 00:34:46,280
+from single examples so they can be
+
+791
+00:34:43,599 --> 00:34:48,040
+better at that um there's a toolkit the
+
+792
+00:34:46,280 --> 00:34:49,919
+standard toolkit for estimating n-gram
+
+793
+00:34:48,040 --> 00:34:54,359
+language models is called KenLM it's kind
+
+794
+00:34:49,919 --> 00:34:57,599
+of frighteningly fast um and so people
+
+795
+00:34:54,359 --> 00:35:00,400
+have been uh saying like I've seen some
+
+796
+00:34:57,599 --> 00:35:01,599
+jokes which are like job postings that
+
+797
+00:35:00,400 --> 00:35:04,040
+say people who have been working on
+
+798
+00:35:01,599 --> 00:35:05,880
+large language models uh for we want
+
+799
+00:35:04,040 --> 00:35:07,359
+people who have been 10 years of
+
+800
+00:35:05,880 --> 00:35:09,240
+experience working on large language
+
+801
+00:35:07,359 --> 00:35:11,960
+models or something like that and a lot
+
+802
+00:35:09,240 --> 00:35:13,440
+of people are saying wait nobody has 10
+
+803
+00:35:11,960 --> 00:35:16,400
+years of experience working on large
+
+804
+00:35:13,440 --> 00:35:18,160
+language models well Kenneth Heafield who
+
+805
+00:35:16,400 --> 00:35:19,440
+created KenLM does have 10 years of
+
+806
+00:35:18,160 --> 00:35:22,800
+experience working on large language
+
+807
+00:35:19,440 --> 00:35:24,599
+models because he was estimating uh
+
+808
+00:35:22,800 --> 00:35:27,720
+seven-gram
+
+809
+00:35:24,599 --> 00:35:30,320
+models um seven-gram models with a
+
+810
+00:35:27,720 --> 00:35:35,040
+vocabulary of let's say
+
+811
+00:35:30,320 --> 00:35:37,720
+100,000 on um you know web text so how
+
+812
+00:35:35,040 --> 00:35:41,119
+many parameters is that that's more than
+
+813
+00:35:37,720 --> 00:35:44,320
+any you know large neural language model
+
+814
+00:35:41,119 --> 00:35:45,640
+that we have nowadays so um they they
+
+815
+00:35:44,320 --> 00:35:47,520
+have a lot of these parameters are
+
+816
+00:35:45,640 --> 00:35:49,400
+sparse they're zero counts so obviously
+
+817
+00:35:47,520 --> 00:35:52,160
+you don't uh you don't memorize all of
+
+818
+00:35:49,400 --> 00:35:55,040
+them but uh
+
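
To put that parameter count in perspective, a quick back-of-the-envelope (my arithmetic, not a figure from the lecture): a seven-gram model over a 100,000-word vocabulary has in principle one parameter per possible seven-gram, i.e. up to 100,000⁷ = 10³⁵ of them, vastly more than the roughly 10¹² weights of the largest neural language models today — though as the lecture notes, almost all of those n-gram counts are zero and are never actually stored.
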
+819
+00:35:52,160 --> 00:35:57,800
+yeah cool um another thing that maybe I
+
+820
+00:35:55,040 --> 00:35:59,359
+should mention like so this doesn't
+sound completely outdated there was a
+
+822
+00:35:59,359 --> 00:36:05,400
+really good paper
+
+823
+00:36:01,960 --> 00:36:08,400
+recently that used the fact that n-grams
+
+824
+00:36:05,400 --> 00:36:08,400
+are
+
+825
+00:36:11,079 --> 00:36:17,319
+so it uses the fact that n-gram models are so
+
+826
+00:36:14,280 --> 00:36:18,960
+scalable it's this paper um it's called
+
+827
+00:36:17,319 --> 00:36:21,079
+Data Selection for Language Models via
+
+828
+00:36:18,960 --> 00:36:22,359
+Importance Resampling and one
+
+829
+00:36:21,079 --> 00:36:24,359
+interesting thing that they do in this
+
+830
+00:36:22,359 --> 00:36:28,920
+paper is that they don't
+
+831
+00:36:24,359 --> 00:36:31,560
+actually um they don't
+
+832
+00:36:28,920 --> 00:36:32,800
+actually use neural models in any way
+
+833
+00:36:31,560 --> 00:36:34,920
+despite the fact that they use the
+
+834
+00:36:32,800 --> 00:36:36,880
+downstream data that they sample in
+
+835
+00:36:34,920 --> 00:36:41,319
+order to calculate neural models but
+
+836
+00:36:36,880 --> 00:36:42,880
+they run n-gram models over um over lots
+
+837
+00:36:41,319 --> 00:36:47,359
+and lots of data and then they fit a
+
+838
+00:36:42,880 --> 00:36:50,000
+gaussian distribution to the n-gram model
+
+839
+00:36:47,359 --> 00:36:51,520
+counts basically uh in order to select
+
+840
+00:36:50,000 --> 00:36:53,040
+the data and the reason why they do this
+
+841
+00:36:51,520 --> 00:36:55,280
+is they want to do this over the entire
+
+842
+00:36:53,040 --> 00:36:56,760
+web and running a neural model over the
+
+843
+00:36:55,280 --> 00:36:58,920
+entire web would be too expensive so
+
+844
+00:36:56,760 --> 00:37:00,319
+they use n-gram models instead so that's
+
+845
+00:36:58,920 --> 00:37:02,359
+just an example of something in the
+
+846
+00:37:00,319 --> 00:37:04,920
+modern context where keeping this in
+
+847
+00:37:02,359 --> 00:37:04,920
+mind is a good
+
+848
+00:37:08,200 --> 00:37:14,000
+idea okay I'd like to move to the next
+
+849
+00:37:10,960 --> 00:37:15,319
+part so uh language model evaluation uh
+
+850
+00:37:14,000 --> 00:37:17,200
+this is important to know I'm not going
+
+851
+00:37:15,319 --> 00:37:19,079
+to talk about language model evaluation
+
+852
+00:37:17,200 --> 00:37:20,599
+on other tasks I'm only going to talk
+
+853
+00:37:19,079 --> 00:37:23,800
+right now about language model
+
+854
+00:37:20,599 --> 00:37:26,280
+evaluation on the task of language
+
+855
+00:37:23,800 --> 00:37:29,079
+modeling and there's a number of metrics
+
+856
+00:37:26,280 --> 00:37:30,680
+that we use for the task of language
+
+857
+00:37:29,079 --> 00:37:32,720
+modeling evaluating language models on
+
+858
+00:37:30,680 --> 00:37:35,560
+the task of language modeling the first
+
+859
+00:37:32,720 --> 00:37:38,480
+one is log likelihood and basically uh
+
+860
+00:37:35,560 --> 00:37:40,160
+the way we calculate log likelihood is
+
+861
+00:37:38,480 --> 00:37:41,640
+uh sorry there's an extra parenthesis
+
+862
+00:37:40,160 --> 00:37:45,480
+here but the way we calculate log
+
+863
+00:37:41,640 --> 00:37:47,160
+likelihood is we get a test set that
+
+864
+00:37:45,480 --> 00:37:50,400
+ideally has not been included in our
+
+865
+00:37:47,160 --> 00:37:52,520
+training data and we take all of the
+
+866
+00:37:50,400 --> 00:37:54,200
+documents or sentences in the test set
+
+867
+00:37:52,520 --> 00:37:57,040
+we calculate the log probability of
all + +869 +00:37:54,200 --> 00:37:59,520 +of them uh we don't actually use this + +870 +00:37:57,040 --> 00:38:02,640 +super broadly to evaluate models and the + +871 +00:37:59,520 --> 00:38:04,200 +reason why is because this number is + +872 +00:38:02,640 --> 00:38:05,720 +very dependent on the size of the data + +873 +00:38:04,200 --> 00:38:07,119 +set so if you have a larger data set + +874 +00:38:05,720 --> 00:38:08,720 +this number will be larger if you have a + +875 +00:38:07,119 --> 00:38:10,960 +smaller data set this number will be + +876 +00:38:08,720 --> 00:38:14,040 +smaller so the more common thing to do + +877 +00:38:10,960 --> 00:38:15,839 +is per word uh log likelihood and per + +878 +00:38:14,040 --> 00:38:19,800 +word log likelihood is basically + +879 +00:38:15,839 --> 00:38:22,760 +dividing the um dividing the log + +880 +00:38:19,800 --> 00:38:25,520 +probability of the entire corpus with uh + +881 +00:38:22,760 --> 00:38:28,359 +the number of words that you have in the + +882 +00:38:25,520 --> 00:38:31,000 +corpus + +883 +00:38:28,359 --> 00:38:34,599 +um it's also common for papers to report + +884 +00:38:31,000 --> 00:38:36,359 +negative log likelihood uh where because + +885 +00:38:34,599 --> 00:38:37,800 +that's used as a loss and there lower is + +886 +00:38:36,359 --> 00:38:40,440 +better so you just need to be careful + +887 +00:38:37,800 --> 00:38:42,560 +about which one is being + +888 +00:38:40,440 --> 00:38:43,880 +reported so this is pretty common I + +889 +00:38:42,560 --> 00:38:45,400 +think most people are are somewhat + +890 +00:38:43,880 --> 00:38:49,040 +familiar with + +891 +00:38:45,400 --> 00:38:49,800 +this another thing that you might see is + +892 +00:38:49,040 --> 00:38:53,079 +uh + +893 +00:38:49,800 --> 00:38:55,000 +entropy and uh specifically this is + +894 +00:38:53,079 --> 00:38:57,319 +often called cross entropy because + +895 +00:38:55,000 --> 00:38:59,880 +you're calculating + +896 +00:38:57,319 --> 00:39:01,599 +the you're estimating the model on a + +897 +00:38:59,880 --> 00:39:05,079 +training data set and then evaluating it + +898 +00:39:01,599 --> 00:39:08,400 +on a separate data set uh so uh on the + +899 +00:39:05,079 --> 00:39:12,200 +test data set and this is calcul often + +900 +00:39:08,400 --> 00:39:14,640 +or usually calculated as log 2 um of the + +901 +00:39:12,200 --> 00:39:17,119 +probability divided by the number of + +902 +00:39:14,640 --> 00:39:18,760 +words or units in the Corpus does anyone + +903 +00:39:17,119 --> 00:39:23,839 +know why this is log + +904 +00:39:18,760 --> 00:39:23,839 +two as opposed to a normal uh + +905 +00:39:25,440 --> 00:39:31,319 +log + +906 +00:39:28,440 --> 00:39:31,319 +anyone yeah + +907 +00:39:33,119 --> 00:39:38,720 +so yeah so it's calculating as bits um + +908 +00:39:36,760 --> 00:39:43,160 +and this is kind of + +909 +00:39:38,720 --> 00:39:45,240 +a um this is kind of a historical thing + +910 +00:39:43,160 --> 00:39:47,119 +and it's not super super important for + +911 +00:39:45,240 --> 00:39:51,800 +language models but it's actually pretty + +912 +00:39:47,119 --> 00:39:54,599 +interesting uh to to think about and so + +913 +00:39:51,800 --> 00:39:57,480 +actually any probabilistic distribution + +914 +00:39:54,599 --> 00:40:00,040 +can also be used for data compression + +915 +00:39:57,480 --> 00:40:03,319 +um and so you know when you're running a + +916 +00:40:00,040 --> 00:40:05,000 +zip file or you're running gzip or bz2 + +917 +00:40:03,319 --> 00:40:07,359 +or something 
like that uh you're + +918 +00:40:05,000 --> 00:40:09,240 +compressing a file into a smaller file + +919 +00:40:07,359 --> 00:40:12,000 +and any language model can also be used + +920 +00:40:09,240 --> 00:40:15,280 +to compress a SM file into a smaller + +921 +00:40:12,000 --> 00:40:17,119 +file um and so the way it does this is + +922 +00:40:15,280 --> 00:40:19,200 +if you have more likely + +923 +00:40:17,119 --> 00:40:20,960 +sequences uh for example more likely + +924 +00:40:19,200 --> 00:40:25,079 +sentences or more likely documents you + +925 +00:40:20,960 --> 00:40:26,920 +can press them into a a shorter uh + +926 +00:40:25,079 --> 00:40:29,440 +output and + +927 +00:40:26,920 --> 00:40:29,440 +kind of + +928 +00:40:29,640 --> 00:40:33,800 +the + +929 +00:40:31,480 --> 00:40:35,720 +ideal I I think it's pretty safe to say + +930 +00:40:33,800 --> 00:40:37,920 +ideal because I think you can't get a + +931 +00:40:35,720 --> 00:40:42,920 +better method for compression than this + +932 +00:40:37,920 --> 00:40:45,000 +uh if I unless I'm uh you know not well + +933 +00:40:42,920 --> 00:40:46,800 +versed enough in information Theory but + +934 +00:40:45,000 --> 00:40:49,240 +I I think this is basically the ideal + +935 +00:40:46,800 --> 00:40:51,960 +method for data compression and the way + +936 +00:40:49,240 --> 00:40:54,640 +it works is um I have a figure up here + +937 +00:40:51,960 --> 00:40:58,800 +but I'd like to recreate it here which + +938 +00:40:54,640 --> 00:41:02,640 +is let's say we have a vocabulary of + +939 +00:40:58,800 --> 00:41:07,200 +a um which has + +940 +00:41:02,640 --> 00:41:08,800 +50% and then we have a vocabulary uh B + +941 +00:41:07,200 --> 00:41:11,560 +which is + +942 +00:41:08,800 --> 00:41:14,040 +33% and a vocabulary + +943 +00:41:11,560 --> 00:41:18,520 +C + +944 +00:41:14,040 --> 00:41:18,520 +uh yeah C which is about + +945 +00:41:18,640 --> 00:41:25,640 +17% and so if you have a single token + +946 +00:41:22,960 --> 00:41:26,839 +sequence um if you have a single token + +947 +00:41:25,640 --> 00:41:30,880 +sequence + +948 +00:41:26,839 --> 00:41:30,880 +what you do is you can + +949 +00:41:31,319 --> 00:41:38,800 +see divide this into zero and one so if + +950 +00:41:36,400 --> 00:41:40,680 +your single token sequence is a you can + +951 +00:41:38,800 --> 00:41:42,760 +just put zero and you'll be done + +952 +00:41:40,680 --> 00:41:46,800 +encoding it if your single token + +953 +00:41:42,760 --> 00:41:51,920 +sequence is B + +954 +00:41:46,800 --> 00:41:56,520 +then um one overlaps with b and c so now + +955 +00:41:51,920 --> 00:42:00,920 +you need to further split this up into + +956 +00:41:56,520 --> 00:42:00,920 +uh o and one and you can see + +957 +00:42:04,880 --> 00:42:11,440 +that let make sure I did that right yeah + +958 +00:42:08,359 --> 00:42:11,440 +you can you can see + +959 +00:42:15,599 --> 00:42:25,720 +that one zero is entirely encompassed by + +960 +00:42:19,680 --> 00:42:29,200 +uh by B so now B is one Z and C uh C is + +961 +00:42:25,720 --> 00:42:32,359 +not L encompassed by that so you would + +962 +00:42:29,200 --> 00:42:39,240 +need to further break this up and say + +963 +00:42:32,359 --> 00:42:41,880 +it's Z one here and now one one + +964 +00:42:39,240 --> 00:42:45,520 +one is encompassed by this so you would + +965 +00:42:41,880 --> 00:42:48,680 +get uh you would get C if it was 111 and + +966 +00:42:45,520 --> 00:42:51,119 +so every every sequence that started + +967 +00:42:48,680 --> 00:42:53,000 +with zero would start 
out with a every + +968 +00:42:51,119 --> 00:42:54,960 +sequence that started out with one zero + +969 +00:42:53,000 --> 00:42:57,200 +would start with b and every sequence + +970 +00:42:54,960 --> 00:43:02,079 +that started with 11 one1 + +971 +00:42:57,200 --> 00:43:04,920 +start um and so then you can look at the + +972 +00:43:02,079 --> 00:43:06,960 +next word and let's say we're using a + +973 +00:43:04,920 --> 00:43:09,839 +unigram model if we're using a unigram + +974 +00:43:06,960 --> 00:43:12,960 +model for the next uh the next token + +975 +00:43:09,839 --> 00:43:18,200 +let's say the next token is C + +976 +00:43:12,960 --> 00:43:23,640 +so now the next token being C we already + +977 +00:43:18,200 --> 00:43:27,920 +have B and now we take we subdivide + +978 +00:43:23,640 --> 00:43:33,040 +B into + +979 +00:43:27,920 --> 00:43:35,720 +a BC ba a BB and BC and then we find the + +980 +00:43:33,040 --> 00:43:40,720 +next binary sequence that is entirely + +981 +00:43:35,720 --> 00:43:44,000 +encompassed by uh BC by this like + +982 +00:43:40,720 --> 00:43:45,359 +interval and so the moment we find a a + +983 +00:43:44,000 --> 00:43:48,520 +binary sequence that's entirely + +984 +00:43:45,359 --> 00:43:50,599 +encompassed by the interval uh then that + +985 +00:43:48,520 --> 00:43:53,400 +is the the sequence that we can use to + +986 +00:43:50,599 --> 00:43:54,640 +represent that SC and so um if you're + +987 +00:43:53,400 --> 00:43:56,520 +interested in this you can look up the + +988 +00:43:54,640 --> 00:44:00,400 +arithmetic coding on on wikip it's + +989 +00:43:56,520 --> 00:44:02,079 +pretty fascinating but basically um here + +990 +00:44:00,400 --> 00:44:04,040 +this is showing the example of the + +991 +00:44:02,079 --> 00:44:07,160 +unigram model where the probabilities + +992 +00:44:04,040 --> 00:44:10,240 +don't change based on the context but + +993 +00:44:07,160 --> 00:44:13,000 +what if we knew that + +994 +00:44:10,240 --> 00:44:15,599 +c had a really high probability of + +995 +00:44:13,000 --> 00:44:22,160 +following B so if that's the case now we + +996 +00:44:15,599 --> 00:44:24,559 +have like a a b c here um like based on + +997 +00:44:22,160 --> 00:44:25,880 +our our byr model or neural language + +998 +00:44:24,559 --> 00:44:29,319 +model or something like that so now this + +999 +00:44:25,880 --> 00:44:31,240 +is interval is much much larger so it's + +1000 +00:44:29,319 --> 00:44:35,079 +much more likely to entirely Encompass a + +1001 +00:44:31,240 --> 00:44:39,720 +shorter string and because of that the + +1002 +00:44:35,079 --> 00:44:42,440 +um the output can be much shorter and so + +1003 +00:44:39,720 --> 00:44:45,760 +if you use this arithmetic encoding um + +1004 +00:44:42,440 --> 00:44:49,440 +over a very long sequence of outputs + +1005 +00:44:45,760 --> 00:44:52,440 +your the length of the sequence that is + +1006 +00:44:49,440 --> 00:44:56,000 +needed to encode this uh this particular + +1007 +00:44:52,440 --> 00:45:00,359 +output is going to be essentially um the + +1008 +00:44:56,000 --> 00:45:03,319 +number of bits according to times the + +1009 +00:45:00,359 --> 00:45:06,480 +times the sequence so this is very + +1010 +00:45:03,319 --> 00:45:10,000 +directly connected to like compression + +1011 +00:45:06,480 --> 00:45:13,160 +and information Theory and stuff like + +1012 +00:45:10,000 --> 00:45:15,359 +that so that that's where entropy comes + +1013 +00:45:13,160 --> 00:45:17,680 +from uh are are there any questions + +1014 +00:45:15,359 --> 
00:45:17,680
+about
+
+1015
+00:45:19,319 --> 00:45:22,319
+this
+
+1016
+00:45:24,880 --> 00:45:28,119
+yeah
+
+1017
+00:45:26,800 --> 00:45:31,880
+uh for
+
+1018
+00:45:28,119 --> 00:45:34,319
+c um so
+
+1019
+00:45:31,880 --> 00:45:36,599
+111 is
+
+1020
+00:45:34,319 --> 00:45:37,920
+because let me let me see if I can do
+
+1021
+00:45:36,599 --> 00:45:40,559
+this
+
+1022
+00:45:37,920 --> 00:45:44,240
+again
+
+1023
+00:45:40,559 --> 00:45:44,240
+so I had one
+
+1024
+00:45:46,079 --> 00:45:54,520
+one so here this interval is
+
+1025
+00:45:50,920 --> 00:45:56,839
+one this interval is one one this
+
+1026
+00:45:54,520 --> 00:46:00,079
+interval is 111
+
+1027
+00:45:56,839 --> 00:46:03,520
+and 111 is the first interval that is
+
+1028
+00:46:00,079 --> 00:46:05,520
+entirely overlapping with with c um and
+
+1029
+00:46:03,520 --> 00:46:08,760
+it's not one zero because one one zero is
+
+1030
+00:46:05,520 --> 00:46:08,760
+overlapping with b and
+
+1031
+00:46:09,960 --> 00:46:13,599
+c so which
+
+1032
+00:46:14,280 --> 00:46:21,720
+case so which case one
+
+1033
+00:46:20,160 --> 00:46:24,800
+zero
+
+1034
+00:46:21,720 --> 00:46:26,319
+one one one
+
+1035
+00:46:24,800 --> 00:46:30,800
+zero
+
+1036
+00:46:26,319 --> 00:46:30,800
+when would you use 110 to represent
+
+1037
+00:46:32,119 --> 00:46:38,839
+something it's a good question I guess
+
+1038
+00:46:36,119 --> 00:46:40,599
+maybe you wouldn't which seems a little
+
+1039
+00:46:38,839 --> 00:46:43,280
+bit wasteful
+
+1040
+00:46:40,599 --> 00:46:46,160
+so let me let me think about that I
+
+1041
+00:46:43,280 --> 00:46:49,920
+think um it might be the case that you
+
+1042
+00:46:46,160 --> 00:46:52,319
+just don't use it um
+
+1043
+00:46:49,920 --> 00:46:53,559
+but yeah I'll try to think about that a
+
+1044
+00:46:52,319 --> 00:46:55,920
+little bit more because it seems like
+
+1045
+00:46:53,559 --> 00:46:59,200
+you should use every bit string right so
+
+1046
+00:46:55,920 --> 00:47:01,559
+um yeah if anybody uh has has the answer
+
+1047
+00:46:59,200 --> 00:47:05,160
+I'd be happy to hear it otherwise I take
+
+1048
+00:47:01,559 --> 00:47:07,079
+you cool um so next thing is perplexity
+
+1049
+00:47:05,160 --> 00:47:10,640
+so this is another one that you see
+
+1050
+00:47:07,079 --> 00:47:13,240
+commonly and um so perplexity is
+
+1051
+00:47:10,640 --> 00:47:16,880
+basically two to the entropy uh two to the
+
+1052
+00:47:13,240 --> 00:47:20,760
+per word entropy or e to the uh negative
+
+1053
+00:47:16,880 --> 00:47:24,880
+word level log likelihood in log space
+
+1054
+00:47:20,760 --> 00:47:28,240
+um and so for this uh lower tends to be
+
+1055
+00:47:24,880 --> 00:47:32,559
+better I'd like to do a little exercise
+
+1056
+00:47:28,240 --> 00:47:34,599
+to see uh if this works so like let's
+
+1057
+00:47:32,559 --> 00:47:39,079
+say we have one a dog sees a squirrel it
+
+1058
+00:47:34,599 --> 00:47:40,960
+will usually um and can anyone guess the
+
+1059
+00:47:39,079 --> 00:47:43,480
+next word just yell it
+
+1060
+00:47:40,960 --> 00:47:46,400
+out bark
+
+1061
+00:47:43,480 --> 00:47:47,400
+okay uh what about that what about
+
+1062
+00:47:46,400 --> 00:47:50,400
+something
+
+1063
+00:47:47,400 --> 00:47:50,400
+else
+
+1064
+00:47:52,640 --> 00:47:57,520
+chase run
+
+1065
+00:47:54,720 --> 00:48:00,800
+run
+
+1066
+00:47:57,520 --> 00:48:00,800
+okay jump
+
+1067
+00:48:01,960 --> 00:48:05,280
+jump anything
+
+1068
+00:48:07,000 --> 00:48:10,400
+else any other
+
+1069
+00:48:11,280 --> 00:48:16,960
+ones so basically what this shows is
+
+1070
+00:48:13,640 --> 00:48:16,960
+humans are really bad language
+
+1071
+00:48:17,160 --> 00:48:24,079
+models so uh interestingly every single
+
+1072
+00:48:21,520 --> 00:48:26,559
+one of the words you predicted here is a
+
+1073
+00:48:24,079 --> 00:48:32,240
+uh a regular verb
+
+1074
+00:48:26,559 --> 00:48:35,200
+um but the natural language model GPT-2 uh
+
+1075
+00:48:32,240 --> 00:48:38,079
+the first thing it predicts is be uh
+
+1076
+00:48:35,200 --> 00:48:40,440
+which is kind of like the copula there's
+
+1077
+00:48:38,079 --> 00:48:43,400
+also start and that will be like start
+
+1078
+00:48:40,440 --> 00:48:44,880
+running start something um and humans
+
+1079
+00:48:43,400 --> 00:48:46,400
+actually are really bad at doing this
+
+1080
+00:48:44,880 --> 00:48:49,079
+are really bad at predicting next words
+
+1081
+00:48:46,400 --> 00:48:51,760
+we're not trained that way um and so uh
+
+1082
+00:48:49,079 --> 00:48:54,319
+we end up having these biases but anyway
+
+1083
+00:48:51,760 --> 00:48:55,799
+um the reason why I did this quiz was
+
+1084
+00:48:54,319 --> 00:48:57,280
+because that's essentially what
+
+1085
+00:48:55,799 --> 00:49:01,160
+perplexity
+
+1086
+00:48:57,280 --> 00:49:02,680
+means um and what what perplexity is is
+
+1087
+00:49:01,160 --> 00:49:04,559
+it's the number of times you'd have to
+
+1088
+00:49:02,680 --> 00:49:07,000
+sample from the probability distribution
+
+1089
+00:49:04,559 --> 00:49:09,200
+before you get the answer right so you
+
+1090
+00:49:07,000 --> 00:49:11,160
+were a little bit biased here because we
+
+1091
+00:49:09,200 --> 00:49:13,359
+were doing sampling without replacement
+
+1092
+00:49:11,160 --> 00:49:15,480
+so like nobody was actually picking a
+
+1093
+00:49:13,359 --> 00:49:17,000
+word that had already been said but it's
+
+1094
+00:49:15,480 --> 00:49:18,319
+essentially like if you guessed over and
+
+1095
+00:49:17,000 --> 00:49:20,839
+over and over again how many times would
+
+1096
+00:49:18,319 --> 00:49:22,720
+you need until you get it right and so
+
+1097
+00:49:20,839 --> 00:49:25,119
+here like if the actual answer was start
+
+1098
+00:49:22,720 --> 00:49:27,480
+the perplexity would be 4.66 so we'd
+
+1099
+00:49:25,119 --> 00:49:30,240
+expect a language model to get it in uh
+
+1100
+00:49:27,480 --> 00:49:34,400
+four guesses uh between four and five
+
+1101
+00:49:30,240 --> 00:49:38,559
+guesses and you guys all did six so you
+
+1102
+00:49:34,400 --> 00:49:41,599
+lose um so uh another important thing to
+
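
Before the point about fair comparison, the perplexity computation just described fits in a few lines. A sketch with made-up names (`perplexity`, the toy log-probabilities), assuming natural-log token probabilities:

```python
import math

def perplexity(log_probs):
    """Per-word perplexity: PPL = exp(-(1/N) * sum(log p_i)). Lower is better.
    A perplexity of k means the model is as uncertain as a uniform
    choice among k words -- the 'how many guesses' intuition above."""
    n = len(log_probs)
    return math.exp(-sum(log_probs) / n)

print(perplexity([math.log(0.5)] * 4))                      # 2.0
print(perplexity([math.log(0.5)] * 4 + [math.log(0.01)]))   # ~4.37, one surprise hurts
```
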
+1103
+00:49:38,559 --> 00:49:42,799
+mention is evaluation and vocabulary uh
+
+1104
+00:49:41,599 --> 00:49:44,880
+so for fair
+
+1105
+00:49:42,799 --> 00:49:47,319
+comparison um make sure that the
+
+1106
+00:49:44,880 --> 00:49:49,559
+denominator is the same so uh if you're
+
+1107
+00:49:47,319 --> 00:49:51,559
+calculating the perplexity make sure
+
+1108
+00:49:49,559 --> 00:49:53,359
+that you're dividing by the same number
+
+1109
+00:49:51,559 --> 00:49:55,799
+uh every time you're dividing by words
+
+1110
+00:49:53,359 --> 00:49:58,520
+if it's uh the other paper or whatever
+
+1111
+00:49:55,799 --> 00:50:00,680
+is dividing by words or like let's say
+
+1112
+00:49:58,520 --> 00:50:02,160
+you're comparing llama to GPT-2 they have
+
+1113
+00:50:00,680 --> 00:50:04,880
+different tokenizers so they'll have
+
+1114
+00:50:02,160 --> 00:50:07,040
+different numbers of tokens so comparing
+
+1115
+00:50:04,880 --> 00:50:10,880
+uh with different denominators is not uh
+
+1116
+00:50:07,040 --> 00:50:12,440
+not fair um if you're allowing unknown
+
+1117
+00:50:10,880 --> 00:50:14,559
+words or characters so if you allow the
+
+1118
+00:50:12,440 --> 00:50:17,640
+model to not predict
+
+1119
+00:50:14,559 --> 00:50:19,119
+any token then you need to be fair about
+
+1120
+00:50:17,640 --> 00:50:22,040
+that
+
+1121
+00:50:19,119 --> 00:50:25,160
+too um so I'd like to go into a few
+
+1122
+00:50:22,040 --> 00:50:27,960
+alternatives these are very similar to
+
+1123
+00:50:25,160 --> 00:50:29,400
+the neural network classifiers and bag of words
+
+1124
+00:50:27,960 --> 00:50:30,680
+classifiers that I talked about before
+
+1125
+00:50:29,400 --> 00:50:32,480
+so I'm going to go through them rather
+
+1126
+00:50:30,680 --> 00:50:35,480
+quickly because I think you should get
+
+1127
+00:50:32,480 --> 00:50:38,119
+the basic idea but basically the
+
+1128
+00:50:35,480 --> 00:50:40,000
+alternative is uh featurized models so we
+
+1129
+00:50:38,119 --> 00:50:42,559
+uh we can treat count-based
+
+1130
+00:50:40,000 --> 00:50:44,599
+models as featurized models so we calculate
+
+1131
+00:50:42,559 --> 00:50:46,880
+features of the context and based on the
+
+1132
+00:50:44,599 --> 00:50:48,280
+features calculate probabilities
+
+1133
+00:50:46,880 --> 00:50:50,480
+optimize the feature weights using
+
+1134
+00:50:48,280 --> 00:50:53,839
+gradient descent uh
+
+1135
+00:50:50,480 --> 00:50:56,119
+etc and so for example if we have uh
+
+1136
+00:50:53,839 --> 00:50:58,880
+input giving a
+
+1137
+00:50:56,119 --> 00:51:02,960
+uh we calculate features so um we might
+
+1138
+00:50:58,880 --> 00:51:05,400
+look up uh the word identity of the two
+
+1139
+00:51:02,960 --> 00:51:08,240
+previous words look up the word identity
+
+1140
+00:51:05,400 --> 00:51:11,000
+of the word uh directly previous add a
+
+1141
+00:51:08,240 --> 00:51:13,480
+bias add them all together get scores
+
+1142
+00:51:11,000 --> 00:51:14,960
+and calculate probabilities where each
+
+1143
+00:51:13,480 --> 00:51:16,920
+vector is the size of the output
+
+1144
+00:51:14,960 --> 00:51:19,680
+vocabulary and feature weights are
+
+1145
+00:51:16,920 --> 00:51:21,799
+optimized using SGD so this is basically
+
+1146
+00:51:19,680 --> 00:51:24,240
+a bag of words classifier but it's a
+
+1147
+00:51:21,799 --> 00:51:27,200
+multiclass bag of words classifier over
+
+1148
+00:51:24,240 --> 00:51:28,960
+the next token so it's very similar to
+
+1149
+00:51:27,200 --> 00:51:30,839
+our classification task before except
+
+1150
+00:51:28,960 --> 00:51:33,160
+now instead of having two classes we
+
+1151
+00:51:30,839 --> 00:51:36,280
+have you know 10,000 classes or 100,000
+
+1152
+00:51:33,160 --> 00:51:38,480
+classes oh yeah sorry very quick aside
+
+1153
+00:51:36,280 --> 00:51:40,280
+um these were actually invented by Roni
+
+1154
+00:51:38,480 --> 00:51:41,440
+Rosenfeld who's the head of the machine
+
+1155
+00:51:40,280 --> 00:51:45,119
+learning department uh the
+
+1156
+00:51:41,440 --> 00:51:47,799
+machine learning department uh so um 27
+
+1157
+00:51:45,119 --> 00:51:50,760
+years ago I guess so he has even more
+
+1158
+00:51:47,799 --> 00:51:52,680
+experience in large language modeling than
+
+1159
+00:51:50,760 --> 00:51:55,880
+um
+
+1160
+00:51:52,680 --> 00:51:58,599
+cool so um the one difference with a bag
+
+1161
+00:51:55,880 --> 00:52:02,119
+of words classifier is
+
+1162
+00:51:58,599 --> 
00:52:05,480 +um we we have + +1163 +00:52:02,119 --> 00:52:07,640 +biases um and we have the probability + +1164 +00:52:05,480 --> 00:52:09,400 +Vector given the previous word but + +1165 +00:52:07,640 --> 00:52:11,720 +instead of using a bag of words this + +1166 +00:52:09,400 --> 00:52:15,440 +actually is using uh How likely is it + +1167 +00:52:11,720 --> 00:52:16,960 +giving given two words previous so uh + +1168 +00:52:15,440 --> 00:52:18,040 +the feature design would be a little bit + +1169 +00:52:16,960 --> 00:52:19,119 +different and that would give you a + +1170 +00:52:18,040 --> 00:52:22,920 +total + +1171 +00:52:19,119 --> 00:52:24,359 +score um as a reminder uh last time we + +1172 +00:52:22,920 --> 00:52:26,440 +did a training algorithm where we + +1173 +00:52:24,359 --> 00:52:27,480 +calculated gradients loss function with + +1174 +00:52:26,440 --> 00:52:29,960 +respect to the + +1175 +00:52:27,480 --> 00:52:32,319 +parameters and uh we can use the chain + +1176 +00:52:29,960 --> 00:52:33,839 +Rule and back propagation and updates to + +1177 +00:52:32,319 --> 00:52:36,400 +move in the direction that increases + +1178 +00:52:33,839 --> 00:52:39,040 +enough so nothing extremely different + +1179 +00:52:36,400 --> 00:52:42,640 +from what we had for our + +1180 +00:52:39,040 --> 00:52:44,240 +B um similarly this solves some problems + +1181 +00:52:42,640 --> 00:52:47,240 +so this didn't solve the problem of + +1182 +00:52:44,240 --> 00:52:49,119 +sharing strength among similar words it + +1183 +00:52:47,240 --> 00:52:50,839 +did solve the problem of conditioning on + +1184 +00:52:49,119 --> 00:52:52,839 +context with intervening words because + +1185 +00:52:50,839 --> 00:52:56,920 +now we can condition directly on Doctor + +1186 +00:52:52,839 --> 00:52:59,680 +without having to um combine with + +1187 +00:52:56,920 --> 00:53:01,200 +gitrid um and it doesn't necessarily + +1188 +00:52:59,680 --> 00:53:03,480 +handle longdistance dependencies because + +1189 +00:53:01,200 --> 00:53:05,240 +we're still limited in our context with + +1190 +00:53:03,480 --> 00:53:09,079 +the model I just + +1191 +00:53:05,240 --> 00:53:11,920 +described so um if we so sorry back to + +1192 +00:53:09,079 --> 00:53:13,480 +neural networks is what I should say um + +1193 +00:53:11,920 --> 00:53:15,160 +so if we have a feedforward neural + +1194 +00:53:13,480 --> 00:53:18,480 +network language model the way this + +1195 +00:53:15,160 --> 00:53:20,400 +could work is instead of looking up + +1196 +00:53:18,480 --> 00:53:23,079 +discrete features uh like we had in a + +1197 +00:53:20,400 --> 00:53:25,960 +bag of words model uh we would look up + +1198 +00:53:23,079 --> 00:53:27,400 +dents embeddings and so we concatenate + +1199 +00:53:25,960 --> 00:53:29,359 +together these dense + +1200 +00:53:27,400 --> 00:53:32,319 +embeddings and based on the dense + +1201 +00:53:29,359 --> 00:53:34,599 +embeddings uh we do some sort of uh + +1202 +00:53:32,319 --> 00:53:36,079 +intermediate layer transforms to extract + +1203 +00:53:34,599 --> 00:53:37,200 +features like we did for our neural + +1204 +00:53:36,079 --> 00:53:39,359 +network based + +1205 +00:53:37,200 --> 00:53:41,520 +classifier um we multiply this by + +1206 +00:53:39,359 --> 00:53:43,559 +weights uh we have a bias and we + +1207 +00:53:41,520 --> 00:53:46,559 +calculate + +1208 +00:53:43,559 --> 00:53:49,200 +scores and uh then we take a soft Max to + +1209 +00:53:46,559 --> 00:53:49,200 +do + +1210 +00:53:50,400 --> 00:53:55,799 +classification so 
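
Here is a sketch of the feed-forward neural language model being described: embed the two previous words, concatenate the dense embeddings, extract combination features with a hidden layer, and score the vocabulary. This is an illustrative PyTorch sketch with invented hyperparameters, not the lecture's code:

```python
import torch
import torch.nn as nn

class FeedForwardLM(nn.Module):
    """Trigram feed-forward LM: two context words in, next-word logits out."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.hidden = nn.Linear(2 * emb_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, prev2, prev1):
        # concatenate the dense embeddings of the two previous words
        x = torch.cat([self.emb(prev2), self.emb(prev1)], dim=-1)
        h = torch.tanh(self.hidden(x))     # intermediate feature extraction
        return self.out(h)                 # scores over the vocabulary

model = FeedForwardLM(vocab_size=10_000)
logits = model(torch.tensor([3]), torch.tensor([17]))   # two context word ids
log_probs = torch.log_softmax(logits, dim=-1)           # log P(next word | context)
```

The embedding-tying trick discussed a little later in the lecture corresponds to sharing `model.out.weight` with `model.emb.weight` (which requires `hidden_dim == emb_dim`).
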
um this can calculate + +1211 +00:53:53,359 --> 00:53:58,000 +combination features uh like we we also + +1212 +00:53:55,799 --> 00:54:02,280 +used in our uh neural network based + +1213 +00:53:58,000 --> 00:54:04,119 +classifiers so um this could uh give us + +1214 +00:54:02,280 --> 00:54:05,760 +a positive number for example if the + +1215 +00:54:04,119 --> 00:54:07,760 +previous word is a determiner and the + +1216 +00:54:05,760 --> 00:54:10,440 +second previous word is a verb so that + +1217 +00:54:07,760 --> 00:54:14,520 +would be like uh in giving and then that + +1218 +00:54:10,440 --> 00:54:14,520 +would allow us upway to that particular + +1219 +00:54:15,000 --> 00:54:19,559 +examples um so this allows us to share + +1220 +00:54:17,640 --> 00:54:21,640 +strength in various places in our model + +1221 +00:54:19,559 --> 00:54:23,520 +which was also You Know instrumental in + +1222 +00:54:21,640 --> 00:54:25,599 +making our our neural network + +1223 +00:54:23,520 --> 00:54:28,000 +classifiers work for similar work and + +1224 +00:54:25,599 --> 00:54:30,119 +stuff and so these would be word + +1225 +00:54:28,000 --> 00:54:32,160 +embeddings so similar words get similar + +1226 +00:54:30,119 --> 00:54:35,079 +embeddings another really important + +1227 +00:54:32,160 --> 00:54:38,480 +thing is uh similar output words also + +1228 +00:54:35,079 --> 00:54:41,839 +get similar rows in The softmax Matrix + +1229 +00:54:38,480 --> 00:54:44,440 +and so here remember if you remember + +1230 +00:54:41,839 --> 00:54:48,240 +from last class this was a big Matrix + +1231 +00:54:44,440 --> 00:54:50,400 +where the size of the Matrix was the + +1232 +00:54:48,240 --> 00:54:53,319 +number of vocabulary items times the + +1233 +00:54:50,400 --> 00:54:55,920 +size of a word embedding this is also a + +1234 +00:54:53,319 --> 00:54:58,319 +matrix where this is + +1235 +00:54:55,920 --> 00:55:02,200 +the number of vocabulary items times the + +1236 +00:54:58,319 --> 00:55:04,160 +size of a context embedding gr and so + +1237 +00:55:02,200 --> 00:55:06,160 +these will also be similar because words + +1238 +00:55:04,160 --> 00:55:08,280 +that appear in similar contexts will + +1239 +00:55:06,160 --> 00:55:11,920 +also you know want similar embeddings so + +1240 +00:55:08,280 --> 00:55:15,119 +they get uploaded in at the same + +1241 +00:55:11,920 --> 00:55:17,119 +time and similar hidden States will have + +1242 +00:55:15,119 --> 00:55:19,799 +similar context so ideally like if you + +1243 +00:55:17,119 --> 00:55:20,920 +have giving a or delivering a or + +1244 +00:55:19,799 --> 00:55:22,680 +something like that those would be + +1245 +00:55:20,920 --> 00:55:27,000 +similar contexts so they would get + +1246 +00:55:22,680 --> 00:55:27,000 +similar purple embeddings out out of the + +1247 +00:55:28,440 --> 00:55:31,599 +so one trick that's widely used in + +1248 +00:55:30,200 --> 00:55:34,960 +language model that further takes + +1249 +00:55:31,599 --> 00:55:38,799 +advantage of this is uh tying + +1250 +00:55:34,960 --> 00:55:44,160 +embeddings so here what this does is + +1251 +00:55:38,799 --> 00:55:48,280 +sharing parameters between this um + +1252 +00:55:44,160 --> 00:55:49,920 +lookup Matrix here and this uh Matrix + +1253 +00:55:48,280 --> 00:55:51,119 +over here that we use for calculating + +1254 +00:55:49,920 --> 00:55:56,200 +the + +1255 +00:55:51,119 --> 00:55:58,839 +softmax and um the reason why this is + +1256 +00:55:56,200 --> 00:56:00,559 +useful is twofold number one it gives + +1257 
+00:55:58,839 --> 00:56:02,079
+you essentially more training data to
+
+1258
+00:56:00,559 --> 00:56:04,440
+learn these embeddings because instead
+
+1259
+00:56:02,079 --> 00:56:05,799
+of learning the embeddings whenever a
+
+1260
+00:56:04,440 --> 00:56:08,520
+word is in
+
+1261
+00:56:05,799 --> 00:56:10,599
+context separately from learning the
+
+1262
+00:56:08,520 --> 00:56:13,520
+embeddings whenever a word is predicted
+
+1263
+00:56:10,599 --> 00:56:15,480
+you learn the the same embedding matrix
+
+1264
+00:56:13,520 --> 00:56:19,319
+whenever the word is in the context or
+
+1265
+00:56:15,480 --> 00:56:21,520
+whenever it's predicted and so um that
+
+1266
+00:56:19,319 --> 00:56:24,119
+makes it more accurate to learn these uh
+
+1267
+00:56:21,520 --> 00:56:26,960
+embeddings well another thing is the
+
+1268
+00:56:24,119 --> 00:56:31,119
+embedding matrix can actually be very large
+
+1269
+00:56:26,960 --> 00:56:34,920
+so like let's say we have a vocab of
+
+1270
+00:56:31,119 --> 00:56:37,520
+100,000 and we have an embedding a
+
+1271
+00:56:34,920 --> 00:56:40,799
+word embedding size of like 512 or
+
+1272
+00:56:37,520 --> 00:56:45,319
+something like that
+
+1273
+00:56:40,799 --> 00:56:45,319
+that's um 51.2 million
+
+1274
+00:56:46,839 --> 00:56:52,440
+parameters um and this doesn't sound
+
+1275
+00:56:49,559 --> 00:56:55,520
+like a lot of parameters at first but it
+
+1276
+00:56:52,440 --> 00:56:57,880
+actually is a lot to learn when um
+
+1277
+00:56:55,520 --> 00:57:01,000
+these get updated relatively
+
+1278
+00:56:57,880 --> 00:57:03,400
+infrequently uh because
+
+1279
+00:57:01,000 --> 00:57:06,079
+um these get updated relatively
+
+1280
+00:57:03,400 --> 00:57:07,960
+infrequently because they only are
+
+1281
+00:57:06,079 --> 00:57:09,559
+updated whenever that word or token
+
+1282
+00:57:07,960 --> 00:57:12,319
+actually appears in your training data
+
+1283
+00:57:09,559 --> 00:57:14,119
+so um this can be a good thing for
+
+1284
+00:57:12,319 --> 00:57:16,319
+parameter savings parameter efficiency
+
+1285
+00:57:14,119 --> 00:57:16,319
+as well
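A minimal sketch of the embedding-tying trick just described, in PyTorch (the sizes and variable names are illustrative assumptions; tying only works when the vector fed into the softmax has the same dimension as a word embedding):

import torch.nn as nn

vocab_size, emb_dim = 100_000, 512
embed = nn.Embedding(vocab_size, emb_dim)         # input lookup matrix, V x d
out = nn.Linear(emb_dim, vocab_size, bias=False)  # softmax matrix, also V x d
out.weight = embed.weight                         # tie them: one shared 51.2M-parameter matrix

Because the two modules now share one tensor, the shared matrix gets gradient updates both when a word appears in the context and when it is predicted, which is the extra-training-data effect mentioned above, and it halves the parameter count of these two large matrices.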
+1286
+00:57:16,440 --> 00:57:22,520
+um so this uh solves most of the
+
+1287
+00:57:19,599 --> 00:57:24,319
+problems here um but it doesn't solve
+
+1288
+00:57:22,520 --> 00:57:26,839
+the problem of long-distance dependencies
+
+1289
+00:57:24,319 --> 00:57:29,839
+because we're still limited by the overall
+
+1290
+00:57:26,839 --> 00:57:31,359
+length of uh the context that we're
+
+1291
+00:57:29,839 --> 00:57:32,520
+concatenating together here sure we
+
+1292
+00:57:31,359 --> 00:57:35,760
+could make that longer but that would
+
+1293
+00:57:32,520 --> 00:57:37,200
+make our model larger and um and bring
+
+1294
+00:57:35,760 --> 00:57:39,720
+various
+
+1295
+00:57:37,200 --> 00:57:42,520
+issues and so what I'm going to talk
+
+1296
+00:57:39,720 --> 00:57:44,599
+about on Thursday is how we solve
+
+1297
+00:57:42,520 --> 00:57:47,559
+this problem of modeling long contexts
+
+1298
+00:57:44,599 --> 00:57:49,720
+so how do we um build recurrent neural
+
+1299
+00:57:47,559 --> 00:57:52,559
+networks uh how do we build
+
+1300
+00:57:49,720 --> 00:57:54,960
+convolutional uh convolutional networks
+
+1301
+00:57:52,559 --> 00:57:57,520
+or how do we build attention based
+
+1302
+00:57:54,960 --> 00:58:00,720
+Transformer models and these are all
+
+1303
+00:57:57,520 --> 00:58:02,119
+options that are used um Transformers
+
+1304
+00:58:00,720 --> 00:58:04,359
+are kind of
+
+1305
+00:58:02,119 --> 00:58:06,039
+the the main thing that people use
+
+1306
+00:58:04,359 --> 00:58:08,400
+nowadays but there's a lot of versions
+
+1307
+00:58:06,039 --> 00:58:11,880
+of Transformers that borrow ideas from
+
+1308
+00:58:08,400 --> 00:58:14,960
+recurrent uh and convolutional models
+
+1309
+00:58:11,880 --> 00:58:17,359
+um recently a lot of long context models
+
+1310
+00:58:14,960 --> 00:58:19,440
+use ideas from recurrent networks and
+
+1311
+00:58:17,359 --> 00:58:22,160
+a lot of for example speech models or
+
+1312
+00:58:19,440 --> 00:58:24,160
+things like or image models use ideas
+
+1313
+00:58:22,160 --> 00:58:25,920
+from convolutional networks so I think
+
+1314
+00:58:24,160 --> 00:58:28,760
+learning all of them at the same time and
+
+1315
+00:58:25,920 --> 00:58:32,160
+comparing them is a good idea
+
+1316
+00:58:28,760 --> 00:58:34,319
+cool uh any any questions about
+
+1317
+00:58:32,160 --> 00:58:35,799
+this part I went through this kind of
+
+1318
+00:58:34,319 --> 00:58:37,319
+quickly because it's pretty similar to
+
+1319
+00:58:35,799 --> 00:58:40,079
+the the classification stuff that we
+
+1320
+00:58:37,319 --> 00:58:42,680
+covered last time but uh any any things
+
+1321
+00:58:40,079 --> 00:58:42,680
+that people want to
+
+1322
+00:58:43,880 --> 00:58:49,039
+ask okay so next I'm going to talk about
+
+1323
+00:58:46,839 --> 00:58:51,559
+a few other desiderata of language
+
+1324
+00:58:49,039 --> 00:58:53,039
+models so the next one is really really
+
+1325
+00:58:51,559 --> 00:58:55,640
+important it's a concept I want
+
+1326
+00:58:53,039 --> 00:58:57,640
+everybody to know I actually
+
+1327
+00:58:55,640 --> 00:58:59,520
+taught this informally up until this
+
+1328
+00:58:57,640 --> 00:59:02,039
+class but now I I actually made slides
+
+1329
+00:58:59,520 --> 00:59:05,079
+for it starting this time which is
+
+1330
+00:59:02,039 --> 00:59:07,240
+calibration so the idea of calibration
+
+1331
+00:59:05,079 --> 00:59:10,200
+is that the model quote unquote knows
+
+1332
+00:59:07,240 --> 00:59:14,559
+when it knows or the the fact that it is
+
+1333
+00:59:10,200 --> 00:59:17,480
+able to provide a a good answer um uh
+
+1334
+00:59:14,559 --> 00:59:21,640
+provide a good confidence in its answer
+
+1335
+00:59:17,480 --> 00:59:23,640
+and more formally this can be specified
+
+1336
+00:59:21,640 --> 00:59:25,240
+as
+
+1337
+00:59:23,640 --> 00:59:27,799
+the
+
+1338
+00:59:25,240 --> 00:59:29,200
+property that the model probability of
+
+1339
+00:59:27,799 --> 00:59:33,119
+the answer matches the actual
+
+1340
+00:59:29,200 --> 00:59:37,319
+probability of getting it right um and
+
+1341
+00:59:33,119 --> 00:59:37,319
+so what this means
+
+1342
+00:59:41,960 --> 00:59:47,480
+is the
+
+1343
+00:59:44,240 --> 00:59:51,839
+probability that the
+
+1344
+00:59:47,480 --> 00:59:51,839
+answer is
+
+1345
+00:59:52,720 --> 00:59:59,880
+correct given the fact that
+
+1346
+00:59:56,319 --> 00:59:59,880
+the model
+
+1347
+01:00:00,160 --> 01:00:07,440
+probability is equal to
+
+1348
+01:00:03,640 --> 01:00:07,440
+p is equal to
+
+1349
+01:00:08,559 --> 01:00:12,760
+p
+
+1350
+01:00:10,480 --> 01:00:15,319
+so I know this is a little bit hard to
+
+1351
+01:00:12,760 --> 01:00:18,240
+parse I it always took me like a few
+
+1352
+01:00:15,319 --> 01:00:21,720
+seconds to parse before I uh like when I
+
+1353
+01:00:18,240 --> 01:00:25,160
+looked at it but basically if the model
+
+1354
+01:00:21,720 --> 01:00:26,920
+if the
model says the probability of it
+
+1355
+01:00:25,160 --> 01:00:29,440
+being correct is
+
+1356
+01:00:26,920 --> 01:00:33,559
+0.7 then the probability that the answer
+
+1357
+01:00:29,440 --> 01:00:35,960
+is correct is actually 0.7 so um you
+
+1358
+01:00:33,559 --> 01:00:41,520
+know if it says uh the probability is
+
+1359
+01:00:35,960 --> 01:00:41,520
+0.7 100 times then it will be right 70
+
+1360
+01:00:43,640 --> 01:00:52,160
+times and so the way we formalize this
+
+1361
+01:00:48,039 --> 01:00:55,200
+um is is by this uh it was proposed by
+
+1362
+01:00:52,160 --> 01:00:57,760
+this seminal paper by Guo et al. in
+
+1363
+01:00:55,200 --> 01:01:00,319
+2017
+
+1364
+01:00:57,760 --> 01:01:03,319
+and
+
+1365
+01:01:00,319 --> 01:01:05,520
+unfortunately this data itself is hard
+
+1366
+01:01:03,319 --> 01:01:08,119
+to collect
+
+1367
+01:01:05,520 --> 01:01:11,200
+because the model probability is always
+
+1368
+01:01:08,119 --> 01:01:13,359
+different right and so if the model
+
+1369
+01:01:11,200 --> 01:01:15,359
+probability is like if the model
+
+1370
+01:01:13,359 --> 01:01:20,480
+probability was actually 0.7 that'd be
+
+1371
+01:01:15,359 --> 01:01:22,000
+nice but actually it's 0.7932685
+
+1372
+01:01:20,480 --> 01:01:24,599
+and you never get another example where
+
+1373
+01:01:22,000 --> 01:01:26,319
+the probability is exactly the same so
+
+1374
+01:01:24,599 --> 01:01:28,280
+what we do instead is we divide the
+
+1375
+01:01:26,319 --> 01:01:30,240
+model probabilities into buckets so we
+
+1376
+01:01:28,280 --> 01:01:32,880
+say the model probability is between 0
+
+1377
+01:01:30,240 --> 01:01:36,599
+and 0.1 we say the model probability is
+
+1378
+01:01:32,880 --> 01:01:40,319
+between 0.1 and 0.2 0.2 and 0.3 so we
+
+1379
+01:01:36,599 --> 01:01:44,599
+create buckets like this like these and
+
+1380
+01:01:40,319 --> 01:01:46,520
+then we looked at the model confidence
+
+1381
+01:01:44,599 --> 01:01:52,839
+the average model confidence within that
+
+1382
+01:01:46,520 --> 01:01:55,000
+bucket so maybe uh between 0.1 and 0 uh
+
+1383
+01:01:52,839 --> 01:01:58,000
+between 0 and 0.1 the model confidence
+
+1384
+01:01:55,000 --> 01:02:00,920
+on average is 0.055 or something like
+
+1385
+01:01:58,000 --> 01:02:02,640
+that so that would be this term here and
+
+1386
+01:02:00,920 --> 01:02:05,079
+then the accuracy is how often did it
+
+1387
+01:02:02,640 --> 01:02:06,680
+actually get it correct and this can be
+
+1388
+01:02:05,079 --> 01:02:09,720
+plotted in this thing called a
+
+1389
+01:02:06,680 --> 01:02:15,039
+reliability diagram and the reliability
+
+1390
+01:02:09,720 --> 01:02:17,599
+diagram basically um the the
+
+1391
+01:02:15,039 --> 01:02:20,359
+outputs uh
+
+1392
+01:02:17,599 --> 01:02:26,359
+here so this is
+
+1393
+01:02:20,359 --> 01:02:26,359
+um the this is the model
+
+1394
+01:02:27,520 --> 01:02:34,119
+yeah I think the red is the model
+
+1395
+01:02:30,760 --> 01:02:36,400
+um expected probability and then the
+
+1396
+01:02:34,119 --> 01:02:40,559
+blue uh the blue is the actual
+
+1397
+01:02:36,400 --> 01:02:43,240
+probability and then um
+
+1398
+01:02:40,559 --> 01:02:45,160
+the difference between the expected and
+
+1399
+01:02:43,240 --> 01:02:47,160
+the actual probability is kind of like
+
+1400
+01:02:45,160 --> 01:02:48,359
+the penalty there is how how poorly
+
+1401
+01:02:47,160 --> 01:02:52,000
+calibrated
+
+1402
+01:02:48,359 --> 01:02:55,880
+the model is
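A sketch of the bucketing computation behind a reliability diagram and the expected calibration error of Guo et al. (2017), assuming you already have arrays of model confidences and 0/1 correctness indicators (the function and variable names are illustrative):

import numpy as np

def expected_calibration_error(confidences, correct, n_buckets=10):
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    edges = np.linspace(0.0, 1.0, n_buckets + 1)      # buckets [0, 0.1), [0.1, 0.2), ...
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bucket = (confidences >= lo) & (confidences < hi)
        if in_bucket.any():
            avg_conf = confidences[in_bucket].mean()  # average model confidence in the bucket
            accuracy = correct[in_bucket].mean()      # how often it actually got it correct
            ece += in_bucket.mean() * abs(avg_conf - accuracy)  # weight by bucket size
    return ece  # 0 would be perfect calibration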
+1403
+01:02:52,000 --> 01:02:58,440
+and one really important thing to know is that calibration and accuracy are
+
+1404
+01:02:55,880 --> 01:03:00,599
+not necessarily they don't go hand in hand
+
+1405
+01:02:58,440 --> 01:03:02,359
+uh they do to some extent but they don't
+
+1406
+01:03:00,599 --> 01:03:06,440
+uh they don't necessarily go hand in
+
+1407
+01:03:02,359 --> 01:03:06,440
+hand and
+
+1408
+01:03:07,200 --> 01:03:14,319
+the example on the left is a a bad model
+
+1409
+01:03:11,200 --> 01:03:16,279
+but a well calibrated so its accuracy is
+
+1410
+01:03:14,319 --> 01:03:18,720
+uh its error is
+
+1411
+01:03:16,279 --> 01:03:20,000
+44.9% um but it's well calibrated as you
+
+1412
+01:03:18,720 --> 01:03:21,440
+can see like when it says it knows the
+
+1413
+01:03:20,000 --> 01:03:23,880
+answer it knows the answer when it
+
+1414
+01:03:21,440 --> 01:03:27,799
+doesn't it doesn't this model on the
+
+1415
+01:03:23,880 --> 01:03:30,000
+other hand has better error um but
+
+1416
+01:03:27,799 --> 01:03:31,880
+worse calibration so the reason why is
+
+1417
+01:03:30,000 --> 01:03:36,680
+the model is very very confident all the
+
+1418
+01:03:31,880 --> 01:03:39,640
+time and usually what happens is um
+
+1419
+01:03:36,680 --> 01:03:41,200
+models that overfit to the data
+
+1420
+01:03:39,640 --> 01:03:43,359
+especially when you do early stopping on
+
+1421
+01:03:41,200 --> 01:03:44,760
+something like accuracy uh when you stop
+
+1422
+01:03:43,359 --> 01:03:47,279
+the training on something like accuracy
+
+1423
+01:03:44,760 --> 01:03:49,960
+will become very overconfident and uh
+
+1424
+01:03:47,279 --> 01:03:52,599
+give confidence estimates um that are
+
+1425
+01:03:49,960 --> 01:03:54,000
+incorrect like this so this is important to
+
+1426
+01:03:52,599 --> 01:03:56,079
+know and the reason why it's important
+
+1427
+01:03:54,000 --> 01:03:58,000
+to know is actually because you know
+
+1428
+01:03:56,079 --> 01:04:00,960
+models are very good at making up things
+
+1429
+01:03:58,000 --> 01:04:02,359
+that aren't actually correct nowadays um
+
+1430
+01:04:00,960 --> 01:04:04,920
+and but if you have a really well
+
+1431
+01:04:02,359 --> 01:04:07,760
+calibrated model you could at least say
+
+1432
+01:04:04,920 --> 01:04:09,920
+with what confidence you have this
+
+1433
+01:04:07,760 --> 01:04:12,760
+working so how do you calculate the
+
+1434
+01:04:09,920 --> 01:04:14,160
+probability of an answer so ah yeah sorry
+
+1435
+01:04:12,760 --> 01:04:17,599
+uh yes
+
+1436
+01:04:14,160 --> 01:04:17,599
+yes yeah please
+
+1437
+01:04:17,799 --> 01:04:26,559
+go the probability of percent or
+
+1438
+01:04:23,200 --> 01:04:28,039
+percent um usually this would be for a
+
+1439
+01:04:26,559 --> 01:04:29,599
+generated output because you want to
+
+1440
+01:04:28,039 --> 01:04:32,559
+know the the probability that the
+
+1441
+01:04:29,599 --> 01:04:32,559
+generated output is
+
+1442
+01:04:53,160 --> 01:04:56,160
+correct
+
+1443
+01:05:01,079 --> 01:05:06,319
+great that's what I'm about to talk
+
+1444
+01:05:03,000 --> 01:05:07,839
+about so perfect perfect question um so
+
+1445
+01:05:06,319 --> 01:05:10,160
+how do we calculate the answer
+
+1446
+01:05:07,839 --> 01:05:13,279
+probability or um how do we calculate
+
+1447
+01:05:10,160 --> 01:05:15,039
+the confidence in an answer um we're
+
+1448
+01:05:13,279 --> 01:05:18,319
+actually going to go into more detail
+
+1449
+01:05:15,039 --> 01:05:20,760
+about this um in a a later class but the
+
+1450
+01:05:18,319 --> 
01:05:23,200 +first thing is probability of the answer + +1451 +01:05:20,760 --> 01:05:25,799 +and this is easy when there's a single + +1452 +01:05:23,200 --> 01:05:29,079 +answer um like if there's only one + +1453 +01:05:25,799 --> 01:05:31,839 +correct answer and you want your model + +1454 +01:05:29,079 --> 01:05:34,160 +to be solving math problems and you want + +1455 +01:05:31,839 --> 01:05:38,319 +it to return only the answer and nothing + +1456 +01:05:34,160 --> 01:05:40,760 +else if it returns anything else like it + +1457 +01:05:38,319 --> 01:05:44,920 +won't work then you can just use the + +1458 +01:05:40,760 --> 01:05:47,119 +probability of the answer but what + +1459 +01:05:44,920 --> 01:05:49,559 +if + +1460 +01:05:47,119 --> 01:05:52,000 +um what if there are multiple acceptable + +1461 +01:05:49,559 --> 01:05:54,680 +answers um and maybe a perfect example + +1462 +01:05:52,000 --> 01:06:02,240 +of that is like where is CMU located + +1463 +01:05:54,680 --> 01:06:04,400 +or um uh where where are we right now um + +1464 +01:06:02,240 --> 01:06:06,960 +if the answer is where are we right + +1465 +01:06:04,400 --> 01:06:08,880 +now um could be + +1466 +01:06:06,960 --> 01:06:12,880 +Pittsburgh could be + +1467 +01:06:08,880 --> 01:06:12,880 +CMU could be carnegy + +1468 +01:06:16,200 --> 01:06:24,440 +melon could be other other things like + +1469 +01:06:18,760 --> 01:06:26,760 +this right um and so another way that + +1470 +01:06:24,440 --> 01:06:28,319 +you can calculate the confidence is + +1471 +01:06:26,760 --> 01:06:31,240 +calculating the probability of the + +1472 +01:06:28,319 --> 01:06:33,680 +answer plus uh you know paraphrases of + +1473 +01:06:31,240 --> 01:06:35,799 +the answer or other uh other things like + +1474 +01:06:33,680 --> 01:06:37,680 +this and so then you would just sum the + +1475 +01:06:35,799 --> 01:06:38,839 +probability over all the qu like + +1476 +01:06:37,680 --> 01:06:41,680 +acceptable + +1477 +01:06:38,839 --> 01:06:45,359 +answers + +1478 +01:06:41,680 --> 01:06:47,680 +um another thing that you can do is um + +1479 +01:06:45,359 --> 01:06:49,279 +sample multiple outputs and count the + +1480 +01:06:47,680 --> 01:06:51,000 +number of times you get a particular + +1481 +01:06:49,279 --> 01:06:54,440 +answer this doesn't solve the problem of + +1482 +01:06:51,000 --> 01:06:58,119 +paraphrasing ex paraphrases existing but + +1483 +01:06:54,440 --> 01:06:59,880 +it does solve the problem of uh it does + +1484 +01:06:58,119 --> 01:07:01,480 +solve two problems sometimes there are + +1485 +01:06:59,880 --> 01:07:05,240 +language models where you can't get + +1486 +01:07:01,480 --> 01:07:06,640 +probabilities out of them um this is not + +1487 +01:07:05,240 --> 01:07:08,680 +so much of a problem anymore with the + +1488 +01:07:06,640 --> 01:07:11,240 +GPT models because they're reintroducing + +1489 +01:07:08,680 --> 01:07:12,440 +the ability to get probabilities but um + +1490 +01:07:11,240 --> 01:07:13,720 +there are some models where you can just + +1491 +01:07:12,440 --> 01:07:16,279 +sample from them and you can't get + +1492 +01:07:13,720 --> 01:07:18,680 +probabilities out but also more + +1493 +01:07:16,279 --> 01:07:21,039 +importantly um sometimes when you're + +1494 +01:07:18,680 --> 01:07:23,000 +using things like uh Chain of Thought + +1495 +01:07:21,039 --> 01:07:26,520 +reasoning which I'll talk about in more + +1496 +01:07:23,000 --> 01:07:29,839 +detail but basically it's like um please + +1497 +01:07:26,520 --> 01:07:31,480 
+solve this math problem and explain
+
+1498
+01:07:29,839 --> 01:07:33,480
+explain your solution and then if it
+
+1499
+01:07:31,480 --> 01:07:35,119
+will do that it will generate you know a
+
+1500
+01:07:33,480 --> 01:07:36,279
+really long explanation of how it got to
+
+1501
+01:07:35,119 --> 01:07:40,119
+the solution and then it will give you
+
+1502
+01:07:36,279 --> 01:07:41,640
+the answer at the very end and so then
+
+1503
+01:07:40,119 --> 01:07:44,960
+you can't calculate the probability of
+
+1504
+01:07:41,640 --> 01:07:47,720
+the actual like answer itself because
+
+1505
+01:07:44,960 --> 01:07:49,359
+there's this long reasoning chain in
+
+1506
+01:07:47,720 --> 01:07:51,960
+between and you have like all these
+
+1507
+01:07:49,359 --> 01:07:53,559
+other all that other text there but what
+
+1508
+01:07:51,960 --> 01:07:55,480
+you can do is you can sample those
+
+1509
+01:07:53,559 --> 01:07:56,920
+reasoning chains 100 times and then see
+
+1510
+01:07:55,480 --> 01:07:59,599
+how many times you got a particular
+
+1511
+01:07:56,920 --> 01:08:02,960
+answer and that's actually a pretty um a
+
+1512
+01:07:59,599 --> 01:08:06,079
+pretty reasonable way of uh
+
+1513
+01:08:02,960 --> 01:08:09,000
+getting a confidence estimate
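A sketch of that sample-and-count confidence estimate; sample_answer is a hypothetical stand-in for one chain-of-thought run from which only the final answer is extracted:

from collections import Counter

def sampled_confidence(sample_answer, n_samples=100):
    # sample_answer() is assumed to draw one reasoning chain and return the final answer
    counts = Counter(sample_answer() for _ in range(n_samples))
    answer, count = counts.most_common(1)[0]
    return answer, count / n_samples  # majority answer and its empirical confidence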
+1514
+01:08:06,079 --> 01:08:11,200
+yet this is my favorite one I I love how
+
+1515
+01:08:09,000 --> 01:08:12,880
+we can do this now it's just absolutely
+
+1516
+01:08:11,200 --> 01:08:16,480
+ridiculous but you could ask the model
+
+1517
+01:08:12,880 --> 01:08:20,279
+how confident it is and um it sometimes
+
+1518
+01:08:16,480 --> 01:08:22,359
+gives you a reasonable uh a reasonable
+
+1519
+01:08:20,279 --> 01:08:24,600
+answer um there's a really nice
+
+1520
+01:08:22,359 --> 01:08:26,400
+comparison of different methods uh in
+
+1521
+01:08:24,600 --> 01:08:29,679
+this paper which is also on on the
+
+1522
+01:08:26,400 --> 01:08:31,960
+website and basically long story short
+
+1523
+01:08:29,679 --> 01:08:34,000
+the conclusion from this paper is the
+
+1524
+01:08:31,960 --> 01:08:35,640
+sampling multiple outputs one is the
+
+1525
+01:08:34,000 --> 01:08:36,839
+best way to do it if you can't directly
+
+1526
+01:08:35,640 --> 01:08:39,520
+calculate
+
+1527
+01:08:36,839 --> 01:08:41,359
+probabilities um another thing that I'd
+
+1528
+01:08:39,520 --> 01:08:42,600
+like people to pay very close attention
+
+1529
+01:08:41,359 --> 01:08:45,040
+to is in the
+
+1530
+01:08:42,600 --> 01:08:46,480
+generation um in the generation class
+
+1531
+01:08:45,040 --> 01:08:49,600
+we're going to be talking about minimum
+
+1532
+01:08:46,480 --> 01:08:52,600
+Bayes risk which is a criterion for
+
+1533
+01:08:49,600 --> 01:08:54,719
+deciding how risky an output is and it's
+
+1534
+01:08:52,600 --> 01:08:56,199
+actually a really good uh confidence
+
+1535
+01:08:54,719 --> 01:08:58,000
+metric as well but I'm going to leave
+
+1536
+01:08:56,199 --> 01:08:59,440
+that till when we discuss it in more
+
+1537
+01:08:58,000 --> 01:09:02,759
+detail
+
+1538
+01:08:59,440 --> 01:09:05,359
+um any any questions
+
+1539
+01:09:02,759 --> 01:09:08,440
+here okay
+
+1540
+01:09:05,359 --> 01:09:10,480
+cool um so the other criterion uh this
+
+1541
+01:09:08,440 --> 01:09:12,520
+is just yet another criterion that we
+
+1542
+01:09:10,480 --> 01:09:15,239
+would like language models to be good at
+
+1543
+01:09:12,520 --> 01:09:17,600
+um it's efficiency and so basically the
+
+1544
+01:09:15,239 --> 01:09:21,920
+model is easy to run on limited hardware
+
+1545
+01:09:17,600 --> 01:09:25,400
+by some you know uh metric of easy and
+
+1546
+01:09:21,920 --> 01:09:29,319
+some metrics that we like to talk about
+
+1547
+01:09:25,400 --> 01:09:32,400
+are parameter count so often you will
+
+1548
+01:09:29,319 --> 01:09:34,239
+see oh this is the best model under
+
+1549
+01:09:32,400 --> 01:09:35,520
+three billion parameters or this is the
+
+1550
+01:09:34,239 --> 01:09:37,960
+best model under seven billion
+
+1551
+01:09:35,520 --> 01:09:39,600
+parameters or um we trained a model with
+
+1552
+01:09:37,960 --> 01:09:42,159
+one trillion parameters or something
+
+1553
+01:09:39,600 --> 01:09:44,719
+like that you know
+
+1554
+01:09:42,159 --> 01:09:46,839
+uh the thing is parameter count doesn't
+
+1555
+01:09:44,719 --> 01:09:49,640
+really mean that much um from the point
+
+1556
+01:09:46,839 --> 01:09:52,839
+of view of like ease of using the model
+
+1557
+01:09:49,640 --> 01:09:54,400
+um unless you also think about other uh
+
+1558
+01:09:52,839 --> 01:09:56,480
+you know desiderata
+
+1559
+01:09:54,400 --> 01:09:58,840
+like just to give one example this is a
+
+1560
+01:09:56,480 --> 01:10:00,880
+parameter count um let's say you have a
+
+1561
+01:09:58,840 --> 01:10:02,960
+parameter count of 7 billion is that 7
+
+1562
+01:10:00,880 --> 01:10:05,719
+billion parameters at 32-bit precision
+
+1563
+01:10:02,960 --> 01:10:07,800
+or is that 7 billion parameters at 4-bit
+
+1564
+01:10:05,719 --> 01:10:09,400
+precision um will make a huge difference
+
+1565
+01:10:07,800 --> 01:10:12,960
+in your memory footprint your speed
+
+1566
+01:10:09,400 --> 01:10:14,920
+other things like that um so some of the
+
+1567
+01:10:12,960 --> 01:10:18,040
+things that are more direct with respect
+
+1568
+01:10:14,920 --> 01:10:19,800
+to efficiency are memory usage um and
+
+1569
+01:10:18,040 --> 01:10:22,440
+there's two varieties of memory usage
+
+1570
+01:10:19,800 --> 01:10:24,280
+one is model uh model only memory usage
+
+1571
+01:10:22,440 --> 01:10:27,120
+so when you load loaded the model into
+
+1572
+01:10:24,280 --> 01:10:29,120
+memory uh how much space does it take
+
+1573
+01:10:27,120 --> 01:10:31,159
+and also peak memory consumption when
+
+1574
+01:10:29,120 --> 01:10:33,159
+you run have run the model over a
+
+1575
+01:10:31,159 --> 01:10:35,920
+sequence of a certain length how much is
+
+1576
+01:10:33,159 --> 01:10:40,040
+it going to peak so that's another
+
+1577
+01:10:35,920 --> 01:10:43,000
+thing another thing is latency um and
+
+1578
+01:10:40,040 --> 01:10:46,440
+with respect to latency this can be
+
+1579
+01:10:43,000 --> 01:10:49,440
+either how long does it take to start
+
+1580
+01:10:46,440 --> 01:10:52,080
+outputting the first token um and how
+
+1581
+01:10:49,440 --> 01:10:54,840
+long does it take to uh finish
+
+1582
+01:10:52,080 --> 01:10:59,480
+outputting uh a generation of a certain
+
+1583
+01:10:54,840 --> 01:11:01,199
+length and the first will have more to
+
+1584
+01:10:59,480 --> 01:11:04,960
+do with how long does it take to encode
+
+1585
+01:11:01,199 --> 01:11:06,480
+a sequence um which is usually faster
+
+1586
+01:11:04,960 --> 01:11:09,080
+than how long does it take to generate a
+
+1587
+01:11:06,480 --> 01:11:11,360
+sequence so this will have to do with
+
+1588
+01:11:09,080 --> 01:11:13,000
+like encoding time this will require
+
+1589
+01:11:11,360 --> 01:11:15,880
+encoding time of course but it will also
+
+1590
+01:11:13,000 --> 01:11:15,880
+require generation + +1591 +01:11:16,280 --> 01:11:21,840 +time also throughput so you know how + +1592 +01:11:19,239 --> 01:11:23,679 +much um how many sentences can you + +1593 +01:11:21,840 --> 01:11:25,400 +process in a certain amount of time so + +1594 +01:11:23,679 --> 01:11:26,480 +of these are kind of desad that you you + +1595 +01:11:25,400 --> 01:11:29,000 +would + +1596 +01:11:26,480 --> 01:11:30,280 +say um we're going to be talking about + +1597 +01:11:29,000 --> 01:11:31,920 +this more in the distillation and + +1598 +01:11:30,280 --> 01:11:33,199 +compression and generation algorithms + +1599 +01:11:31,920 --> 01:11:35,640 +classes so I won't go into a whole lot + +1600 +01:11:33,199 --> 01:11:36,840 +of detail about this but um it's just + +1601 +01:11:35,640 --> 01:11:39,960 +another thing that we want to be + +1602 +01:11:36,840 --> 01:11:43,560 +thinking about in addition to + +1603 +01:11:39,960 --> 01:11:45,360 +complexity um but since I'm I'm on the + +1604 +01:11:43,560 --> 01:11:47,800 +topic of efficiency I would like to talk + +1605 +01:11:45,360 --> 01:11:49,480 +just a little bit about it um in terms + +1606 +01:11:47,800 --> 01:11:51,000 +of especially things that will be useful + +1607 +01:11:49,480 --> 01:11:53,600 +for implementing your first + +1608 +01:11:51,000 --> 01:11:55,840 +assignment and uh one thing that every + +1609 +01:11:53,600 --> 01:11:58,639 +body should know about um if you've done + +1610 +01:11:55,840 --> 01:11:59,920 +any like deep learning with pytorch or + +1611 +01:11:58,639 --> 01:12:02,639 +something like this you already know + +1612 +01:11:59,920 --> 01:12:05,880 +about this probably but uh I think it's + +1613 +01:12:02,639 --> 01:12:08,760 +worth mentioning but basically mini + +1614 +01:12:05,880 --> 01:12:12,120 +batching or batching uh is uh very + +1615 +01:12:08,760 --> 01:12:15,320 +useful and the basic idea behind it is + +1616 +01:12:12,120 --> 01:12:17,560 +that on Modern Hardware if you do many + +1617 +01:12:15,320 --> 01:12:20,520 +of the same operations at once it's much + +1618 +01:12:17,560 --> 01:12:24,320 +faster than doing um + +1619 +01:12:20,520 --> 01:12:25,480 +like uh operations executively and + +1620 +01:12:24,320 --> 01:12:27,280 +that's especially the case if you're + +1621 +01:12:25,480 --> 01:12:30,520 +programming in an extremely slow + +1622 +01:12:27,280 --> 01:12:33,239 +programming language like python um I + +1623 +01:12:30,520 --> 01:12:37,239 +love python but it's slow I mean like + +1624 +01:12:33,239 --> 01:12:38,719 +there's no argument about that um and so + +1625 +01:12:37,239 --> 01:12:40,520 +what mini batching does is it combines + +1626 +01:12:38,719 --> 01:12:43,600 +together smaller operations into one big + +1627 +01:12:40,520 --> 01:12:47,480 +one and the basic idea uh for example if + +1628 +01:12:43,600 --> 01:12:51,679 +we want to calculate our um our linear + +1629 +01:12:47,480 --> 01:12:56,560 +layer with a t uh nonlinearity after it + +1630 +01:12:51,679 --> 01:12:59,760 +we will take several inputs X1 X2 X3 + +1631 +01:12:56,560 --> 01:13:02,040 +concatenate them together and do a + +1632 +01:12:59,760 --> 01:13:04,600 +Matrix Matrix multiply instead of doing + +1633 +01:13:02,040 --> 01:13:07,960 +three Vector Matrix + +1634 +01:13:04,600 --> 01:13:09,239 +multiplies and so what we do is we take + +1635 +01:13:07,960 --> 01:13:11,280 +a whole bunch of examples we take like + +1636 +01:13:09,239 --> 01:13:13,840 +64 examples or something like that and + +1637 
+01:13:11,280 --> 01:13:18,000 +we combine them together and calculate + +1638 +01:13:13,840 --> 01:13:21,280 +out thingsit one thing to know is that + +1639 +01:13:18,000 --> 01:13:22,560 +if you're working with sentences there's + +1640 +01:13:21,280 --> 01:13:24,719 +different ways you can calculate the + +1641 +01:13:22,560 --> 01:13:27,360 +size of your mini + +1642 +01:13:24,719 --> 01:13:28,880 +normally nowadays the thing that people + +1643 +01:13:27,360 --> 01:13:30,400 +do and the thing that I recommend is to + +1644 +01:13:28,880 --> 01:13:31,679 +calculate the size of your mini batches + +1645 +01:13:30,400 --> 01:13:33,639 +based on the number of tokens in the + +1646 +01:13:31,679 --> 01:13:35,840 +mini batch it used to be that you would + +1647 +01:13:33,639 --> 01:13:39,719 +do it based on the number of sequences + +1648 +01:13:35,840 --> 01:13:43,800 +but the the problem is um one like 50 + +1649 +01:13:39,719 --> 01:13:47,120 +sequences of length like 100 is much + +1650 +01:13:43,800 --> 01:13:49,480 +more memory intensive than uh 50 + +1651 +01:13:47,120 --> 01:13:51,960 +sequences of Link five and so you get + +1652 +01:13:49,480 --> 01:13:53,920 +these vastly varying these mini batches + +1653 +01:13:51,960 --> 01:13:57,000 +of vastly varying size and that's both + +1654 +01:13:53,920 --> 01:13:59,800 +bad for you know memory overflows and + +1655 +01:13:57,000 --> 01:14:01,639 +bad for um and bad for learning + +1656 +01:13:59,800 --> 01:14:04,280 +stability so I I definitely recommend + +1657 +01:14:01,639 --> 01:14:06,880 +doing it based on the number of + +1658 +01:14:04,280 --> 01:14:09,080 +comps uh another thing is gpus versus + +1659 +01:14:06,880 --> 01:14:12,400 +CPUs so + +1660 +01:14:09,080 --> 01:14:14,600 +um uh CPUs one way you can think of it + +1661 +01:14:12,400 --> 01:14:17,320 +is a CPUs kind of like a motorcycle it's + +1662 +01:14:14,600 --> 01:14:19,600 +very fast at picking up and doing a + +1663 +01:14:17,320 --> 01:14:23,960 +bunch of uh things very quickly + +1664 +01:14:19,600 --> 01:14:26,600 +accelerating uh into starting new uh new + +1665 +01:14:23,960 --> 01:14:28,760 +tasks a GPU is more like an airplane + +1666 +01:14:26,600 --> 01:14:30,719 +which uh you wait forever in line in + +1667 +01:14:28,760 --> 01:14:33,360 +security and + +1668 +01:14:30,719 --> 01:14:34,800 +then and then uh it takes a long time to + +1669 +01:14:33,360 --> 01:14:40,400 +get off the ground and start working but + +1670 +01:14:34,800 --> 01:14:43,679 +once it does it's extremely fast um and + +1671 +01:14:40,400 --> 01:14:45,360 +so if we do a simple example of how long + +1672 +01:14:43,679 --> 01:14:47,600 +does it take to do a Matrix Matrix + +1673 +01:14:45,360 --> 01:14:49,040 +multiply I calculated this a really long + +1674 +01:14:47,600 --> 01:14:51,280 +time ago it's probably horribly out of + +1675 +01:14:49,040 --> 01:14:55,120 +date now but the same general principle + +1676 +01:14:51,280 --> 01:14:56,560 +stands which is if we have have um the + +1677 +01:14:55,120 --> 01:14:58,480 +number of seconds that it takes to do a + +1678 +01:14:56,560 --> 01:15:02,080 +Matrix Matrix multiply doing one of size + +1679 +01:14:58,480 --> 01:15:03,920 +16 is actually faster on CPU because uh + +1680 +01:15:02,080 --> 01:15:07,760 +the overhead it takes to get started is + +1681 +01:15:03,920 --> 01:15:10,880 +very low but if you um once you start + +1682 +01:15:07,760 --> 01:15:13,360 +getting up to size like 128 by 128 + +1683 +01:15:10,880 --> 
01:15:15,800 +Matrix multiplies then doing it on GPU + +1684 +01:15:13,360 --> 01:15:17,320 +is faster and then um it's you know a + +1685 +01:15:15,800 --> 01:15:19,679 +100 times faster once you start getting + +1686 +01:15:17,320 --> 01:15:21,600 +up to very large matrices so um if + +1687 +01:15:19,679 --> 01:15:24,000 +you're dealing with very large networks + +1688 +01:15:21,600 --> 01:15:26,800 +handling a GPU is good + +1689 +01:15:24,000 --> 01:15:30,159 +um and this is the the speed up + +1690 +01:15:26,800 --> 01:15:31,440 +percentage um one thing I should mention + +1691 +01:15:30,159 --> 01:15:34,239 +is + +1692 +01:15:31,440 --> 01:15:36,440 +um compute with respect to like doing + +1693 +01:15:34,239 --> 01:15:39,800 +the assignments for this class if you + +1694 +01:15:36,440 --> 01:15:43,199 +have a relatively recent Mac you're kind + +1695 +01:15:39,800 --> 01:15:44,760 +of in luck because actually the gpus on + +1696 +01:15:43,199 --> 01:15:47,239 +the Mac are pretty fast and they're well + +1697 +01:15:44,760 --> 01:15:48,960 +integrated with um they're well + +1698 +01:15:47,239 --> 01:15:52,080 +integrated with pipor and other things + +1699 +01:15:48,960 --> 01:15:53,440 +like that so decently sized models maybe + +1700 +01:15:52,080 --> 01:15:54,840 +up to the size that you would need to + +1701 +01:15:53,440 --> 01:15:57,840 +run for assignment one or even + +1702 +01:15:54,840 --> 01:16:00,880 +assignment two might uh just run on your + +1703 +01:15:57,840 --> 01:16:03,639 +uh laptop computer um if you don't have + +1704 +01:16:00,880 --> 01:16:05,280 +a GPU uh that you have immediately + +1705 +01:16:03,639 --> 01:16:06,760 +accessible to you I we're going to + +1706 +01:16:05,280 --> 01:16:08,400 +recommend that you use collab where you + +1707 +01:16:06,760 --> 01:16:10,120 +can get a GPU uh for the first + +1708 +01:16:08,400 --> 01:16:12,440 +assignments and then we'll have plug + +1709 +01:16:10,120 --> 01:16:15,159 +reddits that you can use otherwise but + +1710 +01:16:12,440 --> 01:16:16,800 +um GPU is usually like something that + +1711 +01:16:15,159 --> 01:16:18,440 +you can get on the cloud or one that you + +1712 +01:16:16,800 --> 01:16:21,080 +have on your Mac or one that you have on + +1713 +01:16:18,440 --> 01:16:24,600 +your gaming computer or something like + +1714 +01:16:21,080 --> 01:16:26,040 +that um there's a few speed tricks that + +1715 +01:16:24,600 --> 01:16:30,000 +you should know for efficient GPU + +1716 +01:16:26,040 --> 01:16:32,480 +operations so um one mistake that people + +1717 +01:16:30,000 --> 01:16:35,880 +make when creating models is they repeat + +1718 +01:16:32,480 --> 01:16:38,080 +operations over and over again and um + +1719 +01:16:35,880 --> 01:16:40,600 +you don't want to be doing this so like + +1720 +01:16:38,080 --> 01:16:43,239 +for example um this is multiplying a + +1721 +01:16:40,600 --> 01:16:45,320 +matrix by a constant multiple times and + +1722 +01:16:43,239 --> 01:16:46,880 +if you're just using out of thee box pie + +1723 +01:16:45,320 --> 01:16:49,280 +torch this would be really bad because + +1724 +01:16:46,880 --> 01:16:50,400 +you'd be repeating the operation uh when + +1725 +01:16:49,280 --> 01:16:52,679 +it's not + +1726 +01:16:50,400 --> 01:16:54,480 +necessary um you can also reduce the + +1727 +01:16:52,679 --> 01:16:57,360 +number of operations that you need to + +1728 +01:16:54,480 --> 01:17:00,320 +use so uh use Matrix Matrix multiplies + +1729 +01:16:57,360 --> 01:17:03,080 +instead of Matrix 
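A sketch of the batching tip from this section, replacing many vector-matrix multiplies with one matrix-matrix multiply over stacked inputs (all shapes here are illustrative assumptions):

import torch

W, b = torch.randn(512, 512), torch.randn(512)
xs = [torch.randn(512) for _ in range(64)]

# one at a time: 64 separate vector-matrix multiplies plus Python loop overhead
outs = [torch.tanh(x @ W + b) for x in xs]

# mini-batched: a single matrix-matrix multiply over the stacked inputs
X = torch.stack(xs)                   # (64, 512)
outs_batched = torch.tanh(X @ W + b)  # same results, computed in one operation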
Vector + +1730 +01:17:00,320 --> 01:17:07,920 +multiplies and another thing is uh + +1731 +01:17:03,080 --> 01:17:10,719 +reducing CPU GPU data movement and um so + +1732 +01:17:07,920 --> 01:17:12,360 +when you do try to move memory um when + +1733 +01:17:10,719 --> 01:17:17,080 +you do try to move memory try to do it + +1734 +01:17:12,360 --> 01:17:20,040 +as early as possible and as uh and as + +1735 +01:17:17,080 --> 01:17:22,199 +few times as possible and the reason why + +1736 +01:17:20,040 --> 01:17:24,199 +you want to move things early or start + +1737 +01:17:22,199 --> 01:17:25,920 +operations early is many GPU operations + +1738 +01:17:24,199 --> 01:17:27,159 +are asynchronous so you can start the + +1739 +01:17:25,920 --> 01:17:28,800 +operation and it will run in the + +1740 +01:17:27,159 --> 01:17:33,120 +background while other things are + +1741 +01:17:28,800 --> 01:17:36,080 +processing so um it's a good idea to try + +1742 +01:17:33,120 --> 01:17:39,840 +to um to optimize and you can also use + +1743 +01:17:36,080 --> 01:17:42,360 +your python profiler or um envidia GPU + +1744 +01:17:39,840 --> 01:17:43,679 +profilers to try to optimize these + +1745 +01:17:42,360 --> 01:17:46,520 +things as + +1746 +01:17:43,679 --> 01:17:49,840 +well cool that's all I have uh we're + +1747 +01:17:46,520 --> 01:17:49,840 +right at time \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (3) Language Modeling/transcript.vtt b/CMU Advanced NLP 2024 (3) Language Modeling/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..c769b301b4703a26eeb05fbc146f5690fca8599b --- /dev/null +++ b/CMU Advanced NLP 2024 (3) Language Modeling/transcript.vtt @@ -0,0 +1,5242 @@ +WEBVTT + +00:00:00.399 --> 00:00:04.120 +so this time I'm going to be talking + +00:00:02.080 --> 00:00:05.799 +about language modeling uh obviously + +00:00:04.120 --> 00:00:07.240 +language modeling is a big topic and I'm + +00:00:05.799 --> 00:00:09.880 +not going to be able to cover it all in + +00:00:07.240 --> 00:00:11.320 +one class but this is kind of the basics + +00:00:09.880 --> 00:00:13.080 +of uh what does it mean to build a + +00:00:11.320 --> 00:00:15.320 +language model what is a language model + +00:00:13.080 --> 00:00:18.439 +how do we evaluate language models and + +00:00:15.320 --> 00:00:19.920 +other stuff like that and around the end + +00:00:18.439 --> 00:00:21.320 +I'm going to talk a little bit about + +00:00:19.920 --> 00:00:23.039 +efficiently implementing things in + +00:00:21.320 --> 00:00:25.080 +neural networks it's not directly + +00:00:23.039 --> 00:00:27.760 +related to language models but it's very + +00:00:25.080 --> 00:00:31.200 +important to know how to do uh to solve + +00:00:27.760 --> 00:00:34.200 +your assignments so I'll cover both + +00:00:31.200 --> 00:00:34.200 +is + +00:00:34.239 --> 00:00:38.480 +cool okay so the first thing I'd like to + +00:00:36.760 --> 00:00:41.239 +talk about is generative versus + +00:00:38.480 --> 00:00:43.000 +discriminative models and the reason why + +00:00:41.239 --> 00:00:45.280 +is up until now we've been talking about + +00:00:43.000 --> 00:00:47.559 +discriminative models and these are + +00:00:45.280 --> 00:00:49.640 +models uh that are mainly designed to + +00:00:47.559 --> 00:00:53.800 +calculate the probability of a latent + +00:00:49.640 --> 00:00:56.039 +trait uh given the data and so this is + +00:00:53.800 --> 00:00:58.800 +uh P of Y given X where Y is the lat and + +00:00:56.039 --> 00:01:00.800 +trait we 
want to calculate and X is uh + +00:00:58.800 --> 00:01:04.760 +the input data that we're calculating it + +00:01:00.800 --> 00:01:07.799 +over so just review from last class what + +00:01:04.760 --> 00:01:10.240 +was X from last class from the example + +00:01:07.799 --> 00:01:10.240 +in L + +00:01:11.360 --> 00:01:15.880 +class + +00:01:13.040 --> 00:01:18.280 +anybody yeah some text yeah and then + +00:01:15.880 --> 00:01:18.280 +what was + +00:01:20.400 --> 00:01:26.119 +why it shouldn't be too + +00:01:23.799 --> 00:01:27.920 +hard yeah it was a category or a + +00:01:26.119 --> 00:01:31.680 +sentiment label precisely in the + +00:01:27.920 --> 00:01:33.399 +sentiment analysis tasks so so um a + +00:01:31.680 --> 00:01:35.560 +generative model on the other hand is a + +00:01:33.399 --> 00:01:38.840 +model that calculates the probability of + +00:01:35.560 --> 00:01:40.880 +data itself and is not specifically + +00:01:38.840 --> 00:01:43.439 +conditional and there's a couple of + +00:01:40.880 --> 00:01:45.439 +varieties um this isn't like super + +00:01:43.439 --> 00:01:48.280 +standard terminology I just uh wrote it + +00:01:45.439 --> 00:01:51.520 +myself but here we have a standalone + +00:01:48.280 --> 00:01:54.360 +probability of P of X and we can also + +00:01:51.520 --> 00:01:58.000 +calculate the joint probability P of X + +00:01:54.360 --> 00:01:58.000 +and Y + +00:01:58.159 --> 00:02:02.880 +so probabilistic language models + +00:02:01.079 --> 00:02:06.640 +basically what they do is they calculate + +00:02:02.880 --> 00:02:08.560 +this uh probability usually uh we think + +00:02:06.640 --> 00:02:10.360 +of it as a standalone probability of P + +00:02:08.560 --> 00:02:11.800 +of X where X is something like a + +00:02:10.360 --> 00:02:15.160 +sentence or a + +00:02:11.800 --> 00:02:16.920 +document and it's a generative model + +00:02:15.160 --> 00:02:19.640 +that calculates the probability of + +00:02:16.920 --> 00:02:22.360 +language recently the definition of + +00:02:19.640 --> 00:02:23.959 +language model has expanded a little bit + +00:02:22.360 --> 00:02:26.160 +so now + +00:02:23.959 --> 00:02:28.640 +um people also call things that + +00:02:26.160 --> 00:02:31.080 +calculate the probability of text and + +00:02:28.640 --> 00:02:35.200 +images as like multimodal language + +00:02:31.080 --> 00:02:38.160 +models or uh what are some of the other + +00:02:35.200 --> 00:02:40.480 +ones yeah I think that's the main the + +00:02:38.160 --> 00:02:42.840 +main exception to this rule usually + +00:02:40.480 --> 00:02:45.080 +usually it's calculating either of text + +00:02:42.840 --> 00:02:47.680 +or over text in some multimodal data but + +00:02:45.080 --> 00:02:47.680 +for now we're going to + +00:02:48.800 --> 00:02:54.200 +consider + +00:02:50.319 --> 00:02:56.440 +um then there's kind of two fundamental + +00:02:54.200 --> 00:02:58.159 +operations that we perform with LMS + +00:02:56.440 --> 00:03:00.519 +almost everything else we do with LMS + +00:02:58.159 --> 00:03:03.640 +can be considered like one of these two + +00:03:00.519 --> 00:03:05.319 +types of things the first thing is calc + +00:03:03.640 --> 00:03:06.440 +scoring sentences or calculating the + +00:03:05.319 --> 00:03:09.599 +probability of + +00:03:06.440 --> 00:03:12.280 +sentences and this + +00:03:09.599 --> 00:03:14.720 +is uh for example if we calculate the + +00:03:12.280 --> 00:03:16.400 +probability of Jane went to the store uh + +00:03:14.720 --> 00:03:19.000 +this would have a high probability + 
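A minimal sketch of the scoring operation: summing per-token log-probabilities of a sentence under an autoregressive LM. The Hugging Face API and "gpt2" are an illustrative choice of tooling and model, not the lecture's setup:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def sentence_logprob(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss       # mean negative log-likelihood per predicted token
    return -loss.item() * (ids.shape[1] - 1)  # total log P(x_2 ... x_n | x_1)

print(sentence_logprob("Jane went to the store"))  # should outscore a word salad of the same words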
+00:03:16.400 --> 00:03:20.879 +ideally um and if we have this kind of + +00:03:19.000 --> 00:03:23.400 +word salid like this this would be given + +00:03:20.879 --> 00:03:26.080 +a low probability uh according to a + +00:03:23.400 --> 00:03:28.000 +English language model if we had a + +00:03:26.080 --> 00:03:30.000 +Chinese language model ideally it would + +00:03:28.000 --> 00:03:31.319 +also probably give low probability first + +00:03:30.000 --> 00:03:32.879 +sentence too because it's a language + +00:03:31.319 --> 00:03:35.000 +model of natural Chinese and not of + +00:03:32.879 --> 00:03:36.200 +natural English so there's also + +00:03:35.000 --> 00:03:37.360 +different types of language models + +00:03:36.200 --> 00:03:38.400 +depending on the type of data you play + +00:03:37.360 --> 00:03:41.360 +in + +00:03:38.400 --> 00:03:43.599 +the another thing I can do is generate + +00:03:41.360 --> 00:03:45.239 +sentences and we'll talk more about the + +00:03:43.599 --> 00:03:48.280 +different methods for generating + +00:03:45.239 --> 00:03:50.319 +sentences but typically they fall into + +00:03:48.280 --> 00:03:51.799 +one of two categories one is sampling + +00:03:50.319 --> 00:03:53.200 +like this where you try to sample a + +00:03:51.799 --> 00:03:55.480 +sentence from the probability + +00:03:53.200 --> 00:03:57.280 +distribution of the language model + +00:03:55.480 --> 00:03:58.360 +possibly with some modifications to the + +00:03:57.280 --> 00:04:00.760 +probability + +00:03:58.360 --> 00:04:03.079 +distribution um the other thing which I + +00:04:00.760 --> 00:04:04.760 +didn't write on the slide is uh finding + +00:04:03.079 --> 00:04:07.439 +the highest scoring sentence according + +00:04:04.760 --> 00:04:09.760 +to the language model um and we do both + +00:04:07.439 --> 00:04:09.760 +of those + +00:04:10.560 --> 00:04:17.600 +S so more concretely how can we apply + +00:04:15.199 --> 00:04:21.199 +these these can be applied to answer + +00:04:17.600 --> 00:04:23.840 +questions so for example um if we have a + +00:04:21.199 --> 00:04:27.240 +multiple choice question we can score + +00:04:23.840 --> 00:04:30.639 +possible multiple choice answers and uh + +00:04:27.240 --> 00:04:32.880 +the way we do this is we calculate + +00:04:30.639 --> 00:04:35.440 +we first + +00:04:32.880 --> 00:04:38.440 +take uh like we have + +00:04:35.440 --> 00:04:38.440 +like + +00:04:38.560 --> 00:04:43.919 +um + +00:04:40.960 --> 00:04:46.919 +where is + +00:04:43.919 --> 00:04:46.919 +CMU + +00:04:47.560 --> 00:04:51.600 +located um + +00:04:51.960 --> 00:04:59.560 +that's and actually maybe promete this + +00:04:54.560 --> 00:05:01.360 +all again to an a here and then we say X + +00:04:59.560 --> 00:05:05.800 +X1 is equal to + +00:05:01.360 --> 00:05:07.520 +this and then we have X2 which is + +00:05:05.800 --> 00:05:09.720 +Q + +00:05:07.520 --> 00:05:12.479 +where is + +00:05:09.720 --> 00:05:14.120 +CMU + +00:05:12.479 --> 00:05:18.080 +located + +00:05:14.120 --> 00:05:19.720 +a um what's something + +00:05:18.080 --> 00:05:21.960 +plausible + +00:05:19.720 --> 00:05:24.560 +uh what was + +00:05:21.960 --> 00:05:26.319 +it okay now now you're going to make it + +00:05:24.560 --> 00:05:27.960 +tricky and make me talk about when we + +00:05:26.319 --> 00:05:29.960 +have multiple right answers and how we + +00:05:27.960 --> 00:05:31.759 +evaluate and stuff let let's ignore that + +00:05:29.960 --> 00:05:35.080 +for now it's say New + +00:05:31.759 --> 00:05:37.199 +York it's not located in New 
York is + +00:05:35.080 --> 00:05:40.560 +it + +00:05:37.199 --> 00:05:40.560 +okay let's say + +00:05:40.960 --> 00:05:45.199 +Birmingham hopefully there's no CMU + +00:05:43.199 --> 00:05:47.120 +affiliate in Birmingham I think we're + +00:05:45.199 --> 00:05:49.000 +we're pretty so um and then you would + +00:05:47.120 --> 00:05:53.880 +just calculate the probability of X1 and + +00:05:49.000 --> 00:05:56.440 +the probability of X2 X3 X4 Etc and um + +00:05:53.880 --> 00:06:01.479 +then pick the highest saring one and + +00:05:56.440 --> 00:06:01.479 +actually um there's a famous + +00:06:03.199 --> 00:06:07.440 +there's a famous uh leaderboard for + +00:06:05.840 --> 00:06:08.759 +language models that probably a lot of + +00:06:07.440 --> 00:06:09.759 +people know about it's called the open + +00:06:08.759 --> 00:06:13.120 +llm + +00:06:09.759 --> 00:06:15.639 +leaderboard and a lot of these tasks + +00:06:13.120 --> 00:06:17.319 +here basically correspond to doing + +00:06:15.639 --> 00:06:21.000 +something like that like hel swag is + +00:06:17.319 --> 00:06:22.599 +kind of a multiple choice uh is a + +00:06:21.000 --> 00:06:24.160 +multiple choice question answering thing + +00:06:22.599 --> 00:06:27.880 +about common sense where they calculate + +00:06:24.160 --> 00:06:30.280 +it by scoring uh scoring the + +00:06:27.880 --> 00:06:31.880 +outputs so that's a very common way to + +00:06:30.280 --> 00:06:35.000 +use language + +00:06:31.880 --> 00:06:36.960 +models um another thing is generating a + +00:06:35.000 --> 00:06:40.080 +continuation of a question prompt so + +00:06:36.960 --> 00:06:42.639 +basically this is when you uh + +00:06:40.080 --> 00:06:44.759 +sample and so what you would do is you + +00:06:42.639 --> 00:06:48.440 +would prompt the + +00:06:44.759 --> 00:06:50.560 +model with this uh X here and then you + +00:06:48.440 --> 00:06:53.800 +would ask it to generate either the most + +00:06:50.560 --> 00:06:56.400 +likely uh completion or generate um + +00:06:53.800 --> 00:06:58.960 +sample multiple completions to get the + +00:06:56.400 --> 00:07:00.720 +answer so this is very common uh people + +00:06:58.960 --> 00:07:03.759 +are very familiar with this there's lots + +00:07:00.720 --> 00:07:07.160 +of other uh things you can do though so + +00:07:03.759 --> 00:07:09.400 +um you can classify text and there's a + +00:07:07.160 --> 00:07:12.720 +couple ways you can do this uh one way + +00:07:09.400 --> 00:07:15.960 +you can do this is um like let's say we + +00:07:12.720 --> 00:07:15.960 +have a sentiment sentence + +00:07:16.160 --> 00:07:21.520 +here + +00:07:17.759 --> 00:07:25.440 +um you can say uh + +00:07:21.520 --> 00:07:30.919 +this is + +00:07:25.440 --> 00:07:33.919 +gr and then you can say um + +00:07:30.919 --> 00:07:37.680 +star + +00:07:33.919 --> 00:07:38.879 +rating five or something like that and + +00:07:37.680 --> 00:07:41.400 +then you could also have star rating + +00:07:38.879 --> 00:07:43.680 +four star rating three star rating two + +00:07:41.400 --> 00:07:45.080 +star rating one and calculate the + +00:07:43.680 --> 00:07:46.639 +probability of all of these and find + +00:07:45.080 --> 00:07:50.360 +which one has the highest probability so + +00:07:46.639 --> 00:07:51.800 +this is a a common way you can do things + +00:07:50.360 --> 00:07:54.319 +another thing you can do which is kind + +00:07:51.800 --> 00:07:55.240 +of interesting and um there are papers + +00:07:54.319 --> 00:07:58.319 +on this but they're kind of + +00:07:55.240 --> 
00:08:00.800 +underexplored is you can do like star + +00:07:58.319 --> 00:08:04.800 +rating + +00:08:00.800 --> 00:08:04.800 +five and then + +00:08:04.879 --> 00:08:13.280 +generate generate the output um and so + +00:08:10.319 --> 00:08:15.039 +that basically says Okay I I want a + +00:08:13.280 --> 00:08:16.680 +positive sentence now I'm going to score + +00:08:15.039 --> 00:08:19.120 +the actual review and see whether that + +00:08:16.680 --> 00:08:22.319 +matches my like conception of a positive + +00:08:19.120 --> 00:08:24.080 +sentence and there's a few uh papers + +00:08:22.319 --> 00:08:25.680 +that do + +00:08:24.080 --> 00:08:28.240 +this + +00:08:25.680 --> 00:08:31.240 +um let + +00:08:28.240 --> 00:08:31.240 +me + +00:08:34.640 --> 00:08:38.760 +this is a kind of older one and then + +00:08:36.240 --> 00:08:42.080 +there's another more recent one by Sean + +00:08:38.760 --> 00:08:43.839 +Min I believe um uh but they demonstrate + +00:08:42.080 --> 00:08:45.480 +how you can do both generative and + +00:08:43.839 --> 00:08:47.600 +discriminative classification in this + +00:08:45.480 --> 00:08:51.760 +way so that's another thing that you can + +00:08:47.600 --> 00:08:51.760 +do uh with language + +00:08:53.279 --> 00:08:56.839 +models and then the other thing you can + +00:08:55.200 --> 00:08:59.000 +do is you can generate the label given a + +00:08:56.839 --> 00:09:00.680 +classification proc so you you say this + +00:08:59.000 --> 00:09:03.079 +is is great star rating and then + +00:09:00.680 --> 00:09:05.720 +generate five + +00:09:03.079 --> 00:09:09.320 +whatever finally um you can do things + +00:09:05.720 --> 00:09:10.920 +like correct a grammar so uh for example + +00:09:09.320 --> 00:09:12.560 +if you score the probability of each + +00:09:10.920 --> 00:09:14.839 +word and you find words that are really + +00:09:12.560 --> 00:09:17.760 +low probability then you can uh replace + +00:09:14.839 --> 00:09:20.160 +them with higher probability words um or + +00:09:17.760 --> 00:09:21.720 +you could ask a model please paraphrase + +00:09:20.160 --> 00:09:24.000 +this output and it will paraphrase it + +00:09:21.720 --> 00:09:27.640 +into something that gives you uh you + +00:09:24.000 --> 00:09:30.720 +know that has better gra so basically + +00:09:27.640 --> 00:09:33.079 +like as I said language models are very + +00:09:30.720 --> 00:09:34.600 +diverse um and they can do a ton of + +00:09:33.079 --> 00:09:35.680 +different things but most of them boil + +00:09:34.600 --> 00:09:38.440 +down to doing one of these two + +00:09:35.680 --> 00:09:42.079 +operations scoring or + +00:09:38.440 --> 00:09:42.079 +generating any questions + +00:09:42.480 --> 00:09:47.600 +s + +00:09:44.640 --> 00:09:50.000 +okay so next I I want to talk about a + +00:09:47.600 --> 00:09:52.279 +specific type of language models uh Auto + +00:09:50.000 --> 00:09:54.240 +regressive language models and auto + +00:09:52.279 --> 00:09:56.720 +regressive language models are language + +00:09:54.240 --> 00:10:00.240 +models that specifically calculate this + +00:09:56.720 --> 00:10:02.320 +probability um in a fashion where you + +00:10:00.240 --> 00:10:03.680 +calculate the probability of one token + +00:10:02.320 --> 00:10:05.519 +and then you calculate the probability + +00:10:03.680 --> 00:10:07.680 +of the next token given the previous + +00:10:05.519 --> 00:10:10.519 +token the probability of the third token + +00:10:07.680 --> 00:10:13.760 +G given the previous two tokens almost + +00:10:10.519 --> 
00:10:18.600 +always this happens left to right um or + +00:10:13.760 --> 00:10:20.519 +start to finish um and so this is the + +00:10:18.600 --> 00:10:25.000 +next token here this is a context where + +00:10:20.519 --> 00:10:28.440 +usually um the context is the previous + +00:10:25.000 --> 00:10:29.640 +tokens Can anyone think of a time when + +00:10:28.440 --> 00:10:32.440 +you might want to do + +00:10:29.640 --> 00:10:37.839 +right to left instead of left to + +00:10:32.440 --> 00:10:40.399 +right yeah language that's from right to + +00:10:37.839 --> 00:10:41.680 +yeah that's actually exactly what I what + +00:10:40.399 --> 00:10:43.079 +I was looking for so if you have a + +00:10:41.680 --> 00:10:46.839 +language that's written from right to + +00:10:43.079 --> 00:10:49.320 +left actually uh things like uh Arabic + +00:10:46.839 --> 00:10:51.360 +and Hebrew are written right to left so + +00:10:49.320 --> 00:10:53.720 +um both of those are + +00:10:51.360 --> 00:10:56.360 +chronologically like earlier to later + +00:10:53.720 --> 00:10:59.399 +because you know if if you're thinking + +00:10:56.360 --> 00:11:01.079 +about how people speak um the the first + +00:10:59.399 --> 00:11:02.440 +word that an English speaker speaks is + +00:11:01.079 --> 00:11:04.000 +on the left just because that's the way + +00:11:02.440 --> 00:11:06.079 +you write it but the first word that an + +00:11:04.000 --> 00:11:09.639 +Arabic speaker speaks is on the the + +00:11:06.079 --> 00:11:12.360 +right because chronologically that's uh + +00:11:09.639 --> 00:11:13.519 +that's how it works um there's other + +00:11:12.360 --> 00:11:16.320 +reasons why you might want to do right + +00:11:13.519 --> 00:11:17.839 +to left but uh it's not really that left + +00:11:16.320 --> 00:11:21.720 +to right is important it's that like + +00:11:17.839 --> 00:11:24.440 +start to finish is important in spoken + +00:11:21.720 --> 00:11:27.880 +language so um one thing I should + +00:11:24.440 --> 00:11:30.240 +mention here is that this is just a rule + +00:11:27.880 --> 00:11:31.560 +of probability that if you have multiple + +00:11:30.240 --> 00:11:33.720 +variables and you're calculating the + +00:11:31.560 --> 00:11:35.760 +joint probability of variables the + +00:11:33.720 --> 00:11:38.000 +probability of all of the variables + +00:11:35.760 --> 00:11:40.240 +together is equal to this probability + +00:11:38.000 --> 00:11:41.920 +here so we're not making any + +00:11:40.240 --> 00:11:44.399 +approximations we're not making any + +00:11:41.920 --> 00:11:46.959 +compromises in order to do this but it + +00:11:44.399 --> 00:11:51.639 +all hinges on whether we can predict + +00:11:46.959 --> 00:11:53.440 +this probability um accurately uh + +00:11:51.639 --> 00:11:56.160 +actually another question does anybody + +00:11:53.440 --> 00:11:57.800 +know why we do this decomposition why + +00:11:56.160 --> 00:12:00.959 +don't we just try to predict the + +00:11:57.800 --> 00:12:00.959 +probability of x + +00:12:02.120 --> 00:12:05.399 +directly any + +00:12:07.680 --> 00:12:12.760 +ideas uh of big X sorry uh why don't we + +00:12:11.079 --> 00:12:17.560 +try to calculate the probability of this + +00:12:12.760 --> 00:12:21.360 +is great directly without deated the + +00:12:17.560 --> 00:12:21.360 +IND that + +00:12:25.519 --> 00:12:31.560 +possibility it could be word salid if + +00:12:27.760 --> 00:12:35.279 +you did it in a in a particular way yes + +00:12:31.560 --> 00:12:35.279 +um so that that's a good point + +00:12:39.519 
--> 00:12:47.000
+yeah yeah so for example we talked about
+
+00:12:43.760 --> 00:12:50.120
+um uh we'll talk about
+
+00:12:47.000 --> 00:12:51.920
+models um or I I mentioned this briefly
+
+00:12:50.120 --> 00:12:54.000
+last time I can mention it in more
+
+00:12:51.920 --> 00:12:55.639
+detail this time but this is great we
+
+00:12:54.000 --> 00:12:59.880
+probably have never seen this before
+
+00:12:55.639 --> 00:13:01.399
+right so if we predict only things that
+
+00:12:59.880 --> 00:13:03.199
+we've seen before if we only assign a
+
+00:13:01.399 --> 00:13:04.600
+non-zero probability to the things we've
+
+00:13:03.199 --> 00:13:06.000
+seen before there's going to be lots of
+
+00:13:04.600 --> 00:13:07.079
+sentences that we've never seen before
+
+00:13:06.000 --> 00:13:10.000
+it makes it
+
+00:13:07.079 --> 00:13:13.760
+super sparse um that that's basically close
+
+00:13:10.000 --> 00:13:16.399
+to what I wanted to say so um the reason
+
+00:13:13.760 --> 00:13:18.040
+why we don't typically do it with um
+
+00:13:16.399 --> 00:13:21.240
+predicting the whole sentence directly
+
+00:13:18.040 --> 00:13:22.800
+is because if we think about the size of
+
+00:13:21.240 --> 00:13:24.959
+the classification problem we need to
+
+00:13:22.800 --> 00:13:27.880
+solve in order to predict the next word
+
+00:13:24.959 --> 00:13:30.320
+it's a V uh where V is the size of the
+
+00:13:27.880 --> 00:13:33.120
+vocabulary but the size of the
+
+00:13:30.320 --> 00:13:35.399
+classification problem that we need to
+
+00:13:33.120 --> 00:13:38.040
+um we need to solve if we predict
+
+00:13:35.399 --> 00:13:40.079
+everything directly is V to the n where
+
+00:13:38.040 --> 00:13:42.240
+n is the length of the sequence and
+
+00:13:40.079 --> 00:13:45.240
+that's just huge the vocabulary is so
+
+00:13:42.240 --> 00:13:48.440
+big that it's hard to kind of uh know
+
+00:13:45.240 --> 00:13:51.000
+how we handle that so basically by doing
+
+00:13:48.440 --> 00:13:53.160
+this sort of decomposition we decompose
+
+00:13:51.000 --> 00:13:56.440
+this into uh
+
+00:13:53.160 --> 00:13:58.120
+n um prediction problems of size V and
+
+00:13:56.440 --> 00:13:59.519
+that's kind of just a lot more
+
+00:13:58.120 --> 00:14:03.079
+manageable from the point of view of
+
+00:13:59.519 --> 00:14:06.000
+how we train uh know how we train
+
+00:14:03.079 --> 00:14:09.399
+models um that being said there are
+
+00:14:06.000 --> 00:14:11.360
+other alternatives um something very
+
+00:14:09.399 --> 00:14:13.920
+widely known uh very widely used is
+
+00:14:11.360 --> 00:14:16.440
+called a masked language model um a masked
+
+00:14:13.920 --> 00:14:19.480
+language model is something like BERT or
+
+00:14:16.440 --> 00:14:21.680
+DeBERTa or RoBERTa or all of these models
+
+00:14:19.480 --> 00:14:25.000
+that you might have heard of if you've been
+
+00:14:21.680 --> 00:14:28.279
+in NLP for more than two years I guess
+
+00:14:25.000 --> 00:14:30.680
+um and basically what they do is they
+
+00:14:28.279 --> 00:14:30.680
+predict
+
+00:14:32.199 --> 00:14:37.480
+uh they like mask out this word and they
+
+00:14:34.839 --> 00:14:39.480
+predict the middle word so they mask out
+
+00:14:37.480 --> 00:14:41.440
+is and then try to predict that given
+
+00:14:39.480 --> 00:14:45.320
+all the other words the problem with
+
+00:14:41.440 --> 00:14:48.959
+these models is uh twofold number one
+
+00:14:45.320 --> 00:14:51.880
+they don't actually give you a uh good
+
+00:14:48.959 --> 00:14:55.399
+probability
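Written out, the left-to-right decomposition discussed above is just the chain rule of probability (an editor's formalization of the spoken math, with V the vocabulary size and n the sequence length):

P(X) \;=\; \prod_{t=1}^{n} P(x_t \mid x_1, \ldots, x_{t-1})

so one intractable V^n-way prediction of the whole sequence becomes n separate V-way predictions of the next token, with no approximation involved.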
That being said, there are alternatives. Something very widely used is called a masked language model — BERT, DeBERTa, RoBERTa, all of these models you might have heard of if you've been in NLP for more than two years, I guess. Basically, what they do is mask out a word — say they mask out "is" — and then try to predict that middle word given all the other words. The problem with these models is twofold. Number one, they don't actually give you a properly formed probability, because the chain-rule decomposition is true only as long as you're conditioning on things you've previously generated; so they're not true language models from the point of view of being able to easily compute the probability of a sequence. Number two, it's hard to generate from them, because you need to generate in some order, and masked language models don't specify a canonical order. So they're good for some things, like calculating representations, but they're not as useful for generation. There are also energy-based language models, which basically create a scoring function that's not necessarily left-to-right or right-to-left or anything like that, but that's fairly advanced — if you're interested in them I can talk more about them, but for now we'll skip them. And all of the language models you hear about nowadays — GPT, LLaMA, whatever else — are autoregressive models. Cool. Any questions about that?

[Student question.] Yeah — the question was: in masked language models, couldn't you just mask out the last token and predict that? Sure, you could do that, but they're just not trained that way, so they won't do a very good job. And if you always trained a model that way, it would be an autoregressive language model, so you're back to where you were in the first place.

Cool. So now I'll talk about unigram language models. The simplest language models are count-based unigram language models, and the way they work is this: we want to calculate this probability conditioned on all the previous words, and instead we just say we're not going to worry about the order at all — we predict the probability of the next word independently of all the other words.
So if you have something like this, it's actually extremely easy to predict the probability of a word: you just count up the number of times that word appeared in the training data set and divide by the total number of words in the training data set, and now you have a language model. This is like language modeling 101 — it's the easiest possible language model, one you can write in basically three lines of Python.
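[Aside, not from the lecture: roughly those three lines, as a sketch.]

```python
from collections import Counter

def train_unigram(tokens):
    counts = Counter(tokens)                         # count each word...
    total = sum(counts.values())                     # ...and the corpus size
    return {w: c / total for w, c in counts.items()}

probs = train_unigram("the dog saw the cat".split())
print(probs["the"])  # 2 / 5 = 0.4
```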
So it has a few problems. The first problem with this language model is handling unknown words: what happens if you see a word you've never seen before? In this language model, what is the probability of any sequence that contains a word you've never seen? [Student answer.] Right — the probability of the sequence goes to zero. That might not be such a big problem for generating from the language model, because maybe it's fine to only generate words you've seen before, but it is definitely a problem for scoring things with the language model. It's also a problem for something like translation: if you get an unknown word when you're translating something, you would like to be able to translate it reasonably, but you can't. So that's an issue — how do we fix it? There are a couple of options. The first option is to segment into characters and subwords, and this is the preferred option most people use nowadays: just run SentencePiece, segment your vocabulary, and you're all set — you'll no longer have any unknown words, because all the unknown words get split into shorter units. There are other options if you're very interested in or serious about this and want to handle it as part of a research project or something: you can build an unknown word model. What an unknown word model basically does is predict the probability of unknown words using characters, while modeling the probability of known words using words, so you get a kind of hierarchical model where you first try to predict words, and if you can't, you predict unknown words from characters. This isn't used as widely anymore, but it's worth thinking about, or at least knowing about.

Okay, a second detail: parameterizing in log space. The multiplication of probabilities can be re-expressed as the addition of log probabilities. This is really important, and it's used in all language models, whether they're unigram language models or neural language models. There's actually a very simple reason why we do it this way — does anybody know the answer? What would happen if we multiplied, say, 30 tokens' worth of probabilities together? [Student answer.] Yeah — too small. Basically, the problem is numerical underflow. If we weren't doing this on a computer and were just doing math, it wouldn't matter at all, but because we're doing it on a computer, we have our 32-bit float, with the exponent and the fraction, and the smallest value the exponent can express is limited by the number of exponent bits in a 32-bit float — I forget exactly, but it's something like 10 to the minus 38. If the number gets too small, it underflows to zero, and you get a zero probability despite the fact that it's not actually zero. That's usually why we do this. It's also a little easier for people to just look at something like minus 30 instead of something times 10 to the minus 30. So that is why we normally work in log space.
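[Aside, not from the lecture: the underflow in a few lines of NumPy, versus the log-space fix.]

```python
import numpy as np

prob = np.float32(1.0)
for _ in range(30):
    prob *= np.float32(1e-3)   # 30 tokens of probability 0.001 each
print(prob)                    # 0.0 -- underflowed past float32's ~1.2e-38 limit

logprob = 30 * np.log(1e-3)    # the same product, done as a sum of logs
print(logprob)                 # -207.23...: no underflow, and easy to read
```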
Another thing to note is that you can treat each of these unigram probabilities as parameters. We talked about the parameters of a model — like a bag-of-words model — and we can similarly treat these unigram probabilities as parameters. So how many parameters does a unigram model have? Any ideas? [Student answer.] Yeah, exactly — a number of parameters equal to the size of the vocabulary. This one's easy, and then we can go to the slightly less easy ones. So anyway, this is a unigram model. It's not too hard: you basically count up and divide, and then you add the log probabilities together — you could easily do it in a short Python program.

Next, higher-order n-gram models. What these do is essentially limit the context length to N, and then count and divide. The way it works — maybe this is a little tricky, but I can show an example — is that we count up the number of times we've seen "this is an example", then divide by the number of times we've seen "this is an", and that's the probability of "example" given the previous words.
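[Aside, not from the lecture: the same count-and-divide idea with context, as a sketch.]

```python
from collections import Counter

def train_ngram(tokens, n):
    """Maximum-likelihood n-gram model: count(context + word) / count(context)."""
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    ctxs  = Counter(tuple(tokens[i:i + n - 1]) for i in range(len(tokens) - n + 2))
    # an unseen context raises ZeroDivisionError -- exactly the sparsity
    # problem discussed next
    return lambda w, ctx: grams[tuple(ctx) + (w,)] / ctxs[tuple(ctx)]

toks = "this is an example and this is an exercise".split()
p = train_ngram(toks, 4)
print(p("example", ("this", "is", "an")))  # 1 / 2: the context appears twice
```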
The problem with this is that any time we get a sequence we've never seen before — and we'd like to model longer sequences to make this more accurate — it gets a probability of zero, because that count on top will be zero. The way n-gram language models handle this is by falling back to shorter n-gram models. Sorry — when I say n-gram here, n is the length of the context, so this is a four-gram model, because the top context has four words. The four-gram model calculates this and then interpolates it with a trigram model; the trigram model in turn interpolates with the bigram model, and the bigram model interpolates with the unigram model.

[Brief interruption while the projector is fixed.]

Okay, so this is how we deal with the fact that models can be more precise but more sparse, or less precise but less sparse. This is another concept we're going to talk about more in a later class, but it's a variety of ensembling, where we have different models that are good at different things and we combine them together. This is the first instance of it you'll see; there are others. The reason I point out that it's a variety of ensembling is that you're probably not going to use n-gram models very widely — unless you really want to process huge data sets, because that is one advantage they have — but some of these smoothing methods can be interesting even if you're using other models and ensembling them together.

To decide this interpolation coefficient, one way is simply to fix the amount of probability we use every time: we could always set this λ to 0.8 and 1 − λ to 0.2 and interpolate the two together. But there are more sophisticated methods.
One of them is called additive smoothing. The way additive smoothing works is basically that we add α (times a fallback distribution) to the top and α to the bottom, and the reason this is nice is that as our counts get larger, we approach the true empirical distribution. Just to give an example: let's say we have "the box is", our α is one, and this is our fallback distribution, where "the" has probability 0.5, "box" has probability 0.3, and "is" has probability 0.2. Now let's talk about our bigram model, which has counts for "the", "box", and "is". Initially we have no counts — say we have no data at all about this distribution — so our counts are zero and our α is one, and we just fall back to the fallback distribution: one times that distribution. Then say we get one piece of evidence. Now the empirical estimate would be 0.33, and with α equal to one we'd have 0.5 times 0.33 plus 0.5 times 0.3 as the probability of "box", because we have one piece of evidence and we're adding one count's worth of the lower-order distribution. Then, if we increase the counts — sorry, I had that backwards for a second — we rely more strongly on the higher-order distribution, because we have more evidence for it.
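[Aside, not from the lecture: a sketch of the interpolation view of additive smoothing; the numbers are made up rather than the exact ones on the board.]

```python
def smoothed_prob(word, ctx, counts, ctx_total, alpha, fallback):
    """P(w|ctx) = (count(ctx, w) + alpha * P_fallback(w)) / (count(ctx) + alpha)."""
    return (counts.get((ctx, word), 0) + alpha * fallback[word]) / (ctx_total + alpha)

fallback = {"the": 0.5, "box": 0.3, "is": 0.2}
counts = {(("the",), "box"): 1}          # one observation of "the box"
print(smoothed_prob("box", ("the",), counts, 1, 1.0, fallback))
# (1 + 1 * 0.3) / (1 + 1) = 0.65, i.e. 0.5 * (empirical 1.0) + 0.5 * (fallback 0.3)
```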
So in this case, the λ I showed before is equal to the sum of the counts over the sum of the counts plus α. As the sum of the counts gets larger, you rely more on the higher-order distribution; as it gets smaller, you rely more on the lower-order distribution. The more evidence you have, the more you rely on it — that's the basic idea behind these smoothing methods.

There are also a number of other varieties, such as discounting. With a discount hyperparameter, you subtract a fixed amount — say 0.5 — from each of the counts. Empirically, this is just a better match for the fact that natural language has a very long-tailed distribution; you can do the math and show that it works, and that's in this paper if you're interested in the details. And then the state of the art in language modeling before neural language models came out was Kneser-Ney smoothing. What it does is discount, but it also modifies the lower-order distribution: you modify the lower-order counts according to how many times each word has appeared in new contexts, the idea being that you only use the lower-order distribution when you're in a new context. So you can be clever about how you build that distribution, based on the fact that you're only using it when the higher-order distribution is not very reliable. I would have spent a lot more time teaching this when n-gram models were the main thing people used, but now I'm going to go over it very quickly.
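[Aside, not from the lecture: the standard forms of these two ideas in symbols — notation mine, and the lecture's slides may write them differently.]

```latex
% interpolation weight that grows with the evidence:
\lambda(\mathrm{ctx}) = \frac{\sum_w c(\mathrm{ctx}, w)}{\sum_w c(\mathrm{ctx}, w) + \alpha}

% absolute discounting: subtract d from every nonzero count and give the
% freed-up probability mass to the lower-order distribution:
P(w \mid \mathrm{ctx}) = \frac{\max\bigl(c(\mathrm{ctx}, w) - d,\, 0\bigr)}{\sum_{w'} c(\mathrm{ctx}, w')}
  + \lambda_d(\mathrm{ctx})\, P_{\mathrm{lower}}(w)
```

where $\lambda_d(\mathrm{ctx})$ is chosen so the distribution sums to one.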
So don't worry if you weren't able to follow all the details. The takeaways are: number one, these are the methods people use for n-gram language models; number two, if you're thinking about combining language models together in some way — ensembling their probabilities or something like that — this is something to think about a bit more carefully, because some language models might be good in some contexts and other language models might be good in other contexts, and you need to account for that when combining them. Cool — any questions about this? Okay.

Cool. So there are a lot of problems we have to deal with when creating n-gram models, and they're actually what motivated the move to neural language models. The first is similar to what I talked about last time with text classification: n-gram models can't share strength among similar words, like "bought" and "purchased". Another is that they can't easily condition on context with intervening words: if you have a rare word in your context, an n-gram model immediately starts falling back toward the unigram distribution and ends up being very bad. And they can't handle long-distance dependencies: if a dependency is beyond the n-gram context, you can't model it. Actually, before neural language models became a really big thing, people came up with individual solutions to each of these problems — and it wasn't that those solutions didn't work at all; it was that engineering all of them together was so hard that nobody ever did it, so people relied on n-gram models out of the box, and that wasn't scalable.
It's kind of a funny example of how neural networks, despite all the pain they cause in some areas, are a much better engineering solution to all the issues of the previous methods. So, compared with n-gram models, neural language models achieve better performance, but n-gram models are very, very fast to estimate and apply — you can even estimate them completely in parallel. Also — and I don't know if this is necessarily a reason to use n-gram language models, but it is a reason to think a little critically about neural ones — neural language models can actually be worse than n-gram language models at modeling very low-frequency phenomena. An n-gram model can learn from a single example: it only needs to see something once before the probability of that continuation goes up very high, whereas neural language models can forget, or fail to memorize appropriately, from single examples. There's a standard toolkit for estimating n-gram language models called KenLM, and it's kind of frighteningly fast. People have been joking about job postings asking for ten years of experience working on large language models — "wait, nobody has ten years of experience working on large language models!" Well, Kenneth Heafield, who created KenLM, does have ten years of experience working on large language models, because he was estimating 7-gram models with a vocabulary of, say, 100,000 on web text. How many parameters is that?
That's more than any large neural language model we have nowadays — with a 100,000-word vocabulary there are 100,000 to the 7th, or 10^35, possible 7-grams. Of course, most of those parameters are sparse — they're zero counts — so obviously you don't memorize all of them. Another thing I should mention, so this doesn't sound completely outdated: there was a really good recent paper that used the fact that n-gram models are so scalable. It's called "Data Selection for Language Models via Importance Resampling", and one interesting thing they do in this paper is that they don't actually use neural models in any way, despite the fact that the downstream data they sample is used to train neural models. They run n-gram models over lots and lots of data and then fit a Gaussian distribution to the n-gram model counts, basically, in order to select the data. The reason they do this is that they want to do this over the entire web, and running a neural model over the entire web would be too expensive, so they use n-gram models instead. That's just an example of a modern setting where keeping this in mind is a good idea.

Okay, I'd like to move to the next part: language model evaluation. This is important to know. I'm not going to talk about evaluating language models on other tasks — right now I'm only talking about evaluating language models on the task of language modeling itself, and there are a number of metrics we use for that. The first one is log likelihood.
Basically, the way we calculate log likelihood — sorry, there's an extra parenthesis on the slide — is that we get a test set, which ideally has not been included in our training data, take all of the documents or sentences in the test set, and calculate the log probability of all of them. We don't actually use this very broadly to evaluate models, because the number is very dependent on the size of the data set: with a larger data set this number will be larger, and with a smaller data set it will be smaller. The more common thing is per-word log likelihood, which is basically the log probability of the entire corpus divided by the number of words in the corpus. It's also common for papers to report negative log likelihood, because that's what's used as a loss, and there lower is better — so you just need to be careful about which one is being reported. This is pretty common; I think most people are somewhat familiar with it.

Another thing you might see is entropy — and specifically, this is often called cross-entropy, because you're estimating the model on a training data set and then evaluating it on a separate test data set. It's usually calculated as log base 2 of the probability, divided by the number of words or units in the corpus. Does anyone know why it's log base 2, as opposed to a natural log? [Student answer.] Yeah — it's measured in bits. This is kind of a historical thing, and it's not super important for language models, but it's actually pretty interesting to think about.
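[Aside, not from the lecture: the three metrics side by side, in notation of my choosing — $\mathcal{E}_{\mathrm{test}}$ is the test set and $|E|$ the length of sequence $E$.]

```latex
\mathrm{LL} = \sum_{E \in \mathcal{E}_{\mathrm{test}}} \log P(E)
\qquad
\mathrm{WLL} = \frac{1}{\sum_{E} |E|} \sum_{E \in \mathcal{E}_{\mathrm{test}}} \log P(E)
\qquad
H = -\frac{1}{\sum_{E} |E|} \sum_{E \in \mathcal{E}_{\mathrm{test}}} \log_2 P(E)
```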
So: any probability distribution can also be used for data compression. When you're making a zip file, or running gzip or bz2 or something like that, you're compressing a file into a smaller file — and any language model can also be used to compress a file into a smaller file. The way it does this is that more likely sequences — for example, more likely sentences or more likely documents — get compressed into shorter outputs. And I think it's pretty safe to say this is the ideal method for compression — unless I'm not well-versed enough in information theory, I believe you can't get a better method than this. The way it works — I have a figure up here, but I'd like to recreate it — is this. Say we have a vocabulary: "a" with probability 50%, "b" with probability 33%, and "c" with about 17%. If you have a single-token sequence, you divide the unit interval into a 0 half and a 1 half. If your single-token sequence is "a", you can just output 0 and you're done encoding it. If your single-token sequence is "b", then the 1 half overlaps with both b and c, so you need to split it further into 10 and 11, and you can see that 10 is entirely encompassed by b — so b is 10. C is not entirely encompassed by 11, so you break that up further into 110 and 111, and 111 is entirely encompassed by c, so c would be 111. So every sequence that starts with 0 starts with "a", every sequence that starts with 10 starts with "b",
and every sequence that starts with 111 starts with "c". Then you look at the next token. Let's say we're using a unigram model, and the next token is "c". We already have the b interval, so we subdivide b into ba, bb, and bc, and then we find the first binary sequence whose interval is entirely encompassed by the bc interval — and the moment we find a binary sequence entirely inside the interval, that's the bit string we can use to represent the sequence. If you're interested in this, you can look up arithmetic coding on Wikipedia; it's pretty fascinating. This example used the unigram model, where the probabilities don't change based on the context — but what if we knew that "c" had a really high probability of following "b"? If that's the case, based on our bigram model or neural language model or whatever, the bc interval becomes much, much larger, so it's much more likely to entirely encompass a short bit string, and the output can be much shorter. And if you use this arithmetic coding over a very long sequence of outputs, the length of the bit string needed to encode that output is essentially the number of bits per token according to the model, times the length of the sequence. So this is very directly connected to compression and information theory — that's where entropy comes from. Are there any questions about this?
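[Aside, not from the lecture: a rough sketch of the interval-narrowing idea — floating point rather than the exact integer arithmetic a real coder would use, and the greedy bit search is not guaranteed to be minimal.]

```python
def narrow(lo, hi, symbol, probs):
    """Shrink [lo, hi) to the sub-interval assigned to `symbol`."""
    c = 0.0
    for s, p in probs:
        if s == symbol:
            return lo + (hi - lo) * c, lo + (hi - lo) * (c + p)
        c += p
    raise KeyError(symbol)

def to_bits(lo, hi):
    """Greedily find a dyadic interval [a, b) inside [lo, hi); its bits are the code."""
    bits, a, b = "", 0.0, 1.0
    while not (lo <= a and b <= hi):
        mid = (a + b) / 2
        if (lo + hi) / 2 < mid:
            b, bits = mid, bits + "0"   # descend into the left (0) half
        else:
            a, bits = mid, bits + "1"   # descend into the right (1) half
    return bits

probs = [("a", 0.5), ("b", 0.33), ("c", 0.17)]
print(to_bits(*narrow(0.0, 1.0, "b", probs)))   # '10', as in the example above

lo, hi = 0.0, 1.0
for sym in ("b", "c"):                  # encode "bc" under the unigram model
    lo, hi = narrow(lo, hi, sym, probs)
print(to_bits(lo, hi))                  # '11001'; a model with high P(c|b) would
                                        # widen the interval and shorten this
```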
[A student asks why c is 111.] So I had 1: this interval is 1, this interval is 11, and this interval is 111 — and 111 is the first interval that is entirely inside c. It's not 10, because 10 overlaps with both b and c. [Follow-up: in which case would 110 be used to represent something?] That's a good question — I guess maybe you wouldn't use it, which seems a little wasteful. Let me think about that; it might be the case that you just don't use it, but it seems like you should use every bit string, right? If anybody has the answer, I'd be happy to hear it; otherwise I'll take it away and think about it.

Cool. The next metric is perplexity, another one that you see commonly. Perplexity is basically two to the per-word entropy — or, in natural-log space, e to the negative per-word log likelihood — and lower tends to be better. I'd like to do a little exercise to see how this works. Let's say we have "when a dog sees a squirrel, it will usually ___" — can anyone guess the next word? Just yell it out. [Bark.] Okay — what about something else?
[Chase. Run.] Okay. John? Anything else? Any other ones? So basically, what this shows is that humans are really bad language models. Interestingly, every single one of the words you predicted here is a regular verb, but an actual language model — GPT-2 — predicts "be" first, which is a copula, and also "start", as in "start running" or "start something". Humans really are bad at predicting next words — we're not trained that way — so we end up with these biases. Anyway, the reason I did this quiz is that it's essentially what perplexity means: perplexity is the number of times you'd have to sample from the probability distribution before you get the answer right. You were a little biased here, because we were doing sampling without replacement — nobody picked a word that had already been said — but it's essentially: if you guessed over and over and over again, how many guesses would you need until you got it right? Here, if the actual answer were "start", the perplexity would be 4.66, so we'd expect a language model to get it within four to five guesses — and you all took six, so you lose.

Another important thing to mention is evaluation and vocabulary. For a fair comparison, make sure the denominator is the same: if you're calculating perplexity, make sure you're dividing by the same number every time. If you're dividing by words, the paper you're comparing against should also be dividing by words — for example, if you compare LLaMA to GPT-2, they have different tokenizers, so they'll produce different numbers of tokens, and comparing with different denominators isn't fair. And if you're allowing unknown words or characters — if you allow the model not to predict some tokens — you need to be fair about that too.
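[Aside, not from the lecture: perplexity from per-token log probabilities, as a sketch.]

```python
import math

def perplexity(token_logprobs):
    """exp of the negative per-token log likelihood (natural-log version).
    Note the denominator: models with different tokenizers give different
    token counts, which is exactly the fair-comparison issue above."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# a single position where the model put probability 1/4.66 on the right answer:
print(perplexity([math.log(1 / 4.66)]))   # 4.66 -- "four to five guesses"
```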
So I'd like to go through a few alternatives. These are very similar to the neural-network classifiers and bag-of-words classifiers I talked about before, so I'll go through them rather quickly, but you should get the basic idea. The first alternative to count-based models is featurized models: we calculate features of the context, calculate probabilities based on those features, and optimize the feature weights using gradient descent, and so on. For example, if our input is "giving a", we calculate features: we might look up a weight vector for the identity of the two previous words, look up a weight vector for the identity of the immediately previous word, add a bias, add them all together to get scores, and calculate probabilities — where each vector is the size of the output vocabulary and the feature weights are optimized using SGD. So this is basically a bag-of-words classifier, but a multiclass classifier over the next token: it's very similar to our classification task from before, except instead of two classes we have, you know, 10,000 or 100,000 classes. Oh — sorry, a very quick aside: these were actually invented by Roni Rosenfeld, who is the head of the Machine Learning Department here — about 27 years ago, I guess — so he has even more experience with large language modeling than most people. The one difference from a bag-of-words classifier is that we have biases, and we have the score vector given the previous word, but instead of using a bag of words, this is asking how likely the next word is given the two previous words — so the feature design is a little different, and that gives you a total score. As a reminder, last time we had a training algorithm where we calculated gradients of the loss function with respect to the parameters, used the chain rule and backpropagation, and made updates that move in the direction that increases the likelihood. So nothing extremely different from what we had before.
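[Aside, not from the lecture: a sketch of such a featurized log-linear model in NumPy, with a toy vocabulary and random, untrained weights.]

```python
import numpy as np

rng = np.random.default_rng(0)
V = 1_000                                   # toy output vocabulary size

w_prev  = rng.normal(0, 0.01, (V, V))       # one size-V weight vector per previous word
w_prev2 = rng.normal(0, 0.01, (V, V))       # same, for the second-previous word
b       = np.zeros(V)                       # bias: one weight per output word

def next_word_probs(prev2_id, prev_id):
    scores = b + w_prev[prev_id] + w_prev2[prev2_id]   # add the feature vectors
    exp = np.exp(scores - scores.max())                # numerically stable softmax
    return exp / exp.sum()

p = next_word_probs(17, 42)                 # made-up ids for "giving", "a"
print(p.shape, round(p.sum(), 6))           # (1000,) 1.0
```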
+ +00:52:44.240 --> 00:52:49.119 +sharing strength among similar words it + +00:52:47.240 --> 00:52:50.839 +did solve the problem of conditioning on + +00:52:49.119 --> 00:52:52.839 +context with intervening words because + +00:52:50.839 --> 00:52:56.920 +now we can condition directly on Doctor + +00:52:52.839 --> 00:52:59.680 +without having to um combine with + +00:52:56.920 --> 00:53:01.200 +gitrid um and it doesn't necessarily + +00:52:59.680 --> 00:53:03.480 +handle longdistance dependencies because + +00:53:01.200 --> 00:53:05.240 +we're still limited in our context with + +00:53:03.480 --> 00:53:09.079 +the model I just + +00:53:05.240 --> 00:53:11.920 +described so um if we so sorry back to + +00:53:09.079 --> 00:53:13.480 +neural networks is what I should say um + +00:53:11.920 --> 00:53:15.160 +so if we have a feedforward neural + +00:53:13.480 --> 00:53:18.480 +network language model the way this + +00:53:15.160 --> 00:53:20.400 +could work is instead of looking up + +00:53:18.480 --> 00:53:23.079 +discrete features uh like we had in a + +00:53:20.400 --> 00:53:25.960 +bag of words model uh we would look up + +00:53:23.079 --> 00:53:27.400 +dents embeddings and so we concatenate + +00:53:25.960 --> 00:53:29.359 +together these dense + +00:53:27.400 --> 00:53:32.319 +embeddings and based on the dense + +00:53:29.359 --> 00:53:34.599 +embeddings uh we do some sort of uh + +00:53:32.319 --> 00:53:36.079 +intermediate layer transforms to extract + +00:53:34.599 --> 00:53:37.200 +features like we did for our neural + +00:53:36.079 --> 00:53:39.359 +network based + +00:53:37.200 --> 00:53:41.520 +classifier um we multiply this by + +00:53:39.359 --> 00:53:43.559 +weights uh we have a bias and we + +00:53:41.520 --> 00:53:46.559 +calculate + +00:53:43.559 --> 00:53:49.200 +scores and uh then we take a soft Max to + +00:53:46.559 --> 00:53:49.200 +do + +00:53:50.400 --> 00:53:55.799 +classification so um this can calculate + +00:53:53.359 --> 00:53:58.000 +combination features uh like we we also + +00:53:55.799 --> 00:54:02.280 +used in our uh neural network based + +00:53:58.000 --> 00:54:04.119 +classifiers so um this could uh give us + +00:54:02.280 --> 00:54:05.760 +a positive number for example if the + +00:54:04.119 --> 00:54:07.760 +previous word is a determiner and the + +00:54:05.760 --> 00:54:10.440 +second previous word is a verb so that + +00:54:07.760 --> 00:54:14.520 +would be like uh in giving and then that + +00:54:10.440 --> 00:54:14.520 +would allow us upway to that particular + +00:54:15.000 --> 00:54:19.559 +examples um so this allows us to share + +00:54:17.640 --> 00:54:21.640 +strength in various places in our model + +00:54:19.559 --> 00:54:23.520 +which was also You Know instrumental in + +00:54:21.640 --> 00:54:25.599 +making our our neural network + +00:54:23.520 --> 00:54:28.000 +classifiers work for similar work and + +00:54:25.599 --> 00:54:30.119 +stuff and so these would be word + +00:54:28.000 --> 00:54:32.160 +embeddings so similar words get similar + +00:54:30.119 --> 00:54:35.079 +embeddings another really important + +00:54:32.160 --> 00:54:38.480 +thing is uh similar output words also + +00:54:35.079 --> 00:54:41.839 +get similar rows in The softmax Matrix + +00:54:38.480 --> 00:54:44.440 +and so here remember if you remember + +00:54:41.839 --> 00:54:48.240 +from last class this was a big Matrix + +00:54:44.440 --> 00:54:50.400 +where the size of the Matrix was the + +00:54:48.240 --> 00:54:53.319 +number of vocabulary items times the + 
+
+00:54:38.480 --> 00:54:44.440
+and so here remember if you remember
+
+00:54:41.839 --> 00:54:48.240
+from last class this was a big matrix
+
+00:54:44.440 --> 00:54:50.400
+where the size of the matrix was the
+
+00:54:48.240 --> 00:54:53.319
+number of vocabulary items times the
+
+00:54:50.400 --> 00:54:55.920
+size of a word embedding this is also a
+
+00:54:53.319 --> 00:54:58.319
+matrix where this is
+
+00:54:55.920 --> 00:55:02.200
+the number of vocabulary items times the
+
+00:54:58.319 --> 00:55:04.160
+size of a context embedding and so
+
+00:55:02.200 --> 00:55:06.160
+these will also be similar because words
+
+00:55:04.160 --> 00:55:08.280
+that appear in similar contexts will
+
+00:55:06.160 --> 00:55:11.920
+also you know want similar embeddings so
+
+00:55:08.280 --> 00:55:15.119
+they get updated at the same
+
+00:55:11.920 --> 00:55:17.119
+time and similar contexts will have
+
+00:55:15.119 --> 00:55:19.799
+similar hidden states so ideally like if you
+
+00:55:17.119 --> 00:55:20.920
+have "giving a" or "delivering a" or
+
+00:55:19.799 --> 00:55:22.680
+something like that those would be
+
+00:55:20.920 --> 00:55:27.000
+similar contexts so they would get
+
+00:55:22.680 --> 00:55:27.000
+similar purple embeddings out of the model
+
+00:55:28.440 --> 00:55:31.599
+so one trick that's widely used in
+
+00:55:30.200 --> 00:55:34.960
+language models that further takes
+
+00:55:31.599 --> 00:55:38.799
+advantage of this is uh tying
+
+00:55:34.960 --> 00:55:44.160
+embeddings so here what this does is
+
+00:55:38.799 --> 00:55:48.280
+sharing parameters between this um
+
+00:55:44.160 --> 00:55:49.920
+lookup matrix here and this uh matrix
+
+00:55:48.280 --> 00:55:51.119
+over here that we use for calculating
+
+00:55:49.920 --> 00:55:56.200
+the
+
+00:55:51.119 --> 00:55:58.839
+softmax and um the reason why this is
+
+00:55:56.200 --> 00:56:00.559
+useful is twofold number one it gives
+
+00:55:58.839 --> 00:56:02.079
+you essentially more training data to
+
+00:56:00.559 --> 00:56:04.440
+learn these embeddings because instead
+
+00:56:02.079 --> 00:56:05.799
+of learning the embeddings whenever a
+
+00:56:04.440 --> 00:56:08.520
+word is in
+
+00:56:05.799 --> 00:56:10.599
+context separately from learning the
+
+00:56:08.520 --> 00:56:13.520
+embeddings whenever a word is predicted
+
+00:56:10.599 --> 00:56:15.480
+you learn the the same embedding matrix
+
+00:56:13.520 --> 00:56:19.319
+whenever the word is in the context or
+
+00:56:15.480 --> 00:56:21.520
+whenever it's predicted and so um that
+
+00:56:19.319 --> 00:56:24.119
+makes it more accurate to learn these uh
+
+00:56:21.520 --> 00:56:26.960
+embeddings well another thing is the
+
+00:56:24.119 --> 00:56:31.119
+embedding matrix can actually be very large
+
+00:56:26.960 --> 00:56:34.920
+so like let's say we have a vocabulary of
+
+00:56:31.119 --> 00:56:37.520
+100,000 and we have an embedding a
+
+00:56:34.920 --> 00:56:40.799
+word embedding size of like 512 or
+
+00:56:37.520 --> 00:56:45.319
+something like that
+
+00:56:40.799 --> 00:56:45.319
+that's um 51 million
+
+00:56:46.839 --> 00:56:52.440
+parameters um and this doesn't sound
+
+00:56:49.559 --> 00:56:55.520
+like a lot of parameters at first but it
+
+00:56:52.440 --> 00:56:57.880
+actually is a lot to learn when um
+
+00:56:55.520 --> 00:57:01.000
+these get updated relatively
+
+00:56:57.880 --> 00:57:03.400
+infrequently uh because
+
+00:57:01.000 --> 00:57:06.079
+um these get updated relatively
+
+00:57:03.400 --> 00:57:07.960
+infrequently because they only are
+
+00:57:06.079 --> 00:57:09.559
+updated whenever that word or token
+
+00:57:07.960 --> 00:57:12.319
+actually appears in your training data
+
+00:57:09.559 --> 00:57:14.119
+so um this can be a good thing for
+
+00:57:12.319 --> 00:57:16.319
+parameter savings parameter efficiency
+
+00:57:14.119 --> 00:57:16.319
+as well
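+
+A minimal sketch of the tying trick just described (an editor's illustration;
+PyTorch lets two modules literally share one parameter tensor). With the
+sizes from the example, each untied matrix would hold 100,000 x 512 = 51.2
+million parameters, so tying saves half of that and pools their updates.
+
+```python
+import torch.nn as nn
+
+vocab_size, emb_size = 100_000, 512
+emb = nn.Embedding(vocab_size, emb_size)           # input lookup matrix
+out = nn.Linear(emb_size, vocab_size, bias=False)  # softmax output matrix
+out.weight = emb.weight  # tie them: both are one shared (100k x 512) tensor
+```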
+
+00:57:16.440 --> 00:57:22.520
+um so this uh solves most of the
+
+00:57:19.599 --> 00:57:24.319
+problems here um but it doesn't solve
+
+00:57:22.520 --> 00:57:26.839
+the problem of long-distance dependencies
+
+00:57:24.319 --> 00:57:29.839
+because we're still limited by the overall
+
+00:57:26.839 --> 00:57:31.359
+length of uh the context that we're
+
+00:57:29.839 --> 00:57:32.520
+concatenating together here sure we
+
+00:57:31.359 --> 00:57:35.760
+could make that longer but that would
+
+00:57:32.520 --> 00:57:37.200
+make our model larger and um and bring
+
+00:57:35.760 --> 00:57:39.720
+various
+
+00:57:37.200 --> 00:57:42.520
+issues and so what I'm going to talk
+
+00:57:39.720 --> 00:57:44.599
+about on Thursday is how we solve
+
+00:57:42.520 --> 00:57:47.559
+this problem of modeling long contexts
+
+00:57:44.599 --> 00:57:49.720
+so how do we um build recurrent neural
+
+00:57:47.559 --> 00:57:52.559
+networks uh how do we build
+
+00:57:49.720 --> 00:57:54.960
+convolutional uh convolutional networks
+
+00:57:52.559 --> 00:57:57.520
+or how do we build attention-based
+
+00:57:54.960 --> 00:58:00.720
+Transformer models and these are all
+
+00:57:57.520 --> 00:58:02.119
+options that are used um Transformers
+
+00:58:00.720 --> 00:58:04.359
+are kind of
+
+00:58:02.119 --> 00:58:06.039
+the the main thing that people use
+
+00:58:04.359 --> 00:58:08.400
+nowadays but there's a lot of versions
+
+00:58:06.039 --> 00:58:11.880
+of Transformers that borrow ideas from
+
+00:58:08.400 --> 00:58:14.960
+recurrent uh and convolutional models
+
+00:58:11.880 --> 00:58:17.359
+um recently a lot of long-context models
+
+00:58:14.960 --> 00:58:19.440
+use ideas from recurrent networks and
+
+00:58:17.359 --> 00:58:22.160
+a lot of for example speech models or
+
+00:58:19.440 --> 00:58:24.160
+things like or image models use ideas
+
+00:58:22.160 --> 00:58:25.920
+from convolutional networks so I think
+
+00:58:24.160 --> 00:58:28.760
+learning all of them at the same time and
+
+00:58:25.920 --> 00:58:32.160
+comparing them is a good
+
+00:58:28.760 --> 00:58:34.319
+idea cool uh any any questions about
+
+00:58:32.160 --> 00:58:35.799
+this part I went through this kind of
+
+00:58:34.319 --> 00:58:37.319
+quickly because it's pretty similar to
+
+00:58:35.799 --> 00:58:40.079
+the the classification stuff that we
+
+00:58:37.319 --> 00:58:42.680
+covered last time but uh any any things
+
+00:58:40.079 --> 00:58:42.680
+that people want to
+
+00:58:43.880 --> 00:58:49.039
+ask okay so next I'm going to talk about
+
+00:58:46.839 --> 00:58:51.559
+a few other desiderata of language
+
+00:58:49.039 --> 00:58:53.039
+models so the next one is really really
+
+00:58:51.559 --> 00:58:55.640
+important it's a concept I want
+
+00:58:53.039 --> 00:58:57.640
+everybody to know I actually
+
+00:58:55.640 --> 00:58:59.520
+taught this informally up until this
+
+00:58:57.640 --> 00:59:02.039
+class but now I I actually made slides
+
+00:58:59.520 --> 00:59:05.079
+for it starting this time which is
+
+00:59:02.039 --> 00:59:07.240
+calibration so the idea of calibration
+
+00:59:05.079 --> 00:59:10.200
+is that the model quote-unquote knows
+
+00:59:07.240 --> 00:59:14.559
+when it knows or the the fact that it is
+
+00:59:10.200 --> 00:59:17.480
+able to provide a a good answer um uh
+
+00:59:14.559 --> 00:59:21.640
+provide a good confidence in its answer
+
+00:59:17.480 --> 00:59:23.640
+and more formally this can be specified
+
+00:59:21.640 --> 00:59:25.240
+as
+
+00:59:23.640 --> 00:59:27.799
+the
+
+00:59:25.240 --> 00:59:29.200
+property that the model probability of
+
+00:59:27.799 --> 00:59:33.119
+the answer matches the actual
+
+00:59:29.200 --> 00:59:37.319
+probability of getting it right um and
+
+00:59:33.119 --> 00:59:37.319
+so what this means
+
+00:59:41.960 --> 00:59:47.480
+is the
+
+00:59:44.240 --> 00:59:51.839
+probability that the
+
+00:59:47.480 --> 00:59:51.839
+answer um is
+
+00:59:52.720 --> 00:59:59.880
+correct given the fact that
+
+00:59:56.319 --> 00:59:59.880
+the model
+
+01:00:00.160 --> 01:00:07.440
+probability is equal to
+
+01:00:03.640 --> 01:00:07.440
+p is equal to
+
+01:00:08.559 --> 01:00:12.760
+p
+
+01:00:10.480 --> 01:00:15.319
+so I know this is a little bit hard to
+
+01:00:12.760 --> 01:00:18.240
+parse I it always took me like a few
+
+01:00:15.319 --> 01:00:21.720
+seconds to parse before I uh like when I
+
+01:00:18.240 --> 01:00:25.160
+looked at it but basically if the model
+
+01:00:21.720 --> 01:00:26.920
+if the model says the probability of it
+
+01:00:25.160 --> 01:00:29.440
+being correct is
+
+01:00:26.920 --> 01:00:33.559
+0.7 then the probability that the answer
+
+01:00:29.440 --> 01:00:35.960
+is correct is actually 0.7 so um you
+
+01:00:33.559 --> 01:00:41.520
+know if it says uh the probability is
+
+01:00:35.960 --> 01:00:41.520
+0.7 100 times then it will be right 70
+
+01:00:43.640 --> 01:00:52.160
+times and so the way we formalize this
+
+01:00:48.039 --> 01:00:55.200
+um is is by this uh it was proposed in
+
+01:00:52.160 --> 01:00:57.760
+this seminal paper by Guo et al. in
+
+01:00:55.200 --> 01:01:00.319
+2017
+
+01:00:57.760 --> 01:01:03.319
+and
+
+01:01:00.319 --> 01:01:05.520
+unfortunately this quantity itself is hard
+
+01:01:03.319 --> 01:01:08.119
+to measure directly
+
+01:01:05.520 --> 01:01:11.200
+because the model probability is always
+
+01:01:08.119 --> 01:01:13.359
+different right and so if the model
+
+01:01:11.200 --> 01:01:15.359
+probability is like if the model
+
+01:01:13.359 --> 01:01:20.480
+probability was actually 0.7 that'd be
+
+01:01:15.359 --> 01:01:22.000
+nice but actually it's 0.7932685
+
+01:01:20.480 --> 01:01:24.599
+and you never get another example where
+
+01:01:22.000 --> 01:01:26.319
+the probability is exactly the same so
+
+01:01:24.599 --> 01:01:28.280
+what we do instead is we divide the
+
+01:01:26.319 --> 01:01:30.240
+model probabilities into buckets so we
+
+01:01:28.280 --> 01:01:32.880
+say the model probability is between 0
+
+01:01:30.240 --> 01:01:36.599
+and 0.1 we say the model probability is
+
+01:01:32.880 --> 01:01:40.319
+between 0.1 and 0.2 0.2 and 0.3 so we
+
+01:01:36.599 --> 01:01:44.599
+create buckets like these and
+
+01:01:40.319 --> 01:01:46.520
+then we look at the model confidence
+
+01:01:44.599 --> 01:01:52.839
+the average model confidence within that
+
+01:01:46.520 --> 01:01:55.000
+bucket so maybe uh between 0.1 and 0 uh
+
+01:01:52.839 --> 01:01:58.000
+between 0 and 0.1 the model confidence
+
+01:01:55.000 --> 01:02:00.920
+on average is 0.055 or something like
+
+01:01:58.000 --> 01:02:02.640
+that so that would be this term here and
+
+01:02:00.920 --> 01:02:05.079
+then the accuracy is how often did it
+
+01:02:02.640 --> 01:02:06.680
+actually get it correct and this can be
+
+01:02:05.079 --> 01:02:09.720
+plotted in this thing called a
+
+01:02:06.680 --> 01:02:15.039
+reliability diagram
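+
+A sketch of the bucketing scheme just described, in the style of Guo et al.
+(2017) (an editor's illustration, not code from the lecture): group
+predictions into confidence buckets, then compare each bucket's average
+confidence against its empirical accuracy.
+
+```python
+import numpy as np
+
+def expected_calibration_error(confidences, correct, n_buckets=10):
+    confidences = np.asarray(confidences, dtype=float)
+    correct = np.asarray(correct, dtype=float)   # 1.0 if the answer was right
+    edges = np.linspace(0.0, 1.0, n_buckets + 1)
+    ece = 0.0
+    for lo, hi in zip(edges[:-1], edges[1:]):
+        in_bucket = (confidences > lo) & (confidences <= hi)
+        if in_bucket.any():
+            avg_conf = confidences[in_bucket].mean()  # e.g. ~0.055 in [0, 0.1]
+            accuracy = correct[in_bucket].mean()      # how often it was right
+            ece += in_bucket.mean() * abs(avg_conf - accuracy)
+    return ece  # 0.0 means perfectly calibrated
+```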
+
+01:02:09.720 --> 01:02:17.599
+and the reliability diagram basically um the
+
+01:02:15.039 --> 01:02:20.359
+outputs uh
+
+01:02:17.599 --> 01:02:26.359
+here so this is
+
+01:02:20.359 --> 01:02:26.359
+um the this is the model
+
+01:02:27.520 --> 01:02:34.119
+yeah I think the red is the model
+
+01:02:30.760 --> 01:02:36.400
+um expected probability and then the
+
+01:02:34.119 --> 01:02:40.559
+blue uh the blue is the actual
+
+01:02:36.400 --> 01:02:43.240
+probability and then um
+
+01:02:40.559 --> 01:02:45.160
+the difference between the expected and
+
+01:02:43.240 --> 01:02:47.160
+the actual probability is kind of like
+
+01:02:45.160 --> 01:02:48.359
+the penalty there is how how poorly
+
+01:02:47.160 --> 01:02:52.000
+calibrated
+
+01:02:48.359 --> 01:02:55.880
+the model is and one really important thing to
+
+01:02:52.000 --> 01:02:58.440
+know is that calibration and accuracy are
+
+01:02:55.880 --> 01:03:00.599
+not necessarily they don't go hand in hand
+
+01:02:58.440 --> 01:03:02.359
+uh they do to some extent but they don't
+
+01:03:00.599 --> 01:03:06.440
+uh they don't necessarily go hand in
+
+01:03:02.359 --> 01:03:06.440
+hand and
+
+01:03:07.200 --> 01:03:14.319
+the example on the left is a a bad model
+
+01:03:11.200 --> 01:03:16.279
+but a well-calibrated one so its accuracy is
+
+01:03:14.319 --> 01:03:18.720
+uh its error is
+
+01:03:16.279 --> 01:03:20.000
+44.9% um but it's well calibrated as you
+
+01:03:18.720 --> 01:03:21.440
+can see like when it says it knows the
+
+01:03:20.000 --> 01:03:23.880
+answer it knows the answer when it
+
+01:03:21.440 --> 01:03:27.799
+doesn't it doesn't this model on the
+
+01:03:23.880 --> 01:03:30.000
+other hand has better error um but
+
+01:03:27.799 --> 01:03:31.880
+worse calibration so the reason why is
+
+01:03:30.000 --> 01:03:36.680
+the model is very very confident all the
+
+01:03:31.880 --> 01:03:39.640
+time and usually what happens is um
+
+01:03:36.680 --> 01:03:41.200
+models that overfit to the data
+
+01:03:39.640 --> 01:03:43.359
+especially when you do early stopping on
+
+01:03:41.200 --> 01:03:44.760
+something like accuracy uh when you stop
+
+01:03:43.359 --> 01:03:47.279
+the training on something like accuracy
+
+01:03:44.760 --> 01:03:49.960
+will become very overconfident and uh
+
+01:03:47.279 --> 01:03:52.599
+give confidence estimates um that are
+
+01:03:49.960 --> 01:03:54.000
+incorrect like this so this is important to
+
+01:03:52.599 --> 01:03:56.079
+know and the reason why it's important
+
+01:03:54.000 --> 01:03:58.000
+to know is actually because you know
+
+01:03:56.079 --> 01:04:00.960
+models are very good at making up things
+
+01:03:58.000 --> 01:04:02.359
+that aren't actually correct nowadays um
+
+01:04:00.960 --> 01:04:04.920
+and but if you have a really well
+
+01:04:02.359 --> 01:04:07.760
+calibrated model you could at least say
+
+01:04:04.920 --> 01:04:09.920
+with what confidence you have in this
+
+01:04:07.760 --> 01:04:12.760
+answer so how do you calculate the
+
+01:04:09.920 --> 01:04:14.160
+probability of an answer so ah yeah sorry
+
+01:04:12.760 --> 01:04:17.599
+uh yes
+
+01:04:14.160 --> 01:04:17.599
+yes yeah please
+
+01:04:17.799 --> 01:04:26.559
+go [inaudible student question]
+
+01:04:23.200 --> 01:04:28.039
+um usually this would be for a
+
+01:04:26.559 --> 01:04:29.599
+generated output because you want to
+
+01:04:28.039 --> 01:04:32.559
+know the the probability that the
+
+01:04:29.599 --> 01:04:32.559
+generated output is
+
+01:04:53.160 --> 01:04:56.160
+correct
+
+01:05:01.079 --> 01:05:06.319
+great that's what I'm about to talk
+
+01:05:03.000 --> 01:05:07.839
+about so perfect perfect question um so
+
+01:05:06.319 --> 01:05:10.160
+how do we calculate the answer
+
+01:05:07.839 --> 01:05:13.279
+probability or um how do we calculate
+
+01:05:10.160 --> 01:05:15.039
+the confidence in an answer um we're
+
+01:05:13.279 --> 01:05:18.319
+actually going to go into more detail
+
+01:05:15.039 --> 01:05:20.760
+about this um in a a later class but the
+
+01:05:18.319 --> 01:05:23.200
+first thing is probability of the answer
+
+01:05:20.760 --> 01:05:25.799
+and this is easy when there's a single
+
+01:05:23.200 --> 01:05:29.079
+answer um like if there's only one
+
+01:05:25.799 --> 01:05:31.839
+correct answer and you want your model
+
+01:05:29.079 --> 01:05:34.160
+to be solving math problems and you want
+
+01:05:31.839 --> 01:05:38.319
+it to return only the answer and nothing
+
+01:05:34.160 --> 01:05:40.760
+else if it returns anything else like it
+
+01:05:38.319 --> 01:05:44.920
+won't work then you can just use the
+
+01:05:40.760 --> 01:05:47.119
+probability of the answer but what
+
+01:05:44.920 --> 01:05:49.559
+if
+
+01:05:47.119 --> 01:05:52.000
+um what if there are multiple acceptable
+
+01:05:49.559 --> 01:05:54.680
+answers um and maybe a perfect example
+
+01:05:52.000 --> 01:06:02.240
+of that is like where is CMU located
+
+01:05:54.680 --> 01:06:04.400
+or um uh where where are we right now um
+
+01:06:02.240 --> 01:06:06.960
+if the answer is where are we right
+
+01:06:04.400 --> 01:06:08.880
+now um could be
+
+01:06:06.960 --> 01:06:12.880
+Pittsburgh could be
+
+01:06:08.880 --> 01:06:12.880
+CMU could be Carnegie
+
+01:06:16.200 --> 01:06:24.440
+Mellon could be other other things like
+
+01:06:18.760 --> 01:06:26.760
+this right um and so another way that
+
+01:06:24.440 --> 01:06:28.319
+you can calculate the confidence is
+
+01:06:26.760 --> 01:06:31.240
+calculating the probability of the
+
+01:06:28.319 --> 01:06:33.680
+answer plus uh you know paraphrases of
+
+01:06:31.240 --> 01:06:35.799
+the answer or other uh other things like
+
+01:06:33.680 --> 01:06:37.680
+this and so then you would just sum the
+
+01:06:35.799 --> 01:06:38.839
+probability over all the
+
+01:06:37.680 --> 01:06:41.680
+acceptable
+
+01:06:38.839 --> 01:06:45.359
+answers
+
+01:06:41.680 --> 01:06:47.680
+um another thing that you can do is um
+
+01:06:45.359 --> 01:06:49.279
+sample multiple outputs and count the
+
+01:06:47.680 --> 01:06:51.000
+number of times you get a particular
+
+01:06:49.279 --> 01:06:54.440
+answer this doesn't solve the problem of
+
+01:06:51.000 --> 01:06:58.119
+paraphrases existing but
+
+01:06:54.440 --> 01:06:59.880
+it does solve the problem of uh it does
+
+01:06:58.119 --> 01:07:01.480
+solve two problems sometimes there are
+
+01:06:59.880 --> 01:07:05.240
+language models where you can't get
+
+01:07:01.480 --> 01:07:06.640
+probabilities out of them um this is not
+
+01:07:05.240 --> 01:07:08.680
+so much of a problem anymore with the
+
+01:07:06.640 --> 01:07:11.240
+GPT models because they're reintroducing
+
+01:07:08.680 --> 01:07:12.440
+the ability to get probabilities but um
+
+01:07:11.240 --> 01:07:13.720
+there are some models where you can just
+
+01:07:12.440 --> 01:07:16.279
+sample from them and you can't get
+
+01:07:13.720 --> 01:07:18.680
+probabilities out
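+
+A sketch of the two strategies just described (an editor's illustration;
+`log_prob` and `sample` stand in for a hypothetical model API, not a real
+library interface):
+
+```python
+import math
+
+def log_prob(answer: str) -> float:
+    """Hypothetical: model log-probability of `answer` given the question."""
+    raise NotImplementedError
+
+def sample() -> str:
+    """Hypothetical: one sampled answer from the model."""
+    raise NotImplementedError
+
+acceptable = ["Pittsburgh", "CMU", "Carnegie Mellon"]
+
+# 1) Sum the model probability over all acceptable forms of the answer.
+confidence = sum(math.exp(log_prob(a)) for a in acceptable)
+
+# 2) Sample outputs and count how often an acceptable answer comes back;
+#    this works even when the model exposes no probabilities at all.
+samples = [sample() for _ in range(100)]
+confidence = sum(s in acceptable for s in samples) / len(samples)
+```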
+
+01:07:16.279 --> 01:07:21.039
+but also more importantly um sometimes
+when you're using things like uh Chain of
+Thought reasoning which I'll talk about in more
+
+01:07:21.039 --> 01:07:26.520
+detail but basically it's like um please
+
+01:07:23.000 --> 01:07:29.839
+solve this math problem and explain
+
+01:07:26.520 --> 01:07:31.480
+explain your solution and then if it
+
+01:07:29.839 --> 01:07:33.480
+will do that it will generate you know a
+
+01:07:31.480 --> 01:07:35.119
+really long explanation of how it got to
+
+01:07:33.480 --> 01:07:36.279
+the solution and then it will give you
+
+01:07:35.119 --> 01:07:40.119
+the answer at the very end and so then
+
+01:07:36.279 --> 01:07:41.640
+you can't calculate the probability of
+
+01:07:40.119 --> 01:07:44.960
+the actual like answer itself because
+
+01:07:41.640 --> 01:07:47.720
+there's this long reasoning chain in
+
+01:07:44.960 --> 01:07:49.359
+between and you have like all these
+
+01:07:47.720 --> 01:07:51.960
+other all that other text there but what
+
+01:07:49.359 --> 01:07:53.559
+you can do is you can sample those
+
+01:07:51.960 --> 01:07:56.920
+reasoning chains 100 times and then see
+
+01:07:53.559 --> 01:07:59.599
+how many times you got a particular
+
+01:07:56.920 --> 01:08:02.960
+answer and that's actually a pretty um a
+
+01:07:59.599 --> 01:08:06.079
+pretty pretty reasonable way of uh
+
+01:08:02.960 --> 01:08:09.000
+getting a confidence estimate and then
+
+01:08:06.079 --> 01:08:11.200
+yet another this is my favorite one I I love how
+
+01:08:09.000 --> 01:08:12.880
+we can do this now it's just absolutely
+
+01:08:11.200 --> 01:08:16.480
+ridiculous but you could ask the model
+
+01:08:12.880 --> 01:08:20.279
+how confident it is and um it sometimes
+
+01:08:16.480 --> 01:08:22.359
+gives you a reasonable uh a reasonable
+
+01:08:20.279 --> 01:08:24.600
+answer um there's a really nice
+
+01:08:22.359 --> 01:08:26.400
+comparison of different methods uh in
+
+01:08:24.600 --> 01:08:29.679
+this paper which is also on on the
+
+01:08:26.400 --> 01:08:31.960
+website and basically long story short
+
+01:08:29.679 --> 01:08:34.000
+the conclusion from this paper is the
+
+01:08:31.960 --> 01:08:35.640
+sampling multiple outputs one is the
+
+01:08:34.000 --> 01:08:36.839
+best way to do it if you can't directly
+
+01:08:35.640 --> 01:08:39.520
+calculate
+
+01:08:36.839 --> 01:08:41.359
+probabilities um another thing that I'd
+
+01:08:39.520 --> 01:08:42.600
+like people to pay very close attention
+
+01:08:41.359 --> 01:08:45.040
+to is in the
+
+01:08:42.600 --> 01:08:46.480
+generation um in the generation class
+
+01:08:45.040 --> 01:08:49.600
+we're going to be talking about minimum
+
+01:08:46.480 --> 01:08:52.600
+Bayes risk which is a criterion for
+
+01:08:49.600 --> 01:08:54.719
+deciding how risky an output is and it's
+
+01:08:52.600 --> 01:08:56.199
+actually a really good uh confidence
+
+01:08:54.719 --> 01:08:58.000
+metric as well but I'm going to leave
+
+01:08:56.199 --> 01:08:59.440
+that till when we discuss it in more
+
+01:08:58.000 --> 01:09:02.759
+detail
+
+01:08:59.440 --> 01:09:05.359
+um any any questions
+
+01:09:02.759 --> 01:09:08.440
+here okay
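+
+A sketch of the sample-and-count idea for chain-of-thought outputs (an
+editor's illustration; `generate_cot` and `extract_final_answer` are
+hypothetical helpers, not a real API): sample full reasoning chains, keep
+only each final answer, and use the frequency of the most common answer as
+the confidence.
+
+```python
+from collections import Counter
+
+def generate_cot(prompt: str) -> str:
+    """Hypothetical: sample one full reasoning chain from the model."""
+    raise NotImplementedError
+
+def extract_final_answer(chain: str) -> str:
+    """Hypothetical: pull the final answer off the end of a chain."""
+    raise NotImplementedError
+
+answers = Counter(
+    extract_final_answer(generate_cot("Please solve this math problem ..."))
+    for _ in range(100)
+)
+best_answer, count = answers.most_common(1)[0]
+confidence = count / sum(answers.values())  # e.g. 70/100 -> 0.7
+```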
+
+01:09:05.359 --> 01:09:10.480
+cool um so the other criterion uh this
+
+01:09:08.440 --> 01:09:12.520
+is just yet another criterion that we
+
+01:09:10.480 --> 01:09:15.239
+would like language models to be good at
+
+01:09:12.520 --> 01:09:17.600
+um which is efficiency and so basically the
+
+01:09:15.239 --> 01:09:21.920
+model is easy to run on limited hardware
+
+01:09:17.600 --> 01:09:25.400
+by some you know uh metric of easy and
+
+01:09:21.920 --> 01:09:29.319
+some metrics that we like to talk about
+
+01:09:25.400 --> 01:09:32.400
+are parameter count so often you will
+
+01:09:29.319 --> 01:09:34.239
+see oh this is the best model under
+
+01:09:32.400 --> 01:09:35.520
+three billion parameters or this is the
+
+01:09:34.239 --> 01:09:37.960
+best model under seven billion
+
+01:09:35.520 --> 01:09:39.600
+parameters or um we trained a model with
+
+01:09:37.960 --> 01:09:42.159
+one trillion parameters or something
+
+01:09:39.600 --> 01:09:44.719
+like that you know
+
+01:09:42.159 --> 01:09:46.839
+uh the thing is parameter count doesn't
+
+01:09:44.719 --> 01:09:49.640
+really mean that much um from the point
+
+01:09:46.839 --> 01:09:52.839
+of view of like ease of using the model
+
+01:09:49.640 --> 01:09:54.400
+um unless you also think about other uh
+
+01:09:52.839 --> 01:09:56.480
+you know desiderata
+
+01:09:54.400 --> 01:09:58.840
+like just to give one example this is a
+
+01:09:56.480 --> 01:10:00.880
+parameter count um let's say you have a
+
+01:09:58.840 --> 01:10:02.960
+parameter count of 7 billion is that 7
+
+01:10:00.880 --> 01:10:05.719
+billion parameters at 32-bit precision
+
+01:10:02.960 --> 01:10:07.800
+or is that 7 billion parameters at 4-bit
+
+01:10:05.719 --> 01:10:09.400
+precision um will make a huge difference
+
+01:10:07.800 --> 01:10:12.960
+in your memory footprint your speed
+
+01:10:09.400 --> 01:10:14.920
+other things like that um so some of the
+
+01:10:12.960 --> 01:10:18.040
+things that are more direct with respect
+
+01:10:14.920 --> 01:10:19.800
+to efficiency are memory usage um and
+
+01:10:18.040 --> 01:10:22.440
+there's two varieties of memory usage
+
+01:10:19.800 --> 01:10:24.280
+one is model uh model-only memory usage
+
+01:10:22.440 --> 01:10:27.120
+so when you load the model into
+
+01:10:24.280 --> 01:10:29.120
+memory uh how much space does it take
+
+01:10:27.120 --> 01:10:31.159
+and also peak memory consumption when
+
+01:10:29.120 --> 01:10:33.159
+you have run the model over a
+
+01:10:31.159 --> 01:10:35.920
+sequence of a certain length how much is
+
+01:10:33.159 --> 01:10:40.040
+it going to peak so that's another
+
+01:10:35.920 --> 01:10:43.000
+thing another thing is latency um and
+
+01:10:40.040 --> 01:10:46.440
+with respect to latency this can be
+
+01:10:43.000 --> 01:10:49.440
+either how long does it take to start
+
+01:10:46.440 --> 01:10:52.080
+outputting the first token um and how
+
+01:10:49.440 --> 01:10:54.840
+long does it take to uh finish
+
+01:10:52.080 --> 01:10:59.480
+outputting uh a generation of a certain
+
+01:10:54.840 --> 01:11:01.199
+length and the first will have more to
+
+01:10:59.480 --> 01:11:04.960
+do with how long does it take to encode
+
+01:11:01.199 --> 01:11:06.480
+a sequence um which is usually faster
+
+01:11:04.960 --> 01:11:09.080
+than how long does it take to generate a
+
+01:11:06.480 --> 01:11:11.360
+sequence so this will have to do with
+
+01:11:09.080 --> 01:11:13.000
+like encoding time this will require
+
+01:11:11.360 --> 01:11:15.880
+encoding time of course but it will also
+
+01:11:13.000 --> 01:11:15.880
+require generation
+
+01:11:16.280 --> 01:11:21.840
+time also throughput so you know how
+
+01:11:19.239 --> 01:11:23.679
+much um how many sentences can you
+
+01:11:21.840 --> 01:11:25.400
+process in a certain amount of time so
+
+01:11:23.679 --> 01:11:26.480
+all of these are kind of desiderata that you
+
+01:11:25.400 --> 01:11:29.000
+would say
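+
+A quick back-of-the-envelope sketch (an editor's addition) of why precision
+matters as much as parameter count for the model-only memory footprint just
+mentioned:
+
+```python
+params = 7_000_000_000
+print(params * 32 / 8 / 1e9)  # 32-bit weights: 28.0 GB just to load the model
+print(params * 4 / 8 / 1e9)   # 4-bit weights:   3.5 GB for the same count
+```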
+
+01:11:26.480 --> 01:11:30.280
+um we're going to be talking about
+
+01:11:29.000 --> 01:11:31.920
+this more in the distillation and
+
+01:11:30.280 --> 01:11:33.199
+compression and generation algorithms
+
+01:11:31.920 --> 01:11:35.640
+classes so I won't go into a whole lot
+
+01:11:33.199 --> 01:11:36.840
+of detail about this but um it's just
+
+01:11:35.640 --> 01:11:39.960
+another thing that we want to be
+
+01:11:36.840 --> 01:11:43.560
+thinking about in addition to
+
+01:11:39.960 --> 01:11:45.360
+perplexity um but since I'm I'm on the
+
+01:11:43.560 --> 01:11:47.800
+topic of efficiency I would like to talk
+
+01:11:45.360 --> 01:11:49.480
+just a little bit about it um in terms
+
+01:11:47.800 --> 01:11:51.000
+of especially things that will be useful
+
+01:11:49.480 --> 01:11:53.600
+for implementing your first
+
+01:11:51.000 --> 01:11:55.840
+assignment and uh one thing that
+
+01:11:53.600 --> 01:11:58.639
+everybody should know about um if you've done
+
+01:11:55.840 --> 01:11:59.920
+any like deep learning with PyTorch or
+
+01:11:58.639 --> 01:12:02.639
+something like this you already know
+
+01:11:59.920 --> 01:12:05.880
+about this probably but uh I think it's
+
+01:12:02.639 --> 01:12:08.760
+worth mentioning but basically mini-
+
+01:12:05.880 --> 01:12:12.120
+batching or batching uh is uh very
+
+01:12:08.760 --> 01:12:15.320
+useful and the basic idea behind it is
+
+01:12:12.120 --> 01:12:17.560
+that on modern hardware if you do many
+
+01:12:15.320 --> 01:12:20.520
+of the same operations at once it's much
+
+01:12:17.560 --> 01:12:24.320
+faster than doing um
+
+01:12:20.520 --> 01:12:25.480
+like uh operations consecutively and
+
+01:12:24.320 --> 01:12:27.280
+that's especially the case if you're
+
+01:12:25.480 --> 01:12:30.520
+programming in an extremely slow
+
+01:12:27.280 --> 01:12:33.239
+programming language like Python um I
+
+01:12:30.520 --> 01:12:37.239
+love Python but it's slow I mean like
+
+01:12:33.239 --> 01:12:38.719
+there's no argument about that um and so
+
+01:12:37.239 --> 01:12:40.520
+what mini-batching does is it combines
+
+01:12:38.719 --> 01:12:43.600
+together smaller operations into one big
+
+01:12:40.520 --> 01:12:47.480
+one and the basic idea uh for example if
+
+01:12:43.600 --> 01:12:51.679
+we want to calculate our um our linear
+
+01:12:47.480 --> 01:12:56.560
+layer with a tanh uh nonlinearity after it
+
+01:12:51.679 --> 01:12:59.760
+we will take several inputs X1 X2 X3
+
+01:12:56.560 --> 01:13:02.040
+concatenate them together and do a
+
+01:12:59.760 --> 01:13:04.600
+matrix-matrix multiply instead of doing
+
+01:13:02.040 --> 01:13:07.960
+three vector-matrix
+
+01:13:04.600 --> 01:13:09.239
+multiplies and so what we do is we take
+
+01:13:07.960 --> 01:13:11.280
+a whole bunch of examples we take like
+
+01:13:09.239 --> 01:13:13.840
+64 examples or something like that and
+
+01:13:11.280 --> 01:13:18.000
+we combine them together and calculate
+
+01:13:13.840 --> 01:13:21.280
+them all at once one thing to know is that
+
+01:13:18.000 --> 01:13:22.560
+if you're working with sentences there's
+
+01:13:21.280 --> 01:13:24.719
+different ways you can calculate the
+
+01:13:22.560 --> 01:13:27.360
+size of your mini-batch
+
+01:13:24.719 --> 01:13:28.880
+normally nowadays the thing that people
+
+01:13:27.360 --> 01:13:30.400
+do and the thing that I recommend is to
+
+01:13:28.880 --> 01:13:31.679
+calculate the size of your mini-batches
+
+01:13:30.400 --> 01:13:33.639
+based on the number of tokens in the
+mini-batch
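+
+A minimal sketch of the linear-layer-plus-tanh example above (an editor's
+illustration with made-up sizes): stacking the inputs turns many
+vector-matrix multiplies into one matrix-matrix multiply.
+
+```python
+import torch
+
+W, b = torch.randn(256, 256), torch.randn(256)
+xs = [torch.randn(256) for _ in range(64)]   # 64 examples
+
+# Slow: one vector-matrix multiply per example, with Python loop overhead.
+ys_loop = [torch.tanh(W @ x + b) for x in xs]
+
+# Fast: stack the inputs and do a single matrix-matrix multiply.
+X = torch.stack(xs)                   # (64, 256)
+ys_batched = torch.tanh(X @ W.T + b)  # one big operation over the whole batch
+```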
+
+01:13:31.679 --> 01:13:35.840
+it used to be that you would
+
+01:13:33.639 --> 01:13:39.719
+do it based on the number of sequences
+
+01:13:35.840 --> 01:13:43.800
+but the the problem is um one like 50
+
+01:13:39.719 --> 01:13:47.120
+sequences of length like 100 is much
+
+01:13:43.800 --> 01:13:49.480
+more memory intensive than uh 50
+
+01:13:47.120 --> 01:13:51.960
+sequences of length five and so you get
+
+01:13:49.480 --> 01:13:53.920
+these mini-batches
+
+01:13:51.960 --> 01:13:57.000
+of vastly varying size and that's both
+
+01:13:53.920 --> 01:13:59.800
+bad for you know memory overflows and
+
+01:13:57.000 --> 01:14:01.639
+bad for um and bad for learning
+
+01:13:59.800 --> 01:14:04.280
+stability so I I definitely recommend
+
+01:14:01.639 --> 01:14:06.880
+doing it based on the number of
+
+01:14:04.280 --> 01:14:09.080
+tokens uh another thing is GPUs versus
+
+01:14:06.880 --> 01:14:12.400
+CPUs so
+
+01:14:09.080 --> 01:14:14.600
+um uh CPUs one way you can think of it
+
+01:14:12.400 --> 01:14:17.320
+is a CPU is kind of like a motorcycle it's
+
+01:14:14.600 --> 01:14:19.600
+very fast at picking up and doing a
+
+01:14:17.320 --> 01:14:23.960
+bunch of uh things very quickly
+
+01:14:19.600 --> 01:14:26.600
+accelerating uh into starting new uh new
+
+01:14:23.960 --> 01:14:28.760
+tasks a GPU is more like an airplane
+
+01:14:26.600 --> 01:14:30.719
+which uh you wait forever in line in
+
+01:14:28.760 --> 01:14:33.360
+security and
+
+01:14:30.719 --> 01:14:34.800
+then and then uh it takes a long time to
+
+01:14:33.360 --> 01:14:40.400
+get off the ground and start working but
+
+01:14:34.800 --> 01:14:43.679
+once it does it's extremely fast um and
+
+01:14:40.400 --> 01:14:45.360
+so if we do a simple example of how long
+
+01:14:43.679 --> 01:14:47.600
+does it take to do a matrix-matrix
+
+01:14:45.360 --> 01:14:49.040
+multiply I calculated this a really long
+
+01:14:47.600 --> 01:14:51.280
+time ago it's probably horribly out of
+
+01:14:49.040 --> 01:14:55.120
+date now but the same general principle
+
+01:14:51.280 --> 01:14:56.560
+stands which is if we have um the
+
+01:14:55.120 --> 01:14:58.480
+number of seconds that it takes to do a
+
+01:14:56.560 --> 01:15:02.080
+matrix-matrix multiply doing one of size
+
+01:14:58.480 --> 01:15:03.920
+16 is actually faster on CPU because uh
+
+01:15:02.080 --> 01:15:07.760
+the overhead it takes to get started is
+
+01:15:03.920 --> 01:15:10.880
+very low but if you um once you start
+
+01:15:07.760 --> 01:15:13.360
+getting up to size like 128 by 128
+
+01:15:10.880 --> 01:15:15.800
+matrix multiplies then doing it on GPU
+
+01:15:13.360 --> 01:15:17.320
+is faster and then um it's you know a
+
+01:15:15.800 --> 01:15:19.679
+100 times faster once you start getting
+
+01:15:17.320 --> 01:15:21.600
+up to very large matrices so um if
+
+01:15:19.679 --> 01:15:24.000
+you're dealing with very large networks
+
+01:15:21.600 --> 01:15:26.800
+having a GPU is good
+
+01:15:24.000 --> 01:15:30.159
+um and this is the the speed-up
+
+01:15:26.800 --> 01:15:31.440
+percentage
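+
+A small benchmark sketch of the point above (an editor's illustration; it
+needs a CUDA GPU, and the synchronize calls matter because GPU operations
+are asynchronous):
+
+```python
+import time
+import torch
+
+def time_matmul(n, device, reps=100):
+    a = torch.randn(n, n, device=device)
+    b = torch.randn(n, n, device=device)
+    if device == "cuda":
+        torch.cuda.synchronize()
+    start = time.time()
+    for _ in range(reps):
+        a @ b
+    if device == "cuda":
+        torch.cuda.synchronize()  # wait for all queued GPU work to finish
+    return (time.time() - start) / reps
+
+for n in (16, 128, 1024):
+    print(n, time_matmul(n, "cpu"), time_matmul(n, "cuda"))
+```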
+um one thing I should mention
+
+01:15:30.159 --> 01:15:34.239
+is
+
+01:15:31.440 --> 01:15:36.440
+um compute with respect to like doing
+
+01:15:34.239 --> 01:15:39.800
+the assignments for this class if you
+
+01:15:36.440 --> 01:15:43.199
+have a relatively recent Mac you're kind
+
+01:15:39.800 --> 01:15:44.760
+of in luck because actually the GPUs on
+
+01:15:43.199 --> 01:15:47.239
+the Mac are pretty fast and they're well
+
+01:15:44.760 --> 01:15:48.960
+integrated with um they're well
+
+01:15:47.239 --> 01:15:52.080
+integrated with PyTorch and other things
+
+01:15:48.960 --> 01:15:53.440
+like that so decently sized models maybe
+
+01:15:52.080 --> 01:15:54.840
+up to the size that you would need to
+
+01:15:53.440 --> 01:15:57.840
+run for assignment one or even
+
+01:15:54.840 --> 01:16:00.880
+assignment two might uh just run on your
+
+01:15:57.840 --> 01:16:03.639
+uh laptop computer um if you don't have
+
+01:16:00.880 --> 01:16:05.280
+a GPU uh that you have immediately
+
+01:16:03.639 --> 01:16:06.760
+accessible to you we're going to
+
+01:16:05.280 --> 01:16:08.400
+recommend that you use Colab where you
+
+01:16:06.760 --> 01:16:10.120
+can get a GPU uh for the first
+
+01:16:08.400 --> 01:16:12.440
+assignments and then we'll have cloud
+
+01:16:10.120 --> 01:16:15.159
+credits that you can use otherwise but
+
+01:16:12.440 --> 01:16:16.800
+um a GPU is usually like something that
+
+01:16:15.159 --> 01:16:18.440
+you can get on the cloud or one that you
+
+01:16:16.800 --> 01:16:21.080
+have on your Mac or one that you have on
+
+01:16:18.440 --> 01:16:24.600
+your gaming computer or something like
+
+01:16:21.080 --> 01:16:26.040
+that um there's a few speed tricks that
+
+01:16:24.600 --> 01:16:30.000
+you should know for efficient GPU
+
+01:16:26.040 --> 01:16:32.480
+operations so um one mistake that people
+
+01:16:30.000 --> 01:16:35.880
+make when creating models is they repeat
+
+01:16:32.480 --> 01:16:38.080
+operations over and over again and um
+
+01:16:35.880 --> 01:16:40.600
+you don't want to be doing this so like
+
+01:16:38.080 --> 01:16:43.239
+for example um this is multiplying a
+
+01:16:40.600 --> 01:16:45.320
+matrix by a constant multiple times and
+
+01:16:43.239 --> 01:16:46.880
+if you're just using out-of-the-box
+
+01:16:45.320 --> 01:16:49.280
+PyTorch this would be really bad because
+
+01:16:46.880 --> 01:16:50.400
+you'd be repeating the operation uh when
+
+01:16:49.280 --> 01:16:52.679
+it's not
+
+01:16:50.400 --> 01:16:54.480
+necessary um you can also reduce the
+
+01:16:52.679 --> 01:16:57.360
+number of operations that you need to
+
+01:16:54.480 --> 01:17:00.320
+use so uh use matrix-matrix multiplies
+
+01:16:57.360 --> 01:17:03.080
+instead of matrix-vector
+
+01:17:00.320 --> 01:17:07.920
+multiplies and another thing is uh
+
+01:17:03.080 --> 01:17:10.719
+reducing CPU-GPU data movement and um so
+
+01:17:07.920 --> 01:17:12.360
+when you do try to move memory um when
+
+01:17:10.719 --> 01:17:17.080
+you do try to move memory try to do it
+
+01:17:12.360 --> 01:17:20.040
+as early as possible and as uh and as
+
+01:17:17.080 --> 01:17:22.199
+few times as possible and the reason why
+
+01:17:20.040 --> 01:17:24.199
+you want to move things early or start
+
+01:17:22.199 --> 01:17:25.920
+operations early is many GPU operations
+
+01:17:24.199 --> 01:17:27.159
+are asynchronous so you can start the
+
+01:17:25.920 --> 01:17:28.800
+operation and it will run in the
+
+01:17:27.159 --> 01:17:33.120
+background while other things are
+
+01:17:28.800 --> 01:17:36.080
+processing so um it's a good idea to try
+
+01:17:33.120 --> 01:17:39.840
+to um to optimize and you can also use
+
+01:17:36.080 --> 01:17:42.360
+your Python profiler or um NVIDIA GPU
+
+01:17:39.840 --> 01:17:43.679
+profilers to try to optimize these
+
+01:17:42.360 --> 01:17:46.520
+things as
+
+01:17:43.679 --> 01:17:49.840
+well cool that's all I
have uh we're + +01:17:46.520 --> 01:17:49.840 +right at time diff --git a/CMU Advanced NLP 2024 (4) Sequence Modeling/CMU Advanced NLP 2024 (4) Sequence Modeling.mp4 b/CMU Advanced NLP 2024 (4) Sequence Modeling/CMU Advanced NLP 2024 (4) Sequence Modeling.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..725d7a7d84e6fcdd4ccb2868c7ec926a60287577 --- /dev/null +++ b/CMU Advanced NLP 2024 (4) Sequence Modeling/CMU Advanced NLP 2024 (4) Sequence Modeling.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6cf316e2db8a81892b937fb656a115700d76bd3aba666216b58125a77be4f9f4 +size 69941620 diff --git a/CMU Advanced NLP 2024 (4) Sequence Modeling/metadata.json b/CMU Advanced NLP 2024 (4) Sequence Modeling/metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..63d4a57a480c94d50d9ca5695579eef97a766b12 --- /dev/null +++ b/CMU Advanced NLP 2024 (4) Sequence Modeling/metadata.json @@ -0,0 +1,4 @@ +{ + "url": "https://www.youtube.com/watch?v=x3U2zVhrgJ8", + "title": "CMU Advanced NLP 2024 (4) Sequence Modeling" +} \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (4) Sequence Modeling/transcript.srt b/CMU Advanced NLP 2024 (4) Sequence Modeling/transcript.srt new file mode 100644 index 0000000000000000000000000000000000000000..1a7430f239a707ac4e3aef3a1386775111a97114 --- /dev/null +++ b/CMU Advanced NLP 2024 (4) Sequence Modeling/transcript.srt @@ -0,0 +1,6959 @@ +1 +00:00:00,040 --> 00:00:06,600 +started in a moment uh since it's now uh + +2 +00:00:03,959 --> 00:00:08,839 +12:30 are there any questions before we + +3 +00:00:06,600 --> 00:00:08,839 +get + +4 +00:00:11,840 --> 00:00:17,240 +started okay I don't see I don't see any + +5 +00:00:14,679 --> 00:00:18,640 +so I guess we can uh Jump Right In this + +6 +00:00:17,240 --> 00:00:22,080 +time I'll be talking about sequence + +7 +00:00:18,640 --> 00:00:24,560 +modeling and N first I'm going to be + +8 +00:00:22,080 --> 00:00:26,359 +talking about uh why why we do sequence + +9 +00:00:24,560 --> 00:00:29,160 +modeling what varieties of sequence + +10 +00:00:26,359 --> 00:00:31,199 +modeling exist and then after that I'm + +11 +00:00:29,160 --> 00:00:34,120 +going to talk about kind of three basic + +12 +00:00:31,199 --> 00:00:36,320 +techniques for sequence modeling namely + +13 +00:00:34,120 --> 00:00:38,879 +recurrent neural networks convolutional + +14 +00:00:36,320 --> 00:00:38,879 +networks and + +15 +00:00:39,360 --> 00:00:44,079 +attention so when we talk about sequence + +16 +00:00:41,920 --> 00:00:46,680 +modeling in NLP I've kind of already + +17 +00:00:44,079 --> 00:00:50,039 +made the motivation for doing this but + +18 +00:00:46,680 --> 00:00:51,920 +basically NLP is full of sequential data + +19 +00:00:50,039 --> 00:00:56,120 +and this can be everything from words + +20 +00:00:51,920 --> 00:00:59,399 +and sentences or tokens and sentences to + +21 +00:00:56,120 --> 00:01:01,920 +uh characters and words to sentences in + +22 +00:00:59,399 --> 00:01:04,640 +a discourse or a paragraph or a + +23 +00:01:01,920 --> 00:01:06,640 +document um it can also be multiple + +24 +00:01:04,640 --> 00:01:08,840 +documents in time multiple social media + +25 +00:01:06,640 --> 00:01:12,320 +posts whatever else you want there's + +26 +00:01:08,840 --> 00:01:15,159 +just you know sequences all over + +27 +00:01:12,320 --> 00:01:16,640 +NLP and I mentioned this uh last time + +28 +00:01:15,159 --> 00:01:19,240 +also but there's also long-distance + +29 +00:01:16,640 --> 
00:01:20,840 +dependencies in language so uh just to + +30 +00:01:19,240 --> 00:01:23,720 +give an example there's agreement in + +31 +00:01:20,840 --> 00:01:25,799 +number uh gender Etc so in order to + +32 +00:01:23,720 --> 00:01:28,439 +create a fluent language model you'll + +33 +00:01:25,799 --> 00:01:30,320 +have to handle this agreement so if we + +34 +00:01:28,439 --> 00:01:32,920 +you say he does not have very much + +35 +00:01:30,320 --> 00:01:35,280 +confidence in uh it would have to be + +36 +00:01:32,920 --> 00:01:36,680 +himself but if you say she does not have + +37 +00:01:35,280 --> 00:01:39,360 +very much confidence in it would have to + +38 +00:01:36,680 --> 00:01:41,360 +be herself and this is this gender + +39 +00:01:39,360 --> 00:01:44,159 +agreement is not super frequent in + +40 +00:01:41,360 --> 00:01:47,600 +English but it's very frequent in other + +41 +00:01:44,159 --> 00:01:50,119 +languages like French or uh you know + +42 +00:01:47,600 --> 00:01:51,759 +most languages in the world in some uh + +43 +00:01:50,119 --> 00:01:53,799 +way or + +44 +00:01:51,759 --> 00:01:55,320 +another then separately from that you + +45 +00:01:53,799 --> 00:01:58,520 +also have things like selectional + +46 +00:01:55,320 --> 00:02:00,119 +preferences um like the Reign has lasted + +47 +00:01:58,520 --> 00:02:01,799 +as long as the life of the queen and the + +48 +00:02:00,119 --> 00:02:04,439 +rain has lasted as long as the life of + +49 +00:02:01,799 --> 00:02:07,360 +the clouds uh in American English the + +50 +00:02:04,439 --> 00:02:09,119 +only way you could know uh which word + +51 +00:02:07,360 --> 00:02:13,520 +came beforehand if you were doing speech + +52 +00:02:09,119 --> 00:02:17,400 +recognition is if you uh like had that + +53 +00:02:13,520 --> 00:02:20,319 +kind of semantic uh idea of uh that + +54 +00:02:17,400 --> 00:02:22,040 +these agree with each other in some way + +55 +00:02:20,319 --> 00:02:23,920 +and there's also factual knowledge + +56 +00:02:22,040 --> 00:02:27,680 +there's all kinds of other things uh + +57 +00:02:23,920 --> 00:02:27,680 +that you need to carry over long + +58 +00:02:28,319 --> 00:02:33,800 +contexts um these can be comp + +59 +00:02:30,840 --> 00:02:36,360 +complicated so this is a a nice example + +60 +00:02:33,800 --> 00:02:39,400 +so if we try to figure out what it + +61 +00:02:36,360 --> 00:02:41,239 +refers to here uh the trophy would not + +62 +00:02:39,400 --> 00:02:45,680 +fit in the brown suitcase because it was + +63 +00:02:41,239 --> 00:02:45,680 +too big what is it + +64 +00:02:46,680 --> 00:02:51,360 +here the trophy yeah and then what about + +65 +00:02:49,879 --> 00:02:53,120 +uh the trophy would not fit in the brown + +66 +00:02:51,360 --> 00:02:57,080 +suitcase because it was too + +67 +00:02:53,120 --> 00:02:58,680 +small suit suitcase right um does anyone + +68 +00:02:57,080 --> 00:03:01,760 +know what the name of something like + +69 +00:02:58,680 --> 00:03:01,760 +this is + +70 +00:03:03,599 --> 00:03:07,840 +has anyone heard of this challenge uh + +71 +00:03:09,280 --> 00:03:14,840 +before no one okay um this this is + +72 +00:03:12,239 --> 00:03:17,200 +called the winegrad schema challenge or + +73 +00:03:14,840 --> 00:03:22,760 +these are called winegrad schemas and + +74 +00:03:17,200 --> 00:03:26,319 +basically winterr schemas are a type + +75 +00:03:22,760 --> 00:03:29,280 +of they're type of kind of linguistic + +76 +00:03:26,319 --> 00:03:30,439 +challenge where you create two paired uh + +77 +00:03:29,280 
--> 00:03:33,799 +examples + +78 +00:03:30,439 --> 00:03:37,360 +that you vary in very minimal ways where + +79 +00:03:33,799 --> 00:03:40,599 +the answer differs between the two um + +80 +00:03:37,360 --> 00:03:42,000 +and so uh there's lots of other examples + +81 +00:03:40,599 --> 00:03:44,080 +about how you can create these things + +82 +00:03:42,000 --> 00:03:45,720 +and they're good for testing uh whether + +83 +00:03:44,080 --> 00:03:48,239 +language models are able to do things + +84 +00:03:45,720 --> 00:03:50,920 +because they're able to uh kind of + +85 +00:03:48,239 --> 00:03:54,239 +control for the fact that you know like + +86 +00:03:50,920 --> 00:04:01,079 +the answer might be + +87 +00:03:54,239 --> 00:04:03,000 +um the answer might be very uh like + +88 +00:04:01,079 --> 00:04:04,560 +more frequent or less frequent and so + +89 +00:04:03,000 --> 00:04:07,720 +the language model could just pick that + +90 +00:04:04,560 --> 00:04:11,040 +so another example is we uh we came up + +91 +00:04:07,720 --> 00:04:12,239 +with a benchmark of figurative language + +92 +00:04:11,040 --> 00:04:14,239 +where we tried to figure out whether + +93 +00:04:12,239 --> 00:04:17,160 +language models would be able + +94 +00:04:14,239 --> 00:04:19,720 +to interpret figur figurative language + +95 +00:04:17,160 --> 00:04:22,720 +and I actually have the multilingual uh + +96 +00:04:19,720 --> 00:04:24,160 +version on the suggested projects uh on + +97 +00:04:22,720 --> 00:04:26,240 +the Piaza oh yeah that's one + +98 +00:04:24,160 --> 00:04:28,360 +announcement I posted a big list of + +99 +00:04:26,240 --> 00:04:30,080 +suggested projects on pza I think a lot + +100 +00:04:28,360 --> 00:04:31,639 +of people saw it you don't have to + +101 +00:04:30,080 --> 00:04:33,160 +follow these but if you're interested in + +102 +00:04:31,639 --> 00:04:34,440 +them feel free to talk to the contacts + +103 +00:04:33,160 --> 00:04:38,880 +and we can give you more information + +104 +00:04:34,440 --> 00:04:41,039 +about them um but anyway uh so in this + +105 +00:04:38,880 --> 00:04:43,080 +data set what we did is we came up with + +106 +00:04:41,039 --> 00:04:46,039 +some figurative language like this movie + +107 +00:04:43,080 --> 00:04:47,880 +had the depth of of a waiting pool and + +108 +00:04:46,039 --> 00:04:50,919 +this movie had the depth of a diving + +109 +00:04:47,880 --> 00:04:54,120 +pool and so then after that you would + +110 +00:04:50,919 --> 00:04:56,199 +have two choices this movie was uh this + +111 +00:04:54,120 --> 00:04:58,400 +movie was very deep and interesting this + +112 +00:04:56,199 --> 00:05:01,000 +movie was not very deep and interesting + +113 +00:04:58,400 --> 00:05:02,800 +and so you have these like like two + +114 +00:05:01,000 --> 00:05:04,759 +pairs of questions and answers and you + +115 +00:05:02,800 --> 00:05:06,240 +need to decide between them and + +116 +00:05:04,759 --> 00:05:07,759 +depending on what the input is the + +117 +00:05:06,240 --> 00:05:10,639 +output will change and so that's a good + +118 +00:05:07,759 --> 00:05:11,919 +way to control for um and test whether + +119 +00:05:10,639 --> 00:05:13,600 +language models really understand + +120 +00:05:11,919 --> 00:05:15,080 +something so if you're interested in + +121 +00:05:13,600 --> 00:05:17,199 +benchmarking or other things like that + +122 +00:05:15,080 --> 00:05:19,160 +it's a good parad time to think about + +123 +00:05:17,199 --> 00:05:22,759 +anyway that's a little bit of an aside + +124 +00:05:19,160 --> 
00:05:25,960 +um so now I'd like to go on to types of + +125 +00:05:22,759 --> 00:05:28,360 +sequential prediction problems + +126 +00:05:25,960 --> 00:05:30,880 +and types of prediction problems in + +127 +00:05:28,360 --> 00:05:32,560 +general uh binary and multiclass we + +128 +00:05:30,880 --> 00:05:35,240 +already talked about that's when we're + +129 +00:05:32,560 --> 00:05:37,199 +doing for example uh classification + +130 +00:05:35,240 --> 00:05:38,960 +between two classes or classification + +131 +00:05:37,199 --> 00:05:41,280 +between multiple + +132 +00:05:38,960 --> 00:05:42,880 +classes but there's also another variety + +133 +00:05:41,280 --> 00:05:45,120 +of prediction called structured + +134 +00:05:42,880 --> 00:05:47,120 +prediction and structured prediction is + +135 +00:05:45,120 --> 00:05:49,639 +when you have a very large number of + +136 +00:05:47,120 --> 00:05:53,680 +labels it's not you know a finite number + +137 +00:05:49,639 --> 00:05:56,560 +of labels and uh so that would be + +138 +00:05:53,680 --> 00:05:58,160 +something like uh if you take in an + +139 +00:05:56,560 --> 00:06:00,680 +input and you want to predict all of the + +140 +00:05:58,160 --> 00:06:04,000 +parts of speech of all the words in the + +141 +00:06:00,680 --> 00:06:06,840 +input and if you had like 50 parts of + +142 +00:06:04,000 --> 00:06:09,039 +speech the number of labels that you + +143 +00:06:06,840 --> 00:06:11,360 +would have for each sentence + +144 +00:06:09,039 --> 00:06:15,280 +is any any + +145 +00:06:11,360 --> 00:06:17,919 +ideas 50 50 parts of speech and like + +146 +00:06:15,280 --> 00:06:17,919 +let's say for + +147 +00:06:19,880 --> 00:06:31,400 +wordss 60 um it it's every combination + +148 +00:06:26,039 --> 00:06:31,400 +of parts of speech for every words so + +149 +00:06:32,039 --> 00:06:38,440 +uh close but maybe the opposite it's uh + +150 +00:06:35,520 --> 00:06:40,720 +50 to the four because you have 50 50 + +151 +00:06:38,440 --> 00:06:42,400 +choices here 50 choices here so it's a c + +152 +00:06:40,720 --> 00:06:45,599 +cross product of all of the + +153 +00:06:42,400 --> 00:06:48,560 +choices um and so that becomes very + +154 +00:06:45,599 --> 00:06:50,280 +quickly un untenable um let's say you're + +155 +00:06:48,560 --> 00:06:53,120 +talking about translation from English + +156 +00:06:50,280 --> 00:06:54,800 +to Japanese uh now you don't really even + +157 +00:06:53,120 --> 00:06:57,240 +have a finite number of choices because + +158 +00:06:54,800 --> 00:06:58,440 +the length could be even longer uh the + +159 +00:06:57,240 --> 00:07:01,400 +length of the output could be even + +160 +00:06:58,440 --> 00:07:01,400 +longer than the + +161 +00:07:04,840 --> 00:07:08,879 +C + +162 +00:07:06,520 --> 00:07:11,319 +rules + +163 +00:07:08,879 --> 00:07:14,879 +together makes it + +164 +00:07:11,319 --> 00:07:17,400 +fewer yeah so really good question um so + +165 +00:07:14,879 --> 00:07:19,319 +the question or the the question or + +166 +00:07:17,400 --> 00:07:21,160 +comment was if there are certain rules + +167 +00:07:19,319 --> 00:07:22,759 +about one thing not ever being able to + +168 +00:07:21,160 --> 00:07:25,080 +follow the other you can actually reduce + +169 +00:07:22,759 --> 00:07:28,319 +the number um you could do that with a + +170 +00:07:25,080 --> 00:07:30,280 +hard constraint and make things uh kind + +171 +00:07:28,319 --> 00:07:32,520 +of + +172 +00:07:30,280 --> 00:07:34,240 +and like actually cut off things that + +173 +00:07:32,520 --> 
00:07:36,280 +you know have zero probability but in + +174 +00:07:34,240 --> 00:07:38,680 +reality what people do is they just trim + +175 +00:07:36,280 --> 00:07:41,319 +hypotheses that have low probability and + +176 +00:07:38,680 --> 00:07:43,319 +so that has kind of the same effect like + +177 +00:07:41,319 --> 00:07:47,599 +you almost never see a determiner after + +178 +00:07:43,319 --> 00:07:49,720 +a determiner in English um and so yeah + +179 +00:07:47,599 --> 00:07:52,400 +we're going to talk about uh algorithms + +180 +00:07:49,720 --> 00:07:53,960 +to do this in the Generation section so + +181 +00:07:52,400 --> 00:07:57,240 +we could talk more about that + +182 +00:07:53,960 --> 00:08:00,080 +that um but anyway the basic idea behind + +183 +00:07:57,240 --> 00:08:02,400 +structured prediction is that you don't + +184 +00:08:00,080 --> 00:08:04,280 +like language modeling like I said last + +185 +00:08:02,400 --> 00:08:06,240 +time you don't predict all of the the + +186 +00:08:04,280 --> 00:08:08,319 +whole sequence at once you usually + +187 +00:08:06,240 --> 00:08:10,440 +predict each element at once and then + +188 +00:08:08,319 --> 00:08:12,080 +somehow calculate the conditional + +189 +00:08:10,440 --> 00:08:13,720 +probability of the next element given + +190 +00:08:12,080 --> 00:08:15,879 +the the current element or other things + +191 +00:08:13,720 --> 00:08:18,840 +like that so that's how we solve + +192 +00:08:15,879 --> 00:08:18,840 +structured prediction + +193 +00:08:18,919 --> 00:08:22,960 +problems another thing is unconditioned + +194 +00:08:21,319 --> 00:08:25,120 +versus conditioned predictions so + +195 +00:08:22,960 --> 00:08:28,520 +uncondition prediction we don't do this + +196 +00:08:25,120 --> 00:08:31,240 +very often um but basically uh we + +197 +00:08:28,520 --> 00:08:34,039 +predict the probability of a a single + +198 +00:08:31,240 --> 00:08:35,880 +variable or generate a single variable + +199 +00:08:34,039 --> 00:08:37,599 +and condition pro prediction is + +200 +00:08:35,880 --> 00:08:41,000 +predicting the probability of an output + +201 +00:08:37,599 --> 00:08:45,120 +variable given an input like + +202 +00:08:41,000 --> 00:08:48,040 +this so um for unconditioned prediction + +203 +00:08:45,120 --> 00:08:50,000 +um the way we can do this is left to + +204 +00:08:48,040 --> 00:08:51,399 +right autoagressive models and these are + +205 +00:08:50,000 --> 00:08:52,600 +the ones that I talked about last time + +206 +00:08:51,399 --> 00:08:56,360 +when I was talking about how we build + +207 +00:08:52,600 --> 00:08:59,000 +language models um and these could be uh + +208 +00:08:56,360 --> 00:09:01,880 +specifically this kind though is a kind + +209 +00:08:59,000 --> 00:09:03,480 +that doesn't have any context limit so + +210 +00:09:01,880 --> 00:09:05,680 +it's looking all the way back to the + +211 +00:09:03,480 --> 00:09:07,519 +beginning of the the sequence and this + +212 +00:09:05,680 --> 00:09:09,440 +could be like an infinite length endr + +213 +00:09:07,519 --> 00:09:10,440 +model but practically we can't use those + +214 +00:09:09,440 --> 00:09:12,519 +because they would have too many + +215 +00:09:10,440 --> 00:09:15,360 +parameters they would be too sparse for + +216 +00:09:12,519 --> 00:09:17,079 +us to estimate the parameters so um what + +217 +00:09:15,360 --> 00:09:19,120 +we do instead with engram models which I + +218 +00:09:17,079 --> 00:09:21,240 +talked about last time is we limit the + +219 +00:09:19,120 --> 00:09:23,600 +the 
context length so we have something + +220 +00:09:21,240 --> 00:09:25,760 +like a trigram model where we don't + +221 +00:09:23,600 --> 00:09:28,680 +actually reference all of the previous + +222 +00:09:25,760 --> 00:09:30,680 +outputs uh when we make a prediction oh + +223 +00:09:28,680 --> 00:09:34,440 +and sorry actually I I should explain + +224 +00:09:30,680 --> 00:09:37,640 +how how do we uh how do we read this + +225 +00:09:34,440 --> 00:09:40,519 +graph so this would be we're predicting + +226 +00:09:37,640 --> 00:09:42,680 +number one here we're predicting word + +227 +00:09:40,519 --> 00:09:45,240 +number one and we're conditioning we're + +228 +00:09:42,680 --> 00:09:47,640 +not conditioning on anything after it + +229 +00:09:45,240 --> 00:09:49,040 +we're predicting word number two we're + +230 +00:09:47,640 --> 00:09:50,480 +conditioning on Word number one we're + +231 +00:09:49,040 --> 00:09:53,040 +predicting word number three we're + +232 +00:09:50,480 --> 00:09:55,640 +conditioning on Word number two so here + +233 +00:09:53,040 --> 00:09:58,320 +we would be uh predicting word number + +234 +00:09:55,640 --> 00:09:59,920 +four conditioning on Words number three + +235 +00:09:58,320 --> 00:10:02,200 +and two but not number one so that would + +236 +00:09:59,920 --> 00:10:07,600 +be like a trigram + +237 +00:10:02,200 --> 00:10:07,600 +bottle um so + +238 +00:10:08,600 --> 00:10:15,240 +the what is this is there a robot + +239 +00:10:11,399 --> 00:10:17,480 +walking around somewhere um Howard drill + +240 +00:10:15,240 --> 00:10:20,440 +okay okay' be a lot more fun if it was a + +241 +00:10:17,480 --> 00:10:22,560 +robot um so + +242 +00:10:20,440 --> 00:10:25,519 +uh the things we're going to talk about + +243 +00:10:22,560 --> 00:10:28,360 +today are largely going to be ones that + +244 +00:10:25,519 --> 00:10:31,200 +have unlimited length context um and so + +245 +00:10:28,360 --> 00:10:33,440 +we can uh we'll talk about some examples + +246 +00:10:31,200 --> 00:10:35,680 +here and then um there's also + +247 +00:10:33,440 --> 00:10:37,279 +independent prediction so this uh would + +248 +00:10:35,680 --> 00:10:39,160 +be something like a unigram model where + +249 +00:10:37,279 --> 00:10:41,560 +you would just uh not condition on any + +250 +00:10:39,160 --> 00:10:41,560 +previous + +251 +00:10:41,880 --> 00:10:45,959 +context there's also bidirectional + +252 +00:10:44,279 --> 00:10:47,959 +prediction where basically when you + +253 +00:10:45,959 --> 00:10:50,440 +predict each element you predict based + +254 +00:10:47,959 --> 00:10:52,680 +on all of the other elements not the + +255 +00:10:50,440 --> 00:10:55,519 +element itself uh this could be + +256 +00:10:52,680 --> 00:10:59,720 +something like a masked language model + +257 +00:10:55,519 --> 00:11:02,160 +um but note here that I put a slash + +258 +00:10:59,720 --> 00:11:04,000 +through here uh because this is not a + +259 +00:11:02,160 --> 00:11:06,800 +well-formed probability because as I + +260 +00:11:04,000 --> 00:11:08,760 +mentioned last time um in order to have + +261 +00:11:06,800 --> 00:11:11,000 +a well-formed probability you need to + +262 +00:11:08,760 --> 00:11:12,440 +predict the elements based on all of the + +263 +00:11:11,000 --> 00:11:14,120 +elements that you predicted before and + +264 +00:11:12,440 --> 00:11:16,519 +you can't predict based on future + +265 +00:11:14,120 --> 00:11:18,519 +elements so this is not actually a + +266 +00:11:16,519 --> 00:11:20,760 +probabilistic model but sometimes 
people + +267 +00:11:18,519 --> 00:11:22,240 +use this to kind of learn + +268 +00:11:20,760 --> 00:11:24,720 +representations that could be used + +269 +00:11:22,240 --> 00:11:28,680 +Downstream for some + +270 +00:11:24,720 --> 00:11:30,959 +reason cool is this clear any questions + +271 +00:11:28,680 --> 00:11:30,959 +comments + +272 +00:11:32,680 --> 00:11:39,839 +yeah so these are all um not + +273 +00:11:36,800 --> 00:11:42,000 +conditioning on any prior context uh so + +274 +00:11:39,839 --> 00:11:43,959 +when you predict each word it's + +275 +00:11:42,000 --> 00:11:46,880 +conditioning on context that you + +276 +00:11:43,959 --> 00:11:50,160 +previously generated or previously + +277 +00:11:46,880 --> 00:11:52,279 +predicted yeah so and if I go to the + +278 +00:11:50,160 --> 00:11:55,399 +conditioned ones these are where you + +279 +00:11:52,279 --> 00:11:56,800 +have like a source x uh where you're + +280 +00:11:55,399 --> 00:11:58,480 +given this and then you want to + +281 +00:11:56,800 --> 00:11:59,639 +calculate the conditional probability of + +282 +00:11:58,480 --> 00:12:04,279 +something else + +283 +00:11:59,639 --> 00:12:06,839 +so um to give some examples of this um + +284 +00:12:04,279 --> 00:12:10,320 +this is autor regressive conditioned + +285 +00:12:06,839 --> 00:12:12,920 +prediction and um this could be like a + +286 +00:12:10,320 --> 00:12:14,440 +SE a standard sequence to sequence model + +287 +00:12:12,920 --> 00:12:16,079 +or it could be a language model where + +288 +00:12:14,440 --> 00:12:18,600 +you're given a prompt and you want to + +289 +00:12:16,079 --> 00:12:20,560 +predict the following output like we + +290 +00:12:18,600 --> 00:12:24,160 +often do with chat GPT or something like + +291 +00:12:20,560 --> 00:12:27,880 +this and so + +292 +00:12:24,160 --> 00:12:30,199 +um yeah I I don't think you + +293 +00:12:27,880 --> 00:12:32,279 +can + +294 +00:12:30,199 --> 00:12:34,639 +yeah I don't know if any way you can do + +295 +00:12:32,279 --> 00:12:37,680 +a chat GPT without any conditioning + +296 +00:12:34,639 --> 00:12:39,959 +context um but there were people who + +297 +00:12:37,680 --> 00:12:41,240 +were sending uh I saw this about a week + +298 +00:12:39,959 --> 00:12:44,079 +or two ago there were people who were + +299 +00:12:41,240 --> 00:12:47,839 +sending things to the chat um to the GPD + +300 +00:12:44,079 --> 00:12:50,480 +3.5 or gp4 API with no input and it + +301 +00:12:47,839 --> 00:12:52,279 +would randomly output random questions + +302 +00:12:50,480 --> 00:12:54,800 +or something like that so that's what's + +303 +00:12:52,279 --> 00:12:56,720 +what happens when you send things to uh + +304 +00:12:54,800 --> 00:12:58,120 +to chat GPT without any prior + +305 +00:12:56,720 --> 00:13:00,120 +conditioning conts but normally what you + +306 +00:12:58,120 --> 00:13:01,440 +do is you put in you know your prompt + +307 +00:13:00,120 --> 00:13:05,320 +and then it follows up with your prompt + +308 +00:13:01,440 --> 00:13:05,320 +and that would be in this uh in this + +309 +00:13:06,000 --> 00:13:11,279 +Paradigm there's also something called + +310 +00:13:08,240 --> 00:13:14,199 +non-auto regressive condition prediction + +311 +00:13:11,279 --> 00:13:16,760 +um and this can be used for something + +312 +00:13:14,199 --> 00:13:19,160 +like sequence labeling or non- autor + +313 +00:13:16,760 --> 00:13:20,760 +regressive machine translation I'll talk + +314 +00:13:19,160 --> 00:13:22,839 +about the first one in this class and + +315 
There's also something called non-autoregressive conditioned prediction, and this can be used for something like sequence labeling or non-autoregressive machine translation. I'll talk about the first one in this class, and maybe the second one later — it's kind of a minor topic now; it used to be popular a few years ago, so I'm not sure whether we'll cover it.

Cool. So the basic modeling paradigm that we use for things like this is extracting features and predicting. This is exactly the same as the bag-of-words model that I talked about the first time: we extracted features, and based on those features we made predictions. It's no different when we do sequence modeling, but the methods that we use for feature extraction are different. Given an input text X, we extract features H and predict labels Y.

For text classification, usually what we do is we have a feature extractor; with this feature extractor we take the sequence and convert it into a single vector, and then based on this vector we make a prediction. That's what we do for classification. For sequence labeling, normally what we do is extract one vector for each thing that we would like to predict about — here that might be one vector for each word — and then based on this we predict something for each word. This is an example of part-of-speech tagging, but there are a lot of other sequence labeling tasks too; I'll sketch both setups below.

So what tasks exist for something like sequence labeling? Sequence labeling is a pretty big subset of NLP tasks — you can express a lot of things as sequence labeling — and basically, given an input text X, we predict an output label sequence Y of equal length.
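As a minimal PyTorch sketch of the "extract features, then predict" paradigm just described — the names and shapes here are my assumptions, not the lecture's code — Encoder stands for any feature extractor that maps a token-ID sequence to one vector per token:

import torch
import torch.nn as nn

class Tagger(nn.Module):                 # sequence labeling: one label per token
    def __init__(self, encoder, hidden_dim, num_tags):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden_dim, num_tags)

    def forward(self, token_ids):
        h = self.encoder(token_ids)      # (batch, seq_len, hidden_dim)
        return self.head(h)              # (batch, seq_len, num_tags)

class Classifier(nn.Module):             # text classification: one label per text
    def __init__(self, encoder, hidden_dim, num_classes):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        h = self.encoder(token_ids)      # (batch, seq_len, hidden_dim)
        pooled = h.mean(dim=1)           # collapse the sequence to a single vector
        return self.head(pooled)         # (batch, num_classes)

The only structural difference is whether the prediction head sees one vector per token or one pooled vector for the whole sequence.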
This can be used for things like part-of-speech tagging, to get the part of speech of each word. It can also be used for something like lemmatization. Basically, lemmatization is predicting the base form of each word, and this can be used for normalization — if you want to find, for example, all instances of a particular verb being used, or all instances of a particular noun being used. This is a little bit different from something like stemming: what stemming would normally do is chop off the plural here — it would chop off the "s" — but it wouldn't be able to do things like normalize "saw" into "see", because stemming just removes suffixes; it doesn't do any other sort of normalization. So that's the difference between lemmatization and stemming; there's a small illustration below.

There's also something called morphological tagging. Basically, this is a more advanced version of part-of-speech tagging that predicts things like: this is a past-tense verb, this is a plural, this is a particular verb form — and you have multiple tags here. This is less interesting in English, because English is a kind of boring language morphologically — it doesn't have a lot of conjugation and other stuff — but it's a lot more interesting in morphologically richer languages like Japanese or Hindi. Chinese is even more boring than English, so if you're interested in Chinese, you don't need to worry about that.
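A quick illustration of the lemmatization-versus-stemming distinction, using NLTK (my choice of library, not one named in the lecture; it assumes the WordNet data has been downloaded):

import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("wordnet")                       # one-time data download

stem, lem = PorterStemmer(), WordNetLemmatizer()
print(stem.stem("dogs"))                       # expected: 'dog'  (suffix chopped)
print(stem.stem("saw"))                        # expected: 'saw'  (no suffix to chop)
print(lem.lemmatize("saw", pos="v"))           # expected: 'see'  (true base form)

The stemmer handles "dogs" but cannot map "saw" to "see", because that requires knowing the lexicon rather than just removing suffixes.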
Cool. But what's maybe more widely used from the sequence labeling perspective is span labeling, where you want to predict spans and their labels. This could be named entity recognition: if you say "Graham Neubig is teaching at Carnegie Mellon University," you would want to identify each entity as being, say, a person, organization, place, governmental entity, or other things like that. There are also things like syntactic chunking, where you want to find all noun phrases and verb phrases, and semantic role labeling, which is identifying who did what to whom: it says this is the actor — the person who did the thing — this is the thing being done, and this is the place where it's being done. So this can be useful if you want to do any sort of analysis about who does what to whom, or other things like that.

There's also a more complicated task called entity linking, which isn't really a span labeling task, but it's basically named entity recognition where you also link each entity to a database like Wikidata or Wikipedia. This doesn't seem very glamorous — you might not be immediately excited by entity linking — but it's actually super important for things like news aggregation and other stuff like that: find all the news articles about a celebrity, or find all the mentions of our company's product on social media. So it's actually a really widely used technology.

And finally, span labeling can also be treated as sequence labeling. The way we normally do this is with something called BIO tags, where you predict begin, in, and out tags for each word of the spans. So if we have this example of spans, we just convert it into tags — begin-person, in-person, O (which means it's not part of an entity), begin-organization, in-organization — and then you can convert that back into these spans. So this makes it relatively easy to do span prediction; there's a small sketch of the conversion below.
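Here is a minimal sketch — my own illustration, not code from the lecture — of converting BIO tags back into labeled spans; it assumes well-formed tags:

def bio_to_spans(tags):
    """E.g. ["B-PER", "I-PER", "O", "B-ORG"] -> [("PER", 0, 2), ("ORG", 3, 4)]."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or tag == "O":
            if label is not None:                 # close the currently open span
                spans.append((label, start, i))
                start, label = None, None
            if tag.startswith("B-"):              # open a new span
                start, label = i, tag[2:]
        # "I-" tags simply extend the open span
    if label is not None:                         # close a span that runs to the end
        spans.append((label, start, len(tags)))
    return spans

print(bio_to_spans(["B-PER", "I-PER", "O", "O", "B-ORG", "I-ORG"]))
# [('PER', 0, 2), ('ORG', 4, 6)]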
Cool — so now you know what to do if you want to predict entities or other things like that. There are a lot of models, on Hugging Face for example, that allow you to do these things. Are there any questions before I move on? OK, cool, I'll just go forward then.

So now I'm going to talk about how we actually model these with machine learning models. There are three major types of sequence models — there are other types of sequence models, but I'd say the great majority of work uses one of these three. The first one is recurrence. What recurrence does is condition representations on an encoding of the history. The way this works is essentially: you have your input vectors — usually word embeddings, or embeddings from the previous layer of the model — and you have a recurrent neural network. At the very beginning the recurrent neural network might only take the first vector, but at every subsequent step it takes the input vector and the hidden vector from the previous input, and you keep going like this all the way through the sequence.

Then convolution is conditioning representations on local context. You have the inputs like this, and here you're conditioning on the word itself and the surrounding words on the right or the left. So you would do something like this — this is a typical convolution, where you have the current one here, the left one, and the right one, and this would be a size-three convolution. You could also have a size-five or size-seven convolution, or whatever else, that would take in more surrounding words; a minimal example follows.
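A minimal PyTorch sketch (the shapes are my assumptions) of a size-3 convolution over a sequence of token vectors — each output position mixes the vector at that position with its left and right neighbors:

import torch
import torch.nn as nn

embed_dim, seq_len = 64, 10
x = torch.randn(1, seq_len, embed_dim)           # (batch, time, features)
conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=3, padding=1)
h = conv(x.transpose(1, 2)).transpose(1, 2)      # Conv1d expects (batch, features, time)
print(h.shape)                                   # torch.Size([1, 10, 64])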
And then finally we have attention. Attention conditions representations on a weighted average of all tokens in the sequence. So here we're conditioning on all of the other tokens in the sequence, but the amount that we condition on each of the tokens differs — we might take more of this token and less of that token, and other things like that. I'll go into the mechanisms of each of these.

One important thing to think about is the computational complexity of each of these. The computational complexity can be expressed in terms of the sequence length — let's call the sequence length n — and convolution has a convolution window size, which I'll call w. So does anyone have an idea of the computational complexity of a recurrent neural network — how quickly does the computation of a recurrent neural network grow? One way you can look at this is to count the number of arrows that you see here. Yes — it's linear, so it's basically O(n). What about convolution — any ideas? Yes, O(n·w). And what about attention? n squared — yes, O(n²).

So what you can see is that for very long sequences, the asymptotic complexity of running a recurrent neural network is lower. You could run a recurrent neural network over a sequence of length 20 million or something like that, and as long as you had enough memory, it would take linear time; but if you do something like attention over a really long sequence, it would be more difficult.
There are a lot of caveats here, because attention and convolution are easily parallelized, whereas recurrence is not — I'll talk about that in a second — but anyway, it's a good thing to keep in mind.

Cool. So the first sequence model I want to introduce is recurrent neural networks. Oh — sorry, one other thing I want to mention is that all of these are still used. If you're very plugged into NLP, it might seem like, well, everybody's using attention, so why do we need to learn about the other ones? But actually all of these are used, and usually recurrence and convolution are used in combination with attention in some way, for particular applications where recurrence or convolution are useful. I'll go into details of that later.

So let's talk about the first sequence model: recurrent neural networks. Recurrent neural networks are basically tools to remember information; they were invented around 1990. The way they work: a feed-forward neural network looks a bit like this — we have some sort of lookup over the context, we calculate embeddings, we do a transform, we get a hidden state, and we make the prediction — whereas a recurrent neural network also feeds in the previous hidden state. I'll contrast the feed-forward neural network that we already know with an Elman-style recurrent neural network. Basically, the feed-forward network that we already know does a linear transform over the input and then runs it through a nonlinear function, and this could be
a tanh function or a ReLU function or anything like that. In a recurrent neural network, we add multiplication by the previous hidden state, so it looks like this.

And if we look at what processing a sequence looks like: basically, we start out with an initial state — this initial state could be all zeros, or it could be randomized, or it could be learned, or whatever else — and then based on this we run it through an RNN function, calculate the hidden state, and use it to make a prediction; then the RNN function, make a prediction; RNN, make a prediction; RNN, make a prediction. One important thing here is that this RNN is exactly the same function no matter which position it appears in, and because of that, no matter how long the sequence becomes, we always have the same number of parameters — which is always really important for a sequence model. So that's what this looks like; there's a small sketch of the update below.
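A minimal sketch (illustrative, with assumed dimensions) of the Elman-style recurrent update — the key point being that the same weights are reused at every time step:

import torch
import torch.nn as nn

class ElmanRNN(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.w_x = nn.Linear(input_dim, hidden_dim)    # transform of the input
        self.w_h = nn.Linear(hidden_dim, hidden_dim)   # transform of the previous state

    def forward(self, xs, h):
        # xs: (seq_len, input_dim); h: (hidden_dim,) initial state, e.g. zeros
        states = []
        for x in xs:                                   # one step per token
            h = torch.tanh(self.w_x(x) + self.w_h(h))  # h_t = tanh(W_x x_t + W_h h_{t-1} + b)
            states.append(h)
        return torch.stack(states)                     # (seq_len, hidden_dim)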
So how do we train RNNs? Basically, if you remember, we can train neural networks as long as we have a directed acyclic graph that calculates our loss function; forward propagation and back propagation do all the rest to calculate our gradients, and we update the parameters. So the way this works: let's say we're doing sequence labeling, and each of these predictions is a probability distribution over the parts of speech for that position, and each of these labels is the true part-of-speech label. Basically, from this we calculate the negative log likelihood of the true part of speech, and we get a loss.

So now we have four losses here, and this is no longer a nice directed acyclic graph that ends in a single loss function, which is kind of what we needed for back propagation, right? So what do we do? Very simple: we just add them together. We take the sum, and now we have a single loss function, which is the sum of the loss functions for each prediction that we made. That's our total loss, and now we do have a directed acyclic graph where this is the terminal node, and we can do backprop like this. This is true for all the sequence models I'm going to talk about today; I'm just illustrating it with recurrent networks. Any questions here — everything good?

OK, cool. So now we have the loss, it's a well-formed DAG, and we can run backprop. Basically, we just run back propagation and our loss flows back out into all of the places. Now, the parameters are tied across time, so the derivatives with respect to the parameters are aggregated over all of the time steps. This has been called back propagation through time, since these models were originally invented. Basically, what it looks like is: because the parameters for this RNN function are shared, they'll only be updated once, but they're updated with gradients from four different positions in this network, essentially. And this is the same for all the sequence models I'm going to talk about today; the sketch below shows the pattern.
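A minimal training sketch — reusing the hypothetical ElmanRNN above plus an assumed linear output layer — showing the per-step losses summed into one terminal loss node:

import torch
import torch.nn as nn

rnn = ElmanRNN(input_dim=32, hidden_dim=64)
out = nn.Linear(64, 10)                    # say, 10 part-of-speech tags
loss_fn = nn.CrossEntropyLoss(reduction="sum")

xs = torch.randn(5, 32)                    # 5 input token vectors
gold = torch.randint(0, 10, (5,))          # 5 gold tag IDs
states = rnn(xs, torch.zeros(64))          # (5, 64)
loss = loss_fn(out(states), gold)          # sum of the 5 per-step losses
loss.backward()                            # backprop through time: gradients for the
                                           # shared weights accumulate across all 5 steps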
Another variety of models that people use is bidirectional RNNs. These are used when you want to do something like sequence labeling: you just run two RNNs — one from the beginning, one from the end — concatenate them together like this, and make predictions.

Cool — any questions? Yeah: if you run the second RNN, does that change your complexity? It doesn't change the asymptotic complexity, because you're multiplying by two, and big-O notation doesn't care if you multiply by a constant — but it does double the time that it would take. Any others?

OK, let's go forward. Another problem that is particularly salient in RNNs — and part of the reason why attention models are so useful — is vanishing gradients. But you should be aware of this no matter which model you're using, and thinking about it very carefully is actually a really good way to design better architectures, if you're going to be designing architectures. So basically, the problem with vanishing gradients: let's say we have a prediction task where we're calculating a regression — we're inputting a whole bunch of tokens and then calculating a regression at the very end, using a squared error loss function. If we do something like this, the problem is that with a standard RNN, when we do backprop, we'll probably have a big gradient for the first RNN unit here, but every time — because we're running this through some sort of nonlinearity — the gradient shrinks. For example, if our nonlinearity is a tanh function, the gradient of the tanh function looks a little bit like this: if I am not mistaken, it peaks at one, at an input of zero, and goes to zero everywhere else (a quick numerical check follows).
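A quick numerical check — my illustration — that the tanh gradient peaks at 1 for an input of 0 and nearly vanishes for inputs far from zero:

import torch

for v in [0.0, -3.0]:
    x = torch.tensor(v, requires_grad=True)
    torch.tanh(x).backward()
    print(v, x.grad.item())   # d/dx tanh(x) = 1 - tanh(x)^2  ->  1.0 and ~0.0099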
So let's say we have an input way over here, like minus three. If we have that, it basically destroys our gradient — our gradient disappears for that particular unit. And maybe one thing you might say is: oh, well, if this is getting so small because the gradient only goes up to one, let's use 100 times tanh as our activation function — now this goes up to 100, and our gradients are not going to disappear here. But then you have the opposite problem: you have exploding gradients, where the gradient goes up by a factor of 100 every time, which gets unmanageable and destroys your gradient descent itself. So basically we have this problem because if you apply a function over and over again, your gradient gets smaller and smaller, or bigger and bigger, every time you do that, and you have the vanishing gradient or exploding gradient problem.

It's not just a problem with nonlinearities — it also happens when you do your weight matrix multiplies and other stuff like that. Basically, any time you modify the input into a different output, that operation will have a gradient, and it will either be bigger than one or less than one. So I mentioned this is a problem for RNNs — it's particularly a problem for RNNs over long sequences — but it's also a problem for any other model you use. The reason why this is important to know is: if there's important information in your model, finding a way to get a direct path from that important information to wherever you're making a prediction is often a way to improve your model's performance. And on the contrary, if there's information that you think is likely to be unimportant, putting it farther away, or making it a
more indirect path — so the model has to work harder to use it — is a good way to prevent the model from being distracted by tons and tons of information, some of which may be irrelevant. So it's a good thing to know about in general for model design.

So how did RNNs solve this problem of the vanishing gradient? There's a method called long short-term memory (LSTM), and the basic idea is to make additive connections between time steps. Addition — or, really, the identity — is the only thing that is guaranteed not to change the gradient, because the identity function is f(x) = x, and if you take the derivative of this, it's one, so you're guaranteed to always have a gradient of one through this function. So long short-term memory makes sure that you have this additive input between time steps, and this is what it looks like. It's not super important to understand everything that's going on here, but to explain it very quickly: this C here is something called the memory cell, and it's passed on linearly like this. Then you have some gates: the update gate determines whether — or how much — you update the hidden state; the input gate decides how much of the input you take in; and the output gate decides how much of the output from the cell you push out after using the cell. So it has these three gates that control the information flow, and the model can learn to turn them on or off. That's the basic idea of the LSTM; a simplified sketch follows.
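A simplified sketch of the LSTM idea — my own condensed version following the gate description above, not the exact standard equations: the memory cell c is carried forward additively, with learned gates deciding how much to keep, add, and output.

import torch
import torch.nn as nn

class GatedCell(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gates = nn.Linear(2 * dim, 3 * dim)   # update, input, and output gates
        self.cand = nn.Linear(2 * dim, dim)        # candidate new cell content

    def forward(self, x, h, c):
        z = torch.cat([x, h])
        u, i, o = torch.sigmoid(self.gates(z)).chunk(3)
        c = u * c + i * torch.tanh(self.cand(z))   # additive carry of the memory cell
        h = o * torch.tanh(c)                      # gated output
        return h, c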
And there are lots of other variants of this, like gated recurrent units, which are a little bit simpler, but the basic idea of an additive connection plus gating is something that appears a lot in many different types of architectures. Any questions here?

Another thing I should mention — I just realized I don't have it on my slides, but it's a good thing to know — is that this idea is also used in deep, multi-layer networks. So basically — this axis is time — LSTMs have this additive connection between the memory cells, where you're always adding the input into whatever you've carried forward: you get an input and you add it in, you get an input and you add it in. This makes sure you pass your gradients forward in time.

There's also something called residual connections, which I think a lot of people have heard of if you've done a deep learning class or something like that, but if you haven't, they're a good thing to know. Residual connections apply when you run your input through multiple layers — let's say you have a block here; let's call it an RNN for now, because we know about RNNs already. This connection here is called the residual connection, and basically it's adding an additive connection around the layer, from before it to after it. So this allows you to pass information from the very beginning of a network to the very end, through multiple layers, and it also helps prevent the vanishing gradient problem. So in a way, you can view it like this: what LSTMs are doing is preventing loss of gradient in time, and residual connections are preventing loss of gradient as you go through multiple layers of the network. And this is super standard — it's used in all the Transformer models, in LLaMA and GPT and whatever else. A minimal sketch follows.
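A minimal sketch (my illustration) of a residual connection: the input is added back to the output of a block, giving the gradient a direct additive path around it.

import torch.nn as nn

class Residual(nn.Module):
    def __init__(self, block):
        super().__init__()
        self.block = block               # any layer mapping dim -> same dim

    def forward(self, x):
        return x + self.block(x)         # identity path + transformed path

layer = Residual(nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64)))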
Cool — any other questions about that? OK, cool. So next I'd like to go into convolution. One thing I should mention is that RNNs, or RNN-style models, are used extensively in very-long-sequence modeling, and we're going to talk more about the actual architectures that people use to do this — usually in combination with attention-based models — but they're used in very-long-sequence modeling. Convolutions tend to be used a lot in speech and image processing, and the reason is that when we're processing language — take the phrase "this is wonderful" — "this is wonderful" is three tokens in language, but if we look at it in speech, it's going to be many, many frames. The semantics of language is already there at the token level: if you look at a single token, you already get something semantically meaningful. But in contrast, if you're looking at speech, or at pixels in images, a single frame or pixel is not going to give you something semantically meaningful. So convolution is used a lot in those cases — and you could also create a convolutional model over characters as well.

So what is convolution in the first place? As I mentioned before, basically you take the local window around an input and run it through a model. A good way to think about it is that it's essentially a feed-forward network where you concatenate all of the surrounding vectors together and run them through a linear transform — so you concatenate x_{t-1}, x_t, and x_{t+1}.
Convolution can also be used in autoregressive models. Normally we think of it like this: we take the previous word, the current one, and the next one, and make a prediction based on those. That would be good for something like sequence labeling, but it's not good for something like language modeling, because in language modeling we can't look at the future, right? But there's a super simple solution to this, which is a convolution that just looks at the past, and predicts the next word based on the current word and the past — so here you would be predicting the word "movie". This is actually essentially equivalent to the feed-forward language model that I talked about last time, so you can also think of that as a convolutional language model. Whenever you say feed-forward or convolutional language model, they're basically the same, modulo some details about striding and stuff, which I'm not going to talk about in class today.

Cool — I covered convolution very briefly because it's also the least used of the three sequence modeling approaches in NLP nowadays. But are there any questions there, or can I run into attention? There's a sketch of the causal version below.
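A minimal sketch (assumed shapes, my illustration) of a causal, autoregressive convolution: left-padding the sequence so each output position sees only the current and previous inputs, never future ones.

import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim, k = 64, 3
x = torch.randn(1, embed_dim, 10)          # (batch, features, time)
conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=k)
h = conv(F.pad(x, (k - 1, 0)))             # pad k-1 steps on the left only
print(h.shape)                             # torch.Size([1, 64, 10])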
OK, cool — I'll go into attention next. The basic idea of attention is that we encode each token in the sequence into a vector — so we have an input sequence that we'd like to encode over — and we perform a linear combination of those vectors, weighted by attention weights. There are two varieties of attention that are good to know about. The first one is cross-attention, where each element in a sequence attends to elements of another sequence. This is widely used in encoder-decoder models, where you have one encoder and a separate decoder. The popular models like this that people still use a lot are T5, which is an example of an encoder-decoder model, and mBART, which is another example.

Basically, the way cross-attention works: say we have a Japanese sentence and we would like to translate it into an English sentence. The first word of the Japanese sentence means "this," so in order to output the first word, we would first do a weighted sum of all of the embeddings of the Japanese sentence, and we would probably focus most on this word up here, because it corresponds to the word "this." In the next step of generating the output, we would attend to different words, because a different word corresponds to "is" — so you would attend to the word that corresponds to "is." When you output "an," there's actually no word in the Japanese sentence that corresponds to "an," so you might get an attention weight that looks very smooth, not very peaky. And then when you do "example," you'd have strong attention on the word that corresponds to "example."

There's also self-attention. Basically, in self-attention each element in a sequence attends to elements of the same sequence, and this is a good way of doing sequence encoding, just like we
like we + +1033 +00:43:40,240 --> 00:43:46,280 +used rnns by rnns uh convolutional + +1034 +00:43:43,359 --> 00:43:47,559 +neural networks and so um the reason why + +1035 +00:43:46,280 --> 00:43:50,119 +you would want to do something like this + +1036 +00:43:47,559 --> 00:43:52,760 +just to give an example let's say we + +1037 +00:43:50,119 --> 00:43:54,280 +wanted to run this we wanted to encode + +1038 +00:43:52,760 --> 00:43:56,920 +the English sentence before doing + +1039 +00:43:54,280 --> 00:44:00,040 +something like translation into Japanese + +1040 +00:43:56,920 --> 00:44:01,559 +and if we did that um this maybe we + +1041 +00:44:00,040 --> 00:44:02,960 +don't need to attend to a whole lot of + +1042 +00:44:01,559 --> 00:44:06,440 +other things because it's kind of clear + +1043 +00:44:02,960 --> 00:44:08,920 +what this means but um + +1044 +00:44:06,440 --> 00:44:10,880 +is the way you would translate it would + +1045 +00:44:08,920 --> 00:44:12,280 +be rather heavily dependent on what the + +1046 +00:44:10,880 --> 00:44:13,640 +other words in the sentence so you might + +1047 +00:44:12,280 --> 00:44:17,280 +want to attend to all the other words in + +1048 +00:44:13,640 --> 00:44:20,559 +the sentence say oh this is is co + +1049 +00:44:17,280 --> 00:44:22,839 +cooccurring with this and example and so + +1050 +00:44:20,559 --> 00:44:24,440 +if that's the case then well we would + +1051 +00:44:22,839 --> 00:44:26,920 +need to translate it in this way or we' + +1052 +00:44:24,440 --> 00:44:28,960 +need to handle it in this way and that's + +1053 +00:44:26,920 --> 00:44:29,880 +exactly the same for you know any other + +1054 +00:44:28,960 --> 00:44:32,720 +sort of + +1055 +00:44:29,880 --> 00:44:35,880 +disambiguation uh style + +1056 +00:44:32,720 --> 00:44:37,720 +task so uh yeah we do something similar + +1057 +00:44:35,880 --> 00:44:39,040 +like this so basically cross attention + +1058 +00:44:37,720 --> 00:44:42,520 +is attending to a different sequence + +1059 +00:44:39,040 --> 00:44:42,520 +self attention is attending to the same + +1060 +00:44:42,680 --> 00:44:46,559 +sequence so how do we do this + +1061 +00:44:44,960 --> 00:44:48,200 +mechanistically in the first place so + +1062 +00:44:46,559 --> 00:44:51,480 +like let's say We're translating from + +1063 +00:44:48,200 --> 00:44:52,880 +Japanese to English um we would have uh + +1064 +00:44:51,480 --> 00:44:55,960 +and we're doing it with an encoder + +1065 +00:44:52,880 --> 00:44:57,480 +decoder model where we have already ENC + +1066 +00:44:55,960 --> 00:45:00,640 +coded the + +1067 +00:44:57,480 --> 00:45:02,920 +input sequence and now we're generating + +1068 +00:45:00,640 --> 00:45:05,240 +the output sequence with a for example a + +1069 +00:45:02,920 --> 00:45:09,880 +recurrent neural network um and so if + +1070 +00:45:05,240 --> 00:45:12,400 +that's the case we have uh I I hate uh + +1071 +00:45:09,880 --> 00:45:14,440 +like this and we want to predict the + +1072 +00:45:12,400 --> 00:45:17,280 +next word so what we would do is we + +1073 +00:45:14,440 --> 00:45:19,480 +would take the current state + +1074 +00:45:17,280 --> 00:45:21,480 +here and uh we use something called a + +1075 +00:45:19,480 --> 00:45:22,760 +query vector and the query Vector is + +1076 +00:45:21,480 --> 00:45:24,880 +essentially the vector that we want to + +1077 +00:45:22,760 --> 00:45:28,720 +use to decide what to attend + +1078 +00:45:24,880 --> 00:45:31,800 +to we then have key vectors and the key + +1079 +00:45:28,720 --> 00:45:35,319 
vectors are the vectors that we would like to use to decide which positions we should be attending to. Then, for each query-key pair, we calculate a weight, and we do it like this: this gear here is some function that takes in the query vector and the key vector and outputs a weight. Notably, we use the same function every single time. This is really important — again, like the RNN, it allows us to extrapolate to unlimited-length sequences, because we only have one function no matter how long the sequence gets, so we can just apply it over and over and over again. Once we calculate these values, we normalize them so that they add up to one, using the softmax function — and in this case that would be like 0.76, et cetera.

So, step number two: once we have these attention values — notably, these values aren't really probabilities, despite the fact that they're between zero and one and add up to one, because all we're doing is using them to combine together multiple vectors; so we don't normally call them attention probabilities or anything like that, just attention values, or normalized attention values — once we have these attention weights, we have value vectors, and the value vectors are the vectors that we would actually like to combine together to get the encoding here. So we take these vectors, do a weighted sum of the vectors, and get a final sum here. And we can take this sum and use it in any part of the model that we would like — it's very broad; it can be used in any way. The whole computation is sketched below.
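A minimal sketch (toy dimensions, my illustration) of the attention computation just described: score each key against the query, softmax-normalize, and take the weighted sum of the value vectors.

import torch

d = 64
q = torch.randn(d)                     # query: what we want to attend with
K = torch.randn(5, d)                  # one key per input token
V = torch.randn(5, d)                  # one value per input token

scores = K @ q                         # one weight per query-key pair (dot-product score here)
alpha = torch.softmax(scores, dim=0)   # normalize so the weights sum to one
output = alpha @ V                     # weighted sum of the value vectors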
+1126
+00:47:29,079 --> 00:47:33,240
+now the most common way to use it is
+just to have lots
+
+1127
+00:47:31,200 --> 00:47:35,000
+of self attention layers like in
+
+1128
+00:47:33,240 --> 00:47:37,440
+something in a Transformer but um you
+
+1129
+00:47:35,000 --> 00:47:40,160
+can also use it in a decoder or other
+
+1130
+00:47:37,440 --> 00:47:42,920
+things like that as
+
+1131
+00:47:40,160 --> 00:47:45,480
+well this is an actual graphical example
+
+1132
+00:47:42,920 --> 00:47:47,319
+from the original attention paper um I'm
+
+1133
+00:47:45,480 --> 00:47:50,000
+going to give some other examples from
+
+1134
+00:47:47,319 --> 00:47:52,480
+Transformers in the next class but
+
+1135
+00:47:50,000 --> 00:47:55,400
+basically you can see that the attention
+
+1136
+00:47:52,480 --> 00:47:57,559
+weights uh for this English to French I
+
+1137
+00:47:55,400 --> 00:48:00,520
+think it's English French translation
+
+1138
+00:47:57,559 --> 00:48:02,920
+task basically um overlap with what you
+
+1139
+00:48:00,520 --> 00:48:04,440
+would expect uh if you can read English
+
+1140
+00:48:02,920 --> 00:48:06,599
+and French it's kind of the words that
+
+1141
+00:48:04,440 --> 00:48:09,319
+are semantically similar to each other
+
+1142
+00:48:06,599 --> 00:48:12,920
+um it even learns to do this reordering
+
+1143
+00:48:09,319 --> 00:48:14,880
+uh in an appropriate way here and all of
+
+1144
+00:48:12,920 --> 00:48:16,720
+this is completely unsupervised so you
+
+1145
+00:48:14,880 --> 00:48:18,079
+never actually give the model
+
+1146
+00:48:16,720 --> 00:48:19,440
+information about what it should be
+
+1147
+00:48:18,079 --> 00:48:21,559
+attending to it's all learned through
+
+1148
+00:48:19,440 --> 00:48:23,520
+gradient descent and the model learns to
+
+1149
+00:48:21,559 --> 00:48:27,640
+do this by making the embeddings of the
+
+1150
+00:48:23,520 --> 00:48:27,640
+key and query vectors closer together
+
+1151
+00:48:28,440 --> 00:48:33,240
+cool
+
+1152
+00:48:30,000 --> 00:48:33,240
+um any
+
+1153
+00:48:33,800 --> 00:48:40,040
+questions okay so um next I'd like to go
+
+1154
+00:48:38,440 --> 00:48:41,680
+a little bit into how we actually
+
+1155
+00:48:40,040 --> 00:48:43,599
+calculate the attention score function
+
+1156
+00:48:41,680 --> 00:48:44,839
+so that's the little gear that I had on
+
+1157
+00:48:43,599 --> 00:48:50,280
+my
+
+1158
+00:48:44,839 --> 00:48:53,559
+uh my slide before so here Q is a query
+
+1159
+00:48:50,280 --> 00:48:56,440
+and K is the key um the original
+
+1160
+00:48:53,559 --> 00:48:58,400
+attention paper used a multi-layer
+
+1161
+00:48:56,440 --> 00:49:00,119
+uh a multi-layer neural network to
+
+1162
+00:48:58,400 --> 00:49:02,440
+calculate this so basically what it did
+
+1163
+00:49:00,119 --> 00:49:05,319
+is it concatenated the query and key
+
+1164
+00:49:02,440 --> 00:49:08,000
+vector together multiplied it by a
+
+1165
+00:49:05,319 --> 00:49:12,240
+weight matrix calculated a tanh and
+
+1166
+00:49:08,000 --> 00:49:15,040
+then ran it through uh a weight
+
+1167
+00:49:12,240 --> 00:49:19,799
+vector so this
+
+1168
+00:49:15,040 --> 00:49:22,480
+is essentially very expressive
+
+1169
+00:49:19,799 --> 00:49:24,799
+um uh it's flexible it's often good with
+
+1170
+00:49:22,480 --> 00:49:27,960
+large data but it adds extra parameters
+
+1171
+00:49:24,799 --> 00:49:30,359
+and uh computation time uh to your
+
+1172
+00:49:27,960 --> 00:49:31,559
+calculations here so it's not as widely
+
+1173
+00:49:30,359 --> 00:49:34,359
+used anymore
+
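+As a rough sketch of that multi-layer score function (my own
+illustration with made-up sizes; the lecture only describes the
+computation verbally):
+
+import numpy as np
+
+def mlp_score(q, k, W, w):
+    # Concatenate query and key, multiply by a weight matrix,
+    # take a tanh, then run it through a weight vector to get
+    # a single scalar score.
+    return w @ np.tanh(W @ np.concatenate([q, k]))
+
+rng = np.random.default_rng(0)
+d = 4                              # hidden size (illustrative)
+W = rng.normal(size=(8, 2 * d))    # the extra parameters this adds
+w = rng.normal(size=8)
+print(mlp_score(rng.normal(size=d), rng.normal(size=d), W, w))
+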
+1174
+00:49:31,559 --> 00:49:37,799
+the uh other thing which was
+
+1175
+00:49:34,359 --> 00:49:41,599
+proposed by Luong et al is a bilinear
+
+1176
+00:49:37,799 --> 00:49:43,200
+function um and a bilinear function
+
+1177
+00:49:41,599 --> 00:49:45,920
+basically what it does is it has your
+
+1178
+00:49:43,200 --> 00:49:48,319
+key vector it has your query vector and
+
+1179
+00:49:45,920 --> 00:49:51,440
+it has a matrix in between them like
+
+1180
+00:49:48,319 --> 00:49:53,000
+this and uh then you calculate uh you
+
+1181
+00:49:51,440 --> 00:49:54,520
+calculate the
+
+1182
+00:49:53,000 --> 00:49:56,680
+output
+
+1183
+00:49:54,520 --> 00:49:59,880
+so
+
+1184
+00:49:56,680 --> 00:50:03,200
+this is uh nice because it basically um
+
+1185
+00:49:59,880 --> 00:50:05,760
+can transform uh the key and
+
+1186
+00:50:03,200 --> 00:50:08,760
+query uh together
+
+1187
+00:50:05,760 --> 00:50:08,760
+here
+
+1188
+00:50:09,119 --> 00:50:13,559
+um people have also experimented with
+
+1189
+00:50:11,760 --> 00:50:16,079
+dot product and the dot product is
+
+1190
+00:50:13,559 --> 00:50:19,839
+basically query times
+
+1191
+00:50:16,079 --> 00:50:23,480
+key uh query transpose times key or
+
+1192
+00:50:19,839 --> 00:50:25,760
+query dot key this is okay but the problem
+
+1193
+00:50:23,480 --> 00:50:27,280
+with this is then the query vector and
+
+1194
+00:50:25,760 --> 00:50:30,160
+the key vectors have to be in exactly
+
+1195
+00:50:27,280 --> 00:50:31,920
+the same space and that's kind of too
+
+1196
+00:50:30,160 --> 00:50:34,799
+hard of a constraint so it doesn't scale
+
+1197
+00:50:31,920 --> 00:50:38,000
+very well if you're um if you're working
+
+1198
+00:50:34,799 --> 00:50:40,839
+hard uh if you're uh like training on
+
+1199
+00:50:38,000 --> 00:50:45,400
+lots of data um then the scaled dot
+
+1200
+00:50:40,839 --> 00:50:47,880
+product um the scaled dot product here uh
+
+1201
+00:50:45,400 --> 00:50:50,079
+one problem is that the scale of the dot
+
+1202
+00:50:47,880 --> 00:50:53,680
+product increases as the dimensions get
+
+1203
+00:50:50,079 --> 00:50:55,880
+larger and so there's a fix to scale by
+
+1204
+00:50:53,680 --> 00:50:58,839
+the square root of the length of one of
+
+1205
+00:50:55,880 --> 00:51:00,680
+the vectors um and so basically you're
+
+1206
+00:50:58,839 --> 00:51:04,559
+multiplying uh you're taking the dot
+
+1207
+00:51:00,680 --> 00:51:06,559
+product but you're dividing by the uh
+
+1208
+00:51:04,559 --> 00:51:09,359
+the square root of the length of one of
+
+1209
+00:51:06,559 --> 00:51:11,839
+the vectors uh does anyone have an idea
+
+1210
+00:51:09,359 --> 00:51:13,599
+why you might take the square root here
+
+1211
+00:51:11,839 --> 00:51:16,920
+if you've taken a machine
+
+1212
+00:51:13,599 --> 00:51:20,000
+learning uh or maybe statistics class
+
+1213
+00:51:16,920 --> 00:51:20,000
+you might have a an
+
+1214
+00:51:20,599 --> 00:51:26,599
+idea any any ideas yeah it normalization
+
+1215
+00:51:24,720 --> 00:51:29,079
+to make sure
+
+1216
+00:51:26,599 --> 00:51:32,760
+because otherwise it will impact the
+
+1217
+00:51:29,079 --> 00:51:35,640
+result because we want normalize one yes
+
+1218
+00:51:32,760 --> 00:51:37,920
+so we do we do want to normalize it um
+
+1219
+00:51:35,640 --> 00:51:40,000
+and so that's the reason why we divide
+
+1220
+00:51:37,920 --> 00:51:41,920
+by the length um and that prevents it
+
+1221
+00:51:40,000 --> 00:51:43,839
+from getting too large
+
+1222
+00:51:41,920 --> 00:51:45,920
+specifically does anyone have an idea
+
+1223
+00:51:43,839 --> 00:51:49,440
+why you take the square root here as
+
+1224
+00:51:45,920 --> 00:51:49,440
+opposed to dividing just by the length
+
+1225
+00:51:52,400 --> 00:51:59,480
+overall so um this is this is pretty
+
+1226
+00:51:55,400 --> 00:52:01,720
+tough and actually uh I didn't know
+
+1227
+00:51:59,480 --> 00:52:04,359
+one of the last times I did this class
+
+1228
+00:52:01,720 --> 00:52:06,640
+uh and had to actually go look for it
+
+1229
+00:52:04,359 --> 00:52:09,000
+but basically the reason why is because
+
+1230
+00:52:06,640 --> 00:52:11,400
+if you um if you have a whole bunch of
+
+1231
+00:52:09,000 --> 00:52:12,720
+random variables so let's say you have a
+
+1232
+00:52:11,400 --> 00:52:14,040
+whole bunch of random variables no
+
+1233
+00:52:12,720 --> 00:52:15,240
+matter what kind they are as long as
+
+1234
+00:52:14,040 --> 00:52:19,680
+they're from the same distribution
+
+1235
+00:52:15,240 --> 00:52:19,680
+they're IID and you add them all
+
+1236
+00:52:20,160 --> 00:52:25,720
+together um then the variance I believe
+
+1237
+00:52:23,200 --> 00:52:27,760
+yeah the variance of this or the
+
+1238
+00:52:25,720 --> 00:52:31,119
+standard deviation maybe standard
+
+1239
+00:52:27,760 --> 00:52:33,319
+deviation of this goes uh goes up with the
+
+1240
+00:52:31,119 --> 00:52:35,640
+square root uh yeah I think standard
+
+1241
+00:52:33,319 --> 00:52:38,880
+deviation goes
+
+1242
+00:52:35,640 --> 00:52:41,040
+up so dividing by something like that would
+
+1243
+00:52:38,880 --> 00:52:44,040
+divide by the standard deviation
+
+1244
+00:52:41,040 --> 00:52:48,240
+here so it's like normalizing by
+
+1245
+00:52:44,040 --> 00:52:51,040
+that so um that's actually I
+
+1246
+00:52:48,240 --> 00:52:53,359
+don't think explicitly explained in the
+
+1247
+00:52:51,040 --> 00:52:54,720
+uh attention is all you need paper uh
+
+1248
+00:52:53,359 --> 00:52:57,920
+the Vaswani paper where they introduce
+
+1249
+00:52:54,720 --> 00:53:01,079
+this but that's the basic idea um in terms
+
+1250
+00:52:57,920 --> 00:53:03,839
+of what people use most widely nowadays
+
+1251
+00:53:01,079 --> 00:53:07,680
+um they
+
+1252
+00:53:03,839 --> 00:53:07,680
+are basically doing
+
+1253
+00:53:24,160 --> 00:53:27,160
+this
+
+1254
+00:53:30,280 --> 00:53:34,880
+so they're taking the the hidden state
+
+1255
+00:53:33,000 --> 00:53:36,599
+from the keys and multiplying it by a
+
+1256
+00:53:34,880 --> 00:53:39,440
+matrix the hidden state from the queries
+
+1257
+00:53:36,599 --> 00:53:41,680
+and multiplying it by a matrix um this
+
+1258
+00:53:39,440 --> 00:53:46,559
+is what is done in uh in
+
+1259
+00:53:41,680 --> 00:53:50,280
+Transformers and the uh and then they're
+
+1260
+00:53:46,559 --> 00:53:54,160
+using this to um they're normalizing it
+
+1261
+00:53:50,280 --> 00:53:57,160
+by this uh square root here
+
+1262
+00:53:54,160 --> 00:53:57,160
+and
+
+1263
+00:53:59,440 --> 00:54:05,040
+so this is essentially a bilinear
+
+1264
+00:54:02,240 --> 00:54:07,680
+model um it's a bilinear model that is
+
+1265
+00:54:05,040 --> 00:54:09,119
+normalized uh they call it uh scaled dot
+
+1266
+00:54:07,680 --> 00:54:11,119
+product attention but actually because
+
+1267
+00:54:09,119 --> 00:54:15,520
+they have these weight matrices uh it's
+
+1268
+00:54:11,119 --> 00:54:18,839
+a bilinear model so um that's the the
+
+1269
+00:54:15,520 --> 00:54:18,839
+most standard thing to be used nowadays
+
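+Here is a minimal NumPy sketch of that standard form, scaled dot
+product over projected hidden states; again an illustration with
+assumed names and sizes, not code from the lecture:
+
+import numpy as np
+
+def scaled_scores(h_q, H_k, W_q, W_k):
+    # Project the query-side and key-side hidden states with
+    # separate weight matrices (which is what makes this a
+    # bilinear model), then take dot products and divide by
+    # sqrt(d) so the score scale does not grow with dimension.
+    q = W_q @ h_q                   # (d,)
+    K = H_k @ W_k.T                 # (seq_len, d)
+    return K @ q / np.sqrt(len(q))  # (seq_len,)
+
+rng = np.random.default_rng(0)
+d = 8
+W_q, W_k = rng.normal(size=(d, d)), rng.normal(size=(d, d))
+print(scaled_scores(rng.normal(size=d), rng.normal(size=(5, d)), W_q, W_k))
+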
+1270
+00:54:20,200 --> 00:54:24,079
+cool any any questions about
+
+1271
+00:54:22,520 --> 00:54:27,079
+this
+
+1272
+00:54:24,079 --> 00:54:27,079
+part
+
+1273
+00:54:28,240 --> 00:54:36,559
+okay so um finally when you actually
+
+1274
+00:54:32,280 --> 00:54:36,559
+train the model um as I mentioned
+
+1275
+00:54:41,960 --> 00:54:45,680
+before right at the very
+
+1276
+00:54:48,040 --> 00:54:52,400
+beginning
+
+1277
+00:54:49,839 --> 00:54:55,760
+we when we're training an autoregressive
+
+1278
+00:54:52,400 --> 00:54:57,400
+model we don't want to be
+
+1279
+00:54:55,760 --> 00:54:59,799
+referring to the future to things in the
+
+1280
+00:54:57,400 --> 00:55:01,240
+future um because then you know
+
+1281
+00:54:59,799 --> 00:55:03,079
+basically we'd be cheating and we'd have
+
+1282
+00:55:01,240 --> 00:55:04,599
+a nonprobabilistic model it wouldn't be
+
+1283
+00:55:03,079 --> 00:55:08,960
+good when we actually have to generate
+
+1284
+00:55:04,599 --> 00:55:12,119
+left to right um and
+
+1285
+00:55:08,960 --> 00:55:15,720
+so we essentially want to prevent
+
+1286
+00:55:12,119 --> 00:55:17,480
+ourselves from using information from
+
+1287
+00:55:15,720 --> 00:55:20,319
+the
+
+1288
+00:55:17,480 --> 00:55:22,839
+future
+
+1289
+00:55:20,319 --> 00:55:24,240
+and in an unconditioned model we want to
+
+1290
+00:55:22,839 --> 00:55:27,400
+prevent ourselves from using any
+
+1291
+00:55:24,240 --> 00:55:29,680
+information in the future here um in a
+
+1292
+00:55:27,400 --> 00:55:31,520
+conditioned model we're okay with doing
+
+1293
+00:55:29,680 --> 00:55:33,480
+kind of
+
+1294
+00:55:31,520 --> 00:55:35,880
+bidirectional conditioning here to
+
+1295
+00:55:33,480 --> 00:55:37,359
+calculate the representations but we're
+
+1296
+00:55:35,880 --> 00:55:40,440
+not okay with doing it on the target
+
+1297
+00:55:37,359 --> 00:55:40,440
+side so basically what we
+
+1298
+00:55:44,240 --> 00:55:50,960
+do basically what we do is we create a
+
+1299
+00:55:47,920 --> 00:55:52,400
+mask that prevents us from attending to
+
+1300
+00:55:50,960 --> 00:55:54,559
+any of the information in the future
+
+1301
+00:55:52,400 --> 00:55:56,440
+when we're uh predicting when we're
+
+1302
+00:55:54,559 --> 00:56:00,799
+calculating the representations of the
+
+1303
+00:55:56,440 --> 00:56:04,880
+the current thing uh word and
+
+1304
+00:56:00,799 --> 00:56:08,280
+technically how we do this is we have
+
+1305
+00:56:04,880 --> 00:56:08,280
+the attention
+
+1306
+00:56:09,079 --> 00:56:13,799
+values uh like
+
+1307
+00:56:11,680 --> 00:56:15,480
+2.1
+
+1308
+00:56:13,799 --> 00:56:17,880
+attention
+
+1309
+00:56:15,480 --> 00:56:19,920
+0.3 and
+
+1310
+00:56:17,880 --> 00:56:22,480
+attention uh
+
+1311
+00:56:19,920 --> 00:56:24,960
+0.5 or something like
+
+1312
+00:56:22,480 --> 00:56:27,480
+that these are eventually going to be
+
+1313
+00:56:24,960 --> 00:56:29,799
+fed through the softmax to calculate
+
+1314
+00:56:27,480 --> 00:56:32,119
+the attention values that we use to do
+
+1315
+00:56:29,799 --> 00:56:33,680
+the weighting so what we do is any ones we
+
+1316
+00:56:32,119 --> 00:56:36,160
+don't want to attend to we just add
+
+1317
+00:56:33,680 --> 00:56:39,799
+negative infinity or add a very large
+
+1318
+00:56:36,160 --> 00:56:42,119
+negative number so we uh cross that out
+
+1319
+00:56:39,799 --> 00:56:44,000
+and set this to negative infinity and
+
+1320
+00:56:42,119 --> 00:56:45,440
+so then when we take the softmax basically
+
+1321
+00:56:44,000 --> 00:56:47,839
+the value goes to zero and we don't
+
+1322
+00:56:45,440 --> 00:56:49,359
+attend to it so um this is called the
+
+1323
+00:56:47,839 --> 00:56:53,240
+attention mask and you'll see it when
+
+1324
+00:56:49,359 --> 00:56:53,240
+you have to implement attention
+
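+A small NumPy sketch of such a causal attention mask (my own
+illustration; the score values are made up):
+
+import numpy as np
+
+def masked_softmax(scores):
+    # Causal mask: position i may only attend to positions <= i.
+    # Disallowed entries get a very large negative number, so
+    # they become (essentially) zero after the softmax.
+    n = len(scores)
+    masked = scores + np.triu(np.ones((n, n)), k=1) * -1e9
+    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
+    return e / e.sum(axis=-1, keepdims=True)
+
+scores = np.array([[2.1, 0.3, 0.5],
+                   [1.0, 0.2, 0.7],
+                   [0.1, 0.9, 0.4]])
+print(masked_softmax(scores))   # upper triangle is ~0
+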
+1325
+00:56:53,440 --> 00:56:56,880
+cool
+
+1326
+00:56:57,039 --> 00:57:00,200
+any any questions about
+
+1327
+00:57:02,079 --> 00:57:08,599
+this okay great um so next I'd like to
+
+1328
+00:57:05,839 --> 00:57:11,039
+go to applications of sequence models um
+
+1329
+00:57:08,599 --> 00:57:13,200
+there's a bunch of ways that you can use
+
+1330
+00:57:11,039 --> 00:57:16,160
+sequence models of any variety I wrote
+
+1331
+00:57:13,200 --> 00:57:18,400
+RNN here arbitrarily but it could be
+
+1332
+00:57:16,160 --> 00:57:21,720
+convolution or Transformer or anything
+
+1333
+00:57:18,400 --> 00:57:23,559
+else so the first one is encoding
+
+1334
+00:57:21,720 --> 00:57:26,839
+sequences
+
+1335
+00:57:23,559 --> 00:57:29,240
+um and essentially if you do it with an
+
+1336
+00:57:26,839 --> 00:57:31,559
+RNN this is one way you can encode a
+
+1337
+00:57:29,240 --> 00:57:35,799
+sequence basically you take the
+
+1338
+00:57:31,559 --> 00:57:36,960
+last uh value here and you use it to uh
+
+1339
+00:57:35,799 --> 00:57:40,559
+encode the
+
+1340
+00:57:36,960 --> 00:57:42,720
+output this can be used for any sort of
+
+1341
+00:57:40,559 --> 00:57:45,839
+uh like binary or multiclass prediction
+
+1342
+00:57:42,720 --> 00:57:48,280
+problem it's also right now used very
+
+1343
+00:57:45,839 --> 00:57:50,920
+widely in sentence representations for
+
+1344
+00:57:48,280 --> 00:57:54,200
+retrieval uh so for example you build a
+
+1345
+00:57:50,920 --> 00:57:55,520
+big retrieval index uh with these
+
+1346
+00:57:54,200 --> 00:57:57,920
+vectors
+
+1347
+00:57:55,520 --> 00:57:59,480
+and then uh you also
+
+1348
+00:57:57,920 --> 00:58:02,119
+encode a query and you do a vector
+
+1349
+00:57:59,480 --> 00:58:04,760
+nearest neighbor search to look up uh
+
+1350
+00:58:02,119 --> 00:58:06,760
+the most similar sentence here so this
+
+1351
+00:58:04,760 --> 00:58:10,160
+is uh these are two applications where
+
+1352
+00:58:06,760 --> 00:58:13,440
+you use something like this right on
+
+1353
+00:58:10,160 --> 00:58:15,520
+this slide I wrote that you use the last
+
+1354
+00:58:13,440 --> 00:58:17,359
+vector here but actually a lot of the
+
+1355
+00:58:15,520 --> 00:58:20,039
+time it's also a good idea to just take
+
+1356
+00:58:17,359 --> 00:58:22,599
+the mean of the vectors or take the max
+
+1357
+00:58:20,039 --> 00:58:26,640
+of all of the vectors
+
+1358
+00:58:22,599 --> 00:58:29,119
+uh in fact I would almost I would almost
+
+1359
+00:58:26,640 --> 00:58:30,520
+say that that's usually a better choice
+
+1360
+00:58:29,119 --> 00:58:32,760
+if you're doing any sort of thing where
+
+1361
+00:58:30,520 --> 00:58:35,359
+you need a single vector unless your
+
+1362
+00:58:32,760 --> 00:58:38,200
+model has been specifically trained to
+
+1363
+00:58:35,359 --> 00:58:41,480
+have good like output vectors uh from
+
+1364
+00:58:38,200 --> 00:58:44,359
+the final vector here so um you could
+
+1365
+00:58:41,480 --> 00:58:46,880
+also just take the the mean of all of
+
+1366
+00:58:44,359 --> 00:58:46,880
+the purple
+
+1367
+00:58:48,240 --> 00:58:52,960
+ones
+
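+For instance, a quick sketch of these pooling choices over encoder
+hidden states (illustrative values, not from the slides):
+
+import numpy as np
+
+# Hidden states from a sequence encoder: (seq_len, dim).
+H = np.random.default_rng(0).normal(size=(6, 4))
+
+last_state = H[-1]            # use the final vector
+mean_pooled = H.mean(axis=0)  # often a better single-vector choice
+max_pooled = H.max(axis=0)    # elementwise max over positions
+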
+1368
+00:58:50,280 --> 00:58:54,359
+um another thing you can do is encode
+tokens for sequence labeling um
+
+1369
+00:58:52,960 --> 00:58:56,200
+this can also be used for language
+
+1370
+00:58:54,359 --> 00:58:58,280
+modeling and what do I mean it can be
+
+1371
+00:58:56,200 --> 00:59:00,039
+used for language
+
+1372
+00:58:58,280 --> 00:59:03,319
+modeling
+
+1373
+00:59:00,039 --> 00:59:06,599
+basically you can view this as first
+
+1374
+00:59:03,319 --> 00:59:09,200
+running uh sequence encoding and then
+
+1375
+00:59:06,599 --> 00:59:12,319
+after that making all of the predictions
+
+1376
+00:59:09,200 --> 00:59:15,240
+um it's also a good thing to know
+
+1377
+00:59:12,319 --> 00:59:18,440
+computationally because um often you can
+
+1378
+00:59:15,240 --> 00:59:20,720
+do sequence encoding uh kind of all in
+
+1379
+00:59:18,440 --> 00:59:22,440
+parallel and yeah actually I said I was
+
+1380
+00:59:20,720 --> 00:59:23,359
+going to mention I said I was going to
+
+1381
+00:59:22,440 --> 00:59:25,079
+mention that but I don't think I
+
+1382
+00:59:23,359 --> 00:59:27,319
+actually have a slide about it but um
+
+1383
+00:59:25,079 --> 00:59:29,720
+one important thing about RNNs compared
+
+1384
+00:59:27,319 --> 00:59:33,079
+to convolution or Transformers uh sorry
+
+1385
+00:59:29,720 --> 00:59:34,839
+convolution or attention is RNNs in
+
+1386
+00:59:33,079 --> 00:59:37,440
+order to calculate this RNN you need to
+
+1387
+00:59:34,839 --> 00:59:39,599
+wait for this RNN to finish so it's
+
+1388
+00:59:37,440 --> 00:59:41,200
+sequential and you need to go like here
+
+1389
+00:59:39,599 --> 00:59:43,480
+and then here and then here and then
+
+1390
+00:59:41,200 --> 00:59:45,720
+here and then here and that's a pretty
+
+1391
+00:59:43,480 --> 00:59:48,200
+big bottleneck because uh things like
+
+1392
+00:59:45,720 --> 00:59:50,760
+GPUs or TPUs they're actually really
+
+1393
+00:59:48,200 --> 00:59:52,839
+good at doing a bunch of things at once
+
+1394
+00:59:50,760 --> 00:59:56,440
+and so attention even though its
+
+1395
+00:59:52,839 --> 00:59:57,400
+asymptotic complexity is worse O of n squared uh
+
+1396
+00:59:56,440 --> 00:59:59,319
+just because you don't have that
+
+1397
+00:59:57,400 --> 01:00:01,680
+bottleneck of doing things sequentially
+
+1398
+00:59:59,319 --> 01:00:03,640
+it can be way way faster on a GPU
+
+1399
+01:00:01,680 --> 01:00:04,960
+because you're not wasting your time
+
+1400
+01:00:03,640 --> 01:00:07,640
+waiting for the previous thing to be
+
+1401
+01:00:04,960 --> 01:00:11,039
+calculated so that's actually why uh
+
+1402
+01:00:07,640 --> 01:00:13,520
+Transformers are so fast
+
+1403
+01:00:11,039 --> 01:00:14,599
+um uh Transformers and attention models
+
+1404
+01:00:13,520 --> 01:00:17,160
+are so
+
+1405
+01:00:14,599 --> 01:00:21,119
+fast
+
+1406
+01:00:17,160 --> 01:00:23,079
+um another thing to note so that's one
+
+1407
+01:00:21,119 --> 01:00:25,039
+of the big reasons why attention models
+
+1408
+01:00:23,079 --> 01:00:27,359
+are so popular nowadays because they're fast to
+
+1409
+01:00:25,039 --> 01:00:30,200
+calculate on modern hardware another
+
+1410
+01:00:27,359 --> 01:00:33,520
+reason why attention models are popular
+
+1411
+01:00:30,200 --> 01:00:34,799
+nowadays does anyone have a um does
+
+1412
+01:00:33,520 --> 01:00:37,280
+anyone have an
+
+1413
+01:00:34,799 --> 01:00:38,839
+idea uh about another reason it's based
+
+1414
+01:00:37,280 --> 01:00:41,200
+on how easy they are to learn and
+
+1415
+01:00:38,839 --> 01:00:43,680
+there's a reason why and that reason why
+
+1416
+01:00:41,200 --> 01:00:46,240 +has to do with + +1417 +01:00:43,680 --> 01:00:48,520 +um that reason why has to do with uh + +1418 +01:00:46,240 --> 01:00:49,400 +something I introduced in this lecture + +1419 +01:00:48,520 --> 01:00:52,039 +uh + +1420 +01:00:49,400 --> 01:00:54,720 +earlier I'll give a + +1421 +01:00:52,039 --> 01:00:58,079 +hint gradients yeah more more + +1422 +01:00:54,720 --> 01:01:00,480 +specifically what what's nice about + +1423 +01:00:58,079 --> 01:01:02,920 +attention with respect to gradients or + +1424 +01:01:00,480 --> 01:01:02,920 +Vanishing + +1425 +01:01:04,119 --> 01:01:07,319 +gradients any + +1426 +01:01:07,680 --> 01:01:15,160 +ideas let's say we have a really long + +1427 +01:01:10,160 --> 01:01:17,839 +sentence it's like X1 X2 X3 + +1428 +01:01:15,160 --> 01:01:21,799 +X4 um + +1429 +01:01:17,839 --> 01:01:26,440 +X200 over here and in order to predict + +1430 +01:01:21,799 --> 01:01:26,440 +X200 you need to pay attention to X3 + +1431 +01:01:27,359 --> 01:01:29,640 +any + +1432 +01:01:33,079 --> 01:01:37,359 +ideas another another hint how many + +1433 +01:01:35,599 --> 01:01:38,960 +nonlinearities do you have to pass + +1434 +01:01:37,359 --> 01:01:41,440 +through in order to pass that + +1435 +01:01:38,960 --> 01:01:44,839 +information from X3 to + +1436 +01:01:41,440 --> 01:01:48,839 +X200 in a recurrent Network um in a + +1437 +01:01:44,839 --> 01:01:48,839 +recurrent Network or + +1438 +01:01:51,920 --> 01:01:57,160 +attention netw should be + +1439 +01:01:54,960 --> 01:02:00,680 +197 yeah in a recurrent Network it's + +1440 +01:01:57,160 --> 01:02:03,480 +basically 197 or may maybe 196 I haven't + +1441 +01:02:00,680 --> 01:02:06,319 +paid attention but every time every time + +1442 +01:02:03,480 --> 01:02:08,319 +you pass it to the hidden + +1443 +01:02:06,319 --> 01:02:10,200 +state it has to go through a + +1444 +01:02:08,319 --> 01:02:13,240 +nonlinearity so it goes through like + +1445 +01:02:10,200 --> 01:02:17,119 +1907 nonlinearities and even if you're + +1446 +01:02:13,240 --> 01:02:19,680 +using an lstm um it's still the lstm + +1447 +01:02:17,119 --> 01:02:21,559 +hidden cell is getting information added + +1448 +01:02:19,680 --> 01:02:23,400 +to it and subtracted to it and other + +1449 +01:02:21,559 --> 01:02:24,960 +things like that so it's still a bit + +1450 +01:02:23,400 --> 01:02:27,880 +tricky + +1451 +01:02:24,960 --> 01:02:27,880 +um what about + +1452 +01:02:28,119 --> 01:02:35,160 +attention yeah basically one time so + +1453 +01:02:31,520 --> 01:02:39,319 +attention um in the next layer here + +1454 +01:02:35,160 --> 01:02:41,119 +you're passing it all the way you're + +1455 +01:02:39,319 --> 01:02:45,000 +passing all of the information directly + +1456 +01:02:41,119 --> 01:02:46,480 +in and the only qualifying thing is that + +1457 +01:02:45,000 --> 01:02:47,760 +your weight has to be good it has to + +1458 +01:02:46,480 --> 01:02:49,079 +find a good attention weight so that + +1459 +01:02:47,760 --> 01:02:50,920 +it's actually paying attention to that + +1460 +01:02:49,079 --> 01:02:53,039 +information so this is actually + +1461 +01:02:50,920 --> 01:02:54,400 +discussed in the vaswani at all + +1462 +01:02:53,039 --> 01:02:57,359 +attention is all you need paper that + +1463 +01:02:54,400 --> 01:02:59,920 +introduced Transformers um convolutions + +1464 +01:02:57,359 --> 01:03:03,640 +are kind of in the middle so like let's + +1465 +01:02:59,920 --> 01:03:06,400 +say you have a convolution of length 10 + +1466 
+01:03:03,640 --> 01:03:09,880 +um and then you have two layers of it um + +1467 +01:03:06,400 --> 01:03:09,880 +if you have a convolution of length + +1468 +01:03:10,200 --> 01:03:15,880 +10 or yeah let's say you have a + +1469 +01:03:12,559 --> 01:03:18,520 +convolution of length 10 you would need + +1470 +01:03:15,880 --> 01:03:19,520 +basically you would pass from 10 + +1471 +01:03:18,520 --> 01:03:21,720 +previous + +1472 +01:03:19,520 --> 01:03:23,319 +ones and then you would pass again from + +1473 +01:03:21,720 --> 01:03:27,359 +10 previous ones and then you would have + +1474 +01:03:23,319 --> 01:03:29,160 +to go through like 16 or like I guess + +1475 +01:03:27,359 --> 01:03:31,279 +almost 20 layers of convolution in order + +1476 +01:03:29,160 --> 01:03:34,720 +to pass that information along so it's + +1477 +01:03:31,279 --> 01:03:39,200 +kind of in the middle of RNs in uh in + +1478 +01:03:34,720 --> 01:03:43,480 +lsms uh sorry RNN in attention + +1479 +01:03:39,200 --> 01:03:47,359 +Ms Yeah question so regarding how you + +1480 +01:03:43,480 --> 01:03:51,319 +have to wait for one r& the next one can + +1481 +01:03:47,359 --> 01:03:53,000 +you inflence on one RNN once it's done + +1482 +01:03:51,319 --> 01:03:54,839 +even though the next one's competing off + +1483 +01:03:53,000 --> 01:03:58,400 +that one + +1484 +01:03:54,839 --> 01:04:01,160 +yes yeah you can you can do + +1485 +01:03:58,400 --> 01:04:03,880 +inference you could is well so as long + +1486 +01:04:01,160 --> 01:04:03,880 +as + +1487 +01:04:05,599 --> 01:04:10,640 +the as long as the output doesn't affect + +1488 +01:04:08,079 --> 01:04:14,000 +the next input so in this + +1489 +01:04:10,640 --> 01:04:17,119 +case in this case because of language + +1490 +01:04:14,000 --> 01:04:19,400 +modeling or generation is because the + +1491 +01:04:17,119 --> 01:04:21,000 +output doesn't affect the ne uh because + +1492 +01:04:19,400 --> 01:04:22,440 +the output affects the next input if + +1493 +01:04:21,000 --> 01:04:26,680 +you're predicting the output you have to + +1494 +01:04:22,440 --> 01:04:28,920 +weigh if you know the output already um + +1495 +01:04:26,680 --> 01:04:30,599 +if you know the output already you could + +1496 +01:04:28,920 --> 01:04:33,599 +make the prediction at the same time + +1497 +01:04:30,599 --> 01:04:34,799 +miscalculating this next hidden State um + +1498 +01:04:33,599 --> 01:04:36,200 +so if you're just calculating the + +1499 +01:04:34,799 --> 01:04:38,559 +probability you could do that and that's + +1500 +01:04:36,200 --> 01:04:40,880 +actually where Transformers or attention + +1501 +01:04:38,559 --> 01:04:44,839 +models shine attention models actually + +1502 +01:04:40,880 --> 01:04:46,000 +aren't great for Generation Um and the + +1503 +01:04:44,839 --> 01:04:49,279 +reason why they're not great for + +1504 +01:04:46,000 --> 01:04:52,279 +generation is because they're + +1505 +01:04:49,279 --> 01:04:52,279 +um + +1506 +01:04:52,799 --> 01:04:57,680 +like when you're you're generating the + +1507 +01:04:55,039 --> 01:04:59,200 +next token you still need to wait you + +1508 +01:04:57,680 --> 01:05:00,559 +can't calculate in parallel because you + +1509 +01:04:59,200 --> 01:05:03,039 +need to generate the next token before + +1510 +01:05:00,559 --> 01:05:04,839 +you can encode the next uh the previous + +1511 +01:05:03,039 --> 01:05:07,119 +sorry need to generate the next token + +1512 +01:05:04,839 --> 01:05:08,680 +before you can encode it so you can't do + +1513 +01:05:07,119 --> 
+everything in parallel so Transformers
+
+1514
+01:05:08,680 --> 01:05:15,039
+for generation are actually
+
+1515
+01:05:10,359 --> 01:05:16,559
+slow and um there are models uh I don't
+
+1516
+01:05:15,039 --> 01:05:18,520
+know if people are using them super
+
+1517
+01:05:16,559 --> 01:05:22,200
+widely now but there were actually
+
+1518
+01:05:18,520 --> 01:05:23,640
+Transformer uh language models sorry
+
+1519
+01:05:22,200 --> 01:05:26,319
+machine translation models that were in
+
+1520
+01:05:23,640 --> 01:05:28,279
+production they had a really big strong
+
+1521
+01:05:26,319 --> 01:05:34,359
+Transformer encoder and then they had a
+
+1522
+01:05:28,279 --> 01:05:34,359
+tiny fast RNN decoder um
+
+1523
+01:05:35,440 --> 01:05:40,960
+and and if you want an actual
+
+1524
+01:05:52,000 --> 01:05:59,440
+reference there's there's
+
+1525
+01:05:55,079 --> 01:05:59,440
+this deep encoder shallow
+
+1526
+01:05:59,559 --> 01:06:05,520
+decoder um and then there's also the the
+
+1527
+01:06:03,079 --> 01:06:07,599
+Marian machine translation toolkit that
+
+1528
+01:06:05,520 --> 01:06:11,119
+supports uh supports those types of
+
+1529
+01:06:07,599 --> 01:06:13,839
+things as well so um it's also the
+
+1530
+01:06:11,119 --> 01:06:16,200
+reason why uh if you're using if you're
+
+1531
+01:06:13,839 --> 01:06:18,839
+using uh like the GPT models through the
+
+1532
+01:06:16,200 --> 01:06:21,680
+API that decoding is more expensive
+
+1533
+01:06:18,839 --> 01:06:21,680
+right like
+
+1534
+01:06:22,119 --> 01:06:27,960
+encoding I forget exactly is it 0.03
+
+1535
+01:06:26,279 --> 01:06:30,839
+cents for 1,000 tokens for encoding and
+
+1536
+01:06:27,960 --> 01:06:33,039
+0.06 cents for 1,000 tokens for decoding
+
+1537
+01:06:30,839 --> 01:06:34,799
+in like GPT-4 or something like this the
+
+1538
+01:06:33,039 --> 01:06:36,839
+reason why is precisely that just
+
+1539
+01:06:34,799 --> 01:06:37,760
+because it's so much more expensive to
+
+1540
+01:06:36,839 --> 01:06:41,599
+to run the
+
+1541
+01:06:37,760 --> 01:06:45,160
+decoder um cool I have a few final
+
+1542
+01:06:41,599 --> 01:06:47,039
+things also about efficiency so um these
+
+1543
+01:06:45,160 --> 01:06:50,720
+go back to the efficiency things that I
+
+1544
+01:06:47,039 --> 01:06:52,279
+talked about last time um handling mini
+
+1545
+01:06:50,720 --> 01:06:54,440
+batching so what do we have to do when
+
+1546
+01:06:52,279 --> 01:06:56,359
+we're handling mini batching if we were
+
+1547
+01:06:54,440 --> 01:06:59,440
+handling mini batching in feed forward
+
+1548
+01:06:56,359 --> 01:07:02,880
+networks it's actually relatively easy
+
+1549
+01:06:59,440 --> 01:07:04,880
+um because all of our computations
+
+1550
+01:07:02,880 --> 01:07:06,400
+are the same shape so we just
+
+1551
+01:07:04,880 --> 01:07:09,359
+concatenate them all together into a big
+
+1552
+01:07:06,400 --> 01:07:11,000
+tensor and run uh run over it uh we saw
+
+1553
+01:07:09,359 --> 01:07:12,599
+mini batching makes things much faster
+
+1554
+01:07:11,000 --> 01:07:15,160
+but mini batching in sequence modeling
+
+1555
+01:07:12,599 --> 01:07:17,240
+is harder than in feed forward networks
+
+1556
+01:07:15,160 --> 01:07:20,240
+um one reason is in RNNs each word
+
+1557
+01:07:17,240 --> 01:07:22,680
+depends on the previous word um also
+
+1558
+01:07:20,240 --> 01:07:26,359
+because sequences are of various
+
+1559
+01:07:22,680 --> 01:07:30,279
+lengths so so what we do to handle this
+
+1560
+01:07:26,359 --> 01:07:33,480
+is uh we do padding and masking uh
+
+1561
+01:07:30,279 --> 01:07:35,680
+so we can do padding like this uh so we
+
+1562
+01:07:33,480 --> 01:07:37,279
+just add an extra token at the end to
+
+1563
+01:07:35,680 --> 01:07:40,440
+make all of the sequences the same
+
+1564
+01:07:37,279 --> 01:07:44,480
+length um if we are doing an encoder
+
+1565
+01:07:40,440 --> 01:07:47,160
+decoder style model uh where we have an
+
+1566
+01:07:44,480 --> 01:07:48,440
+input and then we want to generate all
+
+1567
+01:07:47,160 --> 01:07:50,640
+the outputs based on the input one of
+
+1568
+01:07:48,440 --> 01:07:54,920
+the easy things is to add pads to the
+
+1569
+01:07:50,640 --> 01:07:56,520
+beginning um and then so yeah it doesn't
+
+1570
+01:07:54,920 --> 01:07:58,000
+really matter but you can add pads to
+
+1571
+01:07:56,520 --> 01:07:59,440
+the beginning so they're all starting at
+
+1572
+01:07:58,000 --> 01:08:03,079
+the same place especially if you're
+
+1573
+01:07:59,440 --> 01:08:05,799
+using RNN style models um then we
+
+1574
+01:08:03,079 --> 01:08:08,920
+calculate the loss over the output for
+
+1575
+01:08:05,799 --> 01:08:11,000
+example we multiply the loss by a mask
+
+1576
+01:08:08,920 --> 01:08:13,480
+to remove the loss over the tokens that
+
+1577
+01:08:11,000 --> 01:08:16,880
+we don't care about and we take the sum
+
+1578
+01:08:13,480 --> 01:08:19,120
+of these and so luckily most of this is
+
+1579
+01:08:16,880 --> 01:08:20,719
+implemented in for example PyTorch or
+
+1580
+01:08:19,120 --> 01:08:22,279
+Hugging Face Transformers already so you
+
+1581
+01:08:20,719 --> 01:08:23,560
+don't need to worry about it but it is a
+
+1582
+01:08:22,279 --> 01:08:24,799
+good idea to know what's going on under
+
+1583
+01:08:23,560 --> 01:08:28,560
+the hood if you want to implement
+
+1584
+01:08:24,799 --> 01:08:32,440
+anything unusual
+
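+A tiny NumPy sketch of that loss masking step (the loss values here
+are made up for illustration):
+
+import numpy as np
+
+# Token-level losses for a padded minibatch: (batch, max_len).
+losses = np.array([[2.3, 1.1, 0.7, 0.2],
+                   [1.9, 0.5, 0.0, 0.0]])   # second row has 2 pads
+# 1 for real tokens, 0 for padding.
+mask = np.array([[1, 1, 1, 1],
+                 [1, 1, 0, 0]], dtype=float)
+# Zero out the loss on pad tokens, then take the sum.
+total_loss = (losses * mask).sum()
+print(total_loss)
+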
+1585
+01:08:28,560 --> 01:08:35,600
+and also um it's good to know for the
+following reason also
+
+1586
+01:08:32,440 --> 01:08:38,799
+which is bucketing and
+
+1587
+01:08:35,600 --> 01:08:40,319
+sorting so if we use sentences of vastly
+
+1588
+01:08:38,799 --> 01:08:43,359
+different lengths and we put them in the
+
+1589
+01:08:40,319 --> 01:08:46,640
+same mini batch this can uh waste a
+
+1590
+01:08:43,359 --> 01:08:48,000
+really large amount of computation so
+
+1591
+01:08:46,640 --> 01:08:50,759
+like let's say we're processing
+
+1592
+01:08:48,000 --> 01:08:52,480
+documents or movie reviews or something
+
+1593
+01:08:50,759 --> 01:08:54,799
+like that and most movie
+
+1594
+01:08:52,480 --> 01:08:57,719
+reviews are like
+
+1595
+01:08:54,799 --> 01:09:00,080
+10 words long but you have one movie
+
+1596
+01:08:57,719 --> 01:09:02,319
+review in your mini batch of uh a
+
+1597
+01:09:00,080 --> 01:09:04,359
+thousand words so basically what that
+
+1598
+01:09:02,319 --> 01:09:08,279
+means is you're padding most of your
+
+1599
+01:09:04,359 --> 01:09:11,120
+sequences 990 times to process 10
+
+1600
+01:09:08,279 --> 01:09:12,120
+sequences which is like a lot of waste
+
+1601
+01:09:11,120 --> 01:09:14,000
+right because you're running them all
+
+1602
+01:09:12,120 --> 01:09:16,799
+through your GPU and other things like
+
+1603
+01:09:14,000 --> 01:09:19,080
+that so one way to remedy this is to
+
+1604
+01:09:16,799 --> 01:09:22,719
+sort sentences so similarly length
+
+1605
+01:09:19,080 --> 01:09:27,480
+sentences are in the same batch so you
+
+1606
+01:09:22,719 --> 01:09:29,920
+uh you first sort before building all of
+
+1607
+01:09:27,480 --> 01:09:31,640
+your batches and then uh that makes it
+
+1608
+01:09:29,920 --> 01:09:32,960
+so that similarly sized ones are in the
+
+1609
+01:09:31,640 --> 01:09:35,239
+same
+
+1610
+01:09:32,960 --> 01:09:37,040
+batch this goes into the problem that I
+
+1611
+01:09:35,239 --> 01:09:39,359
+mentioned before but only in passing
+
+1612
+01:09:37,040 --> 01:09:42,440
+which is uh let's say you're calculating
+
+1613
+01:09:39,359 --> 01:09:44,199
+your batch based on the number of
+
+1614
+01:09:42,440 --> 01:09:47,679
+sequences that you're
+
+1615
+01:09:44,199 --> 01:09:51,400
+processing if you say okay I want 64
+
+1616
+01:09:47,679 --> 01:09:53,359
+sequences in my mini batch um if most of
+
+1617
+01:09:51,400 --> 01:09:55,159
+the time those 64 sequences are are 10
+
+1618
+01:09:53,359 --> 01:09:57,480
+tokens that's fine but then when you get
+
+1619
+01:09:55,159 --> 01:10:01,440
+the one mini batch that has a thousand
+
+1620
+01:09:57,480 --> 01:10:02,760
+tokens in each sentence or each sequence
+
+1621
+01:10:01,440 --> 01:10:04,920
+um suddenly you're going to run out of
+
+1622
+01:10:02,760 --> 01:10:07,800
+GPU memory and your like training is
+
+1623
+01:10:04,920 --> 01:10:08,920
+going to crash right and you really
+
+1624
+01:10:07,800 --> 01:10:10,440
+don't want that to happen when you
+
+1625
+01:10:08,920 --> 01:10:12,440
+started running your homework assignment
+
+1626
+01:10:10,440 --> 01:10:15,560
+and then went to bed and then wake up
+
+1627
+01:10:12,440 --> 01:10:18,440
+and it crashed you know uh 15 minutes
+
+1628
+01:10:15,560 --> 01:10:21,040
+into computing or something so uh this
+
+1629
+01:10:18,440 --> 01:10:23,440
+is an important thing to be aware of
+
+1630
+01:10:21,040 --> 01:10:26,760
+practically uh again this can be solved
+
+1631
+01:10:23,440 --> 01:10:29,239
+by a lot of toolkits like I know fairseq uh
+
+1632
+01:10:26,760 --> 01:10:30,840
+does it and Hugging Face does it if you
+
+1633
+01:10:29,239 --> 01:10:33,159
+set the appropriate settings but it's
+
+1634
+01:10:30,840 --> 01:10:36,239
+something you should be aware of um
+
+1635
+01:10:33,159 --> 01:10:37,880
+another note is that if you do this it's
+
+1636
+01:10:36,239 --> 01:10:41,280
+reducing the randomness in your
+
+1637
+01:10:37,880 --> 01:10:42,880
+distribution of data so um stochastic
+
+1638
+01:10:41,280 --> 01:10:44,520
+gradient descent is really heavily
+
+1639
+01:10:42,880 --> 01:10:47,480
+reliant on the fact that your ordering
+
+1640
+01:10:44,520 --> 01:10:49,440
+of data is randomized or at least it's
+
+1641
+01:10:47,480 --> 01:10:52,159
+distributed appropriately so it's
+
+1642
+01:10:49,440 --> 01:10:56,840
+something to definitely be aware of um
+
+1643
+01:10:52,159 --> 01:10:59,560
+so uh this is a good thing to to think
+
+1644
+01:10:56,840 --> 01:11:01,400
+about
+
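+Here is a rough sketch of length bucketing, not tied to any specific
+toolkit and with an invented token budget; it sorts by length and
+then batches by a token budget rather than a fixed sentence count,
+so the padded tensor size stays roughly constant:
+
+def make_batches(sentences, max_tokens=256):
+    batches, batch, longest = [], [], 0
+    for s in sorted(sentences, key=len):
+        longest = max(longest, len(s))
+        # cost of the padded batch if we added this sentence
+        if (len(batch) + 1) * longest > max_tokens and batch:
+            batches.append(batch)
+            batch, longest = [], len(s)
+        batch.append(s)
+    if batch:
+        batches.append(batch)
+    # (shuffling the order of these batches afterwards restores
+    # some of the randomness that SGD relies on)
+    return batches
+
+sents = [["w"] * n for n in (3, 10, 9, 1000, 8, 2)]
+print([len(b) for b in make_batches(sents)])   # [5, 1]
+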
+1645
+01:10:59,560 --> 01:11:03,800
+another really useful thing to think
+about is strided
+
+1646
+01:11:01,400 --> 01:11:05,440
+architectures um strided architectures
+
+1647
+01:11:03,800 --> 01:11:07,520
+appear in RNNs they appear in
+
+1648
+01:11:05,440 --> 01:11:10,080
+convolution they appear in uh
+
+1649
+01:11:07,520 --> 01:11:12,320
+Transformers or attention based models
+
+1650
+01:11:10,080 --> 01:11:15,199
+um they're called different things in
+
+1651
+01:11:12,320 --> 01:11:18,159
+each of them so in RNNs they're called
+
+1652
+01:11:15,199 --> 01:11:21,280
+pyramidal RNNs in convolution they're
+
+1653
+01:11:18,159 --> 01:11:22,400
+called strided architectures and in
+
+1654
+01:11:21,280 --> 01:11:25,080
+attention they're called sparse
+
+1655
+01:11:22,400 --> 01:11:27,440
+attention usually they all actually kind
+
+1656
+01:11:25,080 --> 01:11:30,800
+of mean the same thing um and basically
+
+1657
+01:11:27,440 --> 01:11:33,440
+what they mean is you don't you have a
+
+1658
+01:11:30,800 --> 01:11:37,040
+multi-layer model and when you have a
+
+1659
+01:11:33,440 --> 01:11:40,920
+multi-layer model you don't process
+
+1660
+01:11:37,040 --> 01:11:43,920
+every input uh from the uh from the
+
+1661
+01:11:40,920 --> 01:11:45,560
+previous layer so here's an example um
+
+1662
+01:11:43,920 --> 01:11:47,840
+like let's say you have a whole bunch of
+
+1663
+01:11:45,560 --> 01:11:50,199
+inputs um each of the inputs is
+
+1664
+01:11:47,840 --> 01:11:53,159
+processed in the first layer in some way
+
+1665
+01:11:50,199 --> 01:11:56,639
+but in the second layer you actually
+
+1666
+01:11:53,159 --> 01:12:01,520
+input for example uh two inputs to the
+
+1667
+01:11:56,639 --> 01:12:03,560
+RNN but you you skip so you have one
+
+1668
+01:12:01,520 --> 01:12:05,440
+state that corresponds to state number
+
+1669
+01:12:03,560 --> 01:12:06,840
+one and two another state that
+
+1670
+01:12:05,440 --> 01:12:08,440
+corresponds to state number two and
+
+1671
+01:12:06,840 --> 01:12:10,920
+three another state that corresponds to
+
+1672
+01:12:08,440 --> 01:12:13,280
+state number three and four so what that
+
+1673
+01:12:10,920 --> 01:12:15,199
+means is you can gradually decrease the
+
+1674
+01:12:13,280 --> 01:12:18,199
+number like the length of the sequence
+
+1675
+01:12:15,199 --> 01:12:20,719
+every time you process so uh this is a
+
+1676
+01:12:18,199 --> 01:12:22,360
+really useful thing to do if you're
+
+1677
+01:12:20,719 --> 01:12:25,480
+processing very long sequences so you
+
+1678
+01:12:22,360 --> 01:12:25,480
+should be aware of it
+
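+One simple reading of this idea, sketched in NumPy (this variant
+concatenates non-overlapping pairs of states to halve the length,
+as in pyramidal RNNs; the sizes are made up, and the lecture's
+overlapping-pairs description admits other variants too):
+
+import numpy as np
+
+# Hidden states from the first layer: (seq_len, dim).
+H = np.random.default_rng(0).normal(size=(8, 4))
+
+# Form adjacent pairs (1,2), (2,3), (3,4), ... then keep every
+# other pair, so the next layer sees half as many positions.
+pairs = np.concatenate([H[:-1], H[1:]], axis=-1)  # (7, 8)
+downsampled = pairs[::2]                          # (4, 8)
+print(downsampled.shape)
+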
+1679
+01:12:27,440 --> 01:12:34,120
+cool um everything
+
+1680
+01:12:30,639 --> 01:12:36,920
+okay okay the final thing is truncated
+
+1681
+01:12:34,120 --> 01:12:39,239
+back propagation through time and uh
+
+1682
+01:12:36,920 --> 01:12:41,000
+truncated back propagation through time
+
+1683
+01:12:39,239 --> 01:12:43,560
+what this is doing is basically you do
+
+1684
+01:12:41,000 --> 01:12:46,120
+backprop over shorter segments but
+
+1685
+01:12:43,560 --> 01:12:47,840
+you initialize with the state from the
+
+1686
+01:12:46,120 --> 01:12:51,040
+previous
+
+1687
+01:12:47,840 --> 01:12:52,440
+segment and the way this works is uh
+
+1688
+01:12:51,040 --> 01:12:56,080
+like for example if you're running an
+
+1689
+01:12:52,440 --> 01:12:57,600
+RNN uh you would run the RNN over the
+
+1690
+01:12:56,080 --> 01:12:59,400
+previous segment maybe it's length four
+
+1691
+01:12:57,600 --> 01:13:02,120
+maybe it's length 400 it doesn't really
+
+1692
+01:12:59,400 --> 01:13:04,520
+matter but it's uh a coherent length
+
+1693
+01:13:02,120 --> 01:13:06,360
+segment and then when you do the next
+
+1694
+01:13:04,520 --> 01:13:08,840
+segment what you do is you only pass the
+
+1695
+01:13:06,360 --> 01:13:12,960
+hidden state but you throw away the rest
+
+1696
+01:13:08,840 --> 01:13:16,360
+of the previous computation graph and
+
+1697
+01:13:12,960 --> 01:13:18,040
+then walk through uh like this uh so you
+
+1698
+01:13:16,360 --> 01:13:22,159
+won't actually be updating the
+
+1699
+01:13:18,040 --> 01:13:24,080
+parameters of this based on the result
+
+1700
+01:13:22,159 --> 01:13:25,800
+the loss from this but you're still
+
+1701
+01:13:24,080 --> 01:13:28,159
+passing the information so this can use
+
+1702
+01:13:25,800 --> 01:13:30,400
+the information from the previous state
+
+1703
+01:13:28,159 --> 01:13:32,239
+so this is an example from RNNs this is
+
+1704
+01:13:30,400 --> 01:13:35,159
+used pretty widely in RNNs but there's
+
+1705
+01:13:32,239 --> 01:13:38,000
+also a lot of Transformer architectures
+
+1706
+01:13:35,159 --> 01:13:39,400
+that do things like this um the original
+
+1707
+01:13:38,000 --> 01:13:41,000
+one is something called Transformer-
+
+1708
+01:13:39,400 --> 01:13:44,560
+XL that was actually created here at
+
+1709
+01:13:41,000 --> 01:13:46,560
+CMU but this is also um used in the new
+
+1710
+01:13:44,560 --> 01:13:48,719
+Mistral models and other things like this
+
+1711
+01:13:46,560 --> 01:13:51,719
+as well so um it's something that's
+
+1712
+01:13:48,719 --> 01:13:54,719
+still very much alive and well nowadays
+
+1713
+01:13:51,719 --> 01:13:56,320
+as well
+
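+A minimal PyTorch sketch of truncated backpropagation through time,
+assuming an nn.RNN-style model; the model, sizes, and loss here are
+illustrative stand-ins, not the lecture's code:
+
+import torch
+
+rnn = torch.nn.RNN(input_size=8, hidden_size=16, batch_first=True)
+head = torch.nn.Linear(16, 8)
+opt = torch.optim.SGD(list(rnn.parameters()) + list(head.parameters()), lr=0.1)
+
+x = torch.randn(1, 12, 8)       # one long sequence, split into segments
+h = None
+for seg in x.split(4, dim=1):   # segment length 4 (could be 400)
+    out, h = rnn(seg, h)
+    loss = head(out).pow(2).mean()   # stand-in loss
+    opt.zero_grad()
+    loss.backward()
+    opt.step()
+    # Keep the hidden state's values but cut the computation graph,
+    # so gradients never flow back into previous segments.
+    h = h.detach()
+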
+1714
+01:13:54,719 --> 01:13:57,840
+cool um that's all I have for today are
+
+1715
+01:13:56,320 --> 01:13:59,760
+there any questions people want to ask
+
+1716
+01:13:57,840 --> 01:14:02,760
+before we wrap
+
+1717
+01:13:59,760 --> 01:14:02,760
+up
+
+1718
+01:14:12,840 --> 01:14:20,000
+yeah uh yeah so for conditioned
+
+1719
+01:14:16,960 --> 01:14:25,040
+prediction what is source X and target Y
+
+1720
+01:14:20,000 --> 01:14:26,520
+um I think I kind of maybe carried over
+
+1721
+01:14:25,040 --> 01:14:28,679
+uh some terminology from machine
+
+1722
+01:14:26,520 --> 01:14:31,400
+translation uh by accident maybe it
+
+1723
+01:14:28,679 --> 01:14:34,080
+should be input X and output Y uh that
+
+1724
+01:14:31,400 --> 01:14:36,600
+would be a better way to put it and so
+
+1725
+01:14:34,080 --> 01:14:38,080
+uh it could be anything for translation
+
+1726
+01:14:36,600 --> 01:14:39,560
+it's like something in the source
+
+1727
+01:14:38,080 --> 01:14:42,600
+language and something in the target
+
+1728
+01:14:39,560 --> 01:14:44,520
+language so like English and Japanese um
+
+1729
+01:14:42,600 --> 01:14:47,280
+if it's just a regular language model it
+
+1730
+01:14:44,520 --> 01:14:50,560
+could be something like a prompt and the
+
+1731
+01:14:47,280 --> 01:14:55,280
+output so for
+
+1732
+01:14:50,560 --> 01:14:55,280
+unconditioned what's an example of that
+
+1733
+01:14:57,400 --> 01:15:01,400
+yeah so for unconditioned prediction
+
+1734
+01:14:59,760 --> 01:15:03,840
+that could just be straight up language
+
+1735
+01:15:01,400 --> 01:15:07,040
+modeling for example so um language
+
+1736
+01:15:03,840 --> 01:15:11,840
+modeling with no not necessarily any
+
+1737
+01:15:07,040 --> 01:15:11,840
+prompts okay thanks and anything
+
+1738
+01:15:12,440 --> 01:15:17,880
+else okay great thanks a lot I'm happy
+
+1739
+01:15:14,639 --> 01:15:17,880
+to take questions
+
+1740
+01:15:18,639 --> 01:15:21,639
+too
\ No newline at end of file
diff --git a/CMU Advanced NLP 2024 (4) Sequence Modeling/transcript.vtt b/CMU Advanced NLP 2024 (4) Sequence Modeling/transcript.vtt
new file mode 100644
index 0000000000000000000000000000000000000000..f68a268db039bf652aeda95cdd36e853a1910f07
--- /dev/null
+++ b/CMU Advanced NLP 2024 (4) Sequence Modeling/transcript.vtt
@@ -0,0 +1,5221 @@
+WEBVTT
+
+00:00:00.040 --> 00:00:06.600
+started in a moment uh since it's now uh
+
+00:00:03.959 --> 00:00:08.839
+12:30 are there any questions before we
+
+00:00:06.600 --> 00:00:08.839
+get
+
+00:00:11.840 --> 00:00:17.240
+started okay I don't see I don't see any
+
+00:00:14.679 --> 00:00:18.640
+so I guess we can uh jump right in this
+
+00:00:17.240 --> 00:00:22.080
+time I'll be talking about sequence
+
+00:00:18.640 --> 00:00:24.560
+modeling in NLP first I'm going to be
+
+00:00:22.080 --> 00:00:26.359
+talking about uh why why we do sequence
+
+00:00:24.560 --> 00:00:29.160
+modeling what varieties of sequence
+
+00:00:26.359 --> 00:00:31.199
+modeling exist and then after that I'm
+
+00:00:29.160 --> 00:00:34.120
+going to talk about kind of three basic
+
+00:00:31.199 --> 00:00:36.320
+techniques for sequence modeling namely
+
+00:00:34.120 --> 00:00:38.879
+recurrent neural networks convolutional
+
+00:00:36.320 --> 00:00:38.879
+networks and
+
+00:00:39.360 --> 00:00:44.079
+attention so when we talk about sequence
+
+00:00:41.920 --> 00:00:46.680
+modeling in NLP I've kind of already
+
+00:00:44.079 --> 00:00:50.039
+made the motivation for doing this but
+
+00:00:46.680 --> 00:00:51.920
+basically NLP is full of sequential data
+
+00:00:50.039 --> 00:00:56.120
+and this can be everything from words
+
+00:00:51.920 --> 00:00:59.399
+and sentences or tokens and sentences to
+
+00:00:56.120 --> 00:01:01.920
+uh characters and words to sentences in
+
+00:00:59.399 --> 00:01:04.640
+a discourse or a paragraph or a
+
+00:01:01.920 --> 00:01:06.640
+document um it can also be multiple
+
+00:01:04.640 --> 00:01:08.840
+documents in time multiple social media
+
+00:01:06.640 --> 00:01:12.320
+posts whatever else you want there's
+
+00:01:08.840 --> 00:01:15.159
+just you know sequences all over
+
+00:01:12.320 --> 00:01:16.640
+NLP and I mentioned this uh last time
+
+00:01:15.159 --> 00:01:19.240
+also but there's also long-distance
+
+00:01:16.640 --> 00:01:20.840
+dependencies in language so uh just to
+
+00:01:19.240 --> 00:01:23.720
+give an example there's agreement in
+
+00:01:20.840 --> 00:01:25.799
+number uh gender etc so in order to
+
+00:01:23.720 --> 00:01:28.439
+create a fluent language model you'll
+
+00:01:25.799 --> 00:01:30.320
+have to handle this agreement so if
+
+00:01:28.439 --> 00:01:32.920
+you say he does not have very much
+
+00:01:30.320 --> 00:01:35.280
+confidence in uh it would have to be
+
+00:01:32.920 --> 00:01:36.680
+himself but if you say she does not have
+
+00:01:35.280 --> 00:01:39.360
+very much confidence in it would have to
+
+00:01:36.680 --> 00:01:41.360
+be herself and this is this gender
+
+00:01:39.360 --> 00:01:44.159
+agreement is not super frequent in
+
+00:01:41.360 --> 00:01:47.600
+English but it's very frequent in other
+
+00:01:44.159 --> 00:01:50.119
+languages like French or uh you know
+
+00:01:47.600 --> 00:01:51.759
+most languages in the world in some uh
+
+00:01:50.119 --> 00:01:53.799
+way or
+
+00:01:51.759 --> 00:01:55.320
+another then separately from that you
+
+00:01:53.799 --> 00:01:58.520
+also have things like selectional
+
+00:01:55.320 --> 00:02:00.119
+preferences um like the reign has lasted
+
+00:01:58.520 --> 00:02:01.799
+as long as the life of the queen and the
+
+00:02:00.119 --> 00:02:04.439
+rain has lasted as long as the life of
+
+00:02:01.799 --> 00:02:07.360
+the clouds uh in American English the
+
+00:02:04.439 --> 00:02:09.119
+only way you could know uh which word
+
+00:02:07.360 --> 00:02:13.520
+came beforehand if you were doing speech
+
+00:02:09.119 --> 00:02:17.400
+recognition is if you uh like had that
+
+00:02:13.520 --> 00:02:20.319
+kind of semantic uh idea of uh that
+
+00:02:17.400 --> 00:02:22.040
+these agree with each other in some way
+
+00:02:20.319 --> 00:02:23.920
+and there's also factual knowledge
+
+00:02:22.040 --> 00:02:27.680
+there's all kinds of other things uh
+
+00:02:23.920 --> 00:02:27.680
+that you need to carry over long
+
+00:02:28.319 --> 00:02:33.800
+contexts um these can be
+
+00:02:30.840 --> 00:02:36.360
+complicated so this is a a nice example
+
+00:02:33.800 --> 00:02:39.400
+so if we try to figure out what it
+
+00:02:36.360 --> 00:02:41.239
+refers to here uh the trophy would not
+
+00:02:39.400 --> 00:02:45.680
+fit in the brown suitcase because it was
+
+00:02:41.239 --> 00:02:45.680
+too big what is it
+
+00:02:46.680 --> 00:02:51.360
+here the trophy yeah and then what about
+
+00:02:49.879 --> 00:02:53.120
+uh the trophy would not fit in the brown
+
+00:02:51.360 --> 00:02:57.080
+suitcase because it was too
+
+00:02:53.120 --> 00:02:58.680
+small the suitcase right um does anyone
+
+00:02:57.080 --> 00:03:01.760
+know what the name of something like
+
+00:02:58.680 --> 00:03:01.760
+this is
+
+00:03:03.599 --> 00:03:07.840
+has anyone heard of this challenge uh
+
+00:03:09.280 --> 00:03:14.840
+before no one okay um this this is
+
+00:03:12.239 --> 00:03:17.200
+called the Winograd schema challenge or
+
+00:03:14.840 --> 00:03:22.760
+these are called Winograd schemas and
+
+00:03:17.200 --> 00:03:26.319
+basically Winograd schemas are a type
+
+00:03:22.760 --> 00:03:29.280
+of they're a type of kind of linguistic
+
+00:03:26.319 --> 00:03:30.439
+challenge where you create two paired uh
+
+00:03:29.280 --> 00:03:33.799
+examples
+
+00:03:30.439 --> 00:03:37.360
+that you vary in very minimal ways where
+
+00:03:33.799 --> 00:03:40.599
+the answer differs between the two um
+
+00:03:37.360 --> 00:03:42.000
+and so uh there's lots of other examples
+
+00:03:40.599 --> 00:03:44.080
+about how you can create these things
+
+00:03:42.000 --> 00:03:45.720
+and they're good for testing uh whether
+
+00:03:44.080 --> 00:03:48.239
+language models are able to do things
+
+00:03:45.720 --> 00:03:50.920
+because they're able to uh kind of
+
+00:03:48.239 --> 00:03:54.239
+control for the fact that you know like
+
+00:03:50.920 --> 00:04:01.079
+the answer might be
+
+00:03:54.239 --> 00:04:03.000
+um the answer might be very uh like
+
+00:04:01.079 --> 00:04:04.560
+more frequent or less frequent and so
+
+00:04:03.000 --> 00:04:07.720
+the language model could just pick that
+
+00:04:04.560 --> 00:04:11.040
+so another example is we uh we came up
+
+00:04:07.720 --> 00:04:12.239
+with a benchmark of figurative language
+
+00:04:11.040 --> 00:04:14.239
+where we tried to figure out whether
+
+00:04:12.239 --> 00:04:17.160
+language models would be able
+
+00:04:14.239 --> 00:04:19.720
+to interpret figurative language
+
+00:04:17.160 --> 00:04:22.720
+and I actually have the multilingual uh
+
+00:04:19.720 --> 00:04:24.160
+version on the suggested projects uh on
+
+00:04:22.720 --> 00:04:26.240
+the Piazza oh yeah that's one
+
+00:04:24.160 --> 00:04:28.360
+announcement I posted a big list of
+
+00:04:26.240 --> 00:04:30.080
+suggested projects on Piazza I think a lot
+
+00:04:28.360 --> 00:04:31.639
+of people saw it you don't have to
+
+00:04:30.080 --> 00:04:33.160
+follow these but if you're interested in
+
+00:04:31.639 --> 00:04:34.440
+them feel free to talk to the contacts
+and we can give you more information
+
+00:04:34.440 --> 00:04:41.039
+about them um but anyway uh so in this
+
+00:04:38.880 --> 00:04:43.080
+data set what we did is we came up with
+
+00:04:41.039 --> 00:04:46.039
+some figurative language like this movie
+
+00:04:43.080 --> 00:04:47.880
+had the depth of a wading pool and
+
+00:04:46.039 --> 00:04:50.919
+this movie had the depth of a diving
+
+00:04:47.880 --> 00:04:54.120
+pool and so then after that you would
+
+00:04:50.919 --> 00:04:56.199
+have two choices this movie was uh this
+
+00:04:54.120 --> 00:04:58.400
+movie was very deep and interesting this
+
+00:04:56.199 --> 00:05:01.000
+movie was not very deep and interesting
+
+00:04:58.400 --> 00:05:02.800
+and so you have these like like two
+
+00:05:01.000 --> 00:05:04.759
+pairs of questions and answers and you
+
+00:05:02.800 --> 00:05:06.240
+need to decide between them and
+
+00:05:04.759 --> 00:05:07.759
+depending on what the input is the
+
+00:05:06.240 --> 00:05:10.639
+output will change and so that's a good
+
+00:05:07.759 --> 00:05:11.919
+way to control for um and test whether
+
+00:05:10.639 --> 00:05:13.600
+language models really understand
+
+00:05:11.919 --> 00:05:15.080
+something so if you're interested in
+
+00:05:13.600 --> 00:05:17.199
+benchmarking or other things like that
+
+00:05:15.080 --> 00:05:19.160
+it's a good paradigm to think about
+
+00:05:17.199 --> 00:05:22.759
+anyway that's a little bit of an aside
+
+00:05:19.160 --> 00:05:25.960
+um so now I'd like to go on to types of
+
+00:05:22.759 --> 00:05:28.360
+sequential prediction problems
+
+00:05:25.960 --> 00:05:30.880
+and types of prediction problems in
+
+00:05:28.360 --> 00:05:32.560
+general uh binary and multiclass we
+
+00:05:30.880 --> 00:05:35.240
+already talked about that's when we're
+
+00:05:32.560 --> 00:05:37.199
+doing for example uh classification
+
+00:05:35.240 --> 00:05:38.960
+between two classes or classification
+
+00:05:37.199 --> 00:05:41.280
+between multiple
+
+00:05:38.960 --> 00:05:42.880
+classes but there's also another variety
+
+00:05:41.280 --> 00:05:45.120
+of prediction called structured
+
+00:05:42.880 --> 00:05:47.120
+prediction and structured prediction is
+
+00:05:45.120 --> 00:05:49.639
+when you have a very large number of
+
+00:05:47.120 --> 00:05:53.680
+labels it's not you know a small finite number
+
+00:05:49.639 --> 00:05:56.560
+of labels and uh so that would be
+
+00:05:53.680 --> 00:05:58.160
+something like uh if you take in an
+
+00:05:56.560 --> 00:06:00.680
+input and you want to predict all of the
+
+00:05:58.160 --> 00:06:04.000
+parts of speech of all the words in the
+
+00:06:00.680 --> 00:06:06.840
+input and if you had like 50 parts of
+
+00:06:04.000 --> 00:06:09.039
+speech the number of labels that you
+
+00:06:06.840 --> 00:06:11.360
+would have for each sentence
+
+00:06:09.039 --> 00:06:15.280
+is any any
+
+00:06:11.360 --> 00:06:17.919
+ideas 50 50 parts of speech and like
+
+00:06:15.280 --> 00:06:17.919
+let's say four
+
+00:06:19.880 --> 00:06:31.400
+words um it it's every combination
+
+00:06:26.039 --> 00:06:31.400
+of parts of speech for every word so
+
+00:06:32.039 --> 00:06:38.440
+uh close but maybe the opposite it's uh
+
+00:06:35.520 --> 00:06:40.720
+50 to the four because you have 50 50
+
+00:06:38.440 --> 00:06:42.400
+choices here 50 choices here so it's a
+
+00:06:40.720 --> 00:06:45.599
+cross product of all of the
+
+00:06:42.400 --> 00:06:48.560
+choices um and so that becomes very
+
+00:06:45.599 --> 00:06:50.280
+quickly untenable um let's say you're
+
+00:06:48.560 --> 00:06:53.120
+talking about translation from English
+
+00:06:50.280 --> 00:06:54.800
+to Japanese uh now you don't really even
+
+00:06:53.120 --> 00:06:57.240
+have a finite number of choices because
+
+00:06:54.800 --> 00:06:58.440
+the length could be even longer uh the
+
+00:06:57.240 --> 00:07:01.400
+length of the output could be even
+
+00:06:58.440 --> 00:07:01.400
+longer than the input
+
+00:07:04.840 --> 00:07:08.879
+can
+
+00:07:06.520 --> 00:07:11.319
+rules
+
+00:07:08.879 --> 00:07:14.879
+together make it
+
+00:07:11.319 --> 00:07:17.400
+fewer yeah so really good question um so
+
+00:07:14.879 --> 00:07:19.319
+the question or the the question or
+
+00:07:17.400 --> 00:07:21.160
+comment was if there are certain rules
+
+00:07:19.319 --> 00:07:22.759
+about one thing not ever being able to
+
+00:07:21.160 --> 00:07:25.080
+follow the other you can actually reduce
+
+00:07:22.759 --> 00:07:28.319
+the number um you could do that with a
+
+00:07:25.080 --> 00:07:30.280
+hard constraint and make things uh kind
+
+00:07:28.319 --> 00:07:32.520
+of
+
+00:07:30.280 --> 00:07:34.240
+and like actually cut off things that
+
+00:07:32.520 --> 00:07:36.280
+you know have zero probability but in
+
+00:07:34.240 --> 00:07:38.680
+reality what people do is they just trim
+
+00:07:36.280 --> 00:07:41.319
+hypotheses that have low probability and
+
+00:07:38.680 --> 00:07:43.319
+so that has kind of the same effect like
+
+00:07:41.319 --> 00:07:47.599
+you almost never see a determiner after
+
+00:07:43.319 --> 00:07:49.720
+a determiner in English um and so yeah
+
+00:07:47.599 --> 00:07:52.400
+we're going to talk about uh algorithms
+
+00:07:49.720 --> 00:07:53.960
+to do this in the generation section so
+
+00:07:52.400 --> 00:07:57.240
+we could talk more about
+
+00:07:53.960 --> 00:08:00.080
+that um but anyway the basic idea behind
+
+00:07:57.240 --> 00:08:02.400
+structured prediction is that you don't
+
+00:08:00.080 --> 00:08:04.280
+like language modeling like I said last
+
+00:08:02.400 --> 00:08:06.240
+time you don't predict all of the the
+
+00:08:04.280 --> 00:08:08.319
+whole sequence at once you usually
+
+00:08:06.240 --> 00:08:10.440
+predict each element at once and then
+
+00:08:08.319 --> 00:08:12.080
+somehow calculate the conditional
+
+00:08:10.440 --> 00:08:13.720
+probability of the next element given
+
+00:08:12.080 --> 00:08:15.879
+the the current element or other things
+
+00:08:13.720 --> 00:08:18.840
+like that so that's how we solve
+
+00:08:15.879 --> 00:08:18.840
+structured prediction
+
+00:08:18.919 --> 00:08:22.960
+problems another thing is unconditioned
+
+00:08:21.319 --> 00:08:25.120
+versus conditioned predictions so
+
+00:08:22.960 --> 00:08:28.520
+unconditioned prediction we don't do this
+
+00:08:25.120 --> 00:08:31.240
+very often um but basically uh we
+
+00:08:28.520 --> 00:08:34.039
+predict the probability of a a single
+
+00:08:31.240 --> 00:08:35.880
+variable or generate a single variable
+
+00:08:34.039 --> 00:08:37.599
+and conditioned prediction is
+
+00:08:35.880 --> 00:08:41.000
+predicting the probability of an output
+
+00:08:37.599 --> 00:08:45.120
+variable given an input like
+
+00:08:41.000 --> 00:08:48.040
+this so um for unconditioned prediction
+
+00:08:45.120 --> 00:08:50.000
+um the way we can do this is left to
+
+00:08:48.040 --> 00:08:51.399
+right autoregressive models and these
Specifically, this kind is one that doesn't have any context limit: it's looking all the way back to the beginning of the sequence. This could be like an infinite-length n-gram model, but practically we can't use those, because they would have too many parameters; they would be too sparse for us to estimate the parameters. So what we do instead, with the n-gram models that I talked about last time, is limit the context length. We have something like a trigram model, where we don't actually reference all of the previous outputs when we make a prediction.

Oh, and sorry, I should explain how we read this graph. Here we're predicting word number one, and we're not conditioning on anything. We're predicting word number two, conditioning on word number one. We're predicting word number three, conditioning on word number two. And here we would be predicting word number four, conditioning on words number three and two, but not number one. So that would be like a trigram model.

(What is this? Is there a robot walking around somewhere? Oh, a drill? Okay. It'd be a lot more fun if it was a robot.)

The things we're going to talk about today are largely going to be ones that have unlimited-length context, and we'll talk about some examples of those. There's also independent prediction: this would be something like a unigram model, where you just don't condition on any previous context at all.
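As a minimal sketch of what limiting the context buys you, here is a maximum-likelihood trigram model in plain Python; the toy corpus and start symbols are just for illustration:

    from collections import Counter, defaultdict

    # Trigram model: condition on only the previous two words, not the full history.
    counts = defaultdict(Counter)
    corpus = ["<s>", "<s>", "this", "is", "an", "example", "</s>"]
    for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
        counts[(w1, w2)][w3] += 1

    def trigram_prob(w1, w2, w3):
        context = counts[(w1, w2)]
        total = sum(context.values())
        return context[w3] / total if total else 0.0

    print(trigram_prob("this", "is", "an"))   # 1.0 in this tiny corpus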
There's also bidirectional prediction, where basically when you predict each element, you predict based on all of the other elements, but not the element itself. This could be something like a masked language model. But note that I put a slash through this one, because it's not a well-formed probability: as I mentioned last time, in order to have a well-formed probability you need to predict each element based on the elements you predicted before it, and you can't predict based on future elements. So this is not actually a probabilistic model, but sometimes people use it to learn representations that can be used downstream.

Cool, is this clear? Any questions or comments? [Student question about the conditioning.] So these are all... sorry, rather: when you predict each word, it's conditioning on context that you previously generated, or previously predicted.
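If you want to poke at a masked language model yourself, a pretrained BERT through the Hugging Face fill-mask pipeline is one quick way to do it; this sketch assumes the transformers library is installed, and bert-base-uncased is one standard public checkpoint:

    from transformers import pipeline

    # A masked LM predicts a token from context on *both* sides; useful for
    # representations, but not a well-formed probability over whole sequences.
    unmasker = pipeline("fill-mask", model="bert-base-uncased")
    for pred in unmasker("The movie was really [MASK]."):
        print(pred["token_str"], round(pred["score"], 3))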
And if I go to the conditioned ones, these are where you have a source X; you're given this, and then you want to calculate the conditional probability of something else. To give some examples: this is autoregressive conditioned prediction, and this could be a standard sequence-to-sequence model, or it could be a language model where you're given a prompt and you want to predict the following output, like we often do with ChatGPT or something like this.

And, yeah, I don't know of any way you can do ChatGPT without any conditioning context. But there were people, I saw this about a week or two ago, who were sending things to the GPT-3.5 or GPT-4 API with no input, and it would output random questions or something like that. So that's what happens when you send things to ChatGPT without any prior conditioning context. But normally what you do is put in your prompt, and then it follows up from your prompt, and that would be in this paradigm.

There's also something called non-autoregressive conditioned prediction, and this can be used for something like sequence labeling, or non-autoregressive machine translation. I'll talk about the first one in this class, and maybe the second one later; it's kind of a minor topic now, it used to be popular a few years ago, so I'm not sure whether we'll cover it.

Cool. So the basic modeling paradigm that we use for things like this is extracting features and predicting. This is exactly the same as the bag-of-words model that I talked about the first time: we extracted features, and based on those features we made predictions. It's no different when we do sequence modeling, but the methods that we use for feature extraction are different. Given the input text X, we extract features H and predict labels Y.

For text classification, usually what we would do is have a feature extractor that takes the sequence and converts it into a single vector, and then based on this vector we make a prediction. For sequence labeling, normally what we do is extract one vector for each thing that we would like to predict about, so here that might be one vector for each word, and then based on each of those vectors we predict something for each word. This is an example with part-of-speech tagging, but there are a lot of other sequence labeling tasks too.
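As a rough sketch of how those two setups differ in code, with hypothetical sizes and an LSTM standing in for whatever feature extractor you like:

    import torch
    import torch.nn as nn

    vocab, dim, num_classes, num_tags = 1000, 64, 2, 50
    embed = nn.Embedding(vocab, dim)
    encoder = nn.LSTM(dim, dim, batch_first=True)  # any feature extractor works here
    cls_head = nn.Linear(dim, num_classes)         # classification: one prediction
    tag_head = nn.Linear(dim, num_tags)            # labeling: one prediction per word

    x = torch.randint(0, vocab, (1, 6))            # a batch with one 6-word sentence
    h, _ = encoder(embed(x))                       # h: (1, 6, dim), one vector per word
    class_logits = cls_head(h.mean(dim=1))         # pool into a single vector first
    tag_logits = tag_head(h)                       # keep the per-word vectors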
So what tasks exist for something like sequence labeling? Sequence labeling covers a pretty big subset of NLP tasks; you can express a lot of things as sequence labeling. Basically, given an input text X, we predict an output label sequence Y of equal length.

This can be used for things like part-of-speech tagging, to get the part of speech of each word. It can also be used for something like lemmatization. What lemmatization does is predict the base form of each word, and this can be used for normalization: if you want to find, for example, all instances of a particular verb being used, or all instances of a particular noun being used. This is a little bit different from stemming. What stemming would normally do is chop off the plural here, chop off the "s", but it wouldn't be able to do things like normalize "saw" into "see", because stemming just removes suffixes; it doesn't do any other sort of normalization. So that's the difference between lemmatization and stemming.
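To see the difference concretely, NLTK has implementations of both; this assumes nltk is installed and the WordNet data has been downloaded via nltk.download("wordnet"):

    from nltk.stem import PorterStemmer, WordNetLemmatizer

    stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()
    print(stemmer.stem("movies"))                # "movi"  (just chops the suffix)
    print(lemmatizer.lemmatize("movies"))        # "movie" (maps to the base form)
    print(stemmer.stem("saw"))                   # "saw"   (stemming can't fix this)
    print(lemmatizer.lemmatize("saw", pos="v"))  # "see"   (lemmatization can)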
There's also something called morphological tagging. Basically, morphological tagging is a more advanced version of part-of-speech tagging that predicts things like: this is a past-tense verb, this is a plural, this is a particular verb form, and you can have multiple tags per word. This is less interesting in English, because English is a kind of boring language morphologically; it doesn't have a lot of conjugation and other stuff. But it's a lot more interesting in morphologically richer languages like Japanese or Hindi. Chinese is even more boring than English, so if you're interested in Chinese, you don't need to worry about that.

Cool. But what's maybe more widely used from the sequence labeling perspective is span labeling, where you want to predict spans and their labels. This could be named entity recognition: if you say "Graham Neubig is teaching at Carnegie Mellon University," you would want to identify each entity as being, say, a person, organization, place, governmental entity, or other things like that. There are also things like syntactic chunking, where you want to find all the noun phrases and verb phrases, and semantic role labeling, which is demonstrating who did what to whom: it's saying this is the actor, the person who did the thing; this is the thing that is being done; and this is the place where it's being done. This can be useful if you want to do any sort of analysis about who does what to whom, and other things like that.

There's also a more complicated task called entity linking, which isn't really a span labeling task, but it's basically named entity recognition where you also link each entity to a database like Wikidata or Wikipedia. This doesn't seem very glamorous; a lot of people might not be immediately excited by entity linking. But actually it's super, super important for things like news aggregation and other stuff like that: find all the news articles about a celebrity, find all the mentions of our company's product on social media. So it's actually a really widely used technology.

And then finally, span labeling can also be treated as sequence labeling, and the way we normally do this is with something called BIO tags. Here you predict begin, in, and out tags for each word. So if we have this example with spans, we just convert it into tags where you say begin-person, in-person, O (meaning it's not part of an entity), begin-organization, in-organization, and then you can convert that back into spans.
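Here is a small sketch of that span-to-tag conversion; the helper and its span format, token-index spans with exclusive ends, are just illustrative:

    def spans_to_bio(tokens, spans):
        """Convert (start, end, label) spans, end exclusive, into BIO tags."""
        tags = ["O"] * len(tokens)
        for start, end, label in spans:
            tags[start] = f"B-{label}"
            for i in range(start + 1, end):
                tags[i] = f"I-{label}"
        return tags

    tokens = ["Graham", "Neubig", "teaches", "at", "Carnegie", "Mellon", "University"]
    print(spans_to_bio(tokens, [(0, 2, "PER"), (4, 7, "ORG")]))
    # ['B-PER', 'I-PER', 'O', 'O', 'B-ORG', 'I-ORG', 'I-ORG']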
So this makes it relatively easy to do the span prediction. Cool, so now you know what to do if you want to predict entities or other things like that; there are a lot of models on Hugging Face, for example, that allow you to do these things.
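For example, with the Hugging Face pipeline API; dslim/bert-base-NER is one publicly available English NER model, and any token-classification checkpoint would do:

    from transformers import pipeline

    ner = pipeline("token-classification", model="dslim/bert-base-NER",
                   aggregation_strategy="simple")   # merge B-/I- pieces into spans
    print(ner("Graham Neubig teaches at Carnegie Mellon University."))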
Are there any questions before I move on? Okay, cool, I'll just go forward then.

So now I'm going to talk about how we actually model these with machine learning models, and there are three major types of sequence models. There are other types of sequence models, but I'd say the great majority of work uses one of these three.

The first one is recurrence. What recurrence does is condition representations on an encoding of the history. The way this works is, essentially, you have your input vectors, usually word embeddings or embeddings from the previous layer of the model, and you have a recurrent neural network. At the very beginning it might only take the first vector, but at every subsequent step it takes the input vector and the hidden vector from the previous step, and you keep on going like this all the way through the sequence.

Then convolution is conditioning representations on local context. You have the inputs like this, and you're conditioning on the word itself and the surrounding words on the left and the right. So a typical convolution takes the current one here, the left one, and the right one; that would be a size-three convolution. You could also have a size five, seven, nine, whatever, that would take in more surrounding words.

And then finally we have attention. Attention is conditioning representations on a weighted average of all tokens in the sequence. So here we're conditioning on all of the other tokens in the sequence, but the amount that we condition on each of the tokens differs: we might take more of this token and less of that token, and other things like that. And I'll go into the mechanisms of each of these.

One important thing to think about is the computational complexity of each of these, which can be expressed in terms of the sequence length; let's call the sequence length n, and convolution also has a window size, which I'll call w. So does anyone have an idea of the computational complexity of a recurrent neural network, how quickly its computation grows? One way to look at it is to count the number of arrows that you see here. [Student: linear.] Yeah, it's linear, so it's basically O(n). What about convolution? Any ideas? [Student: n times w.] Yeah, O(n*w). And what about attention? [Student: n squared.] Yeah, O(n^2).

So what you can see is that for very long sequences, the asymptotic complexity of running a recurrent neural network is lower. You could run a recurrent neural network over a sequence of length 20 million or something like that, and as long as you had enough memory it would take linear time, but if you do something like attention over a really long sequence, it would be more difficult. There are a lot of caveats here, because attention and convolution are easily parallelized, whereas recurrence is not; I'll talk about that in a second. But anyway, it's a good thing to keep in mind.
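To recap that complexity discussion, with sequence length n and convolution window w:

    recurrence:   O(n)
    convolution:  O(n * w)
    attention:    O(n^2)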
Cool. So the first sequence model I want to introduce is recurrent neural networks. Oh, sorry, one other thing I want to mention first: all of these are still used. If you're very plugged into NLP, it might seem like, well, everybody's using attention, so why do we need to learn about the other ones? But actually all of these are used, and usually recurrence and convolution are used in combination with attention in some way, for particular applications where recurrence or convolution are useful. So I'll go into the details of that later.

So let's talk about the first sequence model, recurrent neural networks. Recurrent neural networks are basically tools to remember information; they were invented around 1990. The way they work: a feedforward neural network looks a bit like this. We have some sort of lookup over the context, we calculate embeddings, we do a transform, we get a hidden state, and we make the prediction. A recurrent neural network, in contrast, also feeds in the previous hidden state. So I'll contrast the feedforward neural network that we already know with a very simple Elman-style recurrent neural network. The feedforward network does a linear transform over the input and then runs it through a nonlinear function, which could be a tanh function or a ReLU function or anything like that. In a recurrent neural network, we add a multiplication of the previous hidden state by its own weight matrix, so it looks like this.
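A minimal Elman-style step in PyTorch might look like this; the dimensions are arbitrary, and the point is that the same cell, with the same parameters, is applied at every position:

    import torch
    import torch.nn as nn

    class ElmanCell(nn.Module):
        def __init__(self, d_in, d_h):
            super().__init__()
            self.w_x = nn.Linear(d_in, d_h)   # transform of the current input
            self.w_h = nn.Linear(d_h, d_h)    # transform of the previous hidden state

        def forward(self, x_t, h_prev):
            # feedforward part plus the recurrent term, through a nonlinearity
            return torch.tanh(self.w_x(x_t) + self.w_h(h_prev))

    cell = ElmanCell(8, 16)
    h = torch.zeros(1, 16)              # initial state: zeros here; could be learned
    for x_t in torch.randn(5, 1, 8):    # five time steps
        h = cell(x_t, h)                # same function at every step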
So if we look at what processing a sequence looks like: basically what we do is start out with an initial state, and this initial state could be all zeros, or it could be randomized, or it could be learned, or whatever else. Then, based on this, we run it through the RNN function, calculate the hidden state, and use it to make a prediction; then the RNN function again, make a prediction; RNN, prediction; RNN, prediction. One important thing here is that this RNN is exactly the same function no matter which position it appears in, and because of that, no matter how long the sequence becomes, we always have the same number of parameters, which is really important for a sequence model. So that's what this looks like.

So how do we train RNNs? Basically, if you remember, we can train neural networks as long as we have a directed acyclic graph that calculates our loss function, and then forward propagation and backpropagation will do all the rest to calculate the gradients and update our parameters. The way this works is, let's say we're doing sequence labeling: each of these predictions is a probability distribution over the parts of speech for that position, and each of these labels is the true part-of-speech label. So basically what we do is, from each prediction, we calculate the negative log likelihood of the true part of speech, and we get a loss.

And so now we have four losses here, and this is no longer a nice directed acyclic graph that ends in a single loss function, which is what we need for backpropagation, right? So what do we do? Very simple: we just add them together. We take the sum, and now we have a single loss function, which is the sum of all of the loss functions for each prediction that we made.
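In code, that loss aggregation is just a sum of per-position cross-entropies; the shapes here are toy ones, four words and fifty tags:

    import torch
    import torch.nn.functional as F

    logits = torch.randn(4, 50, requires_grad=True)  # one prediction per word
    labels = torch.tensor([3, 17, 0, 42])            # true tags (made up here)
    total_loss = F.cross_entropy(logits, labels, reduction="sum")  # sum of 4 losses
    total_loss.backward()   # one terminal node, so backprop just works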
And that's our total loss, and now we do have a directed acyclic graph where this is the terminal node, and we can do backprop like this. This is true for all the sequence models I'm going to talk about today; I'm just illustrating it with recurrent networks. Any questions here? Everything good?

Okay, cool. So now we have the loss; it's a well-formed DAG, and we can run backprop. Basically, we just run backpropagation, and our loss flows back out into all of the places. Now, the parameters are tied across time, so the derivatives into the parameters are aggregated over all of the time steps. This has been called backpropagation through time since these models were originally invented. Basically, what it looks like is: because the parameters for this RNN function are shared, they'll essentially only be updated once, but they're updated from four different positions in this network. And again, this is the same for all the sequence models that I'm going to talk about today.

Another variety of model that people use is bidirectional RNNs. These are used when you want to do something like sequence labeling, and you just run two RNNs: one from the beginning, one from the end, concatenate them together like this, and make predictions.

Cool, any questions? [Student: if you run the RNN in both directions, does that change the complexity?] Does this change the complexity? It doesn't change the asymptotic complexity, because you're multiplying by two, and big-O notation doesn't care if you multiply by a constant, but it does double the time that it would take.
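A sketch of the bidirectional trick, using PyTorch's built-in RNN; nn.RNN(..., bidirectional=True) packages the same idea:

    import torch
    import torch.nn as nn

    fwd = nn.RNN(8, 16, batch_first=True)
    bwd = nn.RNN(8, 16, batch_first=True)
    x = torch.randn(1, 5, 8)                 # one sequence of five tokens
    h_f, _ = fwd(x)                          # left-to-right states
    h_b, _ = bwd(torch.flip(x, dims=[1]))    # right-to-left states
    h_b = torch.flip(h_b, dims=[1])          # re-align with the original order
    h = torch.cat([h_f, h_b], dim=-1)        # (1, 5, 32): one concatenation per position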
Cool, any others? Okay, let's go forward. Another problem that is particularly salient in RNNs, and part of the reason why attention models are so useful, is vanishing gradients. But you should be aware of this no matter which model you're using, and thinking about it very carefully is actually a really good way to design better architectures, if you're going to be designing architectures.

So basically, the problem with vanishing gradients is this. Let's say we have a prediction task where we're calculating a regression: we're inputting a whole bunch of tokens and then calculating a regression at the very end, using a squared-error loss function. If we do something like this with a standard RNN, when we do backprop we'll probably have a big gradient for this first RNN unit here, right next to the loss, but every time we pass back through some sort of nonlinearity, the gradient shrinks. For example, if our nonlinearity is a tanh function, the gradient of the tanh function looks a little bit like this: if I am not mistaken, it peaks at one and goes to zero everywhere else. And because it peaks at one and goes to zero everywhere else, let's say we have an input way over here, like minus three or something like that: that basically destroys our gradient; our gradient disappears for that particular unit.

And maybe one thing that you might say is: oh well, if this is getting so small because it only goes up to one, let's use 100 times tanh as our activation function, so now it goes up to 100 and our gradients are not going to disappear. But then you have the opposite problem: you have exploding gradients, where the gradient can grow by a factor of up to 100 every time, it gets unmanageable, and it destroys your gradient descent itself. So basically we have this problem because if you apply a function over and over again, your gradient gets smaller and smaller, or bigger and bigger, every time you do that, and you get the vanishing gradient or exploding gradient problem.
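You can watch the gradient vanish numerically; this is a toy chain of tanh applications standing in for an RNN unrolled over fifty steps, with the 3.0 playing the role of a weight:

    import torch

    x = torch.tensor(0.5, requires_grad=True)
    h = x
    for _ in range(50):
        h = torch.tanh(3.0 * h)   # repeated squashing, as in an unrolled RNN
    h.backward()
    print(x.grad)                 # essentially zero: the gradient has vanished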
And it's not just a problem with nonlinearities; it also happens when you do your weight matrix multiplies and other stuff like that. Basically, any time you transform the input into a different output, that transform will have a gradient, and it will either be bigger than one or less than one.

So, I mentioned this is a problem for RNNs, and it's particularly a problem for RNNs over long sequences, but it's also a problem for any other model you use. The reason why this is important to know is: if there's important information in your model, finding a way to get a direct path from that important information to wherever you're making a prediction is often a way to improve your model's performance. And on the contrary, if there's information that you think is likely to be unimportant, putting it farther away, or making the path more indirect so the model has to work harder to use it, is a good way to prevent the model from being distracted by tons and tons of information, some of which may be irrelevant. So it's a good thing to know about in general for model design.

So how did RNNs solve this problem of the vanishing gradient? There's a method called long short-term memory (LSTM), and the basic idea is to make additive connections between time steps. Addition, or rather the identity, is the only thing that is guaranteed not to change the gradient, because the identity function is f(x) = x, and if you take the derivative of that, it's one; you're guaranteed to always have a gradient of one through that function. So long short-term memory makes sure that you have this additive input between time steps.
And this is what it looks like. It's not super important to understand everything that's going on here, but just to explain it very quickly: this c here is something called the memory cell, and it's passed on linearly like this. And then you have some gates. The update gate determines whether, or how much, you update the hidden state; the input gate decides how much of the input you take in; and the output gate decides how much of the output from the cell you push out after using it. So it has these three gates that control the information flow, and the model can learn to turn them on or off, or something like that. That's the basic idea of the LSTM. There are lots of other variants of this, like gated recurrent units, that are a little bit simpler, but the basic idea of an additive connection plus gating is something that appears a lot, in many different types of architectures.

Any questions here? Another thing I should mention, which I just realized I don't have on my slides but is a good thing to know, is that this idea is also used in deep, multi-layer networks. Basically, LSTMs have this additive connection through time via the memory cell: you get an input and you add it in, you get an input and you add it in, and this makes sure you pass your gradients through time.
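Here is a minimal LSTM-style cell as a sketch; it uses the standard forget/input/output gate naming, which differs a little from the update-gate wording above:

    import torch
    import torch.nn as nn

    class LSTMCellSketch(nn.Module):
        def __init__(self, d_in, d_h):
            super().__init__()
            self.gates = nn.Linear(d_in + d_h, 4 * d_h)  # all gates in one matrix

        def forward(self, x_t, h_prev, c_prev):
            z = self.gates(torch.cat([x_t, h_prev], dim=-1))
            i, f, o, g = z.chunk(4, dim=-1)
            i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
            c = f * c_prev + i * torch.tanh(g)  # *additive* memory-cell update
            h = o * torch.tanh(c)               # output gate controls what is exposed
            return h, c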
There's also something called residual connections, which I think a lot of people have heard of if you've done a deep learning class or something like that, but if you haven't, they're a good thing to know. Residual connections are for when you run your input through multiple layers. Let's say you have a block here; let's call it an RNN for now, because we already know about RNNs. This connection here is called the residual connection, and basically it's adding an additive connection before and after the layer. This allows you to pass information from the very beginning of a network to the very end of a network, through multiple layers, and it also helps prevent the vanishing gradient problem. So in a way, you can view what LSTMs are doing as preventing loss of gradient through time, and residual connections as preventing loss of gradient as you go through multiple layers of the network. And this is super standard; it's used in all the Transformer models, LLaMA and GPT and whatever else.

Cool, any other questions about that?
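A residual connection is essentially a one-liner; this is a sketch, and the inner block can be anything:

    import torch
    import torch.nn as nn

    class Residual(nn.Module):
        def __init__(self, layer):
            super().__init__()
            self.layer = layer

        def forward(self, x):
            return x + self.layer(x)   # identity path plus transformed path

    block = Residual(nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16)))
    y = block(torch.randn(1, 16))      # gradients get a direct additive path around the layer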
Okay, cool. So next I'd like to go into convolution. One thing I should mention is that RNNs, or RNN-style models, are used extensively in very long sequence modeling, and we're going to talk more about the actual architectures that people use to do this, usually in combination with attention-based models. Convolutions tend to be used a lot in speech and image processing, and the reason is that when we're processing language, something like "this is wonderful" is three tokens in language, but if we look at it in speech, it's going to be many, many frames. The semantics of language are already kind of there: if you look at a single token, you already get something semantically meaningful. But in contrast, if you're looking at speech, or at pixels in images, or something like that, you're not going to get something semantically meaningful from a single frame or pixel, so convolution is used a lot in those cases. You could also create a convolutional model over characters as well.

So what is convolution in the first place? As I mentioned before, basically you take the local window around an input and run it through a model. A good way to think about it is that it's essentially a feedforward network where you concatenate all of the surrounding vectors together and run them through a linear transform: you concatenate x_{t-1}, x_t, x_{t+1}.

Convolution can also be used in autoregressive models. Normally we think of it like this: we're taking the previous one, the current one, and the next one, and making a prediction based on those. That would be good for something like sequence labeling, but it's not good for something like language modeling, because in language modeling we can't look at the future, right? But there's a super simple solution to this, which is a convolution that just looks at the past, and predicts the next word based on the current word and the past; so here you would be predicting the word "movie". This is actually essentially equivalent to the feedforward language model that I talked about last time, so you can also think of that as a convolutional language model. Whenever you say feedforward or convolutional language model, they're basically the same, modulo some details about striding and other things like that.

Cool. I covered convolution very briefly because it's the least used of the three sequence modeling approaches in NLP nowadays, but are there any questions, or can I just move on to attention?
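A sketch of that causal trick in PyTorch: pad only on the left, so position t sees positions t-2, t-1, and t, and never the future:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    conv = nn.Conv1d(in_channels=64, out_channels=64, kernel_size=3)
    x = torch.randn(1, 64, 10)        # (batch, channels, time)
    x = F.pad(x, (2, 0))              # kernel_size - 1 steps of left padding only
    h = conv(x)                       # (1, 64, 10): each output step is causal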
Okay, cool, I'll go into attention next. So the basic idea of attention is that we encode each token in the sequence into a vector; we have an input sequence that we'd like to encode over, and we perform a linear combination of the vectors, weighted by attention weights.

There are two varieties of attention that are good to know about. The first one is cross-attention, where each element in a sequence attends to elements of another sequence. This is widely used in encoder-decoder models, where you have one encoder and a separate decoder. The popular models like this that people still use a lot are T5, which is an example of an encoder-decoder model, and mBART, which is another example.

Basically, the way cross-attention works is: we have, for example, a Japanese sentence here, and we would like to translate it into an English sentence. So when we generate the first word, "this", which corresponds to これ in the Japanese sentence, in order to output that first word we would do a weighted sum of all of the embeddings of the Japanese sentence, and we would probably focus most on this word up here, これ, because it corresponds to the word "this".

In the next step of generating the output, we would attend to different words, because a different word corresponds to "is", so you would attend to the word that corresponds to "is". When you output "an", there's actually no word in the Japanese sentence that corresponds to "an", so you might get a blob-like attention weight that looks very smooth, not very peaky.
And then when you get to "example", you'd have strong attention on the word that corresponds to "example".

There's also self-attention, and basically what self-attention does is: each element in a sequence attends to elements of the same sequence. So this is a good way of doing sequence encoding, just like we used RNNs or convolutional neural networks. To give an example of why you would want to do something like this: let's say we wanted to encode the English sentence before doing something like translation into Japanese. For "this", maybe we don't need to attend to a whole lot of other things, because it's kind of clear what it means. But for "is", the way you would translate it can depend rather heavily on the other words in the sentence, so you might want to attend to all the other words in the sentence and say: oh, this "is" is co-occurring with "this" and "example", and if that's the case, then we would need to translate it this way, or handle it this way. And that's exactly the same for any other sort of disambiguation-style task.

So, basically: cross-attention is attending to a different sequence, and self-attention is attending to the same sequence. Next I'll walk through how we do this mechanistically, with queries, keys, and values; there's a small sketch of the computation below.
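Here is a minimal sketch of the query-key-value computation walked through next, using a dot product as one common choice for the scoring function:

    import torch
    import torch.nn.functional as F

    d = 16
    q = torch.randn(d)       # query: e.g., the decoder's current state
    K = torch.randn(5, d)    # one key vector per source token
    V = torch.randn(5, d)    # one value vector per source token

    scores = K @ q / d ** 0.5            # same scoring function at every position
    weights = F.softmax(scores, dim=0)   # normalize so the weights sum to one
    context = weights @ V                # weighted sum of the value vectors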
00:45:24.880 +essentially the vector that we want to + +00:45:22.760 --> 00:45:28.720 +use to decide what to attend + +00:45:24.880 --> 00:45:31.800 +to we then have key vectors and the key + +00:45:28.720 --> 00:45:35.319 +vectors are the vectors that we would + +00:45:31.800 --> 00:45:37.480 +like to use to decide which ones we + +00:45:35.319 --> 00:45:40.720 +should be attending + +00:45:37.480 --> 00:45:42.040 +to and then for each query key pair we + +00:45:40.720 --> 00:45:45.319 +calculate a + +00:45:42.040 --> 00:45:48.319 +weight and we do it like this um this + +00:45:45.319 --> 00:45:50.680 +gear here is some function that takes in + +00:45:48.319 --> 00:45:53.200 +the uh query vector and the key vector + +00:45:50.680 --> 00:45:55.599 +and outputs a weight and notably we use + +00:45:53.200 --> 00:45:57.559 +the same function every single time this + +00:45:55.599 --> 00:46:00.960 +is really important again because like + +00:45:57.559 --> 00:46:03.760 +RNN that allows us to extrapolate + +00:46:00.960 --> 00:46:05.960 +unlimited length sequences because uh we + +00:46:03.760 --> 00:46:08.280 +only have one set of you know we only + +00:46:05.960 --> 00:46:10.359 +have one function no matter how long the + +00:46:08.280 --> 00:46:13.200 +sequence gets so we can just apply it + +00:46:10.359 --> 00:46:15.839 +over and over and over + +00:46:13.200 --> 00:46:17.920 +again uh once we calculate these values + +00:46:15.839 --> 00:46:20.839 +we normalize so that they add up to one + +00:46:17.920 --> 00:46:22.559 +using the softmax function and um + +00:46:20.839 --> 00:46:27.800 +basically in this case that would be + +00:46:22.559 --> 00:46:27.800 +like 0.76 uh etc etc oops + +00:46:28.800 --> 00:46:33.559 +so step number two is once we have this + +00:46:32.280 --> 00:46:37.839 +uh these + +00:46:33.559 --> 00:46:40.160 +attention uh values here notably these + +00:46:37.839 --> 00:46:41.359 +values aren't really probabilities uh + +00:46:40.160 --> 00:46:42.800 +despite the fact that they're between + +00:46:41.359 --> 00:46:44.240 +zero and one and they add up to one + +00:46:42.800 --> 00:46:47.440 +because all we're doing is we're using + +00:46:44.240 --> 00:46:50.480 +them to uh to combine together uh + +00:46:47.440 --> 00:46:51.800 +multiple vectors so I we don't really + +00:46:50.480 --> 00:46:53.319 +normally call them attention + +00:46:51.800 --> 00:46:54.680 +probabilities or anything like that I + +00:46:53.319 --> 00:46:56.319 +just call them attention values or + +00:46:54.680 --> 00:46:59.680 +normalized attention values + +00:46:56.319 --> 00:47:03.760 +is um but once we have these uh + +00:46:59.680 --> 00:47:05.760 +attention uh attention weights we have + +00:47:03.760 --> 00:47:07.200 +value vectors and these value vectors + +00:47:05.760 --> 00:47:10.000 +are the vectors that we would actually + +00:47:07.200 --> 00:47:12.319 +like to combine together to get the uh + +00:47:10.000 --> 00:47:14.000 +encoding here and so we take these + +00:47:12.319 --> 00:47:17.559 +vectors we do a weighted some of the + +00:47:14.000 --> 00:47:21.200 +vectors and get a final final sum + +00:47:17.559 --> 00:47:22.920 +here and we can take this uh some and + +00:47:21.200 --> 00:47:26.920 +use it in any part of the model that we + +00:47:22.920 --> 00:47:29.079 +would like um and so is very broad it + +00:47:26.920 --> 00:47:31.200 +can be used in any way now the most + +00:47:29.079 --> 00:47:33.240 +common way to use it is just have lots + +00:47:31.200 --> 00:47:35.000 
+of self attention layers like in

00:47:33.240 --> 00:47:37.440
+something in a Transformer but um you

00:47:35.000 --> 00:47:40.160
+can also use it in a decoder or other

00:47:37.440 --> 00:47:42.920
+things like that as

00:47:40.160 --> 00:47:45.480
+well this is an actual graphical example

00:47:42.920 --> 00:47:47.319
+from the original attention paper um I'm

00:47:45.480 --> 00:47:50.000
+going to give some other examples from

00:47:47.319 --> 00:47:52.480
+Transformers in the next class but

00:47:50.000 --> 00:47:55.400
+basically you can see that the attention

00:47:52.480 --> 00:47:57.559
+weights uh for this English to French I

00:47:55.400 --> 00:48:00.520
+think it's English French translation

00:47:57.559 --> 00:48:02.920
+task basically um overlap with what you

00:48:00.520 --> 00:48:04.440
+would expect uh if you can read English

00:48:02.920 --> 00:48:06.599
+and French it's kind of the words that

00:48:04.440 --> 00:48:09.319
+are semantically similar to each other

00:48:06.599 --> 00:48:12.920
+um it even learns to do this reordering

00:48:09.319 --> 00:48:14.880
+uh in an appropriate way here and all of

00:48:12.920 --> 00:48:16.720
+this is completely unsupervised so you

00:48:14.880 --> 00:48:18.079
+never actually give the model

00:48:16.720 --> 00:48:19.440
+information about what it should be

00:48:18.079 --> 00:48:21.559
+attending to it's all learned through

00:48:19.440 --> 00:48:23.520
+gradient descent and the model learns to

00:48:21.559 --> 00:48:27.640
+do this by making the embeddings of the

00:48:23.520 --> 00:48:27.640
+key and query vectors closer together

00:48:28.440 --> 00:48:33.240
+cool

00:48:30.000 --> 00:48:33.240
+um any

00:48:33.800 --> 00:48:40.040
+questions okay so um next I'd like to go

00:48:38.440 --> 00:48:41.680
+a little bit into how we actually

00:48:40.040 --> 00:48:43.599
+calculate the attention score function

00:48:41.680 --> 00:48:44.839
+so that's the little gear that I had on

00:48:43.599 --> 00:48:50.280
+my

00:48:44.839 --> 00:48:53.559
+uh my slide before so here Q is a query

00:48:50.280 --> 00:48:56.440
+and K is the key um the original

00:48:53.559 --> 00:48:58.400
+attention paper used uh

00:48:56.440 --> 00:49:00.119
+a multi-layer neural network to

00:48:58.400 --> 00:49:02.440
+calculate this so basically what it did

00:49:00.119 --> 00:49:05.319
+is it concatenated the query and key

00:49:02.440 --> 00:49:08.000
+vector together multiplied it by a

00:49:05.319 --> 00:49:12.240
+weight matrix calculated a tanh and

00:49:08.000 --> 00:49:15.040
+then ran it through uh a weight

00:49:12.240 --> 00:49:19.799
+vector so this

00:49:15.040 --> 00:49:22.480
+is essentially very expressive

00:49:19.799 --> 00:49:24.799
+um uh it's flexible it's often good with

00:49:22.480 --> 00:49:27.960
+large data but it adds extra parameters

00:49:24.799 --> 00:49:30.359
+and uh computation time uh to your

00:49:27.960 --> 00:49:31.559
+calculations here so it's not as widely

00:49:30.359 --> 00:49:34.359
+used

00:49:31.559 --> 00:49:37.799
+anymore the uh other thing which was

00:49:34.359 --> 00:49:41.599
+proposed by Luong et al. is a bilinear

00:49:37.799 --> 00:49:43.200
+function um and a bilinear function

00:49:41.599 --> 00:49:45.920
+basically what it does is it has your

00:49:43.200 --> 00:49:48.319
+key vector it has your query vector and

00:49:45.920 --> 00:49:51.440
+it has a matrix in between them like

00:49:48.319 --> 00:49:53.000
+this and uh then you calculate uh you

00:49:51.440 --> 00:49:54.520
+calculate the

00:49:53.000 --> 00:49:56.680
+weight

00:49:54.520 --> 00:49:59.880
+so

00:49:56.680 --> 00:50:03.200
+this is uh nice because it basically um

00:49:59.880 --> 00:50:05.760
+can transform uh the key and

00:50:03.200 --> 00:50:08.760
+query uh together

00:50:05.760 --> 00:50:08.760
+here

00:50:09.119 --> 00:50:13.559
+um people have also experimented with

00:50:11.760 --> 00:50:16.079
+dot product and the dot product is

00:50:13.559 --> 00:50:19.839
+basically query times

00:50:16.079 --> 00:50:23.480
+key uh query transpose times key or

00:50:19.839 --> 00:50:25.760
+query dot key this is okay but the problem

00:50:23.480 --> 00:50:27.280
+with this is then the query vector and

00:50:25.760 --> 00:50:30.160
+the key vectors have to be in exactly

00:50:27.280 --> 00:50:31.920
+the same space and that's kind of too

00:50:30.160 --> 00:50:34.799
+hard of a constraint so it doesn't scale

00:50:31.920 --> 00:50:38.000
+very well if you're um if you're working

00:50:34.799 --> 00:50:40.839
+uh if you're uh like training on

00:50:38.000 --> 00:50:45.400
+lots of data um then the scaled dot

00:50:40.839 --> 00:50:47.880
+product um the scaled dot product here uh

00:50:45.400 --> 00:50:50.079
+one problem is that the scale of the dot

00:50:47.880 --> 00:50:53.680
+product increases as the dimensions get

00:50:50.079 --> 00:50:55.880
+larger and so there's a fix to scale by

00:50:53.680 --> 00:50:58.839
+the square root of the length of one of

00:50:55.880 --> 00:51:00.680
+the vectors um and so basically you're

00:50:58.839 --> 00:51:04.559
+multiplying uh you're taking the dot

00:51:00.680 --> 00:51:06.559
+product but you're dividing by the uh

00:51:04.559 --> 00:51:09.359
+the square root of the length of one of

00:51:06.559 --> 00:51:11.839
+the vectors uh does anyone have an idea

00:51:09.359 --> 00:51:13.599
+why you might take the square root here

00:51:11.839 --> 00:51:16.920
+if you've taken a machine

00:51:13.599 --> 00:51:20.000
+learning uh or maybe statistics class

00:51:16.920 --> 00:51:20.000
+you might have a an

00:51:20.599 --> 00:51:26.599
+idea any any ideas yeah uh normalization

00:51:24.720 --> 00:51:29.079
+to make sure

00:51:26.599 --> 00:51:32.760
+because otherwise it will impact the

00:51:29.079 --> 00:51:35.640
+result because we want to normalize it yes

00:51:32.760 --> 00:51:37.920
+so we do we do want to normalize it um

00:51:35.640 --> 00:51:40.000
+and so that's the reason why we divide

00:51:37.920 --> 00:51:41.920
+by the length um and that prevents it

00:51:40.000 --> 00:51:43.839
+from getting too large

00:51:41.920 --> 00:51:45.920
+specifically does anyone have an idea

00:51:43.839 --> 00:51:49.440
+why you take the square root here as

00:51:45.920 --> 00:51:49.440
+opposed to dividing just by the length

00:51:52.400 --> 00:51:59.480
+overall so um this is this is pretty

00:51:55.400 --> 00:52:01.720
+tough and actually uh we I didn't know

00:51:59.480 --> 00:52:04.359
+one of the last times I did this class

00:52:01.720 --> 00:52:06.640
+uh and had to actually go look for it

00:52:04.359 --> 00:52:09.000
+but basically the reason why is because

00:52:06.640 --> 00:52:11.400
+if you um if you have a whole bunch of

00:52:09.000 --> 00:52:12.720
+random variables so let's say you have a

00:52:11.400 --> 00:52:14.040
+whole bunch of random variables no

00:52:12.720 --> 00:52:15.240
+matter what kind they are as long as

00:52:14.040 --> 00:52:19.680
+they're from the same distribution

00:52:15.240 --> 00:52:19.680
+they're IID and you add them all

00:52:20.160 --> 00:52:25.720
+together um then the variance I believe

00:52:23.200 --> 00:52:27.760
+yeah the variance or rather the

00:52:25.720 --> 00:52:31.119
+standard deviation maybe the standard

00:52:27.760 --> 00:52:33.319
+deviation of this goes uh goes up with the

00:52:31.119 --> 00:52:35.640
+square root of the number of variables

00:52:33.319 --> 00:52:38.880
+uh yeah I think the standard deviation goes

00:52:35.640 --> 00:52:41.040
+up so dividing by the square root would

00:52:38.880 --> 00:52:44.040
+divide by this the standard deviation

00:52:41.040 --> 00:52:48.240
+here so it's like normalizing by

00:52:44.040 --> 00:52:51.040
+that so um that's actually I

00:52:48.240 --> 00:52:53.359
+don't think explicitly explained in the

00:52:51.040 --> 00:52:54.720
+uh attention is all you need paper uh

00:52:53.359 --> 00:52:57.920
+the Vaswani paper where they introduce

00:52:54.720 --> 00:53:01.079
+this but that's the basic idea um in terms

00:52:57.920 --> 00:53:03.839
+of what people use most widely nowadays

00:53:01.079 --> 00:53:07.680
+um they

00:53:03.839 --> 00:53:07.680
+are basically doing

00:53:24.160 --> 00:53:27.160
+this

00:53:30.280 --> 00:53:34.880
+so they're taking the the hidden state

00:53:33.000 --> 00:53:36.599
+from the keys and multiplying it by a

00:53:34.880 --> 00:53:39.440
+matrix the hidden state by the queries

00:53:36.599 --> 00:53:41.680
+and multiplying it by a matrix um this

00:53:39.440 --> 00:53:46.559
+is what is done in uh in

00:53:41.680 --> 00:53:50.280
+Transformers and the uh and then they're

00:53:46.559 --> 00:53:54.160
+using this to um they're normalizing it

00:53:50.280 --> 00:53:57.160
+by this uh square root here

00:53:54.160 --> 00:53:57.160
+and

00:53:59.440 --> 00:54:05.040
+so this is essentially a bilinear

00:54:02.240 --> 00:54:07.680
+model um it's a bilinear model that is

00:54:05.040 --> 00:54:09.119
+normalized uh they call it uh scaled dot

00:54:07.680 --> 00:54:11.119
+product attention but actually because

00:54:09.119 --> 00:54:15.520
+they have these weight matrices uh it's

00:54:11.119 --> 00:54:18.839
+a bilinear model so um that's the the

00:54:15.520 --> 00:54:18.839
+most standard thing to be used

00:54:20.200 --> 00:54:24.079
+nowadays cool any any questions about

00:54:22.520 --> 00:54:27.079
+this

00:54:24.079 --> 00:54:27.079
+part

00:54:28.240 --> 00:54:36.559
+okay so um finally when you actually

00:54:32.280 --> 00:54:36.559
+train the model um as I mentioned

00:54:41.960 --> 00:54:45.680
+before right at the very

00:54:48.040 --> 00:54:52.400
+beginning

00:54:49.839 --> 00:54:55.760
+we when we're training an auto-

00:54:52.400 --> 00:54:57.400
+regressive model we don't want to be

00:54:55.760 --> 00:54:59.799
+referring to the future to things in the

00:54:57.400 --> 00:55:01.240
+future um because then you know

00:54:59.799 --> 00:55:03.079
+basically we'd be cheating and we'd have

00:55:01.240 --> 00:55:04.599
+a nonprobabilistic model it wouldn't be

00:55:03.079 --> 00:55:08.960
+good when we actually have to generate

00:55:04.599 --> 00:55:12.119
+left to right um and
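To make the score-function discussion above concrete, here is a minimal PyTorch sketch of the scaled bilinear / dot-product attention just described. The dimensions and the names W_q, W_k, W_v are illustrative assumptions, not taken from the lecture slides; dividing by the square root of the key dimension is the normalization discussed above.

import math
import torch

d_model, d_k = 512, 64
W_q = torch.nn.Linear(d_model, d_k, bias=False)  # query projection
W_k = torch.nn.Linear(d_model, d_k, bias=False)  # key projection
W_v = torch.nn.Linear(d_model, d_k, bias=False)  # value projection

def scaled_dot_product_attention(queries, keys, values):
    # queries: (n_q, d_model); keys/values: (n_k, d_model)
    q, k, v = W_q(queries), W_k(keys), W_v(values)
    # with the learned projections this is the bilinear form;
    # dividing by sqrt(d_k) keeps the scores' standard deviation stable
    scores = q @ k.T / math.sqrt(d_k)
    weights = torch.softmax(scores, dim=-1)  # normalized attention values
    return weights @ v                       # weighted sum of value vectors

src = torch.randn(7, d_model)  # sequence being attended to (keys/values)
tgt = torch.randn(5, d_model)  # sequence doing the attending (queries)
out = scaled_dot_product_attention(tgt, src, src)
print(out.shape)  # torch.Size([5, 64])

Using the same tensor for keys and values, as here, matches how both self attention and cross attention are set up; only where the queries come from differs.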
+00:55:08.960 --> 00:55:15.720
+so we essentially want to prevent

00:55:12.119 --> 00:55:17.480
+ourselves from using information from

00:55:15.720 --> 00:55:20.319
+the

00:55:17.480 --> 00:55:22.839
+future

00:55:20.319 --> 00:55:24.240
+and in an unconditioned model we want to

00:55:22.839 --> 00:55:27.400
+prevent ourselves from using any

00:55:24.240 --> 00:55:29.680
+information in the future here um in a

00:55:27.400 --> 00:55:31.520
+conditioned model we're okay with doing

00:55:29.680 --> 00:55:33.480
+kind of bi-

00:55:31.520 --> 00:55:35.880
+directional conditioning here to

00:55:33.480 --> 00:55:37.359
+calculate the representations but we're

00:55:35.880 --> 00:55:40.440
+not okay with doing it on the target

00:55:37.359 --> 00:55:40.440
+side so basically what we

00:55:44.240 --> 00:55:50.960
+do basically what we do is we create a

00:55:47.920 --> 00:55:52.400
+mask that prevents us from attending to

00:55:50.960 --> 00:55:54.559
+any of the information in the future

00:55:52.400 --> 00:55:56.440
+when we're uh predicting when we're

00:55:54.559 --> 00:56:00.799
+calculating the representations of the

00:55:56.440 --> 00:56:04.880
+the current thing uh word and

00:56:00.799 --> 00:56:08.280
+technically how we do this is we have

00:56:04.880 --> 00:56:08.280
+the attention

00:56:09.079 --> 00:56:13.799
+values uh like

00:56:11.680 --> 00:56:15.480
+2.1

00:56:13.799 --> 00:56:17.880
+attention

00:56:15.480 --> 00:56:19.920
+0.3 and

00:56:17.880 --> 00:56:22.480
+attention uh

00:56:19.920 --> 00:56:24.960
+0.5 or something like

00:56:22.480 --> 00:56:27.480
+that these are eventually going to be

00:56:24.960 --> 00:56:29.799
+fed through the softmax to calculate

00:56:27.480 --> 00:56:32.119
+the attention values that we use to do

00:56:29.799 --> 00:56:33.680
+the weighting so what we do is any ones we

00:56:32.119 --> 00:56:36.160
+don't want to attend to we just add

00:56:33.680 --> 00:56:39.799
+negative infinity or add a very large

00:56:36.160 --> 00:56:42.119
+negative number so we uh cross that out

00:56:39.799 --> 00:56:44.000
+and set this to negative infinity and

00:56:42.119 --> 00:56:45.440
+so then when we take the softmax basically

00:56:44.000 --> 00:56:47.839
+the value goes to zero and we don't

00:56:45.440 --> 00:56:49.359
+attend to it so um this is called the

00:56:47.839 --> 00:56:53.240
+attention mask and you'll see it when

00:56:49.359 --> 00:56:53.240
+you have to implement

00:56:53.440 --> 00:56:56.880
+attention cool

00:56:57.039 --> 00:57:00.200
+any any questions about

00:57:02.079 --> 00:57:08.599
+this okay great um so next I'd like to

00:57:05.839 --> 00:57:11.039
+go to applications of sequence models um

00:57:08.599 --> 00:57:13.200
+there's a bunch of ways that you can use

00:57:11.039 --> 00:57:16.160
+sequence models of any variety I wrote

00:57:13.200 --> 00:57:18.400
+RNN here arbitrarily but it could be

00:57:16.160 --> 00:57:21.720
+convolution or Transformer or anything

00:57:18.400 --> 00:57:23.559
+else so the first one is encoding

00:57:21.720 --> 00:57:26.839
+sequences

00:57:23.559 --> 00:57:29.240
+um and essentially if you do it with an

00:57:26.839 --> 00:57:31.559
+RNN this is one way you can encode a

00:57:29.240 --> 00:57:35.799
+sequence basically you take the

00:57:31.559 --> 00:57:36.960
+last uh value here and you use it to uh

00:57:35.799 --> 00:57:40.559
+encode the

00:57:36.960 --> 00:57:42.720
+output this can be used for any sort of

00:57:40.559 --> 00:57:45.839
+uh like binary or multiclass prediction

00:57:42.720 --> 00:57:48.280
+problem it's also right now used very

00:57:45.839 --> 00:57:50.920
+widely in sentence representations for

00:57:48.280 --> 00:57:54.200
+retrieval uh so for example you build a

00:57:50.920 --> 00:57:55.520
+big retrieval index uh with these

00:57:54.200 --> 00:57:57.920
+vectors

00:57:55.520 --> 00:57:59.480
+and then you uh you also

00:57:57.920 --> 00:58:02.119
+encode a query and you do a vector

00:57:59.480 --> 00:58:04.760
+nearest neighbor search to look up uh

00:58:02.119 --> 00:58:06.760
+the most similar sentence here so this

00:58:04.760 --> 00:58:10.160
+is uh these are two applications where

00:58:06.760 --> 00:58:13.440
+you use something like this right on

00:58:10.160 --> 00:58:15.520
+this slide I wrote that you use the last

00:58:13.440 --> 00:58:17.359
+vector here but actually a lot of the

00:58:15.520 --> 00:58:20.039
+time it's also a good idea to just take

00:58:17.359 --> 00:58:22.599
+the mean of the vectors or take the max

00:58:20.039 --> 00:58:26.640
+of all of the vectors

00:58:22.599 --> 00:58:29.119
+uh in fact I would almost I would almost

00:58:26.640 --> 00:58:30.520
+say that that's usually a better choice

00:58:29.119 --> 00:58:32.760
+if you're doing any sort of thing where

00:58:30.520 --> 00:58:35.359
+you need a single vector unless your

00:58:32.760 --> 00:58:38.200
+model has been specifically trained to

00:58:35.359 --> 00:58:41.480
+have good like output vectors uh from

00:58:38.200 --> 00:58:44.359
+the final vector here so um you could

00:58:41.480 --> 00:58:46.880
+also just take the the mean of all of

00:58:44.359 --> 00:58:46.880
+the purple

00:58:48.240 --> 00:58:52.960
+ones um another thing you can do is

00:58:50.280 --> 00:58:54.359
+encode tokens for sequence labeling um

00:58:52.960 --> 00:58:56.200
+this can also be used for language

00:58:54.359 --> 00:58:58.280
+modeling and what do I mean it can be

00:58:56.200 --> 00:59:00.039
+used for language

00:58:58.280 --> 00:59:03.319
+modeling

00:59:00.039 --> 00:59:06.599
+basically you can view this as first

00:59:03.319 --> 00:59:09.200
+running the sequence encoding and then

00:59:06.599 --> 00:59:12.319
+after that making all of the predictions

00:59:09.200 --> 00:59:15.240
+um it's also a good thing to know

00:59:12.319 --> 00:59:18.440
+computationally because um often you can

00:59:15.240 --> 00:59:20.720
+do sequence encoding uh kind of all in

00:59:18.440 --> 00:59:22.440
+parallel and yeah actually I said I was

00:59:20.720 --> 00:59:23.359
+going to mention I said I was going to

00:59:22.440 --> 00:59:25.079
+mention that but I don't think I

00:59:23.359 --> 00:59:27.319
+actually have a slide about it but um

00:59:25.079 --> 00:59:29.720
+one important thing about RNNs compared

00:59:27.319 --> 00:59:33.079
+to convolution or Transformers uh sorry

00:59:29.720 --> 00:59:34.839
+convolution or attention is RNNs in

00:59:33.079 --> 00:59:37.440
+order to calculate this RNN you need to

00:59:34.839 --> 00:59:39.599
+wait for this RNN to finish so it's

00:59:37.440 --> 00:59:41.200
+sequential and you need to go like here

00:59:39.599 --> 00:59:43.480
+and then here and then here and then

00:59:41.200 --> 00:59:45.720
+here and then here and that's a pretty

00:59:43.480 --> 00:59:48.200
+big bottleneck because uh things like

00:59:45.720 --> 00:59:50.760
+GPUs or TPUs they're actually really

00:59:48.200 --> 00:59:52.839
+good at doing a bunch of things at once

00:59:50.760 --> 00:59:56.440
+and so attention even though its asymp-

00:59:52.839 --> 00:59:57.400
+totic complexity is worse O of n squared uh

00:59:56.440 --> 00:59:59.319
+just because you don't have that

00:59:57.400 --> 01:00:01.680
+bottleneck of doing things sequentially

00:59:59.319 --> 01:00:03.640
+it can be way way faster on a GPU

01:00:01.680 --> 01:00:04.960
+because you're not wasting your time

01:00:03.640 --> 01:00:07.640
+waiting for the previous thing to be

01:00:04.960 --> 01:00:11.039
+calculated so that's actually why uh

01:00:07.640 --> 01:00:13.520
+Transformers are so fast

01:00:11.039 --> 01:00:14.599
+um uh Transformers and attention models

01:00:13.520 --> 01:00:17.160
+are so

01:00:14.599 --> 01:00:21.119
+fast

01:00:17.160 --> 01:00:23.079
+um another thing to note so that's one

01:00:21.119 --> 01:00:25.039
+of the big reasons why attention models

01:00:23.079 --> 01:00:27.359
+are so popular nowadays because they're fast to

01:00:25.039 --> 01:00:30.200
+calculate on modern hardware another

01:00:27.359 --> 01:00:33.520
+reason why attention models are popular

01:00:30.200 --> 01:00:34.799
+nowadays does anyone have a um does

01:00:33.520 --> 01:00:37.280
+anyone have an

01:00:34.799 --> 01:00:38.839
+idea uh about another reason it's based

01:00:37.280 --> 01:00:41.200
+on how easy they are to learn and

01:00:38.839 --> 01:00:43.680
+there's a reason why and that reason why

01:00:41.200 --> 01:00:46.240
+has to do with

01:00:43.680 --> 01:00:48.520
+um that reason why has to do with uh

01:00:46.240 --> 01:00:49.400
+something I introduced in this lecture

01:00:48.520 --> 01:00:52.039
+uh

01:00:49.400 --> 01:00:54.720
+earlier I'll give a

01:00:52.039 --> 01:00:58.079
+hint gradients yeah more more

01:00:54.720 --> 01:01:00.480
+specifically what what's nice about

01:00:58.079 --> 01:01:02.920
+attention with respect to gradients or

01:01:00.480 --> 01:01:02.920
+vanishing

01:01:04.119 --> 01:01:07.319
+gradients any

01:01:07.680 --> 01:01:15.160
+ideas let's say we have a really long

01:01:10.160 --> 01:01:17.839
+sentence it's like X1 X2 X3

01:01:15.160 --> 01:01:21.799
+X4 um

01:01:17.839 --> 01:01:26.440
+X200 over here and in order to predict

01:01:21.799 --> 01:01:26.440
+X200 you need to pay attention to X3

01:01:27.359 --> 01:01:29.640
+any

01:01:33.079 --> 01:01:37.359
+ideas another another hint how many

01:01:35.599 --> 01:01:38.960
+nonlinearities do you have to pass

01:01:37.359 --> 01:01:41.440
+through in order to pass that

01:01:38.960 --> 01:01:44.839
+information from X3 to

01:01:41.440 --> 01:01:48.839
+X200 in a recurrent network um in a

01:01:44.839 --> 01:01:48.839
+recurrent network or

01:01:51.920 --> 01:01:57.160
+attention network should be

01:01:54.960 --> 01:02:00.680
+197 yeah in a recurrent network it's

01:01:57.160 --> 01:02:03.480
+basically 197 or may maybe 196 I haven't

01:02:00.680 --> 01:02:06.319
+paid attention but every time every time

01:02:03.480 --> 01:02:08.319
+you pass it to the hidden

01:02:06.319 --> 01:02:10.200
+state it has to go through a

01:02:08.319 --> 01:02:13.240
+nonlinearity so it goes through like

01:02:10.200 --> 01:02:17.119
+197 nonlinearities and even if you're

01:02:13.240 --> 01:02:19.680
+using an LSTM um it's still the LSTM

01:02:17.119 --> 01:02:21.559
+hidden cell is getting information added

01:02:19.680 --> 01:02:23.400
+to it and subtracted from it and other

01:02:21.559 --> 01:02:24.960
+things like that so it's still a bit

01:02:23.400 --> 01:02:27.880
+tricky

01:02:24.960 --> 01:02:27.880
+um what about

01:02:28.119 --> 01:02:35.160
+attention yeah basically one time so

01:02:31.520 --> 01:02:39.319
+attention um in the next layer here

01:02:35.160 --> 01:02:41.119
+you're passing it all the way you're

01:02:39.319 --> 01:02:45.000
+passing all of the information directly

01:02:41.119 --> 01:02:46.480
+in and the only qualifying thing is that

01:02:45.000 --> 01:02:47.760
+your weight has to be good it has to

01:02:46.480 --> 01:02:49.079
+find a good attention weight so that

01:02:47.760 --> 01:02:50.920
+it's actually paying attention to that

01:02:49.079 --> 01:02:53.039
+information so this is actually

01:02:50.920 --> 01:02:54.400
+discussed in the Vaswani et al.

01:02:53.039 --> 01:02:57.359
+attention is all you need paper that

01:02:54.400 --> 01:02:59.920
+introduced Transformers um convolutions

01:02:57.359 --> 01:03:03.640
+are kind of in the middle so like let's

01:02:59.920 --> 01:03:06.400
+say you have a convolution of length 10

01:03:03.640 --> 01:03:09.880
+um and then you have two layers of it um

01:03:06.400 --> 01:03:09.880
+if you have a convolution of length

01:03:10.200 --> 01:03:15.880
+10 or yeah let's say you have a

01:03:12.559 --> 01:03:18.520
+convolution of length 10 you would need

01:03:15.880 --> 01:03:19.520
+basically you would pass from 10

01:03:18.520 --> 01:03:21.720
+previous

01:03:19.520 --> 01:03:23.319
+ones and then you would pass again from

01:03:21.720 --> 01:03:27.359
+10 previous ones and then you would have

01:03:23.319 --> 01:03:29.160
+to go through like 16 or like I guess

01:03:27.359 --> 01:03:31.279
+almost 20 layers of convolution in order

01:03:29.160 --> 01:03:34.720
+to pass that information along so it's

01:03:31.279 --> 01:03:39.200
+kind of in the middle of RNNs uh in

01:03:34.720 --> 01:03:43.480
+LSTMs uh sorry RNNs and attention

01:03:39.200 --> 01:03:47.359
+models yeah question so regarding how you

01:03:43.480 --> 01:03:51.319
+have to wait for one RNN before the next one can

01:03:47.359 --> 01:03:53.000
+you run inference on one RNN once it's done

01:03:51.319 --> 01:03:54.839
+even though the next one's computing off

01:03:53.000 --> 01:03:58.400
+that one

01:03:54.839 --> 01:04:01.160
+yes yeah you can you can do

01:03:58.400 --> 01:04:03.880
+inference you could is well so as long

01:04:01.160 --> 01:04:03.880
+as

01:04:05.599 --> 01:04:10.640
+the as long as the output doesn't affect

01:04:08.079 --> 01:04:14.000
+the next input so in this

01:04:10.640 --> 01:04:17.119
+case in this case because of language

01:04:14.000 --> 01:04:19.400
+modeling or generation is because the

01:04:17.119 --> 01:04:21.000
+output doesn't affect the ne uh because

01:04:19.400 --> 01:04:22.440
+the output affects the next input if

01:04:21.000 --> 01:04:26.680
+you're predicting the output you have to

01:04:22.440 --> 01:04:28.920
+wait if you know the output already um

01:04:26.680 --> 01:04:30.599
+if you know the output already you could

01:04:28.920 --> 01:04:33.599
+make the prediction at the same time

01:04:30.599 --> 01:04:34.799
+as calculating this next hidden state um

01:04:33.599 --> 01:04:36.200
+so if you're just calculating the

01:04:34.799 --> 01:04:38.559
+probability you could do that and that's

01:04:36.200 --> 01:04:40.880
+actually where Transformers or attention

01:04:38.559 --> 01:04:44.839
+models shine attention models actually

01:04:40.880 --> 01:04:46.000
+aren't great for generation um and the

01:04:44.839 --> 01:04:49.279
+reason why they're not great for

01:04:46.000 --> 01:04:52.279
+generation is because they're

01:04:49.279 --> 01:04:52.279
+um

01:04:52.799 --> 01:04:57.680
+like when you're you're generating the

01:04:55.039 --> 01:04:59.200
+next token you still need to wait you

01:04:57.680 --> 01:05:00.559
+can't calculate in parallel because you

01:04:59.200 --> 01:05:03.039
+need to generate the next token before

01:05:00.559 --> 01:05:04.839
+you can encode the next uh the previous

01:05:03.039 --> 01:05:07.119
+sorry need to generate the next token

01:05:04.839 --> 01:05:08.680
+before you can encode it so you can't do

01:05:07.119 --> 01:05:10.359
+everything in parallel so Transformers

01:05:08.680 --> 01:05:15.039
+for generation are actually

01:05:10.359 --> 01:05:16.559
+slow and um there are models uh I don't

01:05:15.039 --> 01:05:18.520
+know if people are using them super

01:05:16.559 --> 01:05:22.200
+widely now but there were actually

01:05:18.520 --> 01:05:23.640
+Transformer uh language models sorry

01:05:22.200 --> 01:05:26.319
+machine translation models that were in

01:05:23.640 --> 01:05:28.279
+production they had a really big strong

01:05:26.319 --> 01:05:34.359
+Transformer encoder and then they had a

01:05:28.279 --> 01:05:34.359
+tiny fast RNN decoder um

01:05:35.440 --> 01:05:40.960
+and and if you want an actual

01:05:52.000 --> 01:05:59.440
+reference there's there's

01:05:55.079 --> 01:05:59.440
+this deep encoder shallow

01:05:59.559 --> 01:06:05.520
+decoder um and then there's also the the

01:06:03.079 --> 01:06:07.599
+Marian machine translation toolkit that

01:06:05.520 --> 01:06:11.119
+supports uh supports those types of

01:06:07.599 --> 01:06:13.839
+things as well so um it's also the

01:06:11.119 --> 01:06:16.200
+reason why uh if you're using if you're

01:06:13.839 --> 01:06:18.839
+using uh like the GPT models through the

01:06:16.200 --> 01:06:21.680
+API that decoding is more expensive

01:06:18.839 --> 01:06:21.680
+right like

01:06:22.119 --> 01:06:27.960
+encoding I forget exactly is it $0.03

01:06:26.279 --> 01:06:30.839
+for 1,000 tokens for encoding and

01:06:27.960 --> 01:06:33.039
+$0.06 for 1,000 tokens for decoding

01:06:30.839 --> 01:06:34.799
+in like GPT-4 or something like this the

01:06:33.039 --> 01:06:36.839
+reason why is precisely that just

01:06:34.799 --> 01:06:37.760
+because it's so much more expensive to

01:06:36.839 --> 01:06:41.599
+to run the

01:06:37.760 --> 01:06:45.160
+decoder um cool I have a few final

01:06:41.599 --> 01:06:47.039
+things also about efficiency so um these

01:06:45.160 --> 01:06:50.720
+go back to the efficiency things that I

01:06:47.039 --> 01:06:52.279
+talked about last time um handling mini

01:06:50.720 --> 01:06:54.440
+batching so what do we have to do when

01:06:52.279 --> 01:06:56.359
+we're handling mini batching if we were

01:06:54.440 --> 01:06:59.440
+handling mini batching in feed forward
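Before the mini-batching discussion continues below, a minimal sketch of the attention mask covered a few minutes earlier, under assumed toy shapes: a large negative number is added to the scores of future positions, so after the softmax their attention values go to zero.

import torch

T = 5                        # sequence length (illustrative)
scores = torch.randn(T, T)   # raw attention scores, e.g. 2.1, 0.3, 0.5, ...

# causal mask: position i may only attend to positions <= i
mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
scores = scores.masked_fill(mask, float("-inf"))  # "cross out" the future

weights = torch.softmax(scores, dim=-1)
print(weights)  # each row sums to 1, with zeros above the diagonal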
+01:06:56.359 --> 01:07:02.880 +networks it's actually relatively easy + +01:06:59.440 --> 01:07:04.880 +um because we all of our computations + +01:07:02.880 --> 01:07:06.400 +are the same shape so we just + +01:07:04.880 --> 01:07:09.359 +concatenate them all together into a big + +01:07:06.400 --> 01:07:11.000 +tensor and run uh run over it uh we saw + +01:07:09.359 --> 01:07:12.599 +mini batching makes things much faster + +01:07:11.000 --> 01:07:15.160 +but mini batching and sequence modeling + +01:07:12.599 --> 01:07:17.240 +is harder than in feed forward networks + +01:07:15.160 --> 01:07:20.240 +um one reason is in rnns each word + +01:07:17.240 --> 01:07:22.680 +depends on the previous word um also + +01:07:20.240 --> 01:07:26.359 +because sequences are of various + +01:07:22.680 --> 01:07:30.279 +lengths so so what we do to handle this + +01:07:26.359 --> 01:07:33.480 +is uh we do padding and masking uh + +01:07:30.279 --> 01:07:35.680 +so we can do padding like this uh so we + +01:07:33.480 --> 01:07:37.279 +just add an extra token at the end to + +01:07:35.680 --> 01:07:40.440 +make all of the sequences at the same + +01:07:37.279 --> 01:07:44.480 +length um if we are doing an encoder + +01:07:40.440 --> 01:07:47.160 +decoder style model uh where we have an + +01:07:44.480 --> 01:07:48.440 +input and then we want to generate all + +01:07:47.160 --> 01:07:50.640 +the outputs based on the input one of + +01:07:48.440 --> 01:07:54.920 +the easy things is to add pads to the + +01:07:50.640 --> 01:07:56.520 +beginning um and then so yeah it doesn't + +01:07:54.920 --> 01:07:58.000 +really matter but you can add pads to + +01:07:56.520 --> 01:07:59.440 +the beginning so they're all starting at + +01:07:58.000 --> 01:08:03.079 +the same place especially if you're + +01:07:59.440 --> 01:08:05.799 +using RNN style models um then we + +01:08:03.079 --> 01:08:08.920 +calculate the loss over the output for + +01:08:05.799 --> 01:08:11.000 +example we multiply the loss by a mask + +01:08:08.920 --> 01:08:13.480 +to remove the loss over the tokens that + +01:08:11.000 --> 01:08:16.880 +we don't care about and we take the sum + +01:08:13.480 --> 01:08:19.120 +of these and so luckily most of this is + +01:08:16.880 --> 01:08:20.719 +implemented in for example ptch or + +01:08:19.120 --> 01:08:22.279 +huging face Transformers already so you + +01:08:20.719 --> 01:08:23.560 +don't need to worry about it but it is a + +01:08:22.279 --> 01:08:24.799 +good idea to know what's going on under + +01:08:23.560 --> 01:08:28.560 +the hood if you want to implement + +01:08:24.799 --> 01:08:32.440 +anything unusual and also um it's good + +01:08:28.560 --> 01:08:35.600 +to know for the following reason also + +01:08:32.440 --> 01:08:38.799 +which is bucketing and + +01:08:35.600 --> 01:08:40.319 +sorting so if we use sentences of vastly + +01:08:38.799 --> 01:08:43.359 +different lengths and we put them in the + +01:08:40.319 --> 01:08:46.640 +same mini batch this can uh waste a + +01:08:43.359 --> 01:08:48.000 +really large amount of computation so + +01:08:46.640 --> 01:08:50.759 +like let's say we're processing + +01:08:48.000 --> 01:08:52.480 +documents or movie reviews or something + +01:08:50.759 --> 01:08:54.799 +like that and you have a most movie + +01:08:52.480 --> 01:08:57.719 +reviews are like + +01:08:54.799 --> 01:09:00.080 +10 words long but you have one movie + +01:08:57.719 --> 01:09:02.319 +review in your mini batch of uh a + +01:09:00.080 --> 01:09:04.359 +thousand words so basically 
what that + +01:09:02.319 --> 01:09:08.279 +means is you're padding most of your + +01:09:04.359 --> 01:09:11.120 +sequences 990 times to process 10 + +01:09:08.279 --> 01:09:12.120 +sequences which is like a lot of waste + +01:09:11.120 --> 01:09:14.000 +right because you're running them all + +01:09:12.120 --> 01:09:16.799 +through your GPU and other things like + +01:09:14.000 --> 01:09:19.080 +that so one way to remedy this is to + +01:09:16.799 --> 01:09:22.719 +sort sentences so similarly length + +01:09:19.080 --> 01:09:27.480 +sentences are in the same batch so you + +01:09:22.719 --> 01:09:29.920 +uh you first sort before building all of + +01:09:27.480 --> 01:09:31.640 +your batches and then uh that makes it + +01:09:29.920 --> 01:09:32.960 +so that similarly sized ones are the + +01:09:31.640 --> 01:09:35.239 +same + +01:09:32.960 --> 01:09:37.040 +batch this goes into the problem that I + +01:09:35.239 --> 01:09:39.359 +mentioned before but only in passing + +01:09:37.040 --> 01:09:42.440 +which is uh let's say you're calculating + +01:09:39.359 --> 01:09:44.199 +your batch based on the number of + +01:09:42.440 --> 01:09:47.679 +sequences that you're + +01:09:44.199 --> 01:09:51.400 +processing if you say Okay I want 64 + +01:09:47.679 --> 01:09:53.359 +sequences in my mini batch um if most of + +01:09:51.400 --> 01:09:55.159 +the time those 64 sequences are are 10 + +01:09:53.359 --> 01:09:57.480 +tokens that's fine but then when you get + +01:09:55.159 --> 01:10:01.440 +the One Mini batch that has a thousand + +01:09:57.480 --> 01:10:02.760 +tokens in each sentence or each sequence + +01:10:01.440 --> 01:10:04.920 +um suddenly you're going to run out of + +01:10:02.760 --> 01:10:07.800 +GPU memory and you're like training is + +01:10:04.920 --> 01:10:08.920 +going to crash right which is you really + +01:10:07.800 --> 01:10:10.440 +don't want that to happen when you + +01:10:08.920 --> 01:10:12.440 +started running your homework assignment + +01:10:10.440 --> 01:10:15.560 +and then went to bed and then wake up + +01:10:12.440 --> 01:10:18.440 +and it crashed you know uh 15 minutes + +01:10:15.560 --> 01:10:21.040 +into Computing or something so uh this + +01:10:18.440 --> 01:10:23.440 +is an important thing to be aware of + +01:10:21.040 --> 01:10:26.760 +practically uh again this can be solved + +01:10:23.440 --> 01:10:29.239 +by a lot of toolkits like I know fer uh + +01:10:26.760 --> 01:10:30.840 +does it and hugging face does it if you + +01:10:29.239 --> 01:10:33.159 +set the appropriate settings but it's + +01:10:30.840 --> 01:10:36.239 +something you should be aware of um + +01:10:33.159 --> 01:10:37.880 +another note is that if you do this it's + +01:10:36.239 --> 01:10:41.280 +reducing the randomness in your + +01:10:37.880 --> 01:10:42.880 +distribution of data so um stochastic + +01:10:41.280 --> 01:10:44.520 +gradient descent is really heavily + +01:10:42.880 --> 01:10:47.480 +reliant on the fact that your ordering + +01:10:44.520 --> 01:10:49.440 +of data is randomized or at least it's a + +01:10:47.480 --> 01:10:52.159 +distributed appropriately so it's + +01:10:49.440 --> 01:10:56.840 +something to definitely be aware of um + +01:10:52.159 --> 01:10:59.560 +so uh this is a good thing to to think + +01:10:56.840 --> 01:11:01.400 +about another really useful thing to + +01:10:59.560 --> 01:11:03.800 +think about is strided + +01:11:01.400 --> 01:11:05.440 +architectures um strided architectures + +01:11:03.800 --> 01:11:07.520 +appear in rnns they appear in + 
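A minimal sketch of the padding, loss masking, and length-sorting ideas described in this part of the lecture; the pad id and the toy sequences are assumptions for illustration, and real pipelines usually sort the whole corpus before forming batches rather than sorting inside one batch.

import torch

PAD = 0  # hypothetical padding token id

def make_batch(seqs):
    # sort by length so similarly sized sequences share a batch (bucketing)
    seqs = sorted(seqs, key=len, reverse=True)
    T = len(seqs[0])
    batch = torch.full((len(seqs), T), PAD, dtype=torch.long)
    for i, s in enumerate(seqs):
        batch[i, :len(s)] = torch.tensor(s)
    mask = (batch != PAD).float()  # 1 for real tokens, 0 for pads
    return batch, mask

batch, mask = make_batch([[5, 6, 7], [8, 9], [3, 4, 5, 6, 7]])

# per-token losses (e.g. cross-entropy at each position), random here:
token_loss = torch.rand(batch.shape)
# multiply by the mask so padded positions contribute nothing, then sum
loss = (token_loss * mask).sum()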
+01:11:05.440 --> 01:11:10.080 +convolution they appear in trans + +01:11:07.520 --> 01:11:12.320 +Transformers or attention based models + +01:11:10.080 --> 01:11:15.199 +um they're called different things in + +01:11:12.320 --> 01:11:18.159 +each of them so in rnns they're called + +01:11:15.199 --> 01:11:21.280 +pyramidal rnns in convolution they're + +01:11:18.159 --> 01:11:22.400 +called strided architectures and in + +01:11:21.280 --> 01:11:25.080 +attention they're called sparse + +01:11:22.400 --> 01:11:27.440 +attention usually they all actually kind + +01:11:25.080 --> 01:11:30.800 +of mean the same thing um and basically + +01:11:27.440 --> 01:11:33.440 +what they mean is you don't you have a + +01:11:30.800 --> 01:11:37.040 +multi-layer model and when you have a + +01:11:33.440 --> 01:11:40.920 +multi-layer model you don't process + +01:11:37.040 --> 01:11:43.920 +every input uh from the uh from the + +01:11:40.920 --> 01:11:45.560 +previous layer so here's an example um + +01:11:43.920 --> 01:11:47.840 +like let's say you have a whole bunch of + +01:11:45.560 --> 01:11:50.199 +inputs um each of the inputs is + +01:11:47.840 --> 01:11:53.159 +processed in the first layer in some way + +01:11:50.199 --> 01:11:56.639 +but in the second layer you actually + +01:11:53.159 --> 01:12:01.520 +input for example uh two inputs to the + +01:11:56.639 --> 01:12:03.560 +RNN but you you skip so you have one + +01:12:01.520 --> 01:12:05.440 +state that corresponds to state number + +01:12:03.560 --> 01:12:06.840 +one and two another state that + +01:12:05.440 --> 01:12:08.440 +corresponds to state number two and + +01:12:06.840 --> 01:12:10.920 +three another state that corresponds to + +01:12:08.440 --> 01:12:13.280 +state number three and four so what that + +01:12:10.920 --> 01:12:15.199 +means is you can gradually decrease the + +01:12:13.280 --> 01:12:18.199 +number like the length of the sequence + +01:12:15.199 --> 01:12:20.719 +every time you process so uh this is a + +01:12:18.199 --> 01:12:22.360 +really useful thing that to do if you're + +01:12:20.719 --> 01:12:25.480 +processing very long sequences so you + +01:12:22.360 --> 01:12:25.480 +should be aware of it + +01:12:27.440 --> 01:12:34.120 +cool um everything + +01:12:30.639 --> 01:12:36.920 +okay okay the final thing is truncated + +01:12:34.120 --> 01:12:39.239 +back propagation through time and uh + +01:12:36.920 --> 01:12:41.000 +truncated back propagation Through Time + +01:12:39.239 --> 01:12:43.560 +what this is doing is basically you do + +01:12:41.000 --> 01:12:46.120 +back propop over shorter segments but + +01:12:43.560 --> 01:12:47.840 +you initialize with the state from the + +01:12:46.120 --> 01:12:51.040 +previous + +01:12:47.840 --> 01:12:52.440 +segment and the way this works is uh + +01:12:51.040 --> 01:12:56.080 +like for example if you're running an + +01:12:52.440 --> 01:12:57.600 +RNN uh you would run the RNN over the + +01:12:56.080 --> 01:12:59.400 +previous segment maybe it's length four + +01:12:57.600 --> 01:13:02.120 +maybe it's length 400 it doesn't really + +01:12:59.400 --> 01:13:04.520 +matter but it's uh coherently length + +01:13:02.120 --> 01:13:06.360 +segment and then when you do the next + +01:13:04.520 --> 01:13:08.840 +segment what you do is you only pass the + +01:13:06.360 --> 01:13:12.960 +hidden state but you throw away the rest + +01:13:08.840 --> 01:13:16.360 +of the previous computation graph and + +01:13:12.960 --> 01:13:18.040 +then walk through uh like this uh so you + 
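A minimal PyTorch sketch of truncated backpropagation through time as it is being described here, with placeholder model and data: the hidden state is carried across segments, while detach() throws away the previous segment's computation graph so gradients from the next segment do not flow back across the boundary.

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
opt = torch.optim.SGD(rnn.parameters(), lr=0.1)

data = torch.randn(1, 400, 8)   # one long sequence, processed in segments
h = torch.zeros(1, 1, 16)       # initial hidden state

for seg in data.split(100, dim=1):  # segments of length 100
    out, h = rnn(seg, h)
    loss = out.pow(2).mean()        # stand-in for a real loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    # keep the state's values but drop its history: the next segment
    # still sees this information, yet no gradient crosses the boundary
    h = h.detach()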
+01:13:16.360 --> 01:13:22.159
+won't actually be updating the

01:13:18.040 --> 01:13:24.080
+parameters of this based on the result

01:13:22.159 --> 01:13:25.800
+the loss from this but you're still

01:13:24.080 --> 01:13:28.159
+passing the information so this can use

01:13:25.800 --> 01:13:30.400
+the information for the previous state

01:13:28.159 --> 01:13:32.239
+so this is an example from RNNs this is

01:13:30.400 --> 01:13:35.159
+used pretty widely in RNNs but there's

01:13:32.239 --> 01:13:38.000
+also a lot of Transformer architectures

01:13:35.159 --> 01:13:39.400
+that do things like this um the original

01:13:38.000 --> 01:13:41.000
+one is something called Transformer-

01:13:39.400 --> 01:13:44.560
+XL that was actually created here at

01:13:41.000 --> 01:13:46.560
+CMU but this is also um used in the new

01:13:44.560 --> 01:13:48.719
+Mistral models and other things like this

01:13:46.560 --> 01:13:51.719
+as well so um it's something that's

01:13:48.719 --> 01:13:54.719
+still very much alive and well nowadays

01:13:51.719 --> 01:13:56.320
+as well

01:13:54.719 --> 01:13:57.840
+cool um that's all I have for today are

01:13:56.320 --> 01:13:59.760
+there any questions people want to ask

01:13:57.840 --> 01:14:02.760
+before we wrap

01:13:59.760 --> 01:14:02.760
+up

01:14:12.840 --> 01:14:20.000
+yeah so for conditioned

01:14:16.960 --> 01:14:25.040
+prediction what is source X and target Y

01:14:20.000 --> 01:14:26.520
+um I think I kind of maybe carried over

01:14:25.040 --> 01:14:28.679
+uh some terminology from machine

01:14:26.520 --> 01:14:31.400
+translation uh by accident maybe it

01:14:28.679 --> 01:14:34.080
+should be input X and output Y uh that

01:14:31.400 --> 01:14:36.600
+would be a better way to put it and so

01:14:34.080 --> 01:14:38.080
+uh it could be anything for translation

01:14:36.600 --> 01:14:39.560
+it's like something in the source

01:14:38.080 --> 01:14:42.600
+language and something in the target

01:14:39.560 --> 01:14:44.520
+language so like English and Japanese um

01:14:42.600 --> 01:14:47.280
+if it's just a regular language model it

01:14:44.520 --> 01:14:50.560
+could be something like a prompt and the

01:14:47.280 --> 01:14:55.280
+output so for

01:14:50.560 --> 01:14:55.280
+unconditioned prediction for example what would that be

01:14:57.400 --> 01:15:01.400
+yeah so for unconditioned prediction

01:14:59.760 --> 01:15:03.840
+that could just be straight up language

01:15:01.400 --> 01:15:07.040
+modeling for example so um language

01:15:03.840 --> 01:15:11.840
+modeling with no not necessarily any

01:15:07.040 --> 01:15:11.840
+prompts okay thanks and anything

01:15:12.440 --> 01:15:17.880
+else okay great thanks a lot I'm happy

01:15:14.639 --> 01:15:17.880
+to take questions

01:15:18.639 --> 01:15:21.639
+to diff --git a/CMU Advanced NLP 2024 (5) Transformers/CMU Advanced NLP 2024 (5) Transformers.mp4 b/CMU Advanced NLP 2024 (5) Transformers/CMU Advanced NLP 2024 (5) Transformers.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..736a404e25f63a55207365f4c8266c27a6e6a364 --- /dev/null +++ b/CMU Advanced NLP 2024 (5) Transformers/CMU Advanced NLP 2024 (5) Transformers.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb41e7836ce707570fd585416fedcc15c078c5d5ad8421e500cf3acb72d74917 +size 71702341 diff --git a/CMU Advanced NLP 2024 (5) Transformers/metadata.json b/CMU Advanced NLP 2024 (5)
Transformers/metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..d211c6002ad75fc353dedb1cd83296f6f1c7fecb --- /dev/null +++ b/CMU Advanced NLP 2024 (5) Transformers/metadata.json @@ -0,0 +1,4 @@ +{ + "url": "https://www.youtube.com/watch?v=QkGwxtALTLU", + "title": "CMU Advanced NLP 2024 (5) Transformers" +} \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (5) Transformers/transcript.srt b/CMU Advanced NLP 2024 (5) Transformers/transcript.srt new file mode 100644 index 0000000000000000000000000000000000000000..5c37c85a113909441fe2f9f433af061ae8240f79 --- /dev/null +++ b/CMU Advanced NLP 2024 (5) Transformers/transcript.srt @@ -0,0 +1,6903 @@ +1 +00:00:00,240 --> 00:00:04,680 +so this time I'm going to be talking + +2 +00:00:02,720 --> 00:00:07,839 +about Transformers kind of the backbone + +3 +00:00:04,680 --> 00:00:09,719 +of most uh implementations not only a + +4 +00:00:07,839 --> 00:00:11,920 +natural language processing but also you + +5 +00:00:09,719 --> 00:00:14,320 +know a wide variety of other things as + +6 +00:00:11,920 --> 00:00:16,800 +well and I'm going to be talking both + +7 +00:00:14,320 --> 00:00:19,560 +about Transformers as they were + +8 +00:00:16,800 --> 00:00:22,400 +currently as they were originally + +9 +00:00:19,560 --> 00:00:25,119 +conceived and implemented in 2017 and + +10 +00:00:22,400 --> 00:00:26,960 +also some modifications that people make + +11 +00:00:25,119 --> 00:00:28,840 +to Transformers today to make them work + +12 +00:00:26,960 --> 00:00:31,359 +work much better in kind of modern + +13 +00:00:28,840 --> 00:00:35,879 +language models such as so I'll talk + +14 +00:00:31,359 --> 00:00:35,879 +about both of those at the same time + +15 +00:00:36,719 --> 00:00:44,200 +please so as a quick reminder I I just + +16 +00:00:40,000 --> 00:00:47,000 +want to review the attention from last + +17 +00:00:44,200 --> 00:00:48,840 +time very quickly and basically if you + +18 +00:00:47,000 --> 00:00:51,160 +remember attention there were two + +19 +00:00:48,840 --> 00:00:55,760 +varieties of attention one was cross + +20 +00:00:51,160 --> 00:00:57,960 +attention where you attend to another + +21 +00:00:55,760 --> 00:01:00,079 +sentence basically or another sequence + +22 +00:00:57,960 --> 00:01:03,519 +so you have one sequence that serves as + +23 +00:01:00,079 --> 00:01:05,479 +your uh keys that you are attending to + +24 +00:01:03,519 --> 00:01:07,560 +and one sequence that serve as your + +25 +00:01:05,479 --> 00:01:11,200 +queries the things that you are using to + +26 +00:01:07,560 --> 00:01:13,479 +attend to the the uh sequence of keys + +27 +00:01:11,200 --> 00:01:16,680 +and so uh you can do that for you know + +28 +00:01:13,479 --> 00:01:18,360 +every element in the query Vector uh + +29 +00:01:16,680 --> 00:01:21,119 +attending to every element in the key + +30 +00:01:18,360 --> 00:01:25,479 +vector and the other alternative is self + +31 +00:01:21,119 --> 00:01:27,280 +attention where you are uh attending to + +32 +00:01:25,479 --> 00:01:29,960 +the same sequence so basically you're + +33 +00:01:27,280 --> 00:01:32,560 +guaranteed that the queries and the keys + +34 +00:01:29,960 --> 00:01:34,920 +attend uh like correspond to the same + +35 +00:01:32,560 --> 00:01:36,360 +sequence and so that's really the only + +36 +00:01:34,920 --> 00:01:38,920 +difference between self attention and + +37 +00:01:36,360 --> 00:01:42,439 +cross attention um Transformer based + +38 +00:01:38,920 --> 00:01:44,759 +models use 
either self attention or they

+39
00:01:42,439 --> 00:01:46,479
+use uh self attention and cross

+40
00:01:44,759 --> 00:01:48,680
+attention so I'll talk a little bit

+41
00:01:46,479 --> 00:01:50,920
+about two different types of Transformer

+42
00:01:48,680 --> 00:01:53,399
+based models that use both of

+43
00:01:50,920 --> 00:01:56,119
+those and the way we calculated

+44
00:01:53,399 --> 00:01:59,200
+the attention was basically uh by using the

+45
00:01:56,119 --> 00:02:00,640
+query vectors uh taking all of the key

+46
00:01:59,200 --> 00:02:03,280
+vectors

+47
00:02:00,640 --> 00:02:05,119
+and uh for each query-key pair we would

+48
00:02:03,280 --> 00:02:07,960
+calculate the weight between them like

+49
00:02:05,119 --> 00:02:09,560
+this then we would normalize it by using

+50
00:02:07,960 --> 00:02:12,440
+the softmax to make sure they all add

+51
00:02:09,560 --> 00:02:14,920
+up to one and are between zero and

+52
00:02:12,440 --> 00:02:18,080
+one and then based on that we took the

+53
00:02:14,920 --> 00:02:20,840
+value vectors and uh we multiplied in

+54
00:02:18,080 --> 00:02:24,720
+these attention weights and we got a

+55
00:02:20,840 --> 00:02:26,400
+final vector for that so a single query

+56
00:02:24,720 --> 00:02:28,400
+vector would result in a single value

+57
00:02:26,400 --> 00:02:32,040
+vector as the

+58
00:02:28,400 --> 00:02:34,840
+output so that's just the the review uh

+59
00:02:32,040 --> 00:02:37,239
+to you know get everybody uh everybody's

+60
00:02:34,840 --> 00:02:39,480
+mind working allow everybody to uh come

+61
00:02:37,239 --> 00:02:41,440
+into the room and so now I'd like to

+62
00:02:39,480 --> 00:02:44,440
+jump into the the new content of talking

+63
00:02:41,440 --> 00:02:48,000
+about how Transformers

+64
00:02:44,440 --> 00:02:50,959
+work um Transformers were proposed in the

+65
00:02:48,000 --> 00:02:52,360
+paper attention is all you need uh by

+66
00:02:50,959 --> 00:02:56,400
+Vaswani et al. in

+67
00:02:52,360 --> 00:02:58,239
+2017 um when this paper came out it was

+68
00:02:56,400 --> 00:03:00,319
+kind of already clear to me that this

+69
00:02:58,239 --> 00:03:03,440
+was going to be a big thing you know

+70
00:03:00,319 --> 00:03:05,319
+soon after the paper came out uh and it

+71
00:03:03,440 --> 00:03:08,120
+it actually has turned out to be a very

+72
00:03:05,319 --> 00:03:10,879
+big thing of course um but basically

+73
00:03:08,120 --> 00:03:12,640
+when it came out it was a sequence-to-

+74
00:03:10,879 --> 00:03:14,720
+sequence model a model that could

+75
00:03:12,640 --> 00:03:17,599
+generate sequences based entirely on

+76
00:03:14,720 --> 00:03:19,560
+attention and so this is in contrast to

+77
00:03:17,599 --> 00:03:22,640
+what came before it which is like they

+78
00:03:19,560 --> 00:03:24,879
+would have an RNN based encoder and then

+79
00:03:22,640 --> 00:03:28,080
+they would only use attention for cross

+80
00:03:24,879 --> 00:03:32,000
+attention so um all of

+81
00:03:28,080 --> 00:03:34,239
+the all of the in the

+82
00:03:32,000 --> 00:03:38,239
+encoder up until this point these would

+83
00:03:34,239 --> 00:03:40,040
+all be RNN RNN based

+84
00:03:38,239 --> 00:03:42,599
+blocks and then they would have a

+85
00:03:40,040 --> 00:03:46,799
+decoder over here maybe

+86
00:03:42,599 --> 00:03:48,239
+also maybe also consisting of uh RNN

+87
00:03:46,799 --> 00:03:50,879
+actually this could be a bidirectional

+88
00:03:48,239 --> 00:03:50,879
+RNN for

+89
00:03:51,000 --> 00:03:56,159
+example um and this could be a

+90
00:03:53,319 --> 00:03:56,159
+unidirectional

+91
00:03:57,360 --> 00:04:01,720
+RNN um and then they would only use

+92
00:03:59,760 --> 00:04:04,360
+attention for the cross attention part

+93
00:04:01,720 --> 00:04:07,760
+here so this would be

+94
00:04:04,360 --> 00:04:09,040
+attention um and so what the Transformer

+95
00:04:07,760 --> 00:04:11,319
+did is it basically said we're going to

+96
00:04:09,040 --> 00:04:12,439
+remove the RNN based sequence modeling and

+97
00:04:11,319 --> 00:04:15,280
+we're going to replace this all with

+98
00:04:12,439 --> 00:04:16,600
+self attention so um hence the name

+99
00:04:15,280 --> 00:04:18,199
+attention is all you need so they

+100
00:04:16,600 --> 00:04:21,120
+removed all of the other sequence

+101
00:04:18,199 --> 00:04:23,840
+modeling components other

+102
00:04:21,120 --> 00:04:25,240
+than um at the time the paper came out

+103
00:04:23,840 --> 00:04:26,880
+it gave strong results on machine

+104
00:04:25,240 --> 00:04:30,120
+translation and of course now it gives

+105
00:04:26,880 --> 00:04:33,080
+strong results on everything um another

+106
00:04:30,120 --> 00:04:34,960
+really important thing is it's uh fast

+107
00:04:33,080 --> 00:04:37,440
+and it only consists of matrix

+108
00:04:34,960 --> 00:04:38,880
+multiplications um and so this is really

+109
00:04:37,440 --> 00:04:42,080
+important for the same reason that I

+110
00:04:38,880 --> 00:04:43,759
+mentioned uh last class which is that

+111
00:04:42,080 --> 00:04:45,080
+RNNs are kind of bottlenecked by the

+112
00:04:43,759 --> 00:04:46,479
+fact that you have to wait for the

+113
00:04:45,080 --> 00:04:48,759
+calculation from the previous state

+114
00:04:46,479 --> 00:04:50,400
+before you can calculate the next one um

+115
00:04:48,759 --> 00:04:53,160
+Transformers you don't have to do that

+116
00:04:50,400 --> 00:04:56,240
+so it makes it um makes it much faster

+117
00:04:53,160 --> 00:04:58,479
+and actually I would argue that that's

+118
00:04:56,240 --> 00:05:00,680
+probably a bigger reason why they became

+119
00:04:58,479 --> 00:05:02,160
+very popular than uh like that

+120
00:05:00,680 --> 00:05:04,280
+Transformers are better modeling

+121
00:05:02,160 --> 00:05:06,800
+methodology or anything like that I I

+122
00:05:04,280 --> 00:05:09,280
+think it's actually mostly due to them

+123
00:05:06,800 --> 00:05:09,280
+being

+124
00:05:10,840 --> 00:05:17,000
+fast so I'm going to go through two

+125
00:05:13,400 --> 00:05:19,199
+types of Transformers um specifically

+126
00:05:17,000 --> 00:05:22,319
+encoder decoder

+127
00:05:19,199 --> 00:05:26,720
+Transformers uh and these are used in

+128
00:05:22,319 --> 00:05:30,039
+models such as um T5 and

+129
00:05:26,720 --> 00:05:32,479
+BART uh and T5 is actually still uh

+130
00:05:30,039 --> 00:05:34,919
+reasonably widely used in in some

+131
00:05:32,479 --> 00:05:36,639
+applications um and also decoder only

+132
00:05:34,919 --> 00:05:39,520
+models and these are things like GPT and

+133
00:05:36,639 --> 00:05:40,880
+LLaMA and this is used uh kind of most

+134
00:05:39,520 --> 00:05:42,479
+widely right now most of the new

+135
00:05:40,880 --> 00:05:44,039
+language models coming out are decoder

+136
00:05:42,479 --> 00:05:46,639
+only

+137
00:05:44,039 --> 00:05:49,919
+models so here are the architecture

+138
00:05:46,639 --> 00:05:51,960
+diagrams uh between them and what you

+139
00:05:49,919 --> 00:05:53,440
+can see is a decoder only model only has

+140
00:05:51,960 --> 00:05:57,080
+a

+141
00:05:53,440 --> 00:05:59,800
+single model here where the encoder

+142
00:05:57,080 --> 00:06:01,919
+decoder model has an encoder and a

+143
00:05:59,800 --> 00:06:06,440
+decoder like

+144
00:06:01,919 --> 00:06:08,080
+this so the way the blocks of the trans

+145
00:06:06,440 --> 00:06:09,960
+or the way the Transformer works is you

+146
00:06:08,080 --> 00:06:11,599
+have an input embedding you have

+147
00:06:09,960 --> 00:06:13,479
+something called positional encodings

+148
00:06:11,599 --> 00:06:16,199
+which I'll talk about a bit then you

+149
00:06:13,479 --> 00:06:17,800
+have multi-head attention blocks and

+150
00:06:16,199 --> 00:06:21,400
+these multi-head attention blocks are

+151
00:06:17,800 --> 00:06:22,560
+followed by feed forward blocks so the

+152
00:06:21,400 --> 00:06:23,800
+multi-head attention blocks are

+153
00:06:22,560 --> 00:06:25,680
+basically doing attention the feed

+154
00:06:23,800 --> 00:06:29,080
+forward blocks are basically doing uh

+155
00:06:25,680 --> 00:06:31,000
+extraction and combination of features um to

+156
00:06:29,080 --> 00:06:33,880
+kind of mix together the different

+157
00:06:31,000 --> 00:06:38,240
+features calculated by the

+158
00:06:33,880 --> 00:06:40,360
+attention and then in the in a decoder

+159
00:06:38,240 --> 00:06:42,560
+only model that's all you have and then

+160
00:06:40,360 --> 00:06:46,400
+in the encoder decoder model you also

+161
00:06:42,560 --> 00:06:49,840
+have um something like this where you

+162
00:06:46,400 --> 00:06:52,520
+have a masked multi-head attention uh to

+163
00:06:49,840 --> 00:06:55,160
+calculate kind of uh in place of the RNN

+164
00:06:52,520 --> 00:06:56,919
+here and then you have this multi-head

+165
00:06:55,160 --> 00:06:59,199
+attention here in place of the cross

+166
00:06:56,919 --> 00:07:01,319
+attention here and then you also have

+167
00:06:59,199 --> 00:07:02,720
+the feed forward network and I'm going to

+168
00:07:01,319 --> 00:07:05,000
+go through each one of these in detail

+169
00:07:02,720 --> 00:07:06,759
+but that's just kind of the general uh

+170
00:07:05,000 --> 00:07:09,080
+the general

+171
00:07:06,759 --> 00:07:11,160
+structure and so I mentioned that like

+172
00:07:09,080 --> 00:07:14,680
+encoder decoder models were widely used

+173
00:07:11,160 --> 00:07:18,319
+in T5 um this was also the original uh

+174
00:07:14,680 --> 00:07:21,039
+Transformer paper had uh had this um

+175
00:07:18,319 --> 00:07:24,240
+thing here uh this architecture here

+176
00:07:21,039 --> 00:07:29,479
+this is a little bit newer so why would

+177
00:07:24,240 --> 00:07:31,720
+you pick one or the other um T5 and BART

+178
00:07:29,479 --> 00:07:33,800
+basically uh they picked this one kind

+179
00:07:31,720 --> 00:07:37,560
+of partly out of tradition but also

+180
00:07:33,800 --> 00:07:39,400
+partly out of um uh for things where you

+181
00:07:37,560 --> 00:07:41,960
+definitely have like a clear input

+182
00:07:39,400 --> 00:07:44,280
+output structure right so it's like I

+183
00:07:41,960 --> 00:07:46,639
+want to
take in a summary or I want to + +184 +00:07:44,280 --> 00:07:48,680 +take in a document that's my input I + +185 +00:07:46,639 --> 00:07:51,360 +want to generate a summary or I want to + +186 +00:07:48,680 --> 00:07:53,440 +take an an English sentence and I want + +187 +00:07:51,360 --> 00:07:57,360 +to generate a Japanese sentence and + +188 +00:07:53,440 --> 00:07:58,560 +that's a translation um however things + +189 +00:07:57,360 --> 00:08:00,199 +get a little bit tricky when you're + +190 +00:07:58,560 --> 00:08:02,360 +talking about something like a chatot + +191 +00:08:00,199 --> 00:08:04,800 +right so if you have a chatbot what is + +192 +00:08:02,360 --> 00:08:06,479 +your input and what's your output like + +193 +00:08:04,800 --> 00:08:08,159 +one thing that could be your input is + +194 +00:08:06,479 --> 00:08:11,159 +like all of the context that you've seen + +195 +00:08:08,159 --> 00:08:12,400 +before um and then your output could be + +196 +00:08:11,159 --> 00:08:15,960 +you know the + +197 +00:08:12,400 --> 00:08:18,360 +next the next like utterance or the next + +198 +00:08:15,960 --> 00:08:19,879 +dialogue turn but on the other hand + +199 +00:08:18,360 --> 00:08:21,360 +another way you could look at it is well + +200 +00:08:19,879 --> 00:08:22,919 +it's all just part of this one big + +201 +00:08:21,360 --> 00:08:27,080 +sequence and we want to model this whole + +202 +00:08:22,919 --> 00:08:29,319 +big sequence at a time and so um because + +203 +00:08:27,080 --> 00:08:30,720 +of that decoder only models basically + +204 +00:08:29,319 --> 00:08:32,360 +Don't force you to decide what your + +205 +00:08:30,720 --> 00:08:34,399 +input and what your output is you can + +206 +00:08:32,360 --> 00:08:36,680 +just treat all of it as one long + +207 +00:08:34,399 --> 00:08:38,760 +sequence and that's a little bit more + +208 +00:08:36,680 --> 00:08:40,159 +convenient another reason why decoder + +209 +00:08:38,760 --> 00:08:42,959 +only models are a little bit more + +210 +00:08:40,159 --> 00:08:46,040 +convenient is they're simpler uh so they + +211 +00:08:42,959 --> 00:08:48,600 +just have you know these two layers and + +212 +00:08:46,040 --> 00:08:50,440 +they don't have like separate multi-head + +213 +00:08:48,600 --> 00:08:53,320 +attention blocks for the encoder and the + +214 +00:08:50,440 --> 00:08:56,440 +cross attention and uh the decoder + +215 +00:08:53,320 --> 00:08:57,920 +attention here and so because of this + +216 +00:08:56,440 --> 00:08:59,279 +because this is simpler and has fewer + +217 +00:08:57,920 --> 00:09:01,360 +parameters overall you can just make + +218 +00:08:59,279 --> 00:09:04,480 +make each layer bigger or you can make + +219 +00:09:01,360 --> 00:09:05,839 +more layers or other things like that um + +220 +00:09:04,480 --> 00:09:07,880 +actually one thing I forgot to mention + +221 +00:09:05,839 --> 00:09:09,560 +is this NX means you do this over and + +222 +00:09:07,880 --> 00:09:12,800 +over again so you have like multiple + +223 +00:09:09,560 --> 00:09:12,800 +layers of these blocks + +224 +00:09:12,880 --> 00:09:19,160 +basically cool um any any questions + +225 +00:09:17,200 --> 00:09:23,200 +here + +226 +00:09:19,160 --> 00:09:23,200 +yeah same + +227 +00:09:23,839 --> 00:09:27,839 +same so what do you mean by same size is + +228 +00:09:26,399 --> 00:09:30,480 +the first thing to ask about do you mean + +229 +00:09:27,839 --> 00:09:33,360 +same number of parameters you + +230 +00:09:30,480 --> 00:09:37,640 +me so for the same number of 
parameters + +231 +00:09:33,360 --> 00:09:40,040 +I I think it really depends um there was + +232 +00:09:37,640 --> 00:09:41,800 +a comparison in the T5 paper where they + +233 +00:09:40,040 --> 00:09:43,519 +did something like that and I think they + +234 +00:09:41,800 --> 00:09:45,279 +did demonstrate that the encoder decoder + +235 +00:09:43,519 --> 00:09:47,000 +was like slightly better but I don't + +236 +00:09:45,279 --> 00:09:49,279 +know if they exactly controlled for the + +237 +00:09:47,000 --> 00:09:51,720 +size um I have to go back and look at + +238 +00:09:49,279 --> 00:09:53,519 +that to tell you the details but the T5 + +239 +00:09:51,720 --> 00:09:56,399 +paper is actually a really really nice + +240 +00:09:53,519 --> 00:09:57,800 +paper in terms of uh how they explore + +241 +00:09:56,399 --> 00:09:59,399 +all the design dimensions and like + +242 +00:09:57,800 --> 00:10:02,399 +training objectives and stuff like that + +243 +00:09:59,399 --> 00:10:07,399 +so you could take a look at that if you + +244 +00:10:02,399 --> 00:10:09,720 +want um any other any other + +245 +00:10:07,399 --> 00:10:12,839 +questions okay so let's go into the + +246 +00:10:09,720 --> 00:10:17,240 +details so my goal of this uh by the end + +247 +00:10:12,839 --> 00:10:19,519 +is that you have a very good grasp of + +248 +00:10:17,240 --> 00:10:22,160 +you know all of the all of the basic + +249 +00:10:19,519 --> 00:10:25,640 +components that go in here and also uh + +250 +00:10:22,160 --> 00:10:28,839 +some of the parts that llama is changing + +251 +00:10:25,640 --> 00:10:31,279 +from the original architecture and how + +252 +00:10:28,839 --> 00:10:35,200 +why that's important so uh that's kind + +253 +00:10:31,279 --> 00:10:35,200 +of the main uh the main goal for + +254 +00:10:36,320 --> 00:10:42,800 +today okay so core Transformer Concepts + +255 +00:10:40,360 --> 00:10:45,639 +uh as I said positional encodings Are + +256 +00:10:42,800 --> 00:10:48,160 +One Core concept multi-headed detention + +257 +00:10:45,639 --> 00:10:49,839 +is another core concept um mask + +258 +00:10:48,160 --> 00:10:51,320 +detention is a core concept which I kind + +259 +00:10:49,839 --> 00:10:54,360 +of talked about last time but I'll I'll + +260 +00:10:51,320 --> 00:10:56,639 +talk in a little more detail um residual + +261 +00:10:54,360 --> 00:10:58,040 +layers and layer normalization and feed + +262 +00:10:56,639 --> 00:11:01,040 +the feed forward + +263 +00:10:58,040 --> 00:11:03,600 +layers + +264 +00:11:01,040 --> 00:11:06,360 +so inputs and embeddings are are kind of + +265 +00:11:03,600 --> 00:11:09,000 +boring I guess uh since we've already + +266 +00:11:06,360 --> 00:11:10,639 +covered them inputs are generally split + +267 +00:11:09,000 --> 00:11:13,160 +into subwords like this like we talked + +268 +00:11:10,639 --> 00:11:15,000 +about before embeddings normally you + +269 +00:11:13,160 --> 00:11:18,040 +just look them up like we discussed in + +270 +00:11:15,000 --> 00:11:19,880 +previous models so it Transformer based + +271 +00:11:18,040 --> 00:11:22,839 +models don't really do anything fancy + +272 +00:11:19,880 --> 00:11:25,320 +here um the only big thing I guess is + +273 +00:11:22,839 --> 00:11:28,320 +that they really when Transformer models + +274 +00:11:25,320 --> 00:11:29,880 +came out they kind of like normalized + +275 +00:11:28,320 --> 00:11:31,480 +the fact that you do subord segmentation + +276 +00:11:29,880 --> 00:11:35,360 +and like every major Transformer based + +277 +00:11:31,480 
--> 00:11:35,360 +model does subord segmentation now + +278 +00:11:35,519 --> 00:11:39,959 +um so skipping over that briefly uh the + +279 +00:11:38,880 --> 00:11:42,000 +next thing I want to talk about is + +280 +00:11:39,959 --> 00:11:43,440 +multi-head attention and this is kind of + +281 +00:11:42,000 --> 00:11:45,800 +one of the big Innovations in the + +282 +00:11:43,440 --> 00:11:49,480 +Transformer + +283 +00:11:45,800 --> 00:11:53,120 +paper so multi-head attention + +284 +00:11:49,480 --> 00:11:56,839 +um the basic intuition behind it is that + +285 +00:11:53,120 --> 00:11:58,160 +information from different parts of the + +286 +00:11:56,839 --> 00:12:01,639 +sentence or sequence that you're + +287 +00:11:58,160 --> 00:12:04,880 +modeling can be useful in different ways + +288 +00:12:01,639 --> 00:12:08,480 +and if you are just doing + +289 +00:12:04,880 --> 00:12:11,600 +attention um if you are just doing + +290 +00:12:08,480 --> 00:12:14,360 +attention with a single attention head + +291 +00:12:11,600 --> 00:12:16,480 +basically a single uh you know attention + +292 +00:12:14,360 --> 00:12:17,920 +Vector you might need to make hard + +293 +00:12:16,480 --> 00:12:21,199 +decisions about which part of the + +294 +00:12:17,920 --> 00:12:24,639 +sentence you pay attention to so um I I + +295 +00:12:21,199 --> 00:12:27,880 +wrote four examples of the word run here + +296 +00:12:24,639 --> 00:12:30,040 +um can anybody tell me how these are + +297 +00:12:27,880 --> 00:12:31,959 +different + +298 +00:12:30,040 --> 00:12:33,800 +how are how are one and two different + +299 +00:12:31,959 --> 00:12:36,600 +from three and + +300 +00:12:33,800 --> 00:12:42,040 +four yeah the first two are verbs and + +301 +00:12:36,600 --> 00:12:45,320 +the second two are nouns yeah um and so + +302 +00:12:42,040 --> 00:12:45,320 +how how is one different from + +303 +00:12:47,720 --> 00:12:52,079 +two if you know another language if you + +304 +00:12:50,480 --> 00:12:55,240 +translate them into another language are + +305 +00:12:52,079 --> 00:12:55,240 +they translated the same or + +306 +00:12:55,800 --> 00:12:59,160 +differently yeah the meaning the + +307 +00:12:57,519 --> 00:13:01,920 +meanings are different so this is Al + +308 +00:12:59,160 --> 00:13:04,639 +also called word sense um or it's called + +309 +00:13:01,920 --> 00:13:06,480 +semantics or it's called other uh other + +310 +00:13:04,639 --> 00:13:08,000 +lexical semantics or something like this + +311 +00:13:06,480 --> 00:13:10,160 +but basically the meanings are different + +312 +00:13:08,000 --> 00:13:12,079 +like if you translate these two into + +313 +00:13:10,160 --> 00:13:13,240 +probably many other languages in the + +314 +00:13:12,079 --> 00:13:15,600 +world they'd have a different + +315 +00:13:13,240 --> 00:13:17,440 +translation uh because it they mean + +316 +00:13:15,600 --> 00:13:20,160 +different things like physically uh + +317 +00:13:17,440 --> 00:13:23,040 +sorry uh run run a business versus + +318 +00:13:20,160 --> 00:13:25,160 +physically run um and same for three and + +319 +00:13:23,040 --> 00:13:28,079 +four right running a staffing is very + +320 +00:13:25,160 --> 00:13:29,680 +different than um making making it run + +321 +00:13:28,079 --> 00:13:32,600 +there + +322 +00:13:29,680 --> 00:13:33,959 +now if you look at the information you + +323 +00:13:32,600 --> 00:13:35,240 +might not even think about it but if you + +324 +00:13:33,959 --> 00:13:38,720 +look at the information you use to + +325 +00:13:35,240 --> 
00:13:41,680 +disambiguate these things it's pretty + +326 +00:13:38,720 --> 00:13:43,920 +different usually for syntactic things + +327 +00:13:41,680 --> 00:13:47,079 +you can just tell from the nearby + +328 +00:13:43,920 --> 00:13:49,160 +context so for example if you have a + +329 +00:13:47,079 --> 00:13:51,279 +noun to the left usually that means + +330 +00:13:49,160 --> 00:13:53,199 +something is going to be a verb uh on + +331 +00:13:51,279 --> 00:13:55,000 +the other hand if you have a determiner + +332 +00:13:53,199 --> 00:13:57,079 +on the left it's almost certain that + +333 +00:13:55,000 --> 00:14:00,199 +that that thing is going to be either an + +334 +00:13:57,079 --> 00:14:01,480 +uh a noun or an adjective so you only + +335 +00:14:00,199 --> 00:14:03,079 +really need to look at very local + +336 +00:14:01,480 --> 00:14:05,120 +context to do this sort of + +337 +00:14:03,079 --> 00:14:07,399 +disambiguation but in order to + +338 +00:14:05,120 --> 00:14:09,480 +disambiguate uh semantics you need to + +339 +00:14:07,399 --> 00:14:11,759 +look at farther uh + +340 +00:14:09,480 --> 00:14:13,720 +context one interesting thing is like + +341 +00:14:11,759 --> 00:14:16,880 +let's say you want to learn embeddings + +342 +00:14:13,720 --> 00:14:18,320 +of uh embeddings of words there's + +343 +00:14:16,880 --> 00:14:19,839 +actually a trick that you can use when + +344 +00:14:18,320 --> 00:14:22,040 +training word embeddings where you only + +345 +00:14:19,839 --> 00:14:23,639 +look at the local uh the local context + +346 +00:14:22,040 --> 00:14:26,120 +and you can learn syntactic embeddings + +347 +00:14:23,639 --> 00:14:27,480 +or you don't look at the local context + +348 +00:14:26,120 --> 00:14:30,160 +and you only look at the farther away + +349 +00:14:27,480 --> 00:14:33,600 +context and you can learn some Mets so + +350 +00:14:30,160 --> 00:14:35,000 +like you can actually use this to get um + +351 +00:14:33,600 --> 00:14:36,519 +like influence your models in + +352 +00:14:35,000 --> 00:14:40,920 +interesting ways but anyway that's kind + +353 +00:14:36,519 --> 00:14:42,720 +of in aide so um the the basic idea here + +354 +00:14:40,920 --> 00:14:44,360 +though is different pieces of context + +355 +00:14:42,720 --> 00:14:47,000 +can be useful for different + +356 +00:14:44,360 --> 00:14:49,199 +purposes and that's kind of what + +357 +00:14:47,000 --> 00:14:51,160 +multi-head attention is trying to uh + +358 +00:14:49,199 --> 00:14:53,279 +trying to get at so it doesn't want to + +359 +00:14:51,160 --> 00:14:55,440 +force you to decide whether to look at I + +360 +00:14:53,279 --> 00:14:57,440 +or to look at business but it wants you + +361 +00:14:55,440 --> 00:15:00,680 +to allow you to look at both of them for + +362 +00:14:57,440 --> 00:15:00,680 +different purposes + +363 +00:15:01,639 --> 00:15:07,399 +so how exactly does multi-headed + +364 +00:15:03,720 --> 00:15:11,040 +detention work I wrote the equation up + +365 +00:15:07,399 --> 00:15:15,880 +here and actually I should point out um + +366 +00:15:11,040 --> 00:15:18,279 +that the reference on the web page for + +367 +00:15:15,880 --> 00:15:21,199 +the annotated Transformer is really nice + +368 +00:15:18,279 --> 00:15:22,920 +like I uh I got some of the equations + +369 +00:15:21,199 --> 00:15:24,800 +directly from that and you can look + +370 +00:15:22,920 --> 00:15:27,360 +through and see pytorch code for all of + +371 +00:15:24,800 --> 00:15:29,279 +these things too uh which can be helpful + +372 +00:15:27,360 
--> 00:15:31,480 +so um + +373 +00:15:29,279 --> 00:15:33,120 +anyway uh we have the multi-headed + +374 +00:15:31,480 --> 00:15:36,160 +attention and it looks like this I'm + +375 +00:15:33,120 --> 00:15:37,519 +going to walk through the uh the diagram + +376 +00:15:36,160 --> 00:15:39,360 +that I have down here though because it + +377 +00:15:37,519 --> 00:15:42,560 +might be a little bit easier to + +378 +00:15:39,360 --> 00:15:45,440 +follow so the this diagram is a little + +379 +00:15:42,560 --> 00:15:47,839 +bit different than what is presented in + +380 +00:15:45,440 --> 00:15:49,639 +the attention is all you need paper but + +381 +00:15:47,839 --> 00:15:51,199 +I intentionally made the diagram closer + +382 +00:15:49,639 --> 00:15:54,279 +to what you how you actually want to + +383 +00:15:51,199 --> 00:15:58,319 +implement it in pytorch uh for example + +384 +00:15:54,279 --> 00:16:00,079 +so um the first thing that you do is you + +385 +00:15:58,319 --> 00:16:01,720 +have a a whole bunch of query vectors + +386 +00:16:00,079 --> 00:16:05,440 +and a whole bunch of key + +387 +00:16:01,720 --> 00:16:07,000 +vectors um so the query vectors here I + +388 +00:16:05,440 --> 00:16:09,240 +only have three of them the key vectors + +389 +00:16:07,000 --> 00:16:12,839 +and value vectors I have four that's + +390 +00:16:09,240 --> 00:16:15,920 +kind of intentional um so this this + +391 +00:16:12,839 --> 00:16:17,399 +would be this is permissible uh you must + +392 +00:16:15,920 --> 00:16:19,279 +have the same number of key vectors and + +393 +00:16:17,399 --> 00:16:20,759 +value vectors uh but you can have a + +394 +00:16:19,279 --> 00:16:22,959 +different number of query vectors if you + +395 +00:16:20,759 --> 00:16:27,519 +want + +396 +00:16:22,959 --> 00:16:29,079 +um so is that + +397 +00:16:27,519 --> 00:16:30,880 +clear + +398 +00:16:29,079 --> 00:16:33,160 +in which case can you have a different + +399 +00:16:30,880 --> 00:16:36,759 +number of query vectors and key vectors + +400 +00:16:33,160 --> 00:16:39,839 +yeah when it's Mas when it's masked you + +401 +00:16:36,759 --> 00:16:44,720 +could do that um yeah that that is true + +402 +00:16:39,839 --> 00:16:44,720 +um I I was thinking something else + +403 +00:16:49,560 --> 00:16:53,240 +yeah when you're decoding and you have a + +404 +00:16:51,759 --> 00:16:56,360 +short sequence and you're attending to a + +405 +00:16:53,240 --> 00:17:00,680 +longer sequence yeah um that's basically + +406 +00:16:56,360 --> 00:17:03,079 +it um you can have this when it's cross + +407 +00:17:00,680 --> 00:17:04,400 +attention um because in Cross attention + +408 +00:17:03,079 --> 00:17:05,959 +the sequence that you're attending to + +409 +00:17:04,400 --> 00:17:08,480 +can be different than the sequence that + +410 +00:17:05,959 --> 00:17:10,079 +you're using to attend if you're doing + +411 +00:17:08,480 --> 00:17:12,120 +self attention these need to be the same + +412 +00:17:10,079 --> 00:17:13,600 +one because the sequences are the same + +413 +00:17:12,120 --> 00:17:16,079 +so the length of the sequence will also + +414 +00:17:13,600 --> 00:17:19,839 +be the same so + +415 +00:17:16,079 --> 00:17:22,360 +um so yeah that that's one thing + +416 +00:17:19,839 --> 00:17:24,760 +uh the reason why I made these different + +417 +00:17:22,360 --> 00:17:26,600 +just to demonstrate that + +418 +00:17:24,760 --> 00:17:28,799 +they + +419 +00:17:26,600 --> 00:17:32,080 +so the first thing that you do is you + +420 +00:17:28,799 --> 00:17:35,200 +multiply 
by weights, and you have three
+
+421
+00:17:32,080 --> 00:17:36,720
+different weight matrices. The first,
+
+422
+00:17:35,200 --> 00:17:38,360
+or actually you have four different
+
+423
+00:17:36,720 --> 00:17:40,240
+weight matrices overall, but here we're
+
+424
+00:17:38,360 --> 00:17:42,240
+going to use three of them. The first one
+
+425
+00:17:40,240 --> 00:17:46,039
+is the query matrix, so you multiply this
+
+426
+00:17:42,240 --> 00:17:47,600
+input by the query matrix. Then you
+
+427
+00:17:46,039 --> 00:17:49,760
+have your key matrix, you multiply the
+
+428
+00:17:47,600 --> 00:17:51,919
+input by the key weight matrix, and
+
+429
+00:17:49,760 --> 00:17:54,120
+then you have the value matrix
+
+430
+00:17:51,919 --> 00:17:57,400
+that you multiply by the weights here,
+
+431
+00:17:54,120 --> 00:18:01,520
+and that's what we have up here in the
+
+432
+00:17:57,400 --> 00:18:05,720
+equation. Then the next thing that we
+
+433
+00:18:01,520 --> 00:18:07,400
+do is we split and rearrange these into
+
+434
+00:18:05,720 --> 00:18:11,919
+n attention
+
+435
+00:18:07,400 --> 00:18:15,200
+inputs. And so the way we do this is we
+
+436
+00:18:11,919 --> 00:18:18,120
+split these up like this: so we have 1,
+
+437
+00:18:15,200 --> 00:18:19,840
+2, 3, 4, and we split them up into two
+
+438
+00:18:18,120 --> 00:18:21,760
+of size two. So this is the case
+
+439
+00:18:19,840 --> 00:18:25,000
+where you have two attention
+
+440
+00:18:21,760 --> 00:18:27,840
+heads, and each attention head has a
+
+441
+00:18:25,000 --> 00:18:29,760
+vector of size two. In reality, usually
+
+442
+00:18:27,840 --> 00:18:31,280
+your vector will be of size 512 or
+
+443
+00:18:29,760 --> 00:18:33,919
+1024, and then you'll have eight
+
+444
+00:18:31,280 --> 00:18:37,080
+attention heads or something like this,
+
+445
+00:18:33,919 --> 00:18:40,120
+or, you know, much more if you have
+
+446
+00:18:37,080 --> 00:18:42,240
+a larger model. But here I'm just
+
+447
+00:18:40,120 --> 00:18:44,720
+doing a simple example for illustrative
+
+448
+00:18:42,240 --> 00:18:47,080
+purposes, and we do this over all of
+
+449
+00:18:44,720 --> 00:18:48,720
+these. Note that this is a little
+
+450
+00:18:47,080 --> 00:18:52,159
+bit different than the equation that you
+
+451
+00:18:48,720 --> 00:18:54,080
+have here. So in the equation that you
+
+452
+00:18:52,159 --> 00:18:59,400
+have here, you're splitting up first and
+
+453
+00:18:54,080 --> 00:19:00,840
+then doing the matrix multiply,
+
+454
+00:18:59,400 --> 00:19:03,360
+so you would be doing the matrix
+
+455
+00:19:00,840 --> 00:19:04,440
+multiply of this matrix, resulting in
+
+456
+00:19:03,360 --> 00:19:06,520
+this, then you would do the matrix
+
+457
+00:19:04,440 --> 00:19:08,360
+multiply resulting in this. But in
+
+458
+00:19:06,520 --> 00:19:09,720
+reality we do the big matrix multiply
+
+459
+00:19:08,360 --> 00:19:11,679
+all at once, just because it's more
+
+460
+00:19:09,720 --> 00:19:13,640
+efficient to do it that way, because
+
+461
+00:19:11,679 --> 00:19:16,360
+we want to do a few big operations rather than
+
+462
+00:19:13,640 --> 00:19:19,280
+a bunch of operations separately. So
+
+463
+00:19:16,360 --> 00:19:23,120
+this diagram here is closer to what
+
+464
+00:19:19,280 --> 00:19:23,120
+you actually do in PyTorch, for
+
+465
+00:19:25,080 --> 00:19:29,400
+example. So this is now a
+
+466
+00:19:27,280 --> 00:19:33,480
+three-dimensional
+
+467
+00:19:29,400 --> 00:19:35,200
+tensor after the split.
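+
+(For concreteness, here is a minimal PyTorch sketch of the step just
+described: project with one big matrix multiply per weight matrix, then
+split and rearrange the result into heads. The dimensions are toy values
+chosen to mirror the diagram, not the lecture's exact code.)
+
+import torch
+
+seq_len, d_model, n_heads = 4, 8, 2
+d_head = d_model // n_heads
+
+x = torch.randn(seq_len, d_model)    # input vectors
+W_q = torch.randn(d_model, d_model)  # query weight matrix
+W_k = torch.randn(d_model, d_model)  # key weight matrix
+W_v = torch.randn(d_model, d_model)  # value weight matrix
+
+# one big matrix multiply per projection, done all at once
+q, k, v = x @ W_q, x @ W_k, x @ W_v  # each (seq_len, d_model)
+
+# then split and rearrange into n_heads vectors of size d_head
+q = q.view(seq_len, n_heads, d_head).transpose(0, 1)  # (n_heads, seq_len, d_head)
+k = k.view(seq_len, n_heads, d_head).transpose(0, 1)
+v = v.view(seq_len, n_heads, d_head).transpose(0, 1)
+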
+468
+00:19:33,480 --> 00:19:40,919
+So at this point you would have a
+
+469
+00:19:35,200 --> 00:19:43,840
+three-dimensional tensor where we have two rows and three
+
+470
+00:19:40,919 --> 00:19:46,400
+columns, and the third dimension is
+
+471
+00:19:43,840 --> 00:19:49,080
+two, so you can see it's kind of
+
+472
+00:19:46,400 --> 00:19:50,480
+three-dimensional here. And that's also
+
+473
+00:19:49,080 --> 00:19:52,919
+good, because in the next step we're
+
+474
+00:19:50,480 --> 00:19:54,960
+going to run attention over each head,
+
+475
+00:19:52,919 --> 00:19:56,760
+and when we run attention over each head,
+
+476
+00:19:54,960 --> 00:19:59,000
+if we run attention over
+
+477
+00:19:56,760 --> 00:20:00,360
+three-dimensional tensors at once, that's a
+
+478
+00:19:59,000 --> 00:20:02,200
+lot more efficient than writing a for
+
+479
+00:20:00,360 --> 00:20:06,640
+loop and doing it individually over each
+
+480
+00:20:02,200 --> 00:20:08,280
+of these split-up things here. So
+
+481
+00:20:06,640 --> 00:20:11,919
+that's the next thing
+
+482
+00:20:08,280 --> 00:20:14,559
+we do. And when we run attention, we
+
+483
+00:20:11,919 --> 00:20:16,480
+basically calculate the attention vector
+
+484
+00:20:14,559 --> 00:20:20,640
+using the query and the
+
+485
+00:20:16,480 --> 00:20:22,919
+key, and then we multiply the value
+
+486
+00:20:20,640 --> 00:20:25,080
+vectors by that attention vector, take
+
+487
+00:20:22,919 --> 00:20:27,799
+the weighted sum via the attention vector,
+
+488
+00:20:25,080 --> 00:20:32,320
+and that gives us a result that looks
+
+489
+00:20:27,799 --> 00:20:35,880
+like this, basically. So of course the
+
+490
+00:20:32,320 --> 00:20:38,799
+number of columns in this will be equal
+
+491
+00:20:35,880 --> 00:20:40,919
+to the number of columns in the query
+
+492
+00:20:38,799 --> 00:20:43,000
+here, because we calculate
+
+493
+00:20:40,919 --> 00:20:45,480
+one representation for each thing in the
+
+494
+00:20:43,000 --> 00:20:48,039
+query
+
+495
+00:20:45,480 --> 00:20:49,919
+matrix. And then we
+
+496
+00:20:48,039 --> 00:20:52,520
+concatenate them
+
+497
+00:20:49,919 --> 00:20:54,799
+together, and when we concatenate them
+
+498
+00:20:52,520 --> 00:20:58,440
+together we get a
+
+499
+00:20:54,799 --> 00:21:01,559
+bigger vector here. And so when we do
+
+500
+00:20:58,440 --> 00:21:03,159
+this, each one will get a
+
+501
+00:21:01,559 --> 00:21:05,320
+different attention weight, so we have a
+
+502
+00:21:03,159 --> 00:21:06,400
+different attention weighting over
+
+503
+00:21:05,320 --> 00:21:09,240
+all of
+
+504
+00:21:06,400 --> 00:21:12,120
+them. This is what the code looks like,
+
+505
+00:21:09,240 --> 00:21:14,480
+I basically put it up here. But basically
+
+506
+00:21:12,120 --> 00:21:18,480
+you do linear projections for all of
+
+507
+00:21:14,480 --> 00:21:21,240
+these, we reshape to get H heads, we
+
+508
+00:21:18,480 --> 00:21:24,159
+apply attention to all of the heads, and
+
+509
+00:21:21,240 --> 00:21:26,080
+then we concatenate them back all
+
+510
+00:21:24,159 --> 00:21:29,799
+together, and then we apply a final
+
+511
+00:21:26,080 --> 00:21:34,520
+linear layer. So we have a final
+
+512
+00:21:29,799 --> 00:21:34,520
+matrix multiplication at the very end
+
+513
+00:21:35,159 --> 00:21:41,880
+here. And so I didn't really explicitly
+
+514
+00:21:38,200 --> 00:21:46,400
+expand the attention vectors in the previous diagram.
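+
+(Continuing the sketch from above, this is roughly what that code does:
+attention is applied to all heads at once as one batched operation, the
+heads are concatenated back together, and a final linear layer is applied.
+Again a toy sketch with made-up dimensions, not the lecture's exact code.)
+
+import torch
+
+n_heads, seq_len, d_head = 2, 4, 4
+d_model = n_heads * d_head
+q = torch.randn(n_heads, 3, d_head)  # three query vectors, as in the diagram
+k = torch.randn(n_heads, seq_len, d_head)
+v = torch.randn(n_heads, seq_len, d_head)
+W_o = torch.randn(d_model, d_model)  # the final output weight matrix
+
+# attention weights from the queries and keys, batched over heads
+scores = q @ k.transpose(-2, -1) / d_head ** 0.5  # (n_heads, 3, seq_len)
+weights = torch.softmax(scores, dim=-1)
+
+# weighted sum of the value vectors: one representation per query
+out = weights @ v  # (n_heads, 3, d_head)
+
+# concatenate the heads back together, then the final matrix multiply
+out = out.transpose(0, 1).reshape(3, d_model) @ W_o  # (3, d_model)
+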
+515
+00:21:41,880 --> 00:21:48,440
+But I have them here, so this is an
+
+516
+00:21:46,400 --> 00:21:50,760
+example from the Vaswani et al. paper,
+
+517
+00:21:48,440 --> 00:21:52,039
+and they're showing what happens when
+
+518
+00:21:50,760 --> 00:21:53,440
+you calculate self
+
+519
+00:21:52,039 --> 00:21:56,320
+attention.
+
+520
+00:21:53,440 --> 00:21:58,880
+Um, and this is the self
+
+521
+00:21:56,320 --> 00:22:01,799
+attention values for the word
+
+522
+00:21:58,880 --> 00:22:04,720
+"making", and the self attention values for
+
+523
+00:22:01,799 --> 00:22:08,400
+the word "making" are mostly attending to
+
+524
+00:22:04,720 --> 00:22:09,919
+"more difficult", and that
+
+525
+00:22:08,400 --> 00:22:12,679
+really closely matches with what I
+
+526
+00:22:09,919 --> 00:22:16,360
+talked about before, right?
+
+527
+00:22:12,679 --> 00:22:19,000
+So "run" in English is kind of a verb
+
+528
+00:22:16,360 --> 00:22:21,159
+with lots of ambiguity; like, how you
+
+529
+00:22:19,000 --> 00:22:23,640
+translate the verb, how you translate the
+
+530
+00:22:21,159 --> 00:22:26,559
+word "run", would be very different based
+
+531
+00:22:23,640 --> 00:22:29,279
+on, you know, the other words in the
+
+532
+00:22:26,559 --> 00:22:31,320
+sentence. "Make" is also a word with lots
+
+533
+00:22:29,279 --> 00:22:33,640
+of ambiguity, so in order to understand
+
+534
+00:22:31,320 --> 00:22:35,240
+how you would translate it, you would
+
+535
+00:22:33,640 --> 00:22:37,640
+need to pull in information from other
+
+536
+00:22:35,240 --> 00:22:39,200
+parts of the sentence. And specifically,
+
+537
+00:22:37,640 --> 00:22:40,960
+making something more difficult is
+
+538
+00:22:39,200 --> 00:22:44,080
+different than, like, making a cake or
+
+539
+00:22:40,960 --> 00:22:45,919
+making a house or something like that.
+
+540
+00:22:44,080 --> 00:22:48,480
+And so because of that, it's pulling
+
+541
+00:22:45,919 --> 00:22:50,200
+in lots of information from over here.
+
+542
+00:22:48,480 --> 00:22:53,120
+But there are some attention heads that
+
+543
+00:22:50,200 --> 00:22:54,640
+are attending to the word itself,
+
+544
+00:22:53,120 --> 00:22:56,480
+so this is pulling in information from
+
+545
+00:22:54,640 --> 00:22:58,480
+the word itself. There's also another
+
+546
+00:22:56,480 --> 00:23:00,400
+attention head that's pulling in
+
+547
+00:22:58,480 --> 00:23:02,520
+information from the previous
+
+548
+00:23:00,400 --> 00:23:04,440
+word, so this could be one that's doing
+
+549
+00:23:02,520 --> 00:23:07,200
+syntactic disambiguation of some
+
+550
+00:23:04,440 --> 00:23:08,799
+variety. So you can see that each head is
+
+551
+00:23:07,200 --> 00:23:10,200
+pulling in different varieties of
+
+552
+00:23:08,799 --> 00:23:13,360
+information here, which is kind of the
+
+553
+00:23:10,200 --> 00:23:13,360
+function of
+
+554
+00:23:15,679 --> 00:23:23,600
+multi-head attention. So, any questions? Yeah?
+
+555
+00:23:26,880 --> 00:23:30,640
+So: what happens if you have multi-head
+
+556
+00:23:29,200 --> 00:23:31,919
+attention and the sentence is shorter
+
+557
+00:23:30,640 --> 00:23:33,840
+than the number of heads? That's a
+
+558
+00:23:31,919 --> 00:23:37,000
+good question. Um, it's actually not a
+
+559
+00:23:33,840 --> 00:23:39,600
+problem at all, because here let's
+
+560
+00:23:37,000 --> 00:23:41,360
+look at the length of
+
+561
+00:23:39,600 --> 00:23:44,240
+the sentence. The length of 
the sentence here + +562 +00:23:41,360 --> 00:23:47,760 +would be three uh this this number of + +563 +00:23:44,240 --> 00:23:49,640 +columns is the length of the sentence um + +564 +00:23:47,760 --> 00:23:52,720 +here the length of the sentence would be + +565 +00:23:49,640 --> 00:23:55,200 +four but we're not splitting on the + +566 +00:23:52,720 --> 00:23:57,080 +columns we're splitting on the rows so + +567 +00:23:55,200 --> 00:23:58,559 +you need to make sure that the rows are + +568 +00:23:57,080 --> 00:23:59,840 +greater than the number heads and you + +569 +00:23:58,559 --> 00:24:02,039 +always do that because you pick a + +570 +00:23:59,840 --> 00:24:04,679 +representation size of something like + +571 +00:24:02,039 --> 00:24:06,200 +512 and then you pick the number of + +572 +00:24:04,679 --> 00:24:08,520 +heads to be equal to something like + +573 +00:24:06,200 --> 00:24:10,559 +eight so you're sure that it's always + +574 +00:24:08,520 --> 00:24:12,120 +divisible it's always larger there's + +575 +00:24:10,559 --> 00:24:13,840 +actually something crazy called fine + +576 +00:24:12,120 --> 00:24:15,919 +grain detention that was proposed like + +577 +00:24:13,840 --> 00:24:17,799 +right after attention was composed where + +578 +00:24:15,919 --> 00:24:21,240 +you made the number of heads equal to + +579 +00:24:17,799 --> 00:24:23,640 +the number of uh of representations but + +580 +00:24:21,240 --> 00:24:27,000 +people stopped doing this just because + +581 +00:24:23,640 --> 00:24:29,200 +it's uh like it's Overkill you don't + +582 +00:24:27,000 --> 00:24:30,720 +need that many attention heads and + +583 +00:24:29,200 --> 00:24:32,039 +actually in the Transformer paper they + +584 +00:24:30,720 --> 00:24:33,720 +experiment with different numbers of + +585 +00:24:32,039 --> 00:24:37,080 +attention heads and found eight was like + +586 +00:24:33,720 --> 00:24:38,880 +sufficient for their their purposes yeah + +587 +00:24:37,080 --> 00:24:41,240 +attention in the original paper is not + +588 +00:24:38,880 --> 00:24:44,120 +causal right so it can like look into + +589 +00:24:41,240 --> 00:24:47,520 +future tokens as + +590 +00:24:44,120 --> 00:24:49,640 +well attention in the original attention + +591 +00:24:47,520 --> 00:24:50,679 +paper from like 2014 where they first + +592 +00:24:49,640 --> 00:24:54,880 +proposed + +593 +00:24:50,679 --> 00:24:58,080 +attention um in the original paper in + +594 +00:24:54,880 --> 00:25:01,279 +2014 where they first proposed attention + +595 +00:24:58,080 --> 00:25:03,279 +they were doing exclusively cross + +596 +00:25:01,279 --> 00:25:06,559 +attention like this so they were + +597 +00:25:03,279 --> 00:25:08,559 +attending to um like they encoded + +598 +00:25:06,559 --> 00:25:11,080 +everything with bidirectional rnns and + +599 +00:25:08,559 --> 00:25:13,760 +then they were just attending to things + +600 +00:25:11,080 --> 00:25:16,120 +into input not like doing causal + +601 +00:25:13,760 --> 00:25:18,200 +attention um causal attention was + +602 +00:25:16,120 --> 00:25:20,159 +basically causal attention is like left + +603 +00:25:18,200 --> 00:25:22,559 +to right or mask attention there's like + +604 +00:25:20,159 --> 00:25:24,960 +different ways of of saying it but M + +605 +00:25:22,559 --> 00:25:29,679 +attention was first proposed in the + +606 +00:25:24,960 --> 00:25:29,679 +Transformer paper and it was um + +607 +00:25:29,720 --> 00:25:36,320 +uh it it basically was only in the + +608 +00:25:31,960 --> 00:25:39,559 +output also um and 
then actually the the + +609 +00:25:36,320 --> 00:25:41,640 +first um decoder only models the first + +610 +00:25:39,559 --> 00:25:45,720 +decoder only model was basically like + +611 +00:25:41,640 --> 00:25:48,720 +gpt1 uh like the first GPT model uh and + +612 +00:25:45,720 --> 00:25:52,240 +there they did causal or mass detention + +613 +00:25:48,720 --> 00:25:56,399 +just on out side or just uh modeling + +614 +00:25:52,240 --> 00:26:00,000 +sequences yeah um so in the input to the + +615 +00:25:56,399 --> 00:26:01,760 +GPD when we say that in the making + +616 +00:26:00,000 --> 00:26:05,080 +example when we sort of looked at the + +617 +00:26:01,760 --> 00:26:07,960 +self form in the decoder only model + +618 +00:26:05,080 --> 00:26:09,760 +would making not be able to attemp to + +619 +00:26:07,960 --> 00:26:11,840 +Future tokens that comes + +620 +00:26:09,760 --> 00:26:13,399 +after uh that's a good question and + +621 +00:26:11,840 --> 00:26:17,520 +basically the answer is + +622 +00:26:13,399 --> 00:26:20,240 +yes um so encoder that is one argument + +623 +00:26:17,520 --> 00:26:21,559 +for why encoder decoder models might be + +624 +00:26:20,240 --> 00:26:23,520 +more useful because you can do + +625 +00:26:21,559 --> 00:26:26,159 +bidirectional attention on the + +626 +00:26:23,520 --> 00:26:29,440 +inputs um + +627 +00:26:26,159 --> 00:26:30,799 +and there's also so there's actually + +628 +00:26:29,440 --> 00:26:32,919 +something right in the middle it's not + +629 +00:26:30,799 --> 00:26:35,799 +used super widely + +630 +00:26:32,919 --> 00:26:38,360 +nowadays um but + +631 +00:26:35,799 --> 00:26:40,760 +basically it's something called a prefix + +632 +00:26:38,360 --> 00:26:40,760 +language + +633 +00:26:43,520 --> 00:26:47,559 +model um in a prefix language model is + +634 +00:26:46,240 --> 00:26:50,880 +something where you only have the + +635 +00:26:47,559 --> 00:26:52,679 +parameters of a decoder but you allow it + +636 +00:26:50,880 --> 00:26:55,880 +during training you allow it to do + +637 +00:26:52,679 --> 00:26:57,600 +either masked or unmasked detention so + +638 +00:26:55,880 --> 00:26:59,960 +you only do mask detention when you're + +639 +00:26:57,600 --> 00:27:02,360 +generating but you also like do unmasked + +640 +00:26:59,960 --> 00:27:07,120 +attention also so it's just a way to + +641 +00:27:02,360 --> 00:27:08,799 +train the model um it's it's a small + +642 +00:27:07,120 --> 00:27:10,799 +modification to how you train the model + +643 +00:27:08,799 --> 00:27:12,440 +but uh some papers have said that's more + +644 +00:27:10,799 --> 00:27:14,720 +effective but I guess it's like more + +645 +00:27:12,440 --> 00:27:16,559 +complicated and I don't see it used + +646 +00:27:14,720 --> 00:27:19,559 +super widely right + +647 +00:27:16,559 --> 00:27:19,559 +now + +648 +00:27:22,919 --> 00:27:29,520 +yeah uh the multi + +649 +00:27:26,520 --> 00:27:29,520 +yeah + +650 +00:27:33,679 --> 00:27:37,080 +the number of rows is the dimension of + +651 +00:27:50,000 --> 00:27:56,519 +the yeah + +652 +00:27:52,600 --> 00:27:58,159 +so the reason why we don't split on the + +653 +00:27:56,519 --> 00:28:01,279 +rows + +654 +00:27:58,159 --> 00:28:03,159 +so if we go back to the the reason why + +655 +00:28:01,279 --> 00:28:05,120 +attention is so powerful in the first + +656 +00:28:03,159 --> 00:28:07,399 +place the reason why attention is so + +657 +00:28:05,120 --> 00:28:10,320 +powerful in the first place is we're + +658 +00:28:07,399 --> 00:28:12,679 +applying the exact 
same function no + +659 +00:28:10,320 --> 00:28:15,159 +matter how long the length is so that + +660 +00:28:12,679 --> 00:28:18,559 +allows us to extrapolate essentially to + +661 +00:28:15,159 --> 00:28:21,080 +like infinite like as long as we want or + +662 +00:28:18,559 --> 00:28:23,080 +short as we want sentences if we were + +663 +00:28:21,080 --> 00:28:24,760 +doing things like splitting on the + +664 +00:28:23,080 --> 00:28:27,640 +length of the sentence and we would run + +665 +00:28:24,760 --> 00:28:30,159 +into the problem where the like we had + +666 +00:28:27,640 --> 00:28:31,600 +question about before which is like what + +667 +00:28:30,159 --> 00:28:34,799 +if the number of attention heads is + +668 +00:28:31,600 --> 00:28:36,159 +shorter than the sequence link so you + +669 +00:28:34,799 --> 00:28:37,440 +could you could come up with a model + +670 +00:28:36,159 --> 00:28:38,880 +that did something like that you could + +671 +00:28:37,440 --> 00:28:41,279 +come up with a model that said okay I'm + +672 +00:28:38,880 --> 00:28:42,679 +going to split the first quarter of the + +673 +00:28:41,279 --> 00:28:43,919 +sentence and then the next quarter of + +674 +00:28:42,679 --> 00:28:45,519 +the sentence and the next quar of the + +675 +00:28:43,919 --> 00:28:46,880 +sentence that there actually were models + +676 +00:28:45,519 --> 00:28:50,000 +like that back in the day like where + +677 +00:28:46,880 --> 00:28:51,880 +people encoded different like quartiles + +678 +00:28:50,000 --> 00:28:53,760 +of the sentence separately but it + +679 +00:28:51,880 --> 00:28:55,360 +becomes a little bit tricky and like + +680 +00:28:53,760 --> 00:28:56,840 +what if it's shorter and stuff like that + +681 +00:28:55,360 --> 00:28:59,399 +so you need to deal with all these scor + +682 +00:28:56,840 --> 00:28:59,399 +cases + +683 +00:28:59,440 --> 00:29:06,200 +yeah cool um okay any any other things + +684 +00:29:03,679 --> 00:29:10,519 +these are all good questions + +685 +00:29:06,200 --> 00:29:10,519 +so okay I'll move on to the next + +686 +00:29:14,279 --> 00:29:22,080 +one okay so positional inputting um so + +687 +00:29:18,000 --> 00:29:25,919 +this is another really core part of + +688 +00:29:22,080 --> 00:29:25,919 +the Transformer + +689 +00:29:26,320 --> 00:29:29,320 +model + +690 +00:29:30,440 --> 00:29:36,200 +and the positional encoding uh goes in + +691 +00:29:33,679 --> 00:29:37,880 +here it's added together with the input + +692 +00:29:36,200 --> 00:29:40,440 +embedding + +693 +00:29:37,880 --> 00:29:42,279 +and because the Transformer model is + +694 +00:29:40,440 --> 00:29:45,519 +purely + +695 +00:29:42,279 --> 00:29:47,000 +attentional if embeddings only are used + +696 +00:29:45,519 --> 00:29:50,559 +there actually would be no way to + +697 +00:29:47,000 --> 00:29:52,240 +distinguish between identical words so + +698 +00:29:50,559 --> 00:29:55,519 +because + +699 +00:29:52,240 --> 00:29:58,519 +you're just taking the input embedding + +700 +00:29:55,519 --> 00:30:00,840 +if you had a big dog and a big cat the + +701 +00:29:58,519 --> 00:30:02,320 +attention values from every other place + +702 +00:30:00,840 --> 00:30:04,519 +in the sentence would be guaranteed to + +703 +00:30:02,320 --> 00:30:07,519 +be the same for big right so you would + +704 +00:30:04,519 --> 00:30:09,000 +always have the same attention value for + +705 +00:30:07,519 --> 00:30:11,480 +these words because their vectors are + +706 +00:30:09,000 --> 00:30:13,919 +identical and that's a problem I guess + 
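+
+(To make the problem concrete, here is a small PyTorch sketch, with toy
+vectors that are not from the lecture, showing that without positional
+information, pure self-attention produces identical outputs for repeated
+words like the two occurrences of "big":)
+
+import torch
+
+torch.manual_seed(0)
+d = 8
+emb = {w: torch.randn(d) for w in ["a", "big", "dog", "cat"]}
+
+# "a big dog a big cat", with no positional information added
+x = torch.stack([emb[w] for w in "a big dog a big cat".split()])  # (6, d)
+
+# single-head self-attention with no position signal
+attn = torch.softmax(x @ x.T / d ** 0.5, dim=-1)  # (6, 6)
+out = attn @ x                                    # (6, d)
+
+# the two occurrences of "big" (positions 1 and 4) are indistinguishable
+print(torch.allclose(out[1], out[4]))  # True
+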
+707
+00:30:11,480 --> 00:30:17,519
+It's a problem because, as I said, sometimes
+
+708
+00:30:13,919 --> 00:30:19,440
+syntactic information needs
+
+709
+00:30:17,519 --> 00:30:22,320
+to be pulled in from locally
+
+710
+00:30:19,440 --> 00:30:24,200
+coherent contexts. So there are a
+
+711
+00:30:22,320 --> 00:30:25,640
+couple ways you can fix this. The first
+
+712
+00:30:24,200 --> 00:30:28,000
+way you can fix this is to use something
+
+713
+00:30:25,640 --> 00:30:30,960
+that is sensitive to position, like an
+
+714
+00:30:28,000 --> 00:30:32,720
+RNN. An RNN, you know, looks at
+
+715
+00:30:30,960 --> 00:30:34,640
+which words came before, which words came
+
+716
+00:30:32,720 --> 00:30:36,360
+after, and stuff like that, so that would
+
+717
+00:30:34,640 --> 00:30:38,600
+solve your problem. But the whole point
+
+718
+00:30:36,360 --> 00:30:41,679
+of Transformers, or "attention is all you
+
+719
+00:30:38,600 --> 00:30:43,840
+need", is to not use RNNs, so we need
+
+720
+00:30:41,679 --> 00:30:47,399
+another way to fix this
+
+721
+00:30:43,840 --> 00:30:49,360
+problem. So the way this is fixed is
+
+722
+00:30:47,399 --> 00:30:51,760
+using something called positional
+
+723
+00:30:49,360 --> 00:30:53,399
+encodings. And so positional encodings
+
+724
+00:30:51,760 --> 00:30:56,679
+add another embedding that's based on
+
+725
+00:30:53,399 --> 00:30:57,840
+the word position. So in addition to
+
+726
+00:30:56,679 --> 00:30:59,799
+having something that's based on the
+
+727
+00:30:57,840 --> 00:31:02,080
+word identity, you have another embedding
+
+728
+00:30:59,799 --> 00:31:04,880
+that's based on the position. So then the
+
+729
+00:31:02,080 --> 00:31:07,639
+word "big" that appears in position two
+
+730
+00:31:04,880 --> 00:31:10,000
+would be the embedding of "big" plus the
+
+731
+00:31:07,639 --> 00:31:12,880
+embedding of position two, and the word
+
+732
+00:31:10,000 --> 00:31:14,120
+"big" that appears over here would be
+
+733
+00:31:12,880 --> 00:31:16,080
+the embedding of "big" and then the
+
+734
+00:31:14,120 --> 00:31:18,840
+embedding of position eight, for example.
+
+735
+00:31:16,080 --> 00:31:20,440
+So that kind of solves
+
+736
+00:31:18,840 --> 00:31:22,399
+that
+
+737
+00:31:20,440 --> 00:31:24,559
+problem. There's a number of different
+
+738
+00:31:22,399 --> 00:31:26,000
+ways to make these. The original
+
+739
+00:31:24,559 --> 00:31:28,480
+Transformer
+
+740
+00:31:26,000 --> 00:31:31,440
+paper did it using something called
+
+741
+00:31:28,480 --> 00:31:32,320
+sinusoidal encodings. This is kind of
+
+742
+00:31:31,440 --> 00:31:35,519
+one of
+
+743
+00:31:32,320 --> 00:31:37,120
+the... I don't know. When this paper came
+
+744
+00:31:35,519 --> 00:31:39,600
+out and we were first reading it, when it
+
+745
+00:31:37,120 --> 00:31:40,760
+first came out, it was like: they
+
+746
+00:31:39,600 --> 00:31:42,320
+explained what they did, and they
+
+747
+00:31:40,760 --> 00:31:43,880
+explained very briefly why they did it,
+
+748
+00:31:42,320 --> 00:31:46,639
+but it was kind of a mystery; like,
+
+749
+00:31:43,880 --> 00:31:48,000
+nobody actually understood what
+
+750
+00:31:46,639 --> 00:31:49,399
+they wrote in the paper. Luckily, now
+
+751
+00:31:48,000 --> 00:31:52,799
+there's a lot of nice blogs that
+
+752
+00:31:49,399 --> 00:31:56,360
+actually explain this. And so the way
+
+753
+00:31:52,799 --> 00:31:58,159
+these work, essentially, is you have
+
+754
+00:31:56,360 --> 00:32:01,760
+a sine
+
+755
+00:31:58,159 --> 00:32:05,960
+like this, and the sine is of this
+
+756
+00:32:01,760 --> 00:32:08,919
+weight times the time step in the
+
+757
+00:32:05,960 --> 00:32:12,159
+output. Every even-numbered
+
+758
+00:32:08,919 --> 00:32:15,600
+embedding uses a sine, every odd-
+
+759
+00:32:12,159 --> 00:32:20,679
+numbered embedding uses the cosine, and
+
+760
+00:32:15,600 --> 00:32:25,440
+this omega over here is 10,000 to the 2k
+
+761
+00:32:20,679 --> 00:32:28,399
+divided by d. So that's the value,
+
+762
+00:32:25,440 --> 00:32:31,240
+and then this is the dimension size.
+
+763
+00:32:28,399 --> 00:32:36,000
+And what these embeddings look like is
+
+764
+00:32:31,240 --> 00:32:38,159
+something like this. So, sorry,
+
+765
+00:32:36,000 --> 00:32:40,039
+this is very small, and also I should
+
+766
+00:32:38,159 --> 00:32:43,440
+acknowledge that this comes from this
+
+767
+00:32:40,039 --> 00:32:46,279
+blog up here. But this is the position
+
+768
+00:32:43,440 --> 00:32:49,200
+in the sentence, and then this is the
+
+769
+00:32:46,279 --> 00:32:52,760
+embedding size, I
+
+770
+00:32:49,200 --> 00:32:54,919
+believe. Um, so why did they
+
+771
+00:32:52,760 --> 00:32:57,760
+choose to do it this way? They chose to
+
+772
+00:32:54,919 --> 00:33:00,240
+do it this way because if you multiply
+
+773
+00:32:57,760 --> 00:33:03,000
+these positional encodings together, you
+
+774
+00:33:00,240 --> 00:33:05,760
+get something that looks a bit like this.
+
+775
+00:33:03,000 --> 00:33:07,960
+And so if you multiply the two vectors
+
+776
+00:33:05,760 --> 00:33:10,440
+together, you get something where, if
+
+777
+00:33:07,960 --> 00:33:13,000
+you're closer together in position space,
+
+778
+00:33:10,440 --> 00:33:15,480
+you get a higher number. And so that
+
+779
+00:33:13,000 --> 00:33:18,480
+kind of gives you a bias toward upping the
+
+780
+00:33:15,480 --> 00:33:20,840
+attention values of things that are
+
+781
+00:33:18,480 --> 00:33:23,320
+closer together, at least right at the
+
+782
+00:33:20,840 --> 00:33:24,799
+very beginning layers of the
+
+783
+00:33:23,320 --> 00:33:27,960
+model, where it's kind of more important
+
+784
+00:33:24,799 --> 00:33:30,000
+because you don't have it
+
+785
+00:33:27,960 --> 00:33:31,600
+calculated from the
+
+786
+00:33:30,000 --> 00:33:34,440
+previous layers.
+
+787
+00:33:31,600 --> 00:33:36,000
+So this is the basic idea. I
+
+788
+00:33:34,440 --> 00:33:37,960
+think the thing on the right is the most
+
+789
+00:33:36,000 --> 00:33:39,320
+important thing to know here, which is,
+
+790
+00:33:37,960 --> 00:33:41,840
+like, this is the reason why they chose
+
+791
+00:33:39,320 --> 00:33:45,360
+to do it that way. But that's the
+
+792
+00:33:41,840 --> 00:33:49,440
+basic idea. Yeah?
+
+793
+00:33:45,360 --> 00:33:52,240
+[Student question, partially inaudible:
+
+794
+00:33:49,440 --> 00:33:52,240
+something about combining the positional
+
+795
+00:33:53,840 --> 00:33:58,960
+encodings — why do we need to multiply?]
+
+796
+00:34:02,120 --> 00:34:06,720
+Um, so sorry, which part were you talking
+
+797
+00:34:19,720 --> 00:34:24,440
+about? Yeah, so I'm going to talk about that
+
+798
+00:34:22,280 --> 00:34:28,879
+in a second,
+
+799
+00:34:24,440 --> 00:34:28,879
+actually. Um, any other questions?
+
+800
+00:34:30,520 --> 00:34:35,520
+Okay, so this is what is done. Note
+
+801
+00:34:33,879 --> 00:34:38,200
+that these are added right at the very beginning.
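+
+(For concreteness, a minimal sketch of sinusoidal position encodings in
+PyTorch, following the standard formulation the lecture describes: sine on
+even dimensions, cosine on odd dimensions, with frequencies 1/10000^(2k/d).
+This is an illustration, not the lecture's exact code.)
+
+import torch
+
+def sinusoidal_encoding(max_len: int, d_model: int) -> torch.Tensor:
+    pos = torch.arange(max_len).unsqueeze(1)    # positions t: (max_len, 1)
+    two_k = torch.arange(0, d_model, 2)         # even dimension indices 2k
+    omega = 1.0 / (10000 ** (two_k / d_model))  # frequencies 1/10000^(2k/d)
+    pe = torch.zeros(max_len, d_model)
+    pe[:, 0::2] = torch.sin(pos * omega)        # even dims use sine
+    pe[:, 1::2] = torch.cos(pos * omega)        # odd dims use cosine
+    return pe
+
+# added to the word embeddings at the input, e.g.:
+# x = token_embeddings + sinusoidal_encoding(seq_len, d_model)
+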
+802
+00:34:35,520 --> 00:34:41,000
+And then you pass this through
+
+803
+00:34:38,200 --> 00:34:43,359
+every layer. But basically, at the very
+
+804
+00:34:41,000 --> 00:34:45,919
+first layer,
+
+805
+00:34:43,359 --> 00:34:48,119
+by using these positional encodings, you
+
+806
+00:34:45,919 --> 00:34:50,399
+can kind of
+
+807
+00:34:48,119 --> 00:34:52,560
+disambiguate this
+
+808
+00:34:50,399 --> 00:34:54,760
+case here. And then after you've passed it
+
+809
+00:34:52,560 --> 00:34:56,520
+through the first layer, you're combining
+
+810
+00:34:54,760 --> 00:34:58,440
+together information anyway, so you can
+
+811
+00:34:56,520 --> 00:35:03,480
+pull in information about the local
+
+812
+00:34:58,440 --> 00:35:06,960
+context, and now it's essentially
+
+813
+00:35:03,480 --> 00:35:08,400
+already handled for
+
+814
+00:35:06,960 --> 00:35:11,040
+you. Another thing is, if you have
+
+815
+00:35:08,400 --> 00:35:12,480
+residual connections, this gets passed
+
+816
+00:35:11,040 --> 00:35:14,440
+into the following layers, which I'll
+
+817
+00:35:12,480 --> 00:35:17,440
+talk about in a second.
+
+818
+00:35:14,440 --> 00:35:17,440
+So,
+
+819
+00:35:17,640 --> 00:35:23,240
+the second thing that you could do is
+
+820
+00:35:19,839 --> 00:35:27,960
+learned encodings. And learned encodings,
+
+821
+00:35:23,240 --> 00:35:30,880
+basically, what they do is they create
+
+822
+00:35:27,960 --> 00:35:36,160
+a learnable embedding that you just
+
+823
+00:35:30,880 --> 00:35:37,800
+add in. And this is super simple; so
+
+824
+00:35:36,160 --> 00:35:40,200
+it's, like, just
+
+825
+00:35:37,800 --> 00:35:42,720
+like you learned the embedding
+
+826
+00:35:40,200 --> 00:35:45,320
+for the word "big", you learn the embedding for position
+
+827
+00:35:42,720 --> 00:35:48,200
+two or, uh, position
+
+828
+00:35:45,320 --> 00:35:49,640
+six. And this is simpler: you
+
+829
+00:35:48,200 --> 00:35:51,800
+don't need to think about sines and
+
+830
+00:35:49,640 --> 00:35:53,520
+cosines and stuff like that. It's
+
+831
+00:35:51,800 --> 00:35:55,400
+also more flexible, because the model can
+
+832
+00:35:53,520 --> 00:35:56,960
+learn anything it needs to, you know,
+
+833
+00:35:55,400 --> 00:35:59,640
+learn in order to do a good job of
+
+834
+00:35:56,960 --> 00:36:01,520
+minimizing the loss. But the
+
+835
+00:35:59,640 --> 00:36:03,760
+biggest disadvantage is it makes it
+
+836
+00:36:01,520 --> 00:36:06,480
+impossible to extrapolate to longer
+
+837
+00:36:03,760 --> 00:36:08,599
+sequences than you saw at training time.
+
+838
+00:36:06,480 --> 00:36:12,880
+So because you have no learned
+
+839
+00:36:08,599 --> 00:36:16,079
+embeddings for longer sequences, it's
+
+840
+00:36:12,880 --> 00:36:18,400
+just, at least in principle,
+
+841
+00:36:16,079 --> 00:36:20,119
+impossible to extrapolate to longer
+
+842
+00:36:18,400 --> 00:36:21,960
+sequences, unless you do some sort of
+
+843
+00:36:20,119 --> 00:36:22,960
+heuristics; and if you do heuristics,
+
+844
+00:36:21,960 --> 00:36:25,400
+you don't really know what's going to
+
+845
+00:36:22,960 --> 00:36:27,640
+happen. So that's the disadvantage to
+
+846
+00:36:25,400 --> 00:36:29,680
+doing it this way.
+
+847
+00:36:27,640 --> 00:36:31,319
+In contrast, you know, with this you just
+
+848
+00:36:29,680 --> 00:36:33,160
+make K larger and calculate this
+
+849
+00:36:31,319 --> 00:36:35,400
+deterministic function, and you could, you
+
+850
+00:36:33,160 --> 00:36:38,160 +know theoretically extrapolate but + +851 +00:36:35,400 --> 00:36:39,760 +empirically models even ones that use + +852 +00:36:38,160 --> 00:36:41,359 +this sort of extrapolatable embedding + +853 +00:36:39,760 --> 00:36:43,720 +don't do super well at extrapolating to + +854 +00:36:41,359 --> 00:36:43,720 +longer + +855 +00:36:45,960 --> 00:36:50,960 +sequences um so going back to the + +856 +00:36:49,040 --> 00:36:52,920 +question um there's a distinction + +857 +00:36:50,960 --> 00:36:56,040 +between absolute versus relative + +858 +00:36:52,920 --> 00:36:57,680 +positional encodings and absolute + +859 +00:36:56,040 --> 00:37:00,040 +positional encoding in are like what I + +860 +00:36:57,680 --> 00:37:05,200 +said before they're basically positional + +861 +00:37:00,040 --> 00:37:08,240 +encodings where you add in a um you + +862 +00:37:05,200 --> 00:37:10,800 +specifically add in an encoding at each + +863 +00:37:08,240 --> 00:37:14,440 +position but you don't consider whether + +864 +00:37:10,800 --> 00:37:16,480 +like one query Vector is close to a key + +865 +00:37:14,440 --> 00:37:18,079 +vector or far away from a key vector or + +866 +00:37:16,480 --> 00:37:19,400 +anything like that you don't consider + +867 +00:37:18,079 --> 00:37:21,240 +that + +868 +00:37:19,400 --> 00:37:24,280 +directly + +869 +00:37:21,240 --> 00:37:27,920 +um on the other hand relative positional + +870 +00:37:24,280 --> 00:37:31,359 +encodings explicitly encode the relative + +871 +00:37:27,920 --> 00:37:32,240 +position and so what this means is when + +872 +00:37:31,359 --> 00:37:35,599 +you do + +873 +00:37:32,240 --> 00:37:38,599 +attention um when you do attention it + +874 +00:37:35,599 --> 00:37:40,760 +explicitly thinks about whether you know + +875 +00:37:38,599 --> 00:37:43,640 +a particular embedding is not in + +876 +00:37:40,760 --> 00:37:46,640 +position eight but whether the key the + +877 +00:37:43,640 --> 00:37:49,359 +query sorry whether the key embedding is + +878 +00:37:46,640 --> 00:37:51,319 +like minus5 from the query embedding or + +879 +00:37:49,359 --> 00:37:54,119 +minus8 from the query + +880 +00:37:51,319 --> 00:37:56,280 +embeded and the first paper that did + +881 +00:37:54,119 --> 00:37:57,680 +this they just learned relative uh + +882 +00:37:56,280 --> 00:38:01,760 +position + +883 +00:37:57,680 --> 00:38:01,760 +encodings and they learned it + +884 +00:38:02,520 --> 00:38:09,119 +um if if I remember correctly they + +885 +00:38:05,920 --> 00:38:09,119 +basically learned a + +886 +00:38:09,160 --> 00:38:15,200 +scalar where you're centered at zero and + +887 +00:38:12,800 --> 00:38:19,640 +then you have like + +888 +00:38:15,200 --> 00:38:23,359 +um you have like minus and uh you have + +889 +00:38:19,640 --> 00:38:26,640 +minus and plus uh a certain distance and + +890 +00:38:23,359 --> 00:38:28,760 +then you also um cut this off it might + +891 +00:38:26,640 --> 00:38:33,040 +like minus uh + +892 +00:38:28,760 --> 00:38:35,200 +128 and plus 128 or something like this + +893 +00:38:33,040 --> 00:38:36,599 +uh so you have a fixed length vector and + +894 +00:38:35,200 --> 00:38:38,920 +anything that's farther away from that + +895 +00:38:36,599 --> 00:38:43,240 +gets the same uh embedding + +896 +00:38:38,920 --> 00:38:45,800 +basically um the problem with this is uh + +897 +00:38:43,240 --> 00:38:47,800 +number one it adds learnable parameters + +898 +00:38:45,800 --> 00:38:50,119 +number two it's a little bit more + +899 +00:38:47,800 --> 
00:38:53,520 +computationally uh expensive to apply + +900 +00:38:50,119 --> 00:38:55,119 +this every time uh onto your attention + +901 +00:38:53,520 --> 00:38:57,520 +Matrix + +902 +00:38:55,119 --> 00:38:59,079 +and uh because you need to apply this at + +903 +00:38:57,520 --> 00:39:01,720 +every layer you need to apply this every + +904 +00:38:59,079 --> 00:39:04,760 +time you do attention at every + +905 +00:39:01,720 --> 00:39:07,560 +layer so instead there was a really + +906 +00:39:04,760 --> 00:39:10,000 +clever idea uh called rotary positional + +907 +00:39:07,560 --> 00:39:11,960 +encodings and rotary positional + +908 +00:39:10,000 --> 00:39:14,400 +encodings are + +909 +00:39:11,960 --> 00:39:19,280 +basically uh kind of like an absolute + +910 +00:39:14,400 --> 00:39:22,079 +positional encoding um with the a lot of + +911 +00:39:19,280 --> 00:39:23,760 +the desirable qualities of relative + +912 +00:39:22,079 --> 00:39:26,720 +positional in + +913 +00:39:23,760 --> 00:39:30,160 +codings and their basic idea was this so + +914 +00:39:26,720 --> 00:39:32,920 +their basic idea was that they wanted to + +915 +00:39:30,160 --> 00:39:36,440 +um come up with something where you have + +916 +00:39:32,920 --> 00:39:39,480 +an embedding encoding Vector that takes + +917 +00:39:36,440 --> 00:39:42,599 +in the actual vector and the + +918 +00:39:39,480 --> 00:39:44,079 +position and you have another encoding + +919 +00:39:42,599 --> 00:39:46,079 +Vector where you Tak in the absolute + +920 +00:39:44,079 --> 00:39:50,760 +vector and the + +921 +00:39:46,079 --> 00:39:52,119 +position and the product of these two + +922 +00:39:50,760 --> 00:39:54,960 +becomes + +923 +00:39:52,119 --> 00:39:57,359 +another uh another function that is a + +924 +00:39:54,960 --> 00:40:00,720 +function only of + +925 +00:39:57,359 --> 00:40:03,400 +the two vectors and the relative + +926 +00:40:00,720 --> 00:40:05,599 +position and so you lose all information + +927 +00:40:03,400 --> 00:40:07,640 +about the absolute position um you only + +928 +00:40:05,599 --> 00:40:09,440 +have information about the relative + +929 +00:40:07,640 --> 00:40:12,000 +position + +930 +00:40:09,440 --> 00:40:15,079 +and this is trickier than it seems + +931 +00:40:12,000 --> 00:40:16,920 +basically because like you need to you + +932 +00:40:15,079 --> 00:40:20,040 +need to have something where it's not + +933 +00:40:16,920 --> 00:40:22,359 +possible to uh to recover the absolute + +934 +00:40:20,040 --> 00:40:24,480 +position because you want to you want it + +935 +00:40:22,359 --> 00:40:25,680 +to rely only on the relative position + +936 +00:40:24,480 --> 00:40:27,359 +because that will allow it to + +937 +00:40:25,680 --> 00:40:30,280 +extrapolate that will allow it to + +938 +00:40:27,359 --> 00:40:33,040 +generalize well when you see new um see + +939 +00:40:30,280 --> 00:40:35,119 +new outputs so basically what they do is + +940 +00:40:33,040 --> 00:40:37,839 +they do a lot of math uh that I'm not + +941 +00:40:35,119 --> 00:40:42,680 +going to uh cover in a lot of detail + +942 +00:40:37,839 --> 00:40:44,960 +here but by using uh imaginary numbers + +943 +00:40:42,680 --> 00:40:48,319 +uh trigonometry and imaginary numbers + +944 +00:40:44,960 --> 00:40:50,640 +you can essentially come up with a uh a + +945 +00:40:48,319 --> 00:40:54,800 +thing where you have the query vectors + +946 +00:40:50,640 --> 00:40:54,800 +and the key vectors I + +947 +00:40:55,319 --> 00:40:58,599 +believe I + +948 +00:40:59,079 --> 00:41:04,720 
+think I might have that backwards I'll + +949 +00:41:02,000 --> 00:41:07,040 +I'll I'll have to check that um but + +950 +00:41:04,720 --> 00:41:11,400 +basically you take the vectors one of + +951 +00:41:07,040 --> 00:41:13,920 +the vectors and you add in um the cosine + +952 +00:41:11,400 --> 00:41:16,319 +M Theta one where Theta is a parameter + +953 +00:41:13,920 --> 00:41:19,920 +similar to The Omega parameter that we + +954 +00:41:16,319 --> 00:41:22,440 +had before um m is the uh the time step + +955 +00:41:19,920 --> 00:41:26,400 +the position in the sequence and then + +956 +00:41:22,440 --> 00:41:27,760 +you on the other side you have S and M + +957 +00:41:26,400 --> 00:41:33,599 +Theta + +958 +00:41:27,760 --> 00:41:37,760 +one over here and then you swap around + +959 +00:41:33,599 --> 00:41:41,000 +the order like minus uh and you invert + +960 +00:41:37,760 --> 00:41:43,880 +the score here uh sorry invert the + +961 +00:41:41,000 --> 00:41:45,520 +embedding here and if you do this you + +962 +00:41:43,880 --> 00:41:49,079 +can prove that essentially you get a + +963 +00:41:45,520 --> 00:41:52,040 +function that has this property and so + +964 +00:41:49,079 --> 00:41:53,720 +by doing this you're only adding you're + +965 +00:41:52,040 --> 00:41:54,960 +you're modifying these directly but + +966 +00:41:53,720 --> 00:41:56,920 +you're getting some of the nice + +967 +00:41:54,960 --> 00:41:58,680 +properties of positional encoding + +968 +00:41:56,920 --> 00:42:00,520 +and you're also kind of guaranteed that + +969 +00:41:58,680 --> 00:42:02,960 +this will extrapolate infinitely because + +970 +00:42:00,520 --> 00:42:06,200 +you're removing all information about + +971 +00:42:02,960 --> 00:42:07,680 +the not entirely infinitely but you're + +972 +00:42:06,200 --> 00:42:09,160 +guaranteed that this will extrapolate + +973 +00:42:07,680 --> 00:42:11,000 +well because you're removing all of the + +974 +00:42:09,160 --> 00:42:13,560 +information about the absolute position + +975 +00:42:11,000 --> 00:42:16,520 +here um so this is what's actually used + +976 +00:42:13,560 --> 00:42:19,119 +in llama um so this is what's used in + +977 +00:42:16,520 --> 00:42:22,920 +llama and uh it has a good positive + +978 +00:42:19,119 --> 00:42:22,920 +effect on fting models in vares + +979 +00:42:24,559 --> 00:42:31,319 +yeah know + +980 +00:42:27,599 --> 00:42:35,520 +the sentence EMB look very + +981 +00:42:31,319 --> 00:42:37,359 +similar this more senstive to tokens at + +982 +00:42:35,520 --> 00:42:39,640 +the beginning of + +983 +00:42:37,359 --> 00:42:40,839 +the I don't know if it's more sensitive + +984 +00:42:39,640 --> 00:42:42,319 +to tokens at the beginning of the + +985 +00:42:40,839 --> 00:42:45,160 +sentence and the end of the sentence + +986 +00:42:42,319 --> 00:42:48,160 +partly because the the earlier ones + +987 +00:42:45,160 --> 00:42:49,720 +don't look like I mean the ones at the + +988 +00:42:48,160 --> 00:42:52,359 +beginning also look similar right like + +989 +00:42:49,720 --> 00:42:55,839 +all of these values are the + +990 +00:42:52,359 --> 00:42:58,599 +same uh like all of the values up here + +991 +00:42:55,839 --> 00:42:59,640 +are the same between one and two right + +992 +00:42:58,599 --> 00:43:01,119 +so the ones at the beginning of the + +993 +00:42:59,640 --> 00:43:02,400 +sentence also look similar the ones at + +994 +00:43:01,119 --> 00:43:05,440 +the end of the sentence also look + +995 +00:43:02,400 --> 00:43:08,720 +similar um if you have something at the 
+ +996 +00:43:05,440 --> 00:43:08,720 +pages and at + +997 +00:43:10,240 --> 00:43:16,440 +one not really because the the things at + +998 +00:43:13,119 --> 00:43:19,119 +the beginning look different right oh + +999 +00:43:16,440 --> 00:43:19,119 +probably + +1000 +00:43:19,720 --> 00:43:25,319 +reading yeah so this is the + +1001 +00:43:22,800 --> 00:43:26,920 +position yeah yeah okay yeah sorry this + +1002 +00:43:25,319 --> 00:43:29,000 +is very small because they just grabbed + +1003 +00:43:26,920 --> 00:43:31,200 +it from this uh this blog post here but + +1004 +00:43:29,000 --> 00:43:35,720 +yeah this is the position and then this + +1005 +00:43:31,200 --> 00:43:35,720 +is the the embedding size + +1006 +00:43:38,960 --> 00:43:46,680 +yeah okay um yeah but this is really + +1007 +00:43:42,480 --> 00:43:49,280 +really important um this uh kind + +1008 +00:43:46,680 --> 00:43:50,760 +of change of the positional encodings + +1009 +00:43:49,280 --> 00:43:54,079 +and I'll talk about a little bit about + +1010 +00:43:50,760 --> 00:43:57,280 +that at the very end yeah does Ro not + +1011 +00:43:54,079 --> 00:44:00,839 +take any um sort of like the maximum + +1012 +00:43:57,280 --> 00:44:04,599 +context l or anything like that rope um + +1013 +00:44:00,839 --> 00:44:07,200 +so this does not have a maximum context + +1014 +00:44:04,599 --> 00:44:09,880 +length this actually also doesn't have a + +1015 +00:44:07,200 --> 00:44:11,520 +maximum context length but rope + +1016 +00:44:09,880 --> 00:44:15,720 +extrapolates better because you + +1017 +00:44:11,520 --> 00:44:18,119 +basically in rope you entirely lose + +1018 +00:44:15,720 --> 00:44:20,880 +information about where you are in the + +1019 +00:44:18,119 --> 00:44:22,440 +um in the sequence whereas the absolute + +1020 +00:44:20,880 --> 00:44:23,680 +positional encodings you still get the + +1021 +00:44:22,440 --> 00:44:25,359 +information about where you are at the + +1022 +00:44:23,680 --> 00:44:27,559 +sequence so the model can overfit to it + +1023 +00:44:25,359 --> 00:44:29,040 +better professor in this example when we + +1024 +00:44:27,559 --> 00:44:30,960 +want to sort of like pick this to a + +1025 +00:44:29,040 --> 00:44:36,119 +longer sequence we have to modify the K + +1026 +00:44:30,960 --> 00:44:38,920 +to get it to you do you do not oh we + +1027 +00:44:36,119 --> 00:44:40,680 +don't yeah so this K is the size this is + +1028 +00:44:38,920 --> 00:44:43,480 +the size of the + +1029 +00:44:40,680 --> 00:44:46,640 +embedding okay so you can you can + +1030 +00:44:43,480 --> 00:44:49,599 +extrapolate by just increasing + +1031 +00:44:46,640 --> 00:44:51,520 +ke Beyond like even if you've never seen + +1032 +00:44:49,599 --> 00:44:53,680 +something about t if you increase T you + +1033 +00:44:51,520 --> 00:44:55,079 +can still calculate this theoretically + +1034 +00:44:53,680 --> 00:44:56,960 +which is not the case for the Learned + +1035 +00:44:55,079 --> 00:44:59,880 +coding so learned coding you you don't + +1036 +00:44:56,960 --> 00:44:59,880 +have any information + +1037 +00:45:03,280 --> 00:45:08,559 +about Okay cool so this is uh this is an + +1038 +00:45:07,040 --> 00:45:11,040 +important thing to know you'll also have + +1039 +00:45:08,559 --> 00:45:14,119 +to um implement this for the assignment + +1040 +00:45:11,040 --> 00:45:17,520 +I believe so good thing to pay attention + +1041 +00:45:14,119 --> 00:45:19,640 +to um okay next is layer normalization + +1042 +00:45:17,520 --> 00:45:21,319 +and residual connections so 
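+
+(A minimal PyTorch sketch of the rotary idea just described; the
+half-split pairing and the function name are one common convention,
+not necessarily the exact llama implementation.)
+
+import torch
+
+def rope(x, base=10000.0):
+    # x: (seq_len, dim), dim even; each feature pair (x1_i, x2_i) is
+    # rotated by an angle m * theta_i, where m is the position
+    seq_len, dim = x.shape
+    half = dim // 2
+    theta = base ** (-torch.arange(half, dtype=torch.float32) / half)
+    m = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
+    ang = m * theta                       # (seq_len, half), m * theta_i
+    x1, x2 = x[:, :half], x[:, half:]
+    # cos(m theta) times the vector, plus the swapped-and-negated half
+    # times sin(m theta): a 2D rotation of each pair
+    rx1 = x1 * torch.cos(ang) - x2 * torch.sin(ang)
+    rx2 = x2 * torch.cos(ang) + x1 * torch.sin(ang)
+    return torch.cat([rx1, rx2], dim=-1)
+
+# applied to queries and keys before the dot product, so the score
+# between positions m and n depends only on the offset m - n
+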
layer + +1043 +00:45:19,640 --> 00:45:24,000 +normalization and residual connections + +1044 +00:45:21,319 --> 00:45:26,839 +are important for stabilizing um + +1045 +00:45:24,000 --> 00:45:29,960 +stabilizing training in + +1046 +00:45:26,839 --> 00:45:31,720 +Transformers and I talked before about + +1047 +00:45:29,960 --> 00:45:33,200 +rnns with gradients and training + +1048 +00:45:31,720 --> 00:45:35,720 +instability so in + +1049 +00:45:33,200 --> 00:45:39,240 +rnns um + +1050 +00:45:35,720 --> 00:45:41,559 +we uh saw how back propop uh we talked + +1051 +00:45:39,240 --> 00:45:44,280 +about how back propop can reduce the + +1052 +00:45:41,559 --> 00:45:48,240 +gradients the exact same thing would be + +1053 +00:45:44,280 --> 00:45:50,119 +the case for um Transformers and in fact + +1054 +00:45:48,240 --> 00:45:51,720 +there was a problem in the original + +1055 +00:45:50,119 --> 00:45:55,200 +formulation of the Transformer that + +1056 +00:45:51,720 --> 00:45:56,720 +caused this gradient uh Vanishing to + +1057 +00:45:55,200 --> 00:45:59,119 +occur + +1058 +00:45:56,720 --> 00:46:00,920 +and it uh this problem has been + +1059 +00:45:59,119 --> 00:46:02,880 +rectified in new newer versions of + +1060 +00:46:00,920 --> 00:46:04,640 +Transformers so I'll I'll talk a little + +1061 +00:46:02,880 --> 00:46:09,760 +bit about both of + +1062 +00:46:04,640 --> 00:46:13,599 +those um so because we're running this + +1063 +00:46:09,760 --> 00:46:15,640 +multiple times um you know we have eight + +1064 +00:46:13,599 --> 00:46:17,200 +layers of Transformers or 16 layers of + +1065 +00:46:15,640 --> 00:46:19,839 +Transformers or 12 layers of + +1066 +00:46:17,200 --> 00:46:22,000 +Transformers we do have gradient uh + +1067 +00:46:19,839 --> 00:46:25,240 +gradients disappearing at the beginning + +1068 +00:46:22,000 --> 00:46:27,960 +if we're not careful and so there's two + +1069 +00:46:25,240 --> 00:46:29,559 +things that do uh the first thing is + +1070 +00:46:27,960 --> 00:46:32,200 +layer normalization and what layer + +1071 +00:46:29,559 --> 00:46:34,359 +normalization does is this is not so + +1072 +00:46:32,200 --> 00:46:35,920 +much for like gradients disappearing + +1073 +00:46:34,359 --> 00:46:38,680 +it's more for preventing gradients from + +1074 +00:46:35,920 --> 00:46:40,119 +exploding or becoming very unstable and + +1075 +00:46:38,680 --> 00:46:42,599 +the way it works is it normalizes + +1076 +00:46:40,119 --> 00:46:45,319 +outputs to be within a consistent range + +1077 +00:46:42,599 --> 00:46:48,680 +uh preventing too much variance in the + +1078 +00:46:45,319 --> 00:46:52,359 +scale so uh the way layer Norm looks + +1079 +00:46:48,680 --> 00:46:54,040 +like is this and um it's not too + +1080 +00:46:52,359 --> 00:46:55,400 +complicated but it's a little bit of a + +1081 +00:46:54,040 --> 00:46:58,400 +complicated equation so I'll go through + +1082 +00:46:55,400 --> 00:47:01,119 +it one to the time the first thing is we + +1083 +00:46:58,400 --> 00:47:02,640 +take the mean of the vectors so we just + +1084 +00:47:01,119 --> 00:47:04,839 +add up all of the vectors and we divide + +1085 +00:47:02,640 --> 00:47:07,119 +by the number of vectors that we have + +1086 +00:47:04,839 --> 00:47:09,240 +here or sorry the number of elements in + +1087 +00:47:07,119 --> 00:47:10,960 +the vector that we have here the next + +1088 +00:47:09,240 --> 00:47:16,599 +thing is the standard deviation of the + +1089 +00:47:10,960 --> 00:47:18,520 +vector so we add up uh the value minus + +1090 
+00:47:16,599 --> 00:47:21,000
+the mean squared and then take the
+
+1091
+00:47:18,520 --> 00:47:22,119
+square root of it just the standard
+normal standard
+
+1092
+00:47:21,000 --> 00:47:26,000
+deviation
+
+1093
+00:47:22,119 --> 00:47:29,480
+um and so if we were just
+
+1094
+00:47:26,000 --> 00:47:31,559
+doing the vector mean and the vector
+
+1095
+00:47:29,480 --> 00:47:33,960
+standard deviation what we would be
+
+1096
+00:47:31,559 --> 00:47:35,440
+doing would be we would be normalizing
+
+1097
+00:47:33,960 --> 00:47:38,000
+all of the values in the vector to have
+
+1098
+00:47:35,440 --> 00:47:41,319
+zero mean and unit variance uh sorry and
+
+1099
+00:47:38,000 --> 00:47:45,040
+unit uh and divided by the standard
+
+1100
+00:47:41,319 --> 00:47:48,079
+deviation however um layer normalization
+
+1101
+00:47:45,040 --> 00:47:50,240
+does two other things also so layer
+
+1102
+00:47:48,079 --> 00:47:54,280
+normalization adds in a
+
+1103
+00:47:50,240 --> 00:47:56,000
+bias and it multiplies by a gain and
+
+1104
+00:47:54,280 --> 00:47:58,760
+what adding in the bias and multiplying by
+
+1105
+00:47:56,000 --> 00:48:00,319
+the gain means is after we've normalized
+
+1106
+00:47:58,760 --> 00:48:04,160
+everything down to be kind of in a
+
+1107
+00:48:00,319 --> 00:48:08,319
+standard range we then move it out of
+
+1108
+00:48:04,160 --> 00:48:08,319
+the standard range so we're taking
+
+1109
+00:48:08,960 --> 00:48:13,119
+like this Vector from over
+
+1110
+00:48:15,440 --> 00:48:24,640
+here we're normalizing it down so it's
+
+1111
+00:48:20,680 --> 00:48:27,359
+centered so this is using the the mean
+
+1112
+00:48:24,640 --> 00:48:30,440
+and the standard deviation scaling it
+
+1113
+00:48:27,359 --> 00:48:35,800
+down and then we're adding a
+
+1114
+00:48:30,440 --> 00:48:38,839
+bias and a gain so now we're moving it over
+
+1115
+00:48:35,800 --> 00:48:40,319
+to be in like a standard place so what
+
+1116
+00:48:38,839 --> 00:48:42,760
+what that means is like let's say we got
+
+1117
+00:48:40,319 --> 00:48:47,559
+a new Vector let's say this is X1 now we
+
+1118
+00:48:42,760 --> 00:48:50,559
+got a new Vector X2 and it's over
+
+1119
+00:48:47,559 --> 00:48:52,960
+here we would normalize it down and move
+
+1120
+00:48:50,559 --> 00:48:54,559
+it up here again so like basically all
+
+1121
+00:48:52,960 --> 00:48:56,920
+of our vectors will be in a consistent
+
+1122
+00:48:54,559 --> 00:48:58,680
+part of the space and what part of the
+
+1123
+00:48:56,920 --> 00:49:01,000
+space and how big the spread is will be
+
+1124
+00:48:58,680 --> 00:49:02,319
+determined by the bias and the gain so
+
+1125
+00:49:01,000 --> 00:49:04,920
+that that's essentially what's happening
+
+1126
+00:49:02,319 --> 00:49:07,599
+here and what that means is like every
+
+1127
+00:49:04,920 --> 00:49:10,400
+time you consume the output of layer
+
+1128
+00:49:07,599 --> 00:49:11,720
+Norm of a layer normed layer you get
+
+1129
+00:49:10,400 --> 00:49:14,440
+something predictable you get something
+
+1130
+00:49:11,720 --> 00:49:16,040
+in a predictable part of the space so
+
+1131
+00:49:14,440 --> 00:49:18,880
+that's what it's doing um and this is
+
+1132
+00:49:16,040 --> 00:49:18,880
+good for training
+
+1133
+00:49:20,880 --> 00:49:26,319
+stability um any any questions about
+
+1134
+00:49:24,520 --> 00:49:28,799
+this
+
+1135
+00:49:26,319 --> 00:49:31,400
+okay yeah you just like what's the
+
+1136
+00:49:28,799 --> 00:49:33,160
+difference between this and batch Norm so the
+
+1137
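+
+(As a rough sketch of the equation walked through above, assuming the
+usual formulation; eps is the small constant real implementations add
+for numerical stability.)
+
+import torch
+
+def layer_norm(x, gain, bias, eps=1e-5):
+    # statistics are taken over the elements of each vector (last dim),
+    # not over the batch -- that is the contrast with batch norm below
+    mu = x.mean(dim=-1, keepdim=True)                 # vector mean
+    sd = x.std(dim=-1, keepdim=True, unbiased=False)  # vector std
+    normed = (x - mu) / (sd + eps)   # zero mean, unit spread
+    return gain * normed + bias      # then shift/stretch to a learned place
+
+d = 8
+x = torch.randn(4, d) * 3.0 + 5.0    # vectors far from the origin
+y = layer_norm(x, torch.ones(d), torch.zeros(d))
+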
+00:49:31,400 --> 00:49:34,680
+difference between this and batch Norm
+
+1138
+00:49:33,160 --> 00:49:37,400
+this is actually explained really well
+
+1139
+00:49:34,680 --> 00:49:41,440
+in the layer Norm paper but um what
+
+1140
+00:49:37,400 --> 00:49:45,319
+batch Norm does is it normalizes
+
+1141
+00:49:41,440 --> 00:49:47,119
+not um not over the whole layer
+
+1142
+00:49:45,319 --> 00:49:48,760
+according to all of the elements of the
+
+1143
+00:49:47,119 --> 00:49:50,760
+vector but over the whole batch
+
+1144
+00:49:48,760 --> 00:49:55,799
+according to all of the elements in the
+
+1145
+00:49:50,760 --> 00:49:57,839
+batch and the reason why so batch Norm I
+
+1146
+00:49:55,799 --> 00:50:00,760
+think a lot of people really didn't like
+
+1147
+00:49:57,839 --> 00:50:03,319
+it when it was really popular to be used
+
+1148
+00:50:00,760 --> 00:50:05,280
+because batch Norm actually changes your
+
+1149
+00:50:03,319 --> 00:50:08,400
+statistics based on the other elements
+
+1150
+00:50:05,280 --> 00:50:09,839
+of the batch and also at inference time
+
+1151
+00:50:08,400 --> 00:50:11,680
+when you're doing something at inference
+
+1152
+00:50:09,839 --> 00:50:13,000
+time basically you don't have any other
+
+1153
+00:50:11,680 --> 00:50:14,400
+statistics from the other elements of
+
+1154
+00:50:13,000 --> 00:50:16,720
+the batch so you just have to do one and
+
+1155
+00:50:14,400 --> 00:50:18,520
+you can't do any normalization layer
+
+1156
+00:50:16,720 --> 00:50:20,559
+Norm only depends on the current
+
+1157
+00:50:18,520 --> 00:50:21,920
+instance and because layer Norm only
+
+1158
+00:50:20,559 --> 00:50:23,720
+depends on the current instance you
+
+1159
+00:50:21,920 --> 00:50:26,480
+don't need to worry about batches like
+
+1160
+00:50:23,720 --> 00:50:29,160
+every input and output is like constant
+
+1161
+00:50:26,480 --> 00:50:30,839
+no no matter what else is in the batch so
+
+1162
+00:50:29,160 --> 00:50:34,520
+that's a basic
+
+1163
+00:50:30,839 --> 00:50:36,240
+difference um any other any other
+
+1164
+00:50:34,520 --> 00:50:40,119
+questions
+
+1165
+00:50:36,240 --> 00:50:42,280
+okay so there's also an Improvement to
+
+1166
+00:50:40,119 --> 00:50:44,640
+layer Norm called RMS Norm or root mean
+
+1167
+00:50:42,280 --> 00:50:48,319
+Square uh
+
+1168
+00:50:44,640 --> 00:50:50,839
+normalization this is basically just a
+
+1169
+00:50:48,319 --> 00:50:54,240
+simplification of layer Norm and what they
+
+1170
+00:50:50,839 --> 00:50:56,920
+did is they removed the kind of mean
+
+1171
+00:50:54,240 --> 00:50:58,440
+normalization step so instead of moving
+
+1172
+00:50:56,920 --> 00:51:01,079
+everything into the middle into a
+
+1173
+00:50:58,440 --> 00:51:02,920
+different part of the space they're just
+
+1174
+00:51:01,079 --> 00:51:04,839
+um keeping things in the same part of
+
+1175
+00:51:02,920 --> 00:51:07,760
+the space but renormalizing like the
+
+1176
+00:51:04,839 --> 00:51:10,200
+gain uh renormalizing the spread between
+
+1177
+00:51:07,760 --> 00:51:11,760
+them uh so what you can see is like if
+
+1178
+00:51:10,200 --> 00:51:14,319
+you look back at layer normalization we
+
+1179
+00:51:11,760 --> 00:51:15,880
+were calculating the mean here we don't
+
+1180
+00:51:14,319 --> 00:51:18,240
+have any mean
+
+1181
+00:51:15,880 --> 00:51:21,079
+calculation um here we're subtracting
+
+1182
+00:51:18,240 --> 00:51:23,799
+the mean here there's no subtraction of
+
+1183
+00:51:21,079 --> 00:51:26,240
+the mean so that that's
basically the + +1184 +00:51:23,799 --> 00:51:29,760 +difference between the two + +1185 +00:51:26,240 --> 00:51:32,440 +and it's not that RMS Norm is any better + +1186 +00:51:29,760 --> 00:51:34,960 +really like it gives similar results to + +1187 +00:51:32,440 --> 00:51:38,119 +layer Norm uh but it's faster and + +1188 +00:51:34,960 --> 00:51:40,119 +Anderly not very much worse and so + +1189 +00:51:38,119 --> 00:51:42,920 +because of this uh this is used for + +1190 +00:51:40,119 --> 00:51:45,520 +efficiency and this is used um this is + +1191 +00:51:42,920 --> 00:51:45,520 +also used in + +1192 +00:51:47,640 --> 00:51:55,440 +Lama uh it but you and you sorry also + +1193 +00:51:51,920 --> 00:51:57,119 +you remove the bias parameter and you + +1194 +00:51:55,440 --> 00:52:00,200 +keep only the gain parameter so that + +1195 +00:51:57,119 --> 00:52:00,200 +also reduces the number of + +1196 +00:52:00,799 --> 00:52:06,000 +parameters cool um any any questions + +1197 +00:52:03,920 --> 00:52:08,960 +I'll + +1198 +00:52:06,000 --> 00:52:10,319 +set okay um residual connections I + +1199 +00:52:08,960 --> 00:52:12,520 +talked about these a little bit last + +1200 +00:52:10,319 --> 00:52:13,720 +time so I'll go through them relatively + +1201 +00:52:12,520 --> 00:52:15,319 +quickly but they're basically an + +1202 +00:52:13,720 --> 00:52:20,280 +additive connection between the input + +1203 +00:52:15,319 --> 00:52:22,799 +and the output and so you um you take + +1204 +00:52:20,280 --> 00:52:26,559 +the input to multi head attention and + +1205 +00:52:22,799 --> 00:52:29,480 +you pass it into the output here + +1206 +00:52:26,559 --> 00:52:31,359 +um and it looks like this very very + +1207 +00:52:29,480 --> 00:52:33,280 +simple so no matter what function you're + +1208 +00:52:31,359 --> 00:52:36,200 +doing here you just add the um add the + +1209 +00:52:33,280 --> 00:52:38,960 +input into the function this prevents uh + +1210 +00:52:36,200 --> 00:52:40,760 +Vanishing gradients and it allows you to + +1211 +00:52:38,960 --> 00:52:42,720 +learn the difference from the input so + +1212 +00:52:40,760 --> 00:52:46,119 +instead of learning how the input should + +1213 +00:52:42,720 --> 00:52:48,839 +be matched M uh should be mapped into + +1214 +00:52:46,119 --> 00:52:50,839 +the output you learn what difference + +1215 +00:52:48,839 --> 00:52:53,640 +should I apply to the + +1216 +00:52:50,839 --> 00:52:57,400 +outputs so here's an interesting quiz + +1217 +00:52:53,640 --> 00:53:00,960 +there's a very big implication for + +1218 +00:52:57,400 --> 00:53:03,119 +attention multi attention uh anybody + +1219 +00:53:00,960 --> 00:53:05,480 +think what implication for multi + +1220 +00:53:03,119 --> 00:53:08,839 +attention + +1221 +00:53:05,480 --> 00:53:11,760 +here yeah because we now have the + +1222 +00:53:08,839 --> 00:53:14,720 +residual connection it sort of De + +1223 +00:53:11,760 --> 00:53:16,559 +prioritizes looking at itself looks at + +1224 +00:53:14,720 --> 00:53:20,839 +the surrounding to understand what do I + +1225 +00:53:16,559 --> 00:53:24,880 +have to add as Contex yeah exactly so um + +1226 +00:53:20,839 --> 00:53:28,040 +uh basically it de prioritizes attending + +1227 +00:53:24,880 --> 00:53:29,920 +to yourself because you get yourself for + +1228 +00:53:28,040 --> 00:53:31,319 +free through the residual connection you + +1229 +00:53:29,920 --> 00:53:32,760 +get the information from yourself for + +1230 +00:53:31,319 --> 00:53:34,880 +free so you just need to pull in the + +1231 
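+
+(And the RMS norm simplification from a moment ago, in the same style;
+note there is no mean subtraction and no bias, only the gain.)
+
+import torch
+
+def rms_norm(x, gain, eps=1e-5):
+    # rescale by the root mean square of the vector, then apply the gain
+    rms = torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
+    return gain * (x / rms)
+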
+00:53:32,760 --> 00:53:37,079 +other information that's useful for + +1232 +00:53:34,880 --> 00:53:40,760 +contextualizing the current factors and + +1233 +00:53:37,079 --> 00:53:44,280 +you can actually see how this actually + +1234 +00:53:40,760 --> 00:53:46,400 +happens if we go back and look at our + +1235 +00:53:44,280 --> 00:53:48,480 +visualization you'll notice that there's + +1236 +00:53:46,400 --> 00:53:50,319 +only one attention head that's attending + +1237 +00:53:48,480 --> 00:53:52,440 +to itself and all of the other attention + +1238 +00:53:50,319 --> 00:53:53,920 +heads are not attending to itself and + +1239 +00:53:52,440 --> 00:53:55,319 +this is precisely because you have the + +1240 +00:53:53,920 --> 00:53:56,599 +residual connections if we didn't have + +1241 +00:53:55,319 --> 00:53:58,680 +the residual connections it would have + +1242 +00:53:56,599 --> 00:54:00,760 +to attend to itself heavily and then + +1243 +00:53:58,680 --> 00:54:04,559 +pull in like all the other information + +1244 +00:54:00,760 --> 00:54:05,920 +so uh that's why you see that um + +1245 +00:54:04,559 --> 00:54:08,119 +Behavior + +1246 +00:54:05,920 --> 00:54:11,359 +there cool I didn't expect somebody to + +1247 +00:54:08,119 --> 00:54:14,799 +answer that so quickly so thank + +1248 +00:54:11,359 --> 00:54:18,119 +you another really important Improvement + +1249 +00:54:14,799 --> 00:54:20,359 +to uh another really important + +1250 +00:54:18,119 --> 00:54:23,480 +Improvement to the Transformer is a post + +1251 +00:54:20,359 --> 00:54:25,640 +and pre- layer Norm so the original + +1252 +00:54:23,480 --> 00:54:29,599 +conception of the Transformer + +1253 +00:54:25,640 --> 00:54:32,319 +uh basically had uh + +1254 +00:54:29,599 --> 00:54:33,720 +this over here post layer Norms so what + +1255 +00:54:32,319 --> 00:54:35,200 +you would do is you would run multi had + +1256 +00:54:33,720 --> 00:54:37,599 +attention then you'd have layer Norm + +1257 +00:54:35,200 --> 00:54:38,920 +after it uh then you have the feed + +1258 +00:54:37,599 --> 00:54:43,280 +forward Network then you'd have layer + +1259 +00:54:38,920 --> 00:54:46,359 +Norm after it the problem with this is + +1260 +00:54:43,280 --> 00:54:49,319 +this is kind of breaking the residual + +1261 +00:54:46,359 --> 00:54:51,839 +connection right you see we have this + +1262 +00:54:49,319 --> 00:54:53,760 +residual connection which is gray here + +1263 +00:54:51,839 --> 00:54:56,920 +and then you have a layer Norm in the + +1264 +00:54:53,760 --> 00:54:58,359 +middle of this a residual connection and + +1265 +00:54:56,920 --> 00:54:59,720 +so what this is doing is this is + +1266 +00:54:58,359 --> 00:55:01,760 +actually hurting your gradient + +1267 +00:54:59,720 --> 00:55:04,400 +propagation right because you're you + +1268 +00:55:01,760 --> 00:55:06,240 +have a not you have a function other + +1269 +00:55:04,400 --> 00:55:09,319 +than the identity right in the + +1270 +00:55:06,240 --> 00:55:12,599 +middle of the layers here and that's bad + +1271 +00:55:09,319 --> 00:55:14,799 +for propagating across many layers so a + +1272 +00:55:12,599 --> 00:55:17,319 +modification to this is pre-layer Norm + +1273 +00:55:14,799 --> 00:55:20,359 +where basically layer Norm is applied + +1274 +00:55:17,319 --> 00:55:22,000 +previously to all of the uh like + +1275 +00:55:20,359 --> 00:55:25,200 +multi-head ATT tension and three forward + +1276 +00:55:22,000 --> 00:55:27,839 +layers which gives us a like Direct + +1277 +00:55:25,200 --> 00:55:29,280 
+residual connection like this um all the + +1278 +00:55:27,839 --> 00:55:31,599 +way from the beginning to the end and + +1279 +00:55:29,280 --> 00:55:33,280 +that improves gradient pretation so this + +1280 +00:55:31,599 --> 00:55:34,640 +is another big thing that has improved + +1281 +00:55:33,280 --> 00:55:36,520 +the training of Transformers and made + +1282 +00:55:34,640 --> 00:55:38,079 +them more stable in other things like + +1283 +00:55:36,520 --> 00:55:41,720 +this + +1284 +00:55:38,079 --> 00:55:44,599 +yeah can you elaborate more on like why + +1285 +00:55:41,720 --> 00:55:48,760 +layer between the layers is worse for g + +1286 +00:55:44,599 --> 00:55:50,760 +compation i yeah sure so basically + +1287 +00:55:48,760 --> 00:55:52,720 +anything other than the identity is bad + +1288 +00:55:50,760 --> 00:55:55,079 +for gradient propagation because the + +1289 +00:55:52,720 --> 00:55:56,319 +identity function or addition of some + +1290 +00:55:55,079 --> 00:55:57,720 +other piece of information doesn't + +1291 +00:55:56,319 --> 00:55:59,839 +change the gradients that flow back + +1292 +00:55:57,720 --> 00:56:01,760 +through the network but anything other + +1293 +00:55:59,839 --> 00:56:04,200 +than that does right it either makes + +1294 +00:56:01,760 --> 00:56:07,520 +them smaller or bigger or modifies them + +1295 +00:56:04,200 --> 00:56:11,400 +in some way how does layer Norm modify + +1296 +00:56:07,520 --> 00:56:13,839 +them layer Norm modifies them by the + +1297 +00:56:11,400 --> 00:56:16,440 +standard deviation so like let's say the + +1298 +00:56:13,839 --> 00:56:19,760 +standard deviation in the layer is quite + +1299 +00:56:16,440 --> 00:56:23,440 +large and your gain and especially if + +1300 +00:56:19,760 --> 00:56:25,400 +your gain is is small um then that would + +1301 +00:56:23,440 --> 00:56:26,640 +mean that you were dividing every time + +1302 +00:56:25,400 --> 00:56:28,000 +by the standard deviation which would + +1303 +00:56:26,640 --> 00:56:32,559 +make the gradient + +1304 +00:56:28,000 --> 00:56:34,480 +smaller so um it yeah it's pretty like + +1305 +00:56:32,559 --> 00:56:37,760 +straightforward + +1306 +00:56:34,480 --> 00:56:37,760 +actually um + +1307 +00:56:49,599 --> 00:56:52,599 +yeah + +1308 +00:56:53,119 --> 00:56:59,200 +yes so you you're basically right like + +1309 +00:56:56,760 --> 00:57:01,839 +if we apply something like k h then the + +1310 +00:56:59,200 --> 00:57:03,319 +value the gradient would disappear I'm + +1311 +00:57:01,839 --> 00:57:05,119 +actually going to talk about activation + +1312 +00:57:03,319 --> 00:57:07,079 +functions and usually we use activation + +1313 +00:57:05,119 --> 00:57:10,079 +functions that don't have that problem + +1314 +00:57:07,079 --> 00:57:12,480 +quite as much but um the other thing to + +1315 +00:57:10,079 --> 00:57:15,079 +point out here is actually the residual + +1316 +00:57:12,480 --> 00:57:18,000 +connection is going all the way up um + +1317 +00:57:15,079 --> 00:57:19,000 +and it's not like the feed cor network + +1318 +00:57:18,000 --> 00:57:21,480 +is being + +1319 +00:57:19,000 --> 00:57:26,160 +applied like outside of the path of the + +1320 +00:57:21,480 --> 00:57:28,400 +residual function so um + +1321 +00:57:26,160 --> 00:57:31,400 +the essentially the gradients won't be + +1322 +00:57:28,400 --> 00:57:35,559 +Vanishing because the P4 network is not + +1323 +00:57:31,400 --> 00:57:35,559 +blocking like the res + +1324 +00:57:35,599 --> 00:57:42,720 +from um does that make sense another way + 
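+
+(Putting the pieces so far together: a minimal pre-norm block sketch,
+with the norms inside each branch so the residual path stays a pure
+identity; the feed-forward sublayer it uses is the one described right
+after this. Sizes and names here are illustrative, not the exact llama
+configuration.)
+
+import torch.nn as nn
+
+class PreNormBlock(nn.Module):
+    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
+        super().__init__()
+        self.norm1 = nn.LayerNorm(d_model)
+        self.attn = nn.MultiheadAttention(d_model, n_heads,
+                                          batch_first=True)
+        self.norm2 = nn.LayerNorm(d_model)
+        self.ff = nn.Sequential(                    # feed-forward sublayer:
+            nn.Linear(d_model, d_ff, bias=False),   # upscale to many features
+            nn.SiLU(),                              # nonlinearity
+            nn.Linear(d_ff, d_model, bias=False),   # project back down
+        )
+
+    def forward(self, x):
+        h = self.norm1(x)                   # pre-norm: normalize first,
+        x = x + self.attn(h, h, h, need_weights=False)[0]  # then add residual
+        x = x + self.ff(self.norm2(x))      # same pattern for the FFN
+        return x
+
+# post-norm would instead compute x = norm(x + sublayer(x)), putting a
+# non-identity function on the residual path between layers
+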
+1325 +00:57:39,079 --> 00:57:44,839 +to put it is um this will be like the + +1326 +00:57:42,720 --> 00:57:47,799 +tan H will be inside this function but + +1327 +00:57:44,839 --> 00:57:51,400 +you're separately adding in X Out + +1328 +00:57:47,799 --> 00:57:51,400 +outside that t h so you don't be + +1329 +00:57:52,119 --> 00:57:56,680 +scre cool um any other + +1330 +00:57:55,720 --> 00:57:59,960 +any other + +1331 +00:57:56,680 --> 00:58:03,119 +things okay great um so this is also + +1332 +00:57:59,960 --> 00:58:07,280 +really important this uh causes + +1333 +00:58:03,119 --> 00:58:08,559 +the um the models to work better so next + +1334 +00:58:07,280 --> 00:58:12,200 +is feed forward + +1335 +00:58:08,559 --> 00:58:13,680 +layers so the feed forward layers here + +1336 +00:58:12,200 --> 00:58:15,359 +um what they do is they extract + +1337 +00:58:13,680 --> 00:58:17,160 +combination features from the attended + +1338 +00:58:15,359 --> 00:58:18,160 +output basically the the feed forward + +1339 +00:58:17,160 --> 00:58:22,440 +network is + +1340 +00:58:18,160 --> 00:58:23,280 +applied independently to each Vector in + +1341 +00:58:22,440 --> 00:58:26,960 +the + +1342 +00:58:23,280 --> 00:58:30,160 +sequence um so like if we have our + +1343 +00:58:26,960 --> 00:58:33,319 +Vector uh if we have our Vector here we + +1344 +00:58:30,160 --> 00:58:36,760 +apply it like this um like weight one + +1345 +00:58:33,319 --> 00:58:38,400 +and B1 weight 2 and B2 actually um it's + +1346 +00:58:36,760 --> 00:58:42,319 +pretty common nowadays to remove the + +1347 +00:58:38,400 --> 00:58:44,640 +bias also uh mostly because it's just + +1348 +00:58:42,319 --> 00:58:47,400 +extra parameters and not useful and it + +1349 +00:58:44,640 --> 00:58:50,160 +can be more um it can lead to some + +1350 +00:58:47,400 --> 00:58:51,839 +degree of instability in training so + +1351 +00:58:50,160 --> 00:58:53,960 +you'll often see linear layers that have + +1352 +00:58:51,839 --> 00:58:55,760 +the bias off uh and it's just because + +1353 +00:58:53,960 --> 00:58:58,079 +it's not necessary to learn the network + +1354 +00:58:55,760 --> 00:59:02,119 +well but anyway this is what it looks + +1355 +00:58:58,079 --> 00:59:05,880 +like f here is a nonlinearity of some + +1356 +00:59:02,119 --> 00:59:08,640 +variety uh so it essentially looks like + +1357 +00:59:05,880 --> 00:59:12,119 +this usually the feed forward Network + +1358 +00:59:08,640 --> 00:59:15,079 +and Transformers uh upscales to a very + +1359 +00:59:12,119 --> 00:59:18,039 +large Vector to extract lots of features + +1360 +00:59:15,079 --> 00:59:20,480 +so each one of these each one of these + +1361 +00:59:18,039 --> 00:59:21,799 +elements in here is kind of a feature + +1362 +00:59:20,480 --> 00:59:23,480 +and a lot of people when they do + +1363 +00:59:21,799 --> 00:59:25,000 +interpretation of Transformer models + +1364 +00:59:23,480 --> 00:59:27,599 +they actually look at these features + +1365 +00:59:25,000 --> 00:59:30,240 +because they tend to correspond more + +1366 +00:59:27,599 --> 00:59:32,319 +directly with kind of the information + +1367 +00:59:30,240 --> 00:59:34,799 +that we would expect to see like um for + +1368 +00:59:32,319 --> 00:59:37,000 +example when people memorize individual + +1369 +00:59:34,799 --> 00:59:38,839 +memorized facts in Transformers like who + +1370 +00:59:37,000 --> 00:59:40,920 +is the president of the United States or + +1371 +00:59:38,839 --> 00:59:43,440 +something they usually look at the + +1372 +00:59:40,920 
--> 00:59:45,280
+vectors uh in
+here
+
+1373
+00:59:43,440 --> 00:59:47,880
+um some activation functions that are
+
+1374
+00:59:45,280 --> 00:59:49,440
+used in Transformers the original
+
+1375
+00:59:47,880 --> 00:59:53,119
+Transformer used a relu um so the relu
+
+1376
+00:59:49,440 --> 00:59:57,359
+looks like Max uh zero of X um I asked
+
+1377
+00:59:53,119 --> 00:59:59,880
+ChatGPT to draw a figure for me and it
+
+1378
+00:59:57,359 --> 01:00:02,480
+did a pretty good job of this I guess so
+
+1379
+00:59:59,880 --> 01:00:04,200
+this is what it uh this is what it looks
+
+1380
+01:00:02,480 --> 01:00:09,400
+like um the relu is zero below an input
+
+1381
+01:00:04,200 --> 01:00:12,240
+of zero and uh the identity greater than
+
+1382
+01:00:09,400 --> 01:00:15,640
+an input of zero um the problem with
+
+1383
+01:00:12,240 --> 01:00:17,760
+this though is anytime something is less
+
+1384
+01:00:15,640 --> 01:00:20,280
+than zero you get a zero gradient so it
+
+1385
+01:00:17,760 --> 01:00:22,119
+it causes
+
+1386
+01:00:20,280 --> 01:00:26,720
+issues so an alternative that's used uh
+
+1387
+01:00:22,119 --> 01:00:29,680
+recently is something called Swish or
+
+1388
+01:00:26,720 --> 01:00:33,640
+silu for um sigmoid linear unit and
+
+1389
+01:00:29,680 --> 01:00:36,200
+basically it looks like this it's x
+
+1390
+01:00:33,640 --> 01:00:40,200
+times the sigmoid of x times beta where beta is
+
+1391
+01:00:36,200 --> 01:00:43,880
+often set to one and it looks a lot like
+
+1392
+01:00:40,200 --> 01:00:46,000
+a relu it looks very similar to a relu
+
+1393
+01:00:43,880 --> 01:00:47,480
+but it doesn't have a zero gradient
+
+1394
+01:00:46,000 --> 01:00:50,160
+anywhere so you can still um if it gets
+
+1395
+01:00:47,480 --> 01:00:52,839
+to be very negative you have like a
+
+1396
+01:00:50,160 --> 01:00:55,799
+light push you have a light push towards
+
+1397
+01:00:52,839 --> 01:00:57,440
+the middle so you have a chance to
+
+1398
+01:00:55,799 --> 01:00:59,160
+recover and get things closer to the
+
+1399
+01:00:57,440 --> 01:01:01,760
+middle so uh empirically this seems to
+
+1400
+01:00:59,160 --> 01:01:03,400
+work pretty well and this is also uh
+
+1401
+01:01:01,760 --> 01:01:05,799
+used in
+
+1402
+01:01:03,400 --> 01:01:05,799
+llama
+
+1403
+01:01:07,480 --> 01:01:12,720
+cool um any questions about these
+
+1404
+01:01:10,920 --> 01:01:14,880
+there's of course a ton of other
+
+1405
+01:01:12,720 --> 01:01:16,119
+activation functions but I am talking
+
+1406
+01:01:14,880 --> 01:01:18,200
+mostly about the ones that people are
+
+1407
+01:01:16,119 --> 01:01:20,480
+actually using
+
+1408
+01:01:18,200 --> 01:01:22,960
+optimize it uh usually you just set it to
+
+1409
+01:01:20,480 --> 01:01:24,760
+one or set it to some you you could
+
+1410
+01:01:22,960 --> 01:01:27,799
+hyper parameter optimize over it but I
+
+1411
+01:01:24,760 --> 01:01:27,799
+think it doesn't make a huge
+
+1412
+01:01:28,640 --> 01:01:36,640
+difference yeah okay cool um next is
+
+1413
+01:01:33,559 --> 01:01:38,720
+optimization tricks for Transformers so
+
+1414
+01:01:36,640 --> 01:01:40,799
+Transformers are powerful but very
+
+1415
+01:01:38,720 --> 01:01:44,440
+fickle um
+
+1416
+01:01:40,799 --> 01:01:47,039
+so uh
+
+1417
+01:01:44,440 --> 01:01:48,480
+Transformers at least when they started
+
+1418
+01:01:47,039 --> 01:01:51,279
+out and we didn't have stable training
+
+1419
+01:01:48,480 --> 01:01:53,119
+recipes for them tended to be very uh
+
+1420
+01:01:51,279 --> 
01:01:56,359 +like people tried pretty hard to + +1421 +01:01:53,119 --> 01:01:58,839 +optimize them but they uh uh they were + +1422 +01:01:56,359 --> 01:02:01,200 +difficult to optimize one example of + +1423 +01:01:58,839 --> 01:02:02,520 +this uh that that is really great it + +1424 +01:02:01,200 --> 01:02:03,640 +will make you feel a lot better if + +1425 +01:02:02,520 --> 01:02:08,160 +you're training things and they're not + +1426 +01:02:03,640 --> 01:02:10,960 +working very well is um meta's old log + +1427 +01:02:08,160 --> 01:02:13,520 +book of how they trained 175 billion + +1428 +01:02:10,960 --> 01:02:14,799 +parameter model and you'll see all of + +1429 +01:02:13,520 --> 01:02:16,160 +the problems that they had while they + +1430 +01:02:14,799 --> 01:02:17,920 +were training their model despite the + +1431 +01:02:16,160 --> 01:02:20,079 +fact that they're kind of pros at doing + +1432 +01:02:17,920 --> 01:02:22,240 +this um and that includes things like + +1433 +01:02:20,079 --> 01:02:24,799 +their machines going down and their like + +1434 +01:02:22,240 --> 01:02:27,599 +Hardware Engineers having to go and res + +1435 +01:02:24,799 --> 01:02:30,079 +their machines and um they're loss + +1436 +01:02:27,599 --> 01:02:32,319 +diverging and having to roll back things + +1437 +01:02:30,079 --> 01:02:34,119 +manually and other stuff like this so I + +1438 +01:02:32,319 --> 01:02:35,960 +I really like this I'm really happy that + +1439 +01:02:34,119 --> 01:02:39,520 +they released this for us all to learn + +1440 +01:02:35,960 --> 01:02:42,880 +from um but yeah you can take a look at + +1441 +01:02:39,520 --> 01:02:45,680 +this um so some things that people do to + +1442 +01:02:42,880 --> 01:02:48,640 +stabilize training of Transformer models + +1443 +01:02:45,680 --> 01:02:51,079 +are swap out the optimizer uh do the + +1444 +01:02:48,640 --> 01:02:52,799 +sorts of restarts that I talked about + +1445 +01:02:51,079 --> 01:02:55,839 +and um and other things like this so I'm + +1446 +01:02:52,799 --> 01:02:57,960 +going to go through those very quickly + +1447 +01:02:55,839 --> 01:03:00,319 +so the first thing is optimizers um + +1448 +01:02:57,960 --> 01:03:04,000 +previously what we've talked about is + +1449 +01:03:00,319 --> 01:03:06,599 +SGD um so SGD updates in the direction + +1450 +01:03:04,000 --> 01:03:09,240 +of reducing loss atom which I also + +1451 +01:03:06,599 --> 01:03:12,279 +talked about adds a momentum uh ter + +1452 +01:03:09,240 --> 01:03:14,359 +sorry that should be term momentum term + +1453 +01:03:12,279 --> 01:03:16,799 +and normalized by the standard deviation + +1454 +01:03:14,359 --> 01:03:20,039 +of the outputs to kind of upwe + +1455 +01:03:16,799 --> 01:03:21,920 +infrequently updated + +1456 +01:03:20,039 --> 01:03:24,359 +parameters a new thing that was + +1457 +01:03:21,920 --> 01:03:25,160 +introduced by vasani at all when they + +1458 +01:03:24,359 --> 01:03:27,799 +prod + +1459 +01:03:25,160 --> 01:03:30,960 +Transformers was uh a learning rate + +1460 +01:03:27,799 --> 01:03:35,000 +increase and decrease and the way this + +1461 +01:03:30,960 --> 01:03:37,839 +works is they gradually increase the + +1462 +01:03:35,000 --> 01:03:40,039 +learning rate until you get to a set + +1463 +01:03:37,839 --> 01:03:43,839 +number of warm-up steps in theirs they + +1464 +01:03:40,039 --> 01:03:46,920 +did 4,000 warm-up steps um and then they + +1465 +01:03:43,839 --> 01:03:49,160 +gradually decrease it + +1466 +01:03:46,920 --> 01:03:52,559 +um + +1467 
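+
+(The warmup-then-decay schedule from the original paper can be written
+as a one-liner; d_model of 512 and the 4,000 warmup steps are the
+values they used, as far as I recall.)
+
+def transformer_lr(step, d_model=512, warmup=4000):
+    # linear increase for `warmup` steps, then decay as 1 / sqrt(step)
+    step = max(step, 1)
+    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
+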
+01:03:49,160 --> 01:03:57,279 +there's and it looks like this + +1468 +01:03:52,559 --> 01:04:00,680 +recently uh is as far as I understand um + +1469 +01:03:57,279 --> 01:04:03,160 +you can actually do a bit better without + +1470 +01:04:00,680 --> 01:04:07,359 +doing this warmup uh as long as you're + +1471 +01:04:03,160 --> 01:04:10,000 +using pre-layer Norm uh so there I think + +1472 +01:04:07,359 --> 01:04:11,559 +the warmup is still used pretty widely + +1473 +01:04:10,000 --> 01:04:13,839 +but it's not absolutely necessary + +1474 +01:04:11,559 --> 01:04:16,559 +anymore with the newer training recipes + +1475 +01:04:13,839 --> 01:04:18,279 +but it's something to be aware of also + +1476 +01:04:16,559 --> 01:04:20,920 +sometimes people do linear learning rate + +1477 +01:04:18,279 --> 01:04:23,079 +Decay instead of this kind of like slope + +1478 +01:04:20,920 --> 01:04:25,000 +learning rate decays so there's a bunch + +1479 +01:04:23,079 --> 01:04:27,079 +of recipes for this I'm not going to go + +1480 +01:04:25,000 --> 01:04:27,960 +into all of them but just be aware that + +1481 +01:04:27,079 --> 01:04:31,680 +they + +1482 +01:04:27,960 --> 01:04:34,640 +exist um another thing is instead of + +1483 +01:04:31,680 --> 01:04:37,559 +straight up atom uh recently people have + +1484 +01:04:34,640 --> 01:04:39,640 +been using atom W and so what Atom W + +1485 +01:04:37,559 --> 01:04:42,440 +does is it does uh weight + +1486 +01:04:39,640 --> 01:04:46,520 +Decay and what weight Decay is is it + +1487 +01:04:42,440 --> 01:04:49,359 +like gradually decreases your weights uh + +1488 +01:04:46,520 --> 01:04:50,920 +towards the zero and the reason why you + +1489 +01:04:49,359 --> 01:04:54,319 +do that is it's like basically an + +1490 +01:04:50,920 --> 01:04:58,119 +approximation of normalization of uh + +1491 +01:04:54,319 --> 01:04:59,599 +sorry regularization of modelss so it it + +1492 +01:04:58,119 --> 01:05:04,319 +has an effect of preventing the model + +1493 +01:04:59,599 --> 01:05:06,799 +from overfitting um admw is kind of + +1494 +01:05:04,319 --> 01:05:08,240 +a you don't need to know all the details + +1495 +01:05:06,799 --> 01:05:11,319 +if you're just using it but it's + +1496 +01:05:08,240 --> 01:05:13,559 +basically a correction of weight Decay + +1497 +01:05:11,319 --> 01:05:15,079 +specifically considering the fact that + +1498 +01:05:13,559 --> 01:05:17,480 +atom is using momentum in this + +1499 +01:05:15,079 --> 01:05:20,599 +normalization so that it actually + +1500 +01:05:17,480 --> 01:05:23,319 +corresponds to proper regularization + +1501 +01:05:20,599 --> 01:05:26,240 +terms so if you're just using atom W out + +1502 +01:05:23,319 --> 01:05:29,000 +of the boxes and Optimizer um that's all + +1503 +01:05:26,240 --> 01:05:30,279 +you need to know but actually um sorry + +1504 +01:05:29,000 --> 01:05:31,480 +never mind for the assignment you're + +1505 +01:05:30,279 --> 01:05:33,599 +actually going to have to implement + +1506 +01:05:31,480 --> 01:05:35,079 +something related to that so you do + +1507 +01:05:33,599 --> 01:05:37,480 +actually need to know it and look into + +1508 +01:05:35,079 --> 01:05:38,680 +it I'll maybe cover the details in a l + +1509 +01:05:37,480 --> 01:05:40,520 +fact + +1510 +01:05:38,680 --> 01:05:42,920 +but okay + +1511 +01:05:40,520 --> 01:05:46,160 +cool another thing is low Precision + +1512 +01:05:42,920 --> 01:05:47,640 +training so low Precision training is uh + +1513 +01:05:46,160 --> 01:05:49,760 +something that's necessary where 
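+
+(Backing up to AdamW for a second: a sketch of the decoupled weight
+decay idea, on plain tensors with no autograd, just to show where the
+decay enters; lr and the other hyperparameters are typical defaults,
+not prescriptions.)
+
+import torch
+
+def adamw_step(p, grad, state, lr=1e-3, betas=(0.9, 0.999),
+               eps=1e-8, weight_decay=0.01):
+    # standard Adam moment updates with bias correction
+    state["t"] += 1
+    state["m"] = betas[0] * state["m"] + (1 - betas[0]) * grad
+    state["v"] = betas[1] * state["v"] + (1 - betas[1]) * grad * grad
+    m_hat = state["m"] / (1 - betas[0] ** state["t"])
+    v_hat = state["v"] / (1 - betas[1] ** state["t"])
+    # the AdamW part: decay the weights directly, outside the
+    # normalized Adam update, so the decay is not rescaled by the
+    # moment statistics the way an L2 term in the gradient would be
+    p -= lr * weight_decay * p
+    p -= lr * m_hat / (v_hat.sqrt() + eps)
+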
you're
+
+1514
+01:05:47,640 --> 01:05:52,760
+training very large models or large
+
+1515
+01:05:49,760 --> 01:05:55,960
+large-ish models on fewer
+
+1516
+01:05:52,760 --> 01:05:59,960
+gpus um
+
+1517
+01:05:55,960 --> 01:06:02,319
+so training at a full 32 uh bit
+
+1518
+01:05:59,960 --> 01:06:05,079
+Precision can be costly so it's pretty
+
+1519
+01:06:02,319 --> 01:06:06,960
+common to train at for example 16bit
+
+1520
+01:06:05,079 --> 01:06:08,440
+Precision um especially if you're
+
+1521
+01:06:06,960 --> 01:06:11,359
+training all of the parameters of the
+
+1522
+01:06:08,440 --> 01:06:15,440
+models and there's kind of two uh
+
+1523
+01:06:11,359 --> 01:06:19,039
+alternatives for this the first one is
+
+1524
+01:06:15,440 --> 01:06:21,760
+fp16 and that's the standard uh 16bit
+
+1525
+01:06:19,039 --> 01:06:25,760
+floating Point numbers that are used by
+
+1526
+01:06:21,760 --> 01:06:27,520
+most computers and most CPUs and these
+
+1527
+01:06:25,760 --> 01:06:30,359
+uh floating Point numbers they allocate
+
+1528
+01:06:27,520 --> 01:06:32,920
+one bit for the sign five bits for the
+
+1529
+01:06:30,359 --> 01:06:35,440
+exponent and 10 bits for the fractional
+
+1530
+01:06:32,920 --> 01:06:39,599
+components so they have relatively
+
+1531
+01:06:35,440 --> 01:06:41,960
+precise fractions and exponents with a
+
+1532
+01:06:39,599 --> 01:06:43,880
+relatively small range so they can't
+
+1533
+01:06:41,960 --> 01:06:45,440
+express things with very large or very
+
+1534
+01:06:43,880 --> 01:06:48,359
+small
+
+1535
+01:06:45,440 --> 01:06:51,960
+exponents there's uh something called B
+
+1536
+01:06:48,359 --> 01:06:54,160
+float 16 in B float 16 uh b stands for
+
+1537
+01:06:51,960 --> 01:06:56,160
+brain for like Google brain uh because
+
+1538
+01:06:54,160 --> 01:06:57,920
+that's where they invented this and
+
+1539
+01:06:56,160 --> 01:07:00,440
+basically what it does is it increases
+
+1540
+01:06:57,920 --> 01:07:03,319
+the number of uh
+
+1541
+01:07:00,440 --> 01:07:04,920
+bits for the exponent to eight and it
+
+1542
+01:07:03,319 --> 01:07:07,400
+decreases the number of bits for the
+
+1543
+01:07:04,920 --> 01:07:09,640
+fraction to seven so that allows you to
+
+1544
+01:07:07,400 --> 01:07:12,880
+express a wider range of values uh at a
+
+1545
+01:07:09,640 --> 01:07:14,960
+lower Precision essentially and this is
+
+1546
+01:07:12,880 --> 01:07:17,000
+much much more stable with respect to
+
+1547
+01:07:14,960 --> 01:07:20,000
+training uh because you can handle very
+
+1548
+01:07:17,000 --> 01:07:21,440
+small numbers and very large um numbers
+
+1549
+01:07:20,000 --> 01:07:23,160
+better despite the fact that you're
+
+1550
+01:07:21,440 --> 01:07:25,480
+losing a little bit of precision of the
+
+1551
+01:07:23,160 --> 01:07:27,720
+fractions so this is pretty essential
+
+1552
+01:07:25,480 --> 01:07:29,440
+and I I would recommend that no matter
+
+1553
+01:07:27,720 --> 01:07:32,119
+what you're doing if you're doing 16 bit
+
+1554
+01:07:29,440 --> 01:07:33,799
+you would uh you would use this instead
+
+1555
+01:07:32,119 --> 01:07:35,880
+and Hardware support for this is pretty
+
+1556
+01:07:33,799 --> 01:07:37,319
+good now like Nvidia gpus and other
+
+1557
+01:07:35,880 --> 01:07:39,799
+things like that all support it PyTorch
+
+1558
+01:07:37,319 --> 01:07:39,799
+supports it
+
+1559
+01:07:41,319 --> 01:07:44,680
+really um another thing that you should
+
+1560
+01:07:43,480 --> 01:07:46,920
+be aware of especially if you're
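+
+(Before moving on: you can inspect the two formats directly, and a
+one-line example shows why the wider exponent range matters.)
+
+import torch
+
+for dt in (torch.float32, torch.float16, torch.bfloat16):
+    fi = torch.finfo(dt)
+    print(dt, "max:", fi.max, "tiny:", fi.tiny, "eps:", fi.eps)
+
+x = torch.tensor(70000.0)
+print(x.to(torch.float16))   # inf: fp16 tops out around 65504
+print(x.to(torch.bfloat16))  # ~70000: bf16 keeps fp32's exponent range
+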
+ +1561 +01:07:44,680 --> 01:07:48,160 +training very large models is uh + +1562 +01:07:46,920 --> 01:07:51,079 +checkpointing and + +1563 +01:07:48,160 --> 01:07:53,440 +resets so um even through best efforts + +1564 +01:07:51,079 --> 01:07:57,279 +training can uh go south it can have + +1565 +01:07:53,440 --> 01:07:58,839 +problems so what do you do um the first + +1566 +01:07:57,279 --> 01:08:02,319 +thing that you can do is you can monitor + +1567 +01:07:58,839 --> 01:08:04,520 +for possible issues and a common way to + +1568 +01:08:02,319 --> 01:08:05,960 +do this is by mod monitoring the norm of + +1569 +01:08:04,520 --> 01:08:08,680 +the gradients and so I pulled this + +1570 +01:08:05,960 --> 01:08:11,839 +directly from the op uh the opt log book + +1571 +01:08:08,680 --> 01:08:13,079 +that I uh posted before the the thing + +1572 +01:08:11,839 --> 01:08:15,200 +that meta did when they were training + +1573 +01:08:13,079 --> 01:08:17,839 +their models and here you can see you're + +1574 +01:08:15,200 --> 01:08:20,480 +monitoring the norm of the gradients and + +1575 +01:08:17,839 --> 01:08:23,520 +suddenly like in the middle of training + +1576 +01:08:20,480 --> 01:08:24,839 +your gradient Norm just goes up by a lot + +1577 +01:08:23,520 --> 01:08:27,239 +or + +1578 +01:08:24,839 --> 01:08:29,880 +and this is an indicator of a problem + +1579 +01:08:27,239 --> 01:08:33,080 +but the interesting thing about this is + +1580 +01:08:29,880 --> 01:08:35,199 +this will Spike and then after it spiked + +1581 +01:08:33,080 --> 01:08:37,480 +you can see that the perplexity of the + +1582 +01:08:35,199 --> 01:08:39,400 +model is going down after the spike it + +1583 +01:08:37,480 --> 01:08:42,239 +continues to go down for a little bit + +1584 +01:08:39,400 --> 01:08:44,839 +but then it starts going up and so + +1585 +01:08:42,239 --> 01:08:46,560 +basically once it started going up now + +1586 +01:08:44,839 --> 01:08:48,400 +your model is kind of in like a bad + +1587 +01:08:46,560 --> 01:08:50,080 +space it's in a bad space of the + +1588 +01:08:48,400 --> 01:08:52,279 +parameter space and it will just + +1589 +01:08:50,080 --> 01:08:54,640 +continue being in a bad space until it + +1590 +01:08:52,279 --> 01:08:56,319 +diverges uh but it's hard to diagnose + +1591 +01:08:54,640 --> 01:08:58,679 +immediately other than through things + +1592 +01:08:56,319 --> 01:09:00,600 +like the gradient Norm so monitoring the + +1593 +01:08:58,679 --> 01:09:01,920 +gradient Norm can be helpful this is + +1594 +01:09:00,600 --> 01:09:03,759 +especially important if you're training + +1595 +01:09:01,920 --> 01:09:05,319 +very large models uh if you're training + +1596 +01:09:03,759 --> 01:09:07,839 +smaller models it's not as big of a + +1597 +01:09:05,319 --> 01:09:09,640 +problem but this is uh this is something + +1598 +01:09:07,839 --> 01:09:13,600 +to pay attention + +1599 +01:09:09,640 --> 01:09:16,319 +to um if training crashes what can you + +1600 +01:09:13,600 --> 01:09:18,319 +do so a very common thing to do is to + +1601 +01:09:16,319 --> 01:09:21,000 +roll back to a previous checkpoint like + +1602 +01:09:18,319 --> 01:09:22,719 +save out checkpoints periodically um + +1603 +01:09:21,000 --> 01:09:24,600 +roll back not to write before the + +1604 +01:09:22,719 --> 01:09:27,120 +gradient spiked but roll back to you + +1605 +01:09:24,600 --> 01:09:29,960 +know like 100 steps before the gradient + +1606 +01:09:27,120 --> 01:09:31,080 +spiked shuffle your training data set or + +1607 +01:09:29,960 
--> 01:09:33,520 +jump to a different part of your + +1608 +01:09:31,080 --> 01:09:36,120 +training data set and resume so this is + +1609 +01:09:33,520 --> 01:09:38,520 +a very like hacky thing to do I guess it + +1610 +01:09:36,120 --> 01:09:40,600 +it seems you know but by doing this + +1611 +01:09:38,520 --> 01:09:42,159 +you're injecting some Randomness in the + +1612 +01:09:40,600 --> 01:09:44,080 +process by looking at different data and + +1613 +01:09:42,159 --> 01:09:45,880 +that can cause your model training to + +1614 +01:09:44,080 --> 01:09:47,560 +stabilize um there are even some + +1615 +01:09:45,880 --> 01:09:49,880 +platforms that do this automatically + +1616 +01:09:47,560 --> 01:09:52,120 +like I think the Mosaic ml platform does + +1617 +01:09:49,880 --> 01:09:55,199 +this automatically and and fixes this + +1618 +01:09:52,120 --> 01:09:56,960 +for you so + +1619 +01:09:55,199 --> 01:10:01,280 +another thing though is you should also + +1620 +01:09:56,960 --> 01:10:04,159 +be checking your code um and ideally if + +1621 +01:10:01,280 --> 01:10:08,679 +you have really solid code that doesn't + +1622 +01:10:04,159 --> 01:10:12,320 +have any sort of like any sort of + +1623 +01:10:08,679 --> 01:10:14,199 +dangerous functions in it and also um + +1624 +01:10:12,320 --> 01:10:16,159 +your learning rate is set appr + +1625 +01:10:14,199 --> 01:10:18,040 +appropriately this happens much much + +1626 +01:10:16,159 --> 01:10:20,920 +less so if your model training is + +1627 +01:10:18,040 --> 01:10:22,159 +spiking all the time then this can be an + +1628 +01:10:20,920 --> 01:10:24,159 +indicator that you have a problem and + +1629 +01:10:22,159 --> 01:10:26,840 +just to give an example like + +1630 +01:10:24,159 --> 01:10:29,760 +let's say you're taking an + +1631 +01:10:26,840 --> 01:10:32,640 +exponent um let's say you're taking the + +1632 +01:10:29,760 --> 01:10:34,520 +log of something where you're pretty + +1633 +01:10:32,640 --> 01:10:37,719 +sure that this should be + +1634 +01:10:34,520 --> 01:10:38,920 +positive um you're taking the log of + +1635 +01:10:37,719 --> 01:10:41,520 +something where you're pretty sure that + +1636 +01:10:38,920 --> 01:10:43,760 +this should be positive but in fact it's + +1637 +01:10:41,520 --> 01:10:45,880 +getting very close to zero some of the + +1638 +01:10:43,760 --> 01:10:47,480 +time so if it's getting very close to + +1639 +01:10:45,880 --> 01:10:50,320 +zero some of the time you'll get a huge + +1640 +01:10:47,480 --> 01:10:52,600 +gradient because like log + +1641 +01:10:50,320 --> 01:10:53,880 +Z has an infinite gradient and things + +1642 +01:10:52,600 --> 01:10:56,920 +that are very close to zero have + +1643 +01:10:53,880 --> 01:10:58,280 +something very close to gradi so usually + +1644 +01:10:56,920 --> 01:11:00,640 +if you're seeing these sorts of spikes + +1645 +01:10:58,280 --> 01:11:02,880 +there's a reason for them uh like this + +1646 +01:11:00,640 --> 01:11:06,560 +so you can also try to diagnose and make + +1647 +01:11:02,880 --> 01:11:10,239 +trading more stable um this is kind of a + +1648 +01:11:06,560 --> 01:11:11,800 +like you this is a good thing to look at + +1649 +01:11:10,239 --> 01:11:14,360 +but there's a lot of like experience + +1650 +01:11:11,800 --> 01:11:16,040 +that goes into this so just going in and + +1651 +01:11:14,360 --> 01:11:18,640 +diagnosing and digging into the code is + +1652 +01:11:16,040 --> 01:11:22,199 +a + +1653 +01:11:18,640 --> 01:11:25,560 +that cool um any questions uh any + +1654 
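+
+(A sketch of the kind of monitoring loop this implies; `model`,
+`optimizer`, and `batches` are stand-ins, and the spike threshold and
+checkpoint interval are made-up numbers you would tune by eye.)
+
+import torch
+
+def global_grad_norm(model):
+    # the quantity being plotted above: the norm over all gradients
+    total = 0.0
+    for p in model.parameters():
+        if p.grad is not None:
+            total += p.grad.detach().pow(2).sum().item()
+    return total ** 0.5
+
+for step, batch in enumerate(batches):
+    loss = model(batch)              # assume the model returns its loss
+    loss.backward()
+    if step % 1000 == 0:             # periodic checkpoints to roll back to
+        torch.save(model.state_dict(), f"ckpt_{step}.pt")
+    if global_grad_norm(model) > 10.0:
+        print(f"possible spike at step {step}")  # maybe roll back ~100+ steps
+    optimizer.step()
+    optimizer.zero_grad()
+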
+01:11:22,199 --> 01:11:25,560
+questions about this
+
+1655
+01:11:26,960 --> 01:11:31,840
+I think a lot of this is like lived
+
+1656
+01:11:28,640 --> 01:11:34,239
+knowledge so just you know try it and uh
+
+1657
+01:11:31,840 --> 01:11:37,040
+and if you have problems ask uh me or
+
+1658
+01:11:34,239 --> 01:11:38,239
+the TAs or people so the final thing I'd
+
+1659
+01:11:37,040 --> 01:11:40,000
+like to talk about is comparing
+
+1660
+01:11:38,239 --> 01:11:42,120
+Transformer architectures so I talked
+
+1661
+01:11:40,000 --> 01:11:43,639
+about a lot of design decisions I'm not
+
+1662
+01:11:42,120 --> 01:11:45,360
+going to talk about every single model
+
+1663
+01:11:43,639 --> 01:11:47,120
+today because I want to do that a little
+
+1664
+01:11:45,360 --> 01:11:49,159
+bit later after we've introduced more
+
+1665
+01:11:47,120 --> 01:11:52,120
+Concepts but I would like to at least
+
+1666
+01:11:49,159 --> 01:11:54,760
+compare the vaswani et al uh paper and
+
+1667
+01:11:52,120 --> 01:11:56,159
+llama and if we look at some of the
+
+1668
+01:11:54,760 --> 01:11:57,880
+differences between them you know
+
+1669
+01:11:56,159 --> 01:12:01,400
+they're both using Transformers they're
+
+1670
+01:11:57,880 --> 01:12:03,960
+both doing other uh a lot of things
+
+1671
+01:12:01,400 --> 01:12:07,320
+similarly but um some of the differences
+
+1672
+01:12:03,960 --> 01:12:09,520
+are where what is the norm position um
+
+1673
+01:12:07,320 --> 01:12:13,639
+vaswani et al is doing post norm llama is doing
+
+1674
+01:12:09,520 --> 01:12:16,639
+pre norm what is the norm type vaswani is
+
+1675
+01:12:13,639 --> 01:12:19,199
+doing layer Norm llama is doing RMS Norm
+
+1676
+01:12:16,639 --> 01:12:22,440
+what nonlinearity are they using relu
+
+1677
+01:12:19,199 --> 01:12:24,000
+versus silu and what positional encoding
+
+1678
+01:12:22,440 --> 01:12:28,159
+is sinusoidal versus
+
+1679
+01:12:24,000 --> 01:12:30,639
+rope and you might be asking me you
+
+1680
+01:12:28,159 --> 01:12:33,560
+might be thinking like well how much do
+
+1681
+01:12:30,639 --> 01:12:35,880
+I care about this anyway I mean like it
+
+1682
+01:12:33,560 --> 01:12:37,719
+you know might not be might not be super
+
+1683
+01:12:35,880 --> 01:12:40,520
+important but there was actually a
+
+1684
+01:12:37,719 --> 01:12:42,880
+really nice paper by uh Albert Gu who
+
+1685
+01:12:40,520 --> 01:12:45,040
+is an assistant professor in MLD where
+
+1686
+01:12:42,880 --> 01:12:47,080
+they were proposing a new architecture
+
+1687
+01:12:45,040 --> 01:12:50,199
+but one of kind of the Easter eggs in
+
+1688
+01:12:47,080 --> 01:12:52,080
+this paper is this comparison here um
+
+1689
+01:12:50,199 --> 01:12:55,239
+and in this comparison they basically
+
+1690
+01:12:52,080 --> 01:12:56,400
+compare the the vaswani et al original
+
+1691
+01:12:55,239 --> 01:12:59,920
+Transformer
+
+1692
+01:12:56,400 --> 01:13:02,400
+architecture and the Llama style like
+
+1693
+01:12:59,920 --> 01:13:06,400
+Transformer++ like good Transformer
+
+1694
+01:13:02,400 --> 01:13:08,840
+architecture and they compare the
+
+1695
+01:13:06,400 --> 01:13:11,080
+perplexity or actually this is log scale
+
+1696
+01:13:08,840 --> 01:13:12,320
+perplexity so it's basically like log
+
+1697
+01:13:11,080 --> 01:13:15,480
+negative log
+
+1698
+01:13:12,320 --> 01:13:17,400
+likelihood on oh no sorry this is
+
+1699
+01:13:15,480 --> 01:13:20,639
+perplexity but it's on the log scale so
+
+1700
+01:13:17,400 --> 01:13:22,520
+yeah this is
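+
+(The four differences just listed, side by side:)
+
+vaswani_2017 = dict(norm_position="post", norm_type="LayerNorm",
+                    nonlinearity="ReLU", pos_encoding="sinusoidal")
+llama = dict(norm_position="pre", norm_type="RMSNorm",
+             nonlinearity="SiLU", pos_encoding="RoPE")
+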
exal perplexity um and then + +1701 +01:13:20,639 --> 01:13:24,159 +they compare the perplexity based on the + +1702 +01:13:22,520 --> 01:13:26,159 +number of training plot + +1703 +01:13:24,159 --> 01:13:28,960 +and so if you look at the yellow + +1704 +01:13:26,159 --> 01:13:32,920 +transformer and the orange uh + +1705 +01:13:28,960 --> 01:13:36,120 +Transformer Plus+ you can actually see + +1706 +01:13:32,920 --> 01:13:39,750 +that it takes 10 times more + +1707 +01:13:36,120 --> 01:13:41,639 +flops to achieve a + +1708 +01:13:39,750 --> 01:13:45,480 +[Music] + +1709 +01:13:41,639 --> 01:13:47,600 +approximately similar uh it can take 10 + +1710 +01:13:45,480 --> 01:13:50,920 +times more flops to ACH achieve an + +1711 +01:13:47,600 --> 01:13:52,560 +approximately similar result with the + +1712 +01:13:50,920 --> 01:13:54,560 +old architecture compared to the new + +1713 +01:13:52,560 --> 01:13:55,679 +architecture so this is like really + +1714 +01:13:54,560 --> 01:13:59,280 +really important right you want your + +1715 +01:13:55,679 --> 01:14:01,639 +training to be 10 times faster so um you + +1716 +01:13:59,280 --> 01:14:03,120 +can see that like a lot of people were + +1717 +01:14:01,639 --> 01:14:04,800 +saying like scale is all you need and + +1718 +01:14:03,120 --> 01:14:06,040 +architecture engineering isn't important + +1719 +01:14:04,800 --> 01:14:08,719 +or things like that but it turns out + +1720 +01:14:06,040 --> 01:14:12,080 +that architecture engineering is kind of + +1721 +01:14:08,719 --> 01:14:13,840 +kind of important so uh like a lot of + +1722 +01:14:12,080 --> 01:14:15,639 +the advances we've made in the past five + +1723 +01:14:13,840 --> 01:14:17,560 +years or seven years with respect to + +1724 +01:14:15,639 --> 01:14:20,159 +that are actually making a big + +1725 +01:14:17,560 --> 01:14:22,360 +difference cool so I'll I'll leave it at + +1726 +01:14:20,159 --> 01:14:22,360 +that \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (5) Transformers/transcript.vtt b/CMU Advanced NLP 2024 (5) Transformers/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..5de127f68abb57016d5e2dd195d790abc59e0cd4 --- /dev/null +++ b/CMU Advanced NLP 2024 (5) Transformers/transcript.vtt @@ -0,0 +1,5179 @@ +WEBVTT + +00:00:00.240 --> 00:00:04.680 +so this time I'm going to be talking + +00:00:02.720 --> 00:00:07.839 +about Transformers kind of the backbone + +00:00:04.680 --> 00:00:09.719 +of most uh implementations not only a + +00:00:07.839 --> 00:00:11.920 +natural language processing but also you + +00:00:09.719 --> 00:00:14.320 +know a wide variety of other things as + +00:00:11.920 --> 00:00:16.800 +well and I'm going to be talking both + +00:00:14.320 --> 00:00:19.560 +about Transformers as they were + +00:00:16.800 --> 00:00:22.400 +currently as they were originally + +00:00:19.560 --> 00:00:25.119 +conceived and implemented in 2017 and + +00:00:22.400 --> 00:00:26.960 +also some modifications that people make + +00:00:25.119 --> 00:00:28.840 +to Transformers today to make them work + +00:00:26.960 --> 00:00:31.359 +work much better in kind of modern + +00:00:28.840 --> 00:00:35.879 +language models such as so I'll talk + +00:00:31.359 --> 00:00:35.879 +about both of those at the same time + +00:00:36.719 --> 00:00:44.200 +please so as a quick reminder I I just + +00:00:40.000 --> 00:00:47.000 +want to review the attention from last + +00:00:44.200 --> 00:00:48.840 +time very quickly and basically if you + +00:00:47.000 --> 
00:00:51.160
+remember attention there were two
+
+00:00:48.840 --> 00:00:55.760
+varieties of attention one was cross
+
+00:00:51.160 --> 00:00:57.960
+attention where you attend to another
+
+00:00:55.760 --> 00:01:00.079
+sentence basically or another sequence
+
+00:00:57.960 --> 00:01:03.519
+so you have one sequence that serves as
+
+00:01:00.079 --> 00:01:05.479
+your uh keys that you are attending to
+
+00:01:03.519 --> 00:01:07.560
+and one sequence that serves as your
+
+00:01:05.479 --> 00:01:11.200
+queries the things that you are using to
+
+00:01:07.560 --> 00:01:13.479
+attend to the the uh sequence of keys
+
+00:01:11.200 --> 00:01:16.680
+and so uh you can do that for you know
+
+00:01:13.479 --> 00:01:18.360
+every element in the query Vector uh
+
+00:01:16.680 --> 00:01:21.119
+attending to every element in the key
+
+00:01:18.360 --> 00:01:25.479
+vector and the other alternative is self
+
+00:01:21.119 --> 00:01:27.280
+attention where you are uh attending to
+
+00:01:25.479 --> 00:01:29.960
+the same sequence so basically you're
+
+00:01:27.280 --> 00:01:32.560
+guaranteed that the queries and the keys
+
+00:01:29.960 --> 00:01:34.920
+attend uh like correspond to the same
+
+00:01:32.560 --> 00:01:36.360
+sequence and so that's really the only
+
+00:01:34.920 --> 00:01:38.920
+difference between self attention and
+
+00:01:36.360 --> 00:01:42.439
+cross attention um Transformer based
+
+00:01:38.920 --> 00:01:44.759
+models use either self attention or they
+
+00:01:42.439 --> 00:01:46.479
+use uh self attention and cross
+
+00:01:44.759 --> 00:01:48.680
+attention so I'll talk a little bit
+
+00:01:46.479 --> 00:01:50.920
+about two different types of Transformer
+
+00:01:48.680 --> 00:01:53.399
+based models that use both of
+
+00:01:50.920 --> 00:01:56.119
+those and the way we calculated
+
+00:01:53.399 --> 00:01:59.200
+attention was basically uh by using the
+
+00:01:56.119 --> 00:02:00.640
+query vectors uh taking all of the key
+
+00:01:59.200 --> 00:02:03.280
+vectors
+
+00:02:00.640 --> 00:02:05.119
+and uh for each query key pair we would
+
+00:02:03.280 --> 00:02:07.960
+calculate the weight between them like
+
+00:02:05.119 --> 00:02:09.560
+this then we would normalize it by using
+
+00:02:07.960 --> 00:02:12.440
+the soft Max to make sure they all add
+
+00:02:09.560 --> 00:02:14.920
+up to one and are between zero and
+
+00:02:12.440 --> 00:02:18.080
+one and then based on that we took the
+
+00:02:14.920 --> 00:02:20.840
+value vectors and uh we multiplied in
+
+00:02:18.080 --> 00:02:24.720
+these attention weights and we got a
+
+00:02:20.840 --> 00:02:26.400
+final Vector for that so a single query
+
+00:02:24.720 --> 00:02:28.400
+Vector would result in a single value
+
+00:02:26.400 --> 00:02:32.040
+Vector is that
+
+00:02:28.400 --> 00:02:34.840
+output so that's just the the review uh
+
+00:02:32.040 --> 00:02:37.239
+to you know get everybody uh everybody's
+
+00:02:34.840 --> 00:02:39.480
+mind working allow everybody to uh come
+
+00:02:37.239 --> 00:02:41.440
+into the room and so now I'd like to
+
+00:02:39.480 --> 00:02:44.440
+jump into the the new content of talking
+
+00:02:41.440 --> 00:02:48.000
+about how Transformers
+
+00:02:44.440 --> 00:02:50.959
+work um Transformers were proposed in the
+
+00:02:48.000 --> 00:02:52.360
+paper attention is all you need uh by
+
+00:02:50.959 --> 00:02:56.400
+Vaswani et al in
+
+00:02:52.360 --> 00:02:58.239
+2017 um when this paper came out it was
+kind of
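+
+(The review above as a few lines of code; the 1/sqrt(d) scaling is the
+scaled dot-product variant the Transformer itself uses.)
+
+import torch
+
+def attention(q, k, v):
+    # q: (n_q, d), k: (n_k, d), v: (n_k, d_v)
+    scores = q @ k.T / k.shape[-1] ** 0.5    # weight per query-key pair
+    weights = torch.softmax(scores, dim=-1)  # rows sum to one, in [0, 1]
+    return weights @ v                       # one output vector per query
+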
+
+00:02:39.480 --> 00:02:44.440
+and so now I'd like to jump into the new
+
+00:02:41.440 --> 00:02:48.000
+content of talking about how Transformers
+
+00:02:44.440 --> 00:02:50.959
+work um Transformers were proposed in the
+
+00:02:48.000 --> 00:02:52.360
+paper attention is all you need uh by
+
+00:02:50.959 --> 00:02:56.400
+Vaswani et al. in
+
+00:02:52.360 --> 00:02:58.239
+2017 um when this paper came out it was
+
+00:02:56.400 --> 00:03:00.319
+kind of
+
+00:02:58.239 --> 00:03:03.440
+already clear to me that this
+
+00:03:00.319 --> 00:03:05.319
+was going to be a big thing you know
+
+00:03:03.440 --> 00:03:08.120
+soon after the paper came out uh and it
+
+00:03:05.319 --> 00:03:10.879
+actually has turned out to be a very
+
+00:03:08.120 --> 00:03:12.640
+big thing of course um but basically
+
+00:03:10.879 --> 00:03:14.720
+when it came out it was a sequence to
+
+00:03:12.640 --> 00:03:17.599
+sequence model a model that could
+
+00:03:14.720 --> 00:03:19.560
+generate sequences based entirely on
+
+00:03:17.599 --> 00:03:22.640
+attention and so this is in contrast to
+
+00:03:19.560 --> 00:03:24.879
+what came before it which is like they
+
+00:03:22.640 --> 00:03:28.080
+would have an RNN based encoder and then
+
+00:03:24.879 --> 00:03:32.000
+they would only use attention for cross
+
+00:03:28.080 --> 00:03:34.239
+attention so um all of
+
+00:03:32.000 --> 00:03:38.239
+the blocks in the
+
+00:03:34.239 --> 00:03:40.040
+encoder up until this point these would
+
+00:03:38.239 --> 00:03:42.599
+all be RNN based
+
+00:03:40.040 --> 00:03:46.799
+blocks and then they would have a
+
+00:03:42.599 --> 00:03:48.239
+decoder over here maybe
+
+00:03:46.799 --> 00:03:50.879
+also consisting of an RNN
+
+00:03:48.239 --> 00:03:50.879
+actually this could be a bidirectional
+
+00:03:51.000 --> 00:03:56.159
+RNN for
+
+00:03:53.319 --> 00:03:56.159
+example um and this could be a
+
+00:03:57.360 --> 00:04:01.720
+unidirectional
+
+00:03:59.760 --> 00:04:04.360
+RNN um and then they would only use
+
+00:04:01.720 --> 00:04:07.760
+attention for the cross attention part
+
+00:04:04.360 --> 00:04:09.040
+here so this would be
+
+00:04:07.760 --> 00:04:11.319
+attention um and so what the Transformer
+
+00:04:09.040 --> 00:04:12.439
+did is it basically said we're going to
+
+00:04:11.319 --> 00:04:15.280
+remove the RNN based sequence modeling and
+
+00:04:12.439 --> 00:04:16.600
+we're going to replace this all with
+
+00:04:15.280 --> 00:04:18.199
+self attention so um hence the name
+
+00:04:16.600 --> 00:04:21.120
+attention is all you need so they
+
+00:04:18.199 --> 00:04:23.840
+removed all of the other sequence
+
+00:04:21.120 --> 00:04:25.240
+modeling components other than attention
+
+00:04:23.840 --> 00:04:26.880
+um at the time the paper came out
+
+00:04:25.240 --> 00:04:30.120
+it gave strong results on machine
+
+00:04:26.880 --> 00:04:33.080
+translation and of course now it gives
+
+00:04:30.120 --> 00:04:34.960
+strong results on everything um another
+
+00:04:33.080 --> 00:04:37.440
+really important thing is it's uh fast
+
+00:04:34.960 --> 00:04:38.880
+and it only consists of matrix
+
+00:04:37.440 --> 00:04:42.080
+multiplications um and so this is really
+
+00:04:38.880 --> 00:04:43.759
+important for the same reason that I
+
+00:04:42.080 --> 00:04:45.080
+mentioned uh last class which is that
+
+00:04:43.759 --> 00:04:46.479
+RNNs are kind of bottlenecked by the
+
+00:04:45.080 --> 00:04:48.759
+fact that you have to wait for the
+
+00:04:46.479 --> 00:04:50.400
+calculation from the previous state
+
+00:04:48.759 --> 00:04:53.160
+before you can calculate the next one um
+
+00:04:50.400 --> 00:04:56.240
+Transformers you don't have to do that
+
+00:04:53.160 --> 00:04:58.479
+so it makes it much faster
+
+00:04:56.240 --> 00:05:00.680
+and actually I would argue that that's
+
+00:04:58.479 --> 00:05:02.160
+probably a bigger reason why they became
+
+00:05:00.680 --> 00:05:04.280
+very popular than uh the idea that
+
+00:05:02.160 --> 00:05:06.800
+Transformers are a better modeling
+
+00:05:04.280 --> 00:05:09.280
+methodology or anything like that I
+
+00:05:06.800 --> 00:05:09.280
+think it's actually mostly due to them
+
+00:05:10.840 --> 00:05:17.000
+being
+
+00:05:13.400 --> 00:05:19.199
+fast so I'm going to go through two
+
+00:05:17.000 --> 00:05:22.319
+types of Transformers um specifically
+
+00:05:19.199 --> 00:05:26.720
+encoder decoder
+
+00:05:22.319 --> 00:05:30.039
+Transformers uh and these are used in
+
+00:05:26.720 --> 00:05:32.479
+models such as um T5 and
+
+00:05:30.039 --> 00:05:34.919
+BART uh and T5 is actually still uh
+
+00:05:32.479 --> 00:05:36.639
+reasonably widely used in some
+
+00:05:34.919 --> 00:05:39.520
+applications um and also decoder only
+
+00:05:36.639 --> 00:05:40.880
+models and these are things like GPT and
+
+00:05:39.520 --> 00:05:42.479
+LLaMA and this is used uh kind of most
+
+00:05:40.880 --> 00:05:44.039
+widely right now most of the new
+
+00:05:42.479 --> 00:05:46.639
+language models coming out are decoder
+
+00:05:44.039 --> 00:05:49.919
+only
+
+00:05:46.639 --> 00:05:51.960
+models so here are the architecture
+
+00:05:49.919 --> 00:05:53.440
+diagrams uh for them and what you
+
+00:05:51.960 --> 00:05:57.080
+can see is a decoder only model only has
+
+00:05:53.440 --> 00:05:59.800
+a
+
+00:05:57.080 --> 00:06:01.919
+single model here where the encoder
+
+00:05:59.800 --> 00:06:06.440
+decoder model has an encoder and a
+
+00:06:01.919 --> 00:06:08.080
+decoder like
+
+00:06:06.440 --> 00:06:09.960
+this so the way the Transformer works
+
+00:06:08.080 --> 00:06:11.599
+is you have an input embedding you have
+
+00:06:09.960 --> 00:06:13.479
+something called positional encodings
+
+00:06:11.599 --> 00:06:16.199
+which I'll talk about a bit then you
+
+00:06:13.479 --> 00:06:17.800
+have multi-head attention blocks and
+
+00:06:16.199 --> 00:06:21.400
+these multi-head attention blocks are
+
+00:06:17.800 --> 00:06:22.560
+followed by feed forward blocks so the
+
+00:06:21.400 --> 00:06:23.800
+multi-head attention blocks are
+
+00:06:22.560 --> 00:06:25.680
+basically doing attention the feed
+
+00:06:23.800 --> 00:06:29.080
+forward blocks are basically doing uh
+
+00:06:25.680 --> 00:06:31.000
+extraction and combination of features um to
+
+00:06:29.080 --> 00:06:33.880
+kind of mix together the different
+
+00:06:31.000 --> 00:06:38.240
+features calculated by the
+
+00:06:33.880 --> 00:06:40.360
+attention and then in a decoder
+
+00:06:38.240 --> 00:06:42.560
+only model that's all you have and then
+
+00:06:40.360 --> 00:06:46.400
+in the encoder decoder model you also
+
+00:06:42.560 --> 00:06:49.840
+have um something like this where you
+
+00:06:46.400 --> 00:06:52.520
+have masked multi-head attention uh to
+
+00:06:49.840 --> 00:06:55.160
+calculate kind of uh in place of the RNN
+
+00:06:52.520 --> 00:06:56.919
+here and then you have this multihead
+
+00:06:55.160 --> 00:06:59.199
+attention here in place of the cross
+
+00:06:56.919 --> 00:07:01.319
+attention here and then you also have
+
+00:06:59.199 --> 00:07:02.720
+the feed forward network and I'm going to
+
+00:07:01.319 --> 00:07:05.000
+go through each one of these in detail
+
+00:07:02.720 --> 00:07:06.759
+but that's just kind of the general
+
+00:07:05.000 --> 00:07:09.080
+structure
+
+00:07:06.759 --> 00:07:11.160
+and so I mentioned that like
+
+00:07:09.080 --> 00:07:14.680
+encoder decoder models were widely used
+
+00:07:11.160 --> 00:07:18.319
+in T5 um this was also the original uh
+
+00:07:14.680 --> 00:07:21.039
+Transformer paper had this um
+
+00:07:18.319 --> 00:07:24.240
+thing here uh this architecture here
+
+00:07:21.039 --> 00:07:29.479
+this is a little bit newer so why would
+
+00:07:24.240 --> 00:07:31.720
+you pick one or the other um T5 and BART
+
+00:07:29.479 --> 00:07:33.800
+basically uh they picked this one kind
+
+00:07:31.720 --> 00:07:37.560
+of partly out of tradition but also
+
+00:07:33.800 --> 00:07:39.400
+partly um for things where you
+
+00:07:37.560 --> 00:07:41.960
+definitely have like a clear input
+
+00:07:39.400 --> 00:07:44.280
+output structure right so it's like I
+
+00:07:41.960 --> 00:07:46.639
+want to do a summary I want to
+
+00:07:44.280 --> 00:07:48.680
+take in a document that's my input I
+
+00:07:46.639 --> 00:07:51.360
+want to generate a summary or I want to
+
+00:07:48.680 --> 00:07:53.440
+take in an English sentence and I want
+
+00:07:51.360 --> 00:07:57.360
+to generate a Japanese sentence and
+
+00:07:53.440 --> 00:07:58.560
+that's a translation um however things
+
+00:07:57.360 --> 00:08:00.199
+get a little bit tricky when you're
+
+00:07:58.560 --> 00:08:02.360
+talking about something like a chatbot
+
+00:08:00.199 --> 00:08:04.800
+right so if you have a chatbot what is
+
+00:08:02.360 --> 00:08:06.479
+your input and what's your output like
+
+00:08:04.800 --> 00:08:08.159
+one thing that could be your input is
+
+00:08:06.479 --> 00:08:11.159
+like all of the context that you've seen
+
+00:08:08.159 --> 00:08:12.400
+before um and then your output could be
+
+00:08:11.159 --> 00:08:15.960
+you know the
+
+00:08:12.400 --> 00:08:18.360
+next like utterance or the next
+
+00:08:15.960 --> 00:08:19.879
+dialogue turn but on the other hand
+
+00:08:18.360 --> 00:08:21.360
+another way you could look at it is well
+
+00:08:19.879 --> 00:08:22.919
+it's all just part of this one big
+
+00:08:21.360 --> 00:08:27.080
+sequence and we want to model this whole
+
+00:08:22.919 --> 00:08:29.319
+big sequence at a time and so um because
+
+00:08:27.080 --> 00:08:30.720
+of that decoder only models basically
+
+00:08:29.319 --> 00:08:32.360
+don't force you to decide what your
+
+00:08:30.720 --> 00:08:34.399
+input and what your output is you can
+
+00:08:32.360 --> 00:08:36.680
+just treat all of it as one long
+
+00:08:34.399 --> 00:08:38.760
+sequence and that's a little bit more
+
+00:08:36.680 --> 00:08:40.159
+convenient another reason why decoder
+
+00:08:38.760 --> 00:08:42.959
+only models are a little bit more
+
+00:08:40.159 --> 00:08:46.040
+convenient is they're simpler uh so they
+
+00:08:42.959 --> 00:08:48.600
+just have you know these two layers and
+
+00:08:46.040 --> 00:08:50.440
+they don't have like separate multi-head
+
+00:08:48.600 --> 00:08:53.320
+attention blocks for the encoder and the
+
+00:08:50.440 --> 00:08:56.440
+cross attention and uh the decoder
+
+00:08:53.320 --> 00:08:57.920
+attention here and so because of this
+
+00:08:56.440 --> 00:08:59.279
+because this is simpler and has fewer
+
+00:08:57.920 --> 00:09:01.360
+parameters overall you can make
+
+00:08:59.279 --> 00:09:04.480
+each layer bigger or you can make
+
+00:09:01.360 --> 00:09:05.839
+more layers or other things like that um
+
+00:09:04.480 --> 00:09:07.880
+actually one thing I forgot to mention
+
+00:09:05.839 --> 00:09:09.560
+is this Nx means you do this over and
+
+00:09:07.880 --> 00:09:12.800
+over again so you have like multiple
+
+00:09:09.560 --> 00:09:12.800
+layers of these blocks
+
+00:09:12.880 --> 00:09:19.160
+basically
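+
+A rough sketch of that repeated "Nx" structure for a decoder-only model, assuming PyTorch; DecoderBlock and positional_encoding are placeholder names standing in for the components described in the rest of the lecture, not definitions from it:
+
+import torch.nn as nn
+
+class TransformerLM(nn.Module):
+    def __init__(self, vocab_size, d_model=512, n_layers=6):
+        super().__init__()
+        self.embed = nn.Embedding(vocab_size, d_model)
+        # the "Nx" in the diagram: the same block type stacked n_layers times
+        self.blocks = nn.ModuleList(
+            [DecoderBlock(d_model) for _ in range(n_layers)]
+        )
+        self.out = nn.Linear(d_model, vocab_size)
+
+    def forward(self, tokens):
+        # token embeddings plus positional encodings at the very bottom
+        x = self.embed(tokens) + positional_encoding(tokens)
+        for block in self.blocks:  # attention block followed by feed-forward block
+            x = block(x)
+        return self.out(x)  # a score over the vocabulary at each position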
+
+00:09:17.200 --> 00:09:23.200
+cool um any questions
+
+00:09:19.160 --> 00:09:23.200
+here yeah same
+
+00:09:23.839 --> 00:09:27.839
+same so what do you mean by same size is
+
+00:09:26.399 --> 00:09:30.480
+the first thing to ask about do you mean
+
+00:09:27.839 --> 00:09:33.360
+same number of parameters you
+
+00:09:30.480 --> 00:09:37.640
+mean so for the same number of parameters
+
+00:09:33.360 --> 00:09:40.040
+I think it really depends um there was
+
+00:09:37.640 --> 00:09:41.800
+a comparison in the T5 paper where they
+
+00:09:40.040 --> 00:09:43.519
+did something like that and I think they
+
+00:09:41.800 --> 00:09:45.279
+did demonstrate that the encoder decoder
+
+00:09:43.519 --> 00:09:47.000
+was like slightly better but I don't
+
+00:09:45.279 --> 00:09:49.279
+know if they exactly controlled for the
+
+00:09:47.000 --> 00:09:51.720
+size um I have to go back and look at
+
+00:09:49.279 --> 00:09:53.519
+that to tell you the details but the T5
+
+00:09:51.720 --> 00:09:56.399
+paper is actually a really really nice
+
+00:09:53.519 --> 00:09:57.800
+paper in terms of uh how they explore
+
+00:09:56.399 --> 00:09:59.399
+all the design dimensions and like
+
+00:09:57.800 --> 00:10:02.399
+training objectives and stuff like that
+
+00:09:59.399 --> 00:10:07.399
+so you could take a look at that if you
+
+00:10:02.399 --> 00:10:09.720
+want um any other
+
+00:10:07.399 --> 00:10:12.839
+questions okay so let's go into the
+
+00:10:09.720 --> 00:10:17.240
+details so my goal uh by the end
+
+00:10:12.839 --> 00:10:19.519
+is that you have a very good grasp of
+
+00:10:17.240 --> 00:10:22.160
+you know all of the basic
+
+00:10:19.519 --> 00:10:25.640
+components that go in here and also uh
+
+00:10:22.160 --> 00:10:28.839
+some of the parts that LLaMA is changing
+
+00:10:25.640 --> 00:10:31.279
+from the original architecture and
+
+00:10:28.839 --> 00:10:35.200
+why that's important so uh that's kind
+
+00:10:31.279 --> 00:10:35.200
+of the main goal for
+
+00:10:36.320 --> 00:10:42.800
+today okay so core Transformer concepts
+
+00:10:40.360 --> 00:10:45.639
+uh as I said positional encodings are
+
+00:10:42.800 --> 00:10:48.160
+one core concept multi-head attention
+
+00:10:45.639 --> 00:10:49.839
+is another core concept um masked
+
+00:10:48.160 --> 00:10:51.320
+attention is a core concept which I kind
+
+00:10:49.839 --> 00:10:54.360
+of talked about last time but I'll
+
+00:10:51.320 --> 00:10:56.639
+talk in a little more detail um residual
+
+00:10:54.360 --> 00:10:58.040
+layers and layer normalization and
+
+00:10:56.639 --> 00:11:01.040
+the feed forward
+
+00:10:58.040 --> 00:11:03.600
+layers
+
+00:11:01.040 --> 00:11:06.360
+so inputs and embeddings are kind of
+
+00:11:03.600 --> 00:11:09.000
+boring I guess uh since we've already
+
+00:11:06.360 --> 00:11:10.639
+covered them inputs are generally split
+
+00:11:09.000 --> 00:11:13.160
+into subwords like this like we talked
+
+00:11:10.639 --> 00:11:15.000
+about before embeddings normally you
+
+00:11:13.160 --> 00:11:18.040
+just look them up like we discussed in
+
+00:11:15.000 --> 00:11:19.880
+previous models so Transformer based
+
+00:11:18.040 --> 00:11:22.839
+models don't really do anything fancy
+
+00:11:19.880 --> 00:11:25.320
+here um the only
big thing I guess is
+
+00:11:22.839 --> 00:11:28.320
+that when Transformer models
+
+00:11:25.320 --> 00:11:29.880
+came out they kind of like normalized
+
+00:11:28.320 --> 00:11:31.480
+the fact that you do subword segmentation
+
+00:11:29.880 --> 00:11:35.360
+and like every major Transformer based
+
+00:11:31.480 --> 00:11:35.360
+model does subword segmentation now
+
+00:11:35.519 --> 00:11:39.959
+um so skipping over that briefly uh the
+
+00:11:38.880 --> 00:11:42.000
+next thing I want to talk about is
+
+00:11:39.959 --> 00:11:43.440
+multi-head attention and this is kind of
+
+00:11:42.000 --> 00:11:45.800
+one of the big innovations in the
+
+00:11:43.440 --> 00:11:49.480
+Transformer
+
+00:11:45.800 --> 00:11:53.120
+paper so multi-head attention
+
+00:11:49.480 --> 00:11:56.839
+um the basic intuition behind it is that
+
+00:11:53.120 --> 00:11:58.160
+information from different parts of the
+
+00:11:56.839 --> 00:12:01.639
+sentence or sequence that you're
+
+00:11:58.160 --> 00:12:04.880
+modeling can be useful in different ways
+
+00:12:01.639 --> 00:12:08.480
+and if you are just doing
+
+00:12:04.880 --> 00:12:11.600
+attention um if you are just doing
+
+00:12:08.480 --> 00:12:14.360
+attention with a single attention head
+
+00:12:11.600 --> 00:12:16.480
+basically a single uh you know attention
+
+00:12:14.360 --> 00:12:17.920
+vector you might need to make hard
+
+00:12:16.480 --> 00:12:21.199
+decisions about which part of the
+
+00:12:17.920 --> 00:12:24.639
+sentence you pay attention to so um I
+
+00:12:21.199 --> 00:12:27.880
+wrote four examples of the word run here
+
+00:12:24.639 --> 00:12:30.040
+um can anybody tell me how these are
+
+00:12:27.880 --> 00:12:31.959
+different
+
+00:12:30.040 --> 00:12:33.800
+how are one and two different
+
+00:12:31.959 --> 00:12:36.600
+from three and
+
+00:12:33.800 --> 00:12:42.040
+four yeah the first two are verbs and
+
+00:12:36.600 --> 00:12:45.320
+the second two are nouns yeah um and so
+
+00:12:42.040 --> 00:12:45.320
+how is one different from
+
+00:12:47.720 --> 00:12:52.079
+two if you know another language if you
+
+00:12:50.480 --> 00:12:55.240
+translate them into another language are
+
+00:12:52.079 --> 00:12:55.240
+they translated the same or
+
+00:12:55.800 --> 00:12:59.160
+differently yeah the meaning the
+
+00:12:57.519 --> 00:13:01.920
+meanings are different so this is
+
+00:12:59.160 --> 00:13:04.639
+also called word sense um or it's called
+
+00:13:01.920 --> 00:13:06.480
+semantics or it's called other uh
+
+00:13:04.639 --> 00:13:08.000
+lexical semantics or something like this
+
+00:13:06.480 --> 00:13:10.160
+but basically the meanings are different
+
+00:13:08.000 --> 00:13:12.079
+like if you translate these two into
+
+00:13:10.160 --> 00:13:13.240
+probably many other languages in the
+
+00:13:12.079 --> 00:13:15.600
+world they'd have a different
+
+00:13:13.240 --> 00:13:17.440
+translation uh because they mean
+
+00:13:15.600 --> 00:13:20.160
+different things like physically uh
+
+00:13:17.440 --> 00:13:23.040
+sorry uh run a business versus
+
+00:13:20.160 --> 00:13:25.160
+physically run um and same for three and
+
+00:13:23.040 --> 00:13:28.079
+four right running a staffing is very
+
+00:13:25.160 --> 00:13:29.680
+different than um making it run
+
+00:13:28.079 --> 00:13:32.600
+there
+
+00:13:29.680 --> 00:13:33.959
+now if you look at the information you
+
+00:13:32.600 --> 00:13:35.240
+might not even think about it but if you
+
+00:13:33.959 --> 00:13:38.720
+look at the information you use to
+
+00:13:35.240 --> 00:13:41.680
+disambiguate these things it's pretty
+
+00:13:38.720 --> 00:13:43.920
+different usually for syntactic things
+
+00:13:41.680 --> 00:13:47.079
+you can just tell from the nearby
+
+00:13:43.920 --> 00:13:49.160
+context so for example if you have a
+
+00:13:47.079 --> 00:13:51.279
+noun to the left usually that means
+
+00:13:49.160 --> 00:13:53.199
+something is going to be a verb uh on
+
+00:13:51.279 --> 00:13:55.000
+the other hand if you have a determiner
+
+00:13:53.199 --> 00:13:57.079
+on the left it's almost certain that
+
+00:13:55.000 --> 00:14:00.199
+that thing is going to be either
+
+00:13:57.079 --> 00:14:01.480
+a noun or an adjective so you only
+
+00:14:00.199 --> 00:14:03.079
+really need to look at very local
+
+00:14:01.480 --> 00:14:05.120
+context to do this sort of
+
+00:14:03.079 --> 00:14:07.399
+disambiguation but in order to
+
+00:14:05.120 --> 00:14:09.480
+disambiguate uh semantics you need to
+
+00:14:07.399 --> 00:14:11.759
+look at farther uh
+
+00:14:09.480 --> 00:14:13.720
+context one interesting thing is like
+
+00:14:11.759 --> 00:14:16.880
+let's say you want to learn embeddings
+
+00:14:13.720 --> 00:14:18.320
+of words there's
+
+00:14:16.880 --> 00:14:19.839
+actually a trick that you can use when
+
+00:14:18.320 --> 00:14:22.040
+training word embeddings where you only
+
+00:14:19.839 --> 00:14:23.639
+look at the local context
+
+00:14:22.040 --> 00:14:26.120
+and you can learn syntactic embeddings
+
+00:14:23.639 --> 00:14:27.480
+or you don't look at the local context
+
+00:14:26.120 --> 00:14:30.160
+and you only look at the farther away
+
+00:14:27.480 --> 00:14:33.600
+context and you can learn semantics so
+
+00:14:30.160 --> 00:14:35.000
+like you can actually use this to get um
+
+00:14:33.600 --> 00:14:36.519
+like influence your models in
+
+00:14:35.000 --> 00:14:40.920
+interesting ways but anyway that's kind
+
+00:14:36.519 --> 00:14:42.720
+of an aside so um the basic idea here
+
+00:14:40.920 --> 00:14:44.360
+though is different pieces of context
+
+00:14:42.720 --> 00:14:47.000
+can be useful for different
+
+00:14:44.360 --> 00:14:49.199
+purposes and that's kind of what
+
+00:14:47.000 --> 00:14:51.160
+multi-head attention is trying to
+
+00:14:49.199 --> 00:14:53.279
+get at so it doesn't want to
+
+00:14:51.160 --> 00:14:55.440
+force you to decide whether to look at I
+
+00:14:53.279 --> 00:14:57.440
+or to look at business but it wants
+
+00:14:55.440 --> 00:15:00.680
+to allow you to look at both of them for
+
+00:14:57.440 --> 00:15:00.680
+different purposes
+
+00:15:01.639 --> 00:15:07.399
+so how exactly does multi-head
+
+00:15:03.720 --> 00:15:11.040
+attention work I wrote the equation up
+
+00:15:07.399 --> 00:15:15.880
+here and actually I should point out um
+
+00:15:11.040 --> 00:15:18.279
+that the reference on the web page for
+
+00:15:15.880 --> 00:15:21.199
+the annotated Transformer is really nice
+
+00:15:18.279 --> 00:15:22.920
+like I got some of the equations
+
+00:15:21.199 --> 00:15:24.800
+directly from that and you can look
+
+00:15:22.920 --> 00:15:27.360
+through and see pytorch code for all of
+
+00:15:24.800 --> 00:15:29.279
+these things too uh which can be helpful
+
+00:15:27.360 --> 00:15:31.480
+so um
+
+00:15:29.279 --> 00:15:33.120
+anyway uh we have the multi-head
+
+00:15:31.480 --> 00:15:36.160
+attention and it
looks like this I'm
+
+00:15:33.120 --> 00:15:37.519
+going to walk through the diagram
+
+00:15:36.160 --> 00:15:39.360
+that I have down here though because it
+
+00:15:37.519 --> 00:15:42.560
+might be a little bit easier to
+
+00:15:39.360 --> 00:15:45.440
+follow so this diagram is a little
+
+00:15:42.560 --> 00:15:47.839
+bit different than what is presented in
+
+00:15:45.440 --> 00:15:49.639
+the attention is all you need paper but
+
+00:15:47.839 --> 00:15:51.199
+I intentionally made the diagram closer
+
+00:15:49.639 --> 00:15:54.279
+to how you actually want to
+
+00:15:51.199 --> 00:15:58.319
+implement it in pytorch uh for example
+
+00:15:54.279 --> 00:16:00.079
+so um the first thing that you do is you
+
+00:15:58.319 --> 00:16:01.720
+have a whole bunch of query vectors
+
+00:16:00.079 --> 00:16:05.440
+and a whole bunch of key
+
+00:16:01.720 --> 00:16:07.000
+vectors um so the query vectors here I
+
+00:16:05.440 --> 00:16:09.240
+only have three of them the key vectors
+
+00:16:07.000 --> 00:16:12.839
+and value vectors I have four that's
+
+00:16:09.240 --> 00:16:15.920
+kind of intentional um so this
+
+00:16:12.839 --> 00:16:17.399
+is permissible uh you must
+
+00:16:15.920 --> 00:16:19.279
+have the same number of key vectors and
+
+00:16:17.399 --> 00:16:20.759
+value vectors uh but you can have a
+
+00:16:19.279 --> 00:16:22.959
+different number of query vectors if you
+
+00:16:20.759 --> 00:16:27.519
+want
+
+00:16:22.959 --> 00:16:29.079
+um so is that
+
+00:16:27.519 --> 00:16:30.880
+clear
+
+00:16:29.079 --> 00:16:33.160
+in which case can you have a different
+
+00:16:30.880 --> 00:16:36.759
+number of query vectors and key vectors
+
+00:16:33.160 --> 00:16:39.839
+yeah when it's masked you
+
+00:16:36.759 --> 00:16:44.720
+could do that um yeah that is true
+
+00:16:39.839 --> 00:16:44.720
+um I was thinking something else
+
+00:16:49.560 --> 00:16:53.240
+yeah when you're decoding and you have a
+
+00:16:51.759 --> 00:16:56.360
+short sequence and you're attending to a
+
+00:16:53.240 --> 00:17:00.680
+longer sequence yeah um that's basically
+
+00:16:56.360 --> 00:17:03.079
+it um you can have this when it's cross
+
+00:17:00.680 --> 00:17:04.400
+attention um because in cross attention
+
+00:17:03.079 --> 00:17:05.959
+the sequence that you're attending to
+
+00:17:04.400 --> 00:17:08.480
+can be different than the sequence that
+
+00:17:05.959 --> 00:17:10.079
+you're using to attend if you're doing
+
+00:17:08.480 --> 00:17:12.120
+self attention these need to be the same
+
+00:17:10.079 --> 00:17:13.600
+one because the sequences are the same
+
+00:17:12.120 --> 00:17:16.079
+so the length of the sequence will also
+
+00:17:13.600 --> 00:17:19.839
+be the same so
+
+00:17:16.079 --> 00:17:22.360
+um so yeah that's one thing
+
+00:17:19.839 --> 00:17:24.760
+uh the reason why I made these different
+
+00:17:22.360 --> 00:17:26.600
+just to demonstrate that
+
+00:17:24.760 --> 00:17:28.799
+they can differ
+
+00:17:26.600 --> 00:17:32.080
+so the first thing that you do is you
+
+00:17:28.799 --> 00:17:35.200
+multiply by weights and you have three
+
+00:17:32.080 --> 00:17:36.720
+different weight matrices uh the first
+
+00:17:35.200 --> 00:17:38.360
+or actually you have four different
+
+00:17:36.720 --> 00:17:40.240
+weight matrices overall but here we're
+
+00:17:38.360 --> 00:17:42.240
+going to use three of them the first one
+
+00:17:40.240 --> 00:17:46.039
+is the query matrix so you
multiply this
+input by the query matrix uh then you
+
+00:17:46.039 --> 00:17:49.760
+have your key matrix you multiply the
+
+00:17:47.600 --> 00:17:51.919
+input by the key weight matrix and
+
+00:17:49.760 --> 00:17:54.120
+then you have the value matrix so um
+
+00:17:51.919 --> 00:17:57.400
+that you multiply by the weights here
+
+00:17:54.120 --> 00:18:01.520
+and that's what we have up here in the
+
+00:17:57.400 --> 00:18:05.720
+equation um then the next thing that we
+
+00:18:01.520 --> 00:18:07.400
+do is we split and rearrange these into
+
+00:18:05.720 --> 00:18:11.919
+n attention
+
+00:18:07.400 --> 00:18:15.200
+inputs and so the way we do this is we
+
+00:18:11.919 --> 00:18:18.120
+split these up like this so we have 1
+
+00:18:15.200 --> 00:18:19.840
+2 3 4 we split them up into two of
+
+00:18:18.120 --> 00:18:21.760
+them of size two so this is the case
+
+00:18:19.840 --> 00:18:25.000
+where you have two attention
+
+00:18:21.760 --> 00:18:27.840
+heads um and each attention head has a
+
+00:18:25.000 --> 00:18:29.760
+vector of size two in reality usually
+
+00:18:27.840 --> 00:18:31.280
+your vector will be size like 512 or
+
+00:18:29.760 --> 00:18:33.919
+1024 and then you'll have eight
+
+00:18:31.280 --> 00:18:37.080
+attention heads or something like this
+
+00:18:33.919 --> 00:18:40.120
+um or you know much more if you have
+
+00:18:37.080 --> 00:18:42.240
+a larger model um but here I'm just
+
+00:18:40.120 --> 00:18:44.720
+doing a simple example for illustrative
+
+00:18:42.240 --> 00:18:47.080
+purposes and we do this over all of
+
+00:18:44.720 --> 00:18:48.720
+these note that this is like a little
+
+00:18:47.080 --> 00:18:52.159
+bit different than the equation that you
+
+00:18:48.720 --> 00:18:54.080
+have here um so the equation that you
+
+00:18:52.159 --> 00:18:59.400
+have here you're splitting up first and
+
+00:18:54.080 --> 00:19:00.840
+then doing the matrix multiply so um
+
+00:18:59.400 --> 00:19:03.360
+so you would be doing the matrix
+
+00:19:00.840 --> 00:19:04.440
+multiply of this matrix uh resulting in
+
+00:19:03.360 --> 00:19:06.520
+this then you would do the matrix
+
+00:19:04.440 --> 00:19:08.360
+multiply resulting in this but in
+
+00:19:06.520 --> 00:19:09.720
+reality we do the big matrix multiply
+
+00:19:08.360 --> 00:19:11.679
+all at once just because it's more
+
+00:19:09.720 --> 00:19:13.640
+efficient to do it that way uh because
+
+00:19:11.679 --> 00:19:16.360
+we want to do more big operations than
+
+00:19:13.640 --> 00:19:19.280
+do a bunch of operations separately so
+
+00:19:16.360 --> 00:19:23.120
+uh this diagram here is closer to what
+
+00:19:19.280 --> 00:19:23.120
+you actually do in pytorch for
+
+00:19:25.080 --> 00:19:29.400
+example so this is now a
+
+00:19:27.280 --> 00:19:33.480
+three-dimensional
+
+00:19:29.400 --> 00:19:35.200
+um so uh like at this point here
+
+00:19:33.480 --> 00:19:40.919
+you would have a three-dimensional tensor
+
+00:19:35.200 --> 00:19:43.840
+where we have um two rows and three
+
+00:19:40.919 --> 00:19:46.400
+columns and uh the third dimension is
+
+00:19:43.840 --> 00:19:49.080
+two so you can see it's kind of
+
+00:19:46.400 --> 00:19:50.480
+three-dimensional here and that's also
+
+00:19:49.080 --> 00:19:52.919
+good because in the next step we're
+
+00:19:50.480 --> 00:19:54.960
+going to run attention over each head
+
+00:19:52.919 --> 00:19:56.760
+and when we run attention over each head
+
+00:19:54.960 --> 00:19:59.000
+if we run attention over
+
+00:19:56.760 --> 00:20:00.360
+three-dimensional tensors at once that's a
+
+00:19:59.000 --> 00:20:02.200
+lot more efficient than writing a for
+
+00:20:00.360 --> 00:20:06.640
+loop and doing it individually over each
+
+00:20:02.200 --> 00:20:08.280
+of these split up things here so um
+
+00:20:06.640 --> 00:20:11.919
+that's the next thing
+
+00:20:08.280 --> 00:20:14.559
+we do and when we run attention we
+
+00:20:11.919 --> 00:20:16.480
+basically calculate the attention vector
+
+00:20:14.559 --> 00:20:20.640
+using the query and the
+
+00:20:16.480 --> 00:20:22.919
+key and uh then we multiply the value
+
+00:20:20.640 --> 00:20:25.080
+vectors by that attention vector uh take
+
+00:20:22.919 --> 00:20:27.799
+the weighted sum via the attention vector
+
+00:20:25.080 --> 00:20:32.320
+and that gives us a result that looks
+
+00:20:27.799 --> 00:20:35.880
+like this basically so um of course the
+
+00:20:32.320 --> 00:20:38.799
+number of columns in this will be equal
+
+00:20:35.880 --> 00:20:40.919
+to the number of columns in the query uh
+
+00:20:38.799 --> 00:20:43.000
+the query matrix here because we calculate
+
+00:20:40.919 --> 00:20:45.480
+one representation for each thing in the
+
+00:20:43.000 --> 00:20:48.039
+query uh in the query
+
+00:20:45.480 --> 00:20:49.919
+matrix and then we
+
+00:20:48.039 --> 00:20:52.520
+concatenate them
+
+00:20:49.919 --> 00:20:54.799
+together uh and when we concatenate them
+
+00:20:52.520 --> 00:20:58.440
+together we get a
+
+00:20:54.799 --> 00:21:01.559
+bigger vector here and so when we do
+
+00:20:58.440 --> 00:21:03.159
+this each one will get a
+
+00:21:01.559 --> 00:21:05.320
+different attention weight so we have a
+
+00:21:03.159 --> 00:21:06.400
+different attention weighting uh over
+
+00:21:05.320 --> 00:21:09.240
+all of
+
+00:21:06.400 --> 00:21:12.120
+them this is what the code looks like uh
+
+00:21:09.240 --> 00:21:14.480
+I basically put it up here um basically
+
+00:21:12.120 --> 00:21:18.480
+you do linear projections for all of
+
+00:21:14.480 --> 00:21:21.240
+these um we reshape to get H heads we
+
+00:21:18.480 --> 00:21:24.159
+apply attention to all of the heads and
+
+00:21:21.240 --> 00:21:26.080
+then we concatenate them back
+
+00:21:24.159 --> 00:21:29.799
+together and then we apply a final
+
+00:21:26.080 --> 00:21:34.520
+linear layer so we have a final
+
+00:21:29.799 --> 00:21:34.520
+matrix multiplication uh at the very end
+
+00:21:35.159 --> 00:21:41.880
+here
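+
+Those steps (linear projections, reshape to get H heads, attention over every head at once, concatenate, final linear layer) in sketch form, assuming PyTorch; the exact dimension choices are illustrative:
+
+import torch.nn as nn
+import torch.nn.functional as F
+
+class MultiHeadAttention(nn.Module):
+    def __init__(self, d_model=512, n_heads=8):
+        super().__init__()
+        assert d_model % n_heads == 0  # each head gets d_model // n_heads dimensions
+        self.h, self.d_k = n_heads, d_model // n_heads
+        # the four weight matrices: query, key, value, and the final output projection
+        self.w_q, self.w_k, self.w_v, self.w_o = (
+            nn.Linear(d_model, d_model) for _ in range(4)
+        )
+
+    def forward(self, q, k, v):
+        n_q = q.shape[0]
+        # 1) big matrix multiplies all at once, then 2) split the rows into h heads
+        q = self.w_q(q).view(n_q, self.h, self.d_k).transpose(0, 1)  # (h, n_q, d_k)
+        k = self.w_k(k).view(-1, self.h, self.d_k).transpose(0, 1)   # (h, n_k, d_k)
+        v = self.w_v(v).view(-1, self.h, self.d_k).transpose(0, 1)   # (h, n_k, d_k)
+        # 3) attention over the three-dimensional tensors, no for loop over heads
+        weights = F.softmax(q @ k.transpose(-2, -1) / self.d_k ** 0.5, dim=-1)
+        out = (weights @ v).transpose(0, 1).reshape(n_q, -1)  # 4) concatenate heads
+        return self.w_o(out)  # 5) final linear layer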
+
+00:21:38.200 --> 00:21:46.400
+and so I didn't really explicitly
+
+00:21:41.880 --> 00:21:48.440
+expand the attention vectors in the previous uh diagram
+
+00:21:46.400 --> 00:21:50.760
+but I have them here so this is an
+
+00:21:48.440 --> 00:21:52.039
+example from the Vaswani et al. paper
+
+00:21:50.760 --> 00:21:53.440
+and they're showing what happens when
+
+00:21:52.039 --> 00:21:56.320
+you calculate self
+
+00:21:53.440 --> 00:21:58.880
+attention um and this is the self
+
+00:21:56.320 --> 00:22:01.799
+attention values for the word
+
+00:21:58.880 --> 00:22:04.720
+making and the self attention values for
+
+00:22:01.799 --> 00:22:08.400
+the word making are mostly attending to
+
+00:22:04.720 --> 00:22:09.919
+uh "more difficult" and that
+
+00:22:08.400 --> 00:22:12.679
+really closely matches with what I
+
+00:22:09.919 --> 00:22:16.360
+talked about before right
+
+00:22:12.679 --> 00:22:19.000
+so run in English is kind of a verb
+
+00:22:16.360 --> 00:22:21.159
+with lots of ambiguity uh like how you
+
+00:22:19.000 --> 00:22:23.640
+translate the
+
+00:22:21.159 --> 00:22:26.559
+word run would be very different based
+
+00:22:23.640 --> 00:22:29.279
+on you know the other words in the
+
+00:22:26.559 --> 00:22:31.320
+sentence make is also a word with lots
+
+00:22:29.279 --> 00:22:33.640
+of ambiguity so in order to understand
+
+00:22:31.320 --> 00:22:35.240
+how you would translate it you would uh
+
+00:22:33.640 --> 00:22:37.640
+need to pull in information from other
+
+00:22:35.240 --> 00:22:39.200
+parts of the sentence and specifically
+
+00:22:37.640 --> 00:22:40.960
+making something more difficult is
+
+00:22:39.200 --> 00:22:44.080
+different than like making a cake or
+
+00:22:40.960 --> 00:22:45.919
+making a house or something like that um
+
+00:22:44.080 --> 00:22:48.480
+and so because of that uh it's pulling
+
+00:22:45.919 --> 00:22:50.200
+in lots of information from over here
+
+00:22:48.480 --> 00:22:53.120
+but there are some attention heads that
+
+00:22:50.200 --> 00:22:54.640
+are like attending to the word itself uh
+
+00:22:53.120 --> 00:22:56.480
+so this is pulling in information from
+
+00:22:54.640 --> 00:22:58.480
+the word itself there's also another
+
+00:22:56.480 --> 00:23:00.400
+attention head that's pulling in
+
+00:22:58.480 --> 00:23:02.520
+uh information from the previous
+
+00:23:00.400 --> 00:23:04.440
+word so this could be one that's doing
+
+00:23:02.520 --> 00:23:07.200
+like syntactic disambiguation of some
+
+00:23:04.440 --> 00:23:08.799
+variety so you can see that each head is
+
+00:23:07.200 --> 00:23:10.200
+pulling in different varieties of
+
+00:23:08.799 --> 00:23:13.360
+information here which is kind of the
+
+00:23:10.200 --> 00:23:13.360
+function of
+
+00:23:15.679 --> 00:23:23.600
+multi-head attention so any questions yeah
+
+00:23:26.880 --> 00:23:30.640
+what happens if you have multi-head
+
+00:23:29.200 --> 00:23:31.919
+attention and the sentence is shorter
+
+00:23:30.640 --> 00:23:33.840
+than the number of heads so that's a
+
+00:23:31.919 --> 00:23:37.000
+good question um it's actually not a
+
+00:23:33.840 --> 00:23:39.600
+problem at all uh because here let's
+
+00:23:37.000 --> 00:23:41.360
+look at the um the length of the
+
+00:23:39.600 --> 00:23:44.240
+sentence the length of the sentence here
+
+00:23:41.360 --> 00:23:47.760
+would be three uh this number of
+
+00:23:44.240 --> 00:23:49.640
+columns is the length of the sentence um
+
+00:23:47.760 --> 00:23:52.720
+here the length of the sentence would be
+
+00:23:49.640 --> 00:23:55.200
+four but we're not splitting on the
+
+00:23:52.720 --> 00:23:57.080
+columns we're splitting on the rows so
+
+00:23:55.200 --> 00:23:58.559
+you need to make sure that the rows are
+
+00:23:57.080 --> 00:23:59.840
+greater than the number of heads and you
+
+00:23:58.559 --> 00:24:02.039
+always do that because you pick a
+
+00:23:59.840 --> 00:24:04.679
+representation size of something like
+
+00:24:02.039 --> 00:24:06.200
+512 and then you pick the number of
+
+00:24:04.679 --> 00:24:08.520
+heads to be equal to something like
+
+00:24:06.200 --> 00:24:10.559
+eight so you're sure that it's always
+
+00:24:08.520 --> 00:24:12.120
+divisible it's always larger there's
+
+00:24:10.559 --> 00:24:13.840
+actually something crazy called fine
+
+00:24:12.120 --> 00:24:15.919
+grained attention that was
proposed like
+
+00:24:13.840 --> 00:24:17.799
+right after attention was proposed where
+
+00:24:15.919 --> 00:24:21.240
+you made the number of heads equal to
+
+00:24:17.799 --> 00:24:23.640
+the number of uh representations but
+
+00:24:21.240 --> 00:24:27.000
+people stopped doing this just because
+
+00:24:23.640 --> 00:24:29.200
+it's uh like it's overkill you don't
+
+00:24:27.000 --> 00:24:30.720
+need that many attention heads and
+
+00:24:29.200 --> 00:24:32.039
+actually in the Transformer paper they
+
+00:24:30.720 --> 00:24:33.720
+experiment with different numbers of
+
+00:24:32.039 --> 00:24:37.080
+attention heads and found eight was like
+
+00:24:33.720 --> 00:24:38.880
+sufficient for their purposes yeah
+
+00:24:37.080 --> 00:24:41.240
+attention in the original paper is not
+
+00:24:38.880 --> 00:24:44.120
+causal right so it can like look into
+
+00:24:41.240 --> 00:24:47.520
+future tokens as
+
+00:24:44.120 --> 00:24:49.640
+well attention in the original attention
+
+00:24:47.520 --> 00:24:50.679
+paper from like 2014 where they first
+
+00:24:49.640 --> 00:24:54.880
+proposed
+
+00:24:50.679 --> 00:24:58.080
+attention um in the original paper in
+
+00:24:54.880 --> 00:25:01.279
+2014 where they first proposed attention
+
+00:24:58.080 --> 00:25:03.279
+they were doing exclusively cross
+
+00:25:01.279 --> 00:25:06.559
+attention like this so they were
+
+00:25:03.279 --> 00:25:08.559
+attending to um like they encoded
+
+00:25:06.559 --> 00:25:11.080
+everything with bidirectional RNNs and
+
+00:25:08.559 --> 00:25:13.760
+then they were just attending to things
+
+00:25:11.080 --> 00:25:16.120
+in the input not like doing causal
+
+00:25:13.760 --> 00:25:18.200
+attention um causal attention
+
+00:25:16.120 --> 00:25:20.159
+is like left
+
+00:25:18.200 --> 00:25:22.559
+to right or masked attention there's like
+
+00:25:20.159 --> 00:25:24.960
+different ways of saying it but masked
+
+00:25:22.559 --> 00:25:29.679
+attention was first proposed in the
+
+00:25:24.960 --> 00:25:29.679
+Transformer paper and it was um
+
+00:25:29.720 --> 00:25:36.320
+uh it basically was only in the
+
+00:25:31.960 --> 00:25:39.559
+output also um and then actually the
+
+00:25:36.320 --> 00:25:41.640
+first um decoder only model
+
+00:25:39.559 --> 00:25:45.720
+was basically like
+
+00:25:41.640 --> 00:25:48.720
+GPT-1 uh like the first GPT model uh and
+
+00:25:45.720 --> 00:25:52.240
+there they did causal or masked attention
+
+00:25:48.720 --> 00:25:56.399
+just on the output side or just uh modeling
+
+00:25:52.240 --> 00:26:00.000
+sequences yeah um so in the input to the
+
+00:25:56.399 --> 00:26:01.760
+GPT when we say that in the making
+
+00:26:00.000 --> 00:26:05.080
+example when we sort of looked at the
+
+00:26:01.760 --> 00:26:07.960
+self attention in the decoder only model
+
+00:26:05.080 --> 00:26:09.760
+would making not be able to attend to
+
+00:26:07.960 --> 00:26:11.840
+future tokens that come
+
+00:26:09.760 --> 00:26:13.399
+after uh that's a good question and
+
+00:26:11.840 --> 00:26:17.520
+basically the answer is
+
+00:26:13.399 --> 00:26:20.240
+yes um so that is one argument
+
+00:26:17.520 --> 00:26:21.559
+for why encoder decoder models might be
+
+00:26:20.240 --> 00:26:23.520
+more useful because you can do
+
+00:26:21.559 --> 00:26:26.159
+bidirectional attention on the
+
+00:26:23.520 --> 00:26:29.440
+inputs um
+
+00:26:26.159 --> 00:26:30.799
+and there's also so there's
actually
+something right in the middle it's not
+
+00:26:30.799 --> 00:26:35.799
+used super widely
+
+00:26:32.919 --> 00:26:38.360
+nowadays um but
+
+00:26:35.799 --> 00:26:40.760
+basically it's something called a prefix
+
+00:26:38.360 --> 00:26:40.760
+language
+
+00:26:43.520 --> 00:26:47.559
+model um a prefix language model is
+
+00:26:46.240 --> 00:26:50.880
+something where you only have the
+
+00:26:47.559 --> 00:26:52.679
+parameters of a decoder but
+
+00:26:50.880 --> 00:26:55.880
+during training you allow it to do
+
+00:26:52.679 --> 00:26:57.600
+either masked or unmasked attention so
+
+00:26:55.880 --> 00:26:59.960
+you only do masked attention when you're
+
+00:26:57.600 --> 00:27:02.360
+generating but you also like do unmasked
+
+00:26:59.960 --> 00:27:07.120
+attention so it's just a way to
+
+00:27:02.360 --> 00:27:08.799
+train the model um it's a small
+
+00:27:07.120 --> 00:27:10.799
+modification to how you train the model
+
+00:27:08.799 --> 00:27:12.440
+but uh some papers have said that's more
+
+00:27:10.799 --> 00:27:14.720
+effective but I guess it's like more
+
+00:27:12.440 --> 00:27:16.559
+complicated and I don't see it used
+
+00:27:14.720 --> 00:27:19.559
+super widely right
+
+00:27:16.559 --> 00:27:19.559
+now
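+
+A small sketch of that difference in terms of attention masks, assuming a sequence of length n with a prefix of length prefix_len (illustrative, not from the lecture slides):
+
+import torch
+
+def causal_mask(n):
+    # masked (causal) attention: position i may only attend to positions <= i
+    return torch.tril(torch.ones(n, n))
+
+def prefix_lm_mask(n, prefix_len):
+    # prefix language model: full bidirectional attention inside the prefix,
+    # ordinary causal attention for the generated continuation
+    mask = causal_mask(n)
+    mask[:prefix_len, :prefix_len] = 1.0
+    return mask
+
+# entries with mask == 0 get their attention scores set to -inf before the softmax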
+
+00:27:22.919 --> 00:27:29.520
+yeah uh the multi
+
+00:27:26.520 --> 00:27:29.520
+yeah
+
+00:27:33.679 --> 00:27:37.080
+the number of rows is the dimension of
+
+00:27:50.000 --> 00:27:56.519
+the yeah
+
+00:27:52.600 --> 00:27:58.159
+so the reason why we don't split on the
+
+00:27:56.519 --> 00:28:01.279
+rows
+
+00:27:58.159 --> 00:28:03.159
+so if we go back to the reason why
+
+00:28:01.279 --> 00:28:05.120
+attention is so powerful in the first
+
+00:28:03.159 --> 00:28:07.399
+place the reason why attention is so
+
+00:28:05.120 --> 00:28:10.320
+powerful in the first place is we're
+
+00:28:07.399 --> 00:28:12.679
+applying the exact same function no
+
+00:28:10.320 --> 00:28:15.159
+matter how long the length is so that
+
+00:28:12.679 --> 00:28:18.559
+allows us to extrapolate essentially to
+
+00:28:15.159 --> 00:28:21.080
+like as long as we want or as
+
+00:28:18.559 --> 00:28:23.080
+short as we want sentences if we were
+
+00:28:21.080 --> 00:28:24.760
+doing things like splitting on the
+
+00:28:23.080 --> 00:28:27.640
+length of the sentence we would run
+
+00:28:24.760 --> 00:28:30.159
+into the problem that we had a
+
+00:28:27.640 --> 00:28:31.600
+question about before which is like what
+
+00:28:30.159 --> 00:28:34.799
+if the number of attention heads is
+
+00:28:31.600 --> 00:28:36.159
+shorter than the sequence length so you
+
+00:28:34.799 --> 00:28:37.440
+could come up with a model
+
+00:28:36.159 --> 00:28:38.880
+that did something like that you could
+
+00:28:37.440 --> 00:28:41.279
+come up with a model that said okay I'm
+
+00:28:38.880 --> 00:28:42.679
+going to split the first quarter of the
+
+00:28:41.279 --> 00:28:43.919
+sentence and then the next quarter of
+
+00:28:42.679 --> 00:28:45.519
+the sentence and the next quarter of the
+
+00:28:43.919 --> 00:28:46.880
+sentence there actually were models
+
+00:28:45.519 --> 00:28:50.000
+like that back in the day like where
+
+00:28:46.880 --> 00:28:51.880
+people encoded different like quartiles
+
+00:28:50.000 --> 00:28:53.760
+of the sentence separately but it
+
+00:28:51.880 --> 00:28:55.360
+becomes a little bit tricky and like
+
+00:28:53.760 --> 00:28:56.840
+what if it's shorter and stuff like that
+
+00:28:55.360 --> 00:28:59.399
+so you need to deal with all these corner
+
+00:28:56.840 --> 00:28:59.399
+cases
+
+00:28:59.440 --> 00:29:06.200
+yeah cool um okay any other things
+
+00:29:03.679 --> 00:29:10.519
+these are all good questions
+
+00:29:06.200 --> 00:29:10.519
+so okay I'll move on to the next
+
+00:29:14.279 --> 00:29:22.080
+one okay so positional encoding um so
+
+00:29:18.000 --> 00:29:25.919
+this is another really core part of
+
+00:29:22.080 --> 00:29:25.919
+the Transformer
+
+00:29:26.320 --> 00:29:29.320
+model
+
+00:29:30.440 --> 00:29:36.200
+and the positional encoding uh goes in
+
+00:29:33.679 --> 00:29:37.880
+here it's added together with the input
+
+00:29:36.200 --> 00:29:40.440
+embedding
+
+00:29:37.880 --> 00:29:42.279
+and because the Transformer model is
+
+00:29:40.440 --> 00:29:45.519
+purely
+
+00:29:42.279 --> 00:29:47.000
+attentional if embeddings only are used
+
+00:29:45.519 --> 00:29:50.559
+there actually would be no way to
+
+00:29:47.000 --> 00:29:52.240
+distinguish between identical words so
+
+00:29:50.559 --> 00:29:55.519
+because
+
+00:29:52.240 --> 00:29:58.519
+you're just taking the input embedding
+
+00:29:55.519 --> 00:30:00.840
+if you had a big dog and a big cat the
+
+00:29:58.519 --> 00:30:02.320
+attention values from every other place
+
+00:30:00.840 --> 00:30:04.519
+in the sentence would be guaranteed to
+
+00:30:02.320 --> 00:30:07.519
+be the same for big right so you would
+
+00:30:04.519 --> 00:30:09.000
+always have the same attention value for
+
+00:30:07.519 --> 00:30:11.480
+these words because their vectors are
+
+00:30:09.000 --> 00:30:13.919
+identical and that's a problem I guess
+
+00:30:11.480 --> 00:30:17.519
+because like as I said sometimes
+
+00:30:13.919 --> 00:30:19.440
+syntactic information needs
+
+00:30:17.519 --> 00:30:22.320
+to be pulled in from like locally
+
+00:30:19.440 --> 00:30:24.200
+coherent contexts so that's a problem a
+
+00:30:22.320 --> 00:30:25.640
+couple ways you can fix this the first
+
+00:30:24.200 --> 00:30:28.000
+way you can fix this is use something
+
+00:30:25.640 --> 00:30:30.960
+that is sensitive to position like an
+
+00:30:28.000 --> 00:30:32.720
+RNN um and an RNN you know looks at
+
+00:30:30.960 --> 00:30:34.640
+which words came before which words came
+
+00:30:32.720 --> 00:30:36.360
+after and stuff like that so that would
+
+00:30:34.640 --> 00:30:38.600
+solve your problem but the whole point
+
+00:30:36.360 --> 00:30:41.679
+of Transformers or attention is all you
+
+00:30:38.600 --> 00:30:43.840
+need is to not use RNNs so uh we need
+
+00:30:41.679 --> 00:30:47.399
+another way to fix this
+
+00:30:43.840 --> 00:30:49.360
+problem um so the way this is fixed is
+
+00:30:47.399 --> 00:30:51.760
+uh using something called positional
+
+00:30:49.360 --> 00:30:53.399
+encodings and so positional encodings
+
+00:30:51.760 --> 00:30:56.679
+add another embedding that's based on
+
+00:30:53.399 --> 00:30:57.840
+the word position and so in addition to
+
+00:30:56.679 --> 00:30:59.799
+having something that's based on the
+
+00:30:57.840 --> 00:31:02.080
+word identity you have another embedding
+
+00:30:59.799 --> 00:31:04.880
+that's based on the position so then the
+
+00:31:02.080 --> 00:31:07.639
+word big uh that appears in position two
+
+00:31:04.880 --> 00:31:10.000
+would be uh embedding of big plus
+
+00:31:07.639 --> 00:31:12.880
+embedding of position two and the word
+
+00:31:10.000 --> 00:31:14.120
+big that appears over here would be uh
+
+00:31:12.880 --> 00:31:16.080
+the embedding of big and then the
+
+00:31:14.120 --> 00:31:18.840
+embedding of position eight for example
+
+00:31:16.080 --> 00:31:20.440
+so uh that kind of solves
+
+00:31:18.840 --> 00:31:22.399
+that
+
+00:31:20.440 --> 00:31:24.559
+problem there's a number of different
+
+00:31:22.399 --> 00:31:26.000
+ways to make these uh the original
+
+00:31:24.559 --> 00:31:28.480
+Transformer
+
+00:31:26.000 --> 00:31:31.440
+paper they did it using something called
+
+00:31:28.480 --> 00:31:32.320
+sinusoidal encodings this is kind of uh
+
+00:31:31.440 --> 00:31:35.519
+one of
+
+00:31:32.320 --> 00:31:37.120
+the, I don't know, when this paper came
+
+00:31:35.519 --> 00:31:39.600
+out and we were first reading it when it
+
+00:31:37.120 --> 00:31:40.760
+first came out it was like they
+
+00:31:39.600 --> 00:31:42.320
+explained what they did and they
+
+00:31:40.760 --> 00:31:43.880
+explained very briefly why they did it
+
+00:31:42.320 --> 00:31:46.639
+but it was kind of like a mystery like
+
+00:31:43.880 --> 00:31:48.000
+nobody like actually understood what
+
+00:31:46.639 --> 00:31:49.399
+they wrote in the paper luckily now
+
+00:31:48.000 --> 00:31:52.799
+there's a lot of nice blogs that
+
+00:31:49.399 --> 00:31:56.360
+actually explain this um and so the way
+
+00:31:52.799 --> 00:31:58.159
+these work essentially is you have a uh
+
+00:31:56.360 --> 00:32:01.760
+a sine
+
+00:31:58.159 --> 00:32:05.960
+uh like this and the sine is of this uh
+
+00:32:01.760 --> 00:32:08.919
+weight times the time step and in
+
+00:32:05.960 --> 00:32:12.159
+the output every even numbered
+
+00:32:08.919 --> 00:32:15.600
+embedding uses a sine every odd
+
+00:32:12.159 --> 00:32:20.679
+numbered embedding uses the cosine and
+
+00:32:15.600 --> 00:32:25.440
+this omega over here is 1 over 10,000 to
+
+00:32:20.679 --> 00:32:28.399
+the 2k over d um and so that's the value
+
+00:32:25.440 --> 00:32:31.240
+and then this is the dimension size
+
+00:32:28.399 --> 00:32:36.000
+and what these embeddings look like is
+
+00:32:31.240 --> 00:32:38.159
+something like this so um sorry
+
+00:32:36.000 --> 00:32:40.039
+this is very small and also I should
+
+00:32:38.159 --> 00:32:43.440
+acknowledge that this comes from this
+
+00:32:40.039 --> 00:32:46.279
+blog up here um but this is the position
+
+00:32:43.440 --> 00:32:49.200
+in the sentence and then this is the uh
+
+00:32:46.279 --> 00:32:52.760
+embedding size I
+
+00:32:49.200 --> 00:32:54.919
+believe um and so why did they
+
+00:32:52.760 --> 00:32:57.760
+choose to do it this way they chose to
+
+00:32:54.919 --> 00:33:00.240
+do it this way because if you multiply
+
+00:32:57.760 --> 00:33:03.000
+these positional encodings together you
+
+00:33:00.240 --> 00:33:05.760
+get something that looks a bit like this
+
+00:33:03.000 --> 00:33:07.960
+and so if you multiply the two vectors
+
+00:33:05.760 --> 00:33:10.440
+together you get something where if
+
+00:33:07.960 --> 00:33:13.000
+you're closer together in position space
+
+00:33:10.440 --> 00:33:15.480
+you get a higher number and so that
+
+00:33:13.000 --> 00:33:18.480
+kind of gives you a bias towards upping the
+
+00:33:15.480 --> 00:33:20.840
+attention values of uh things that are
+
+00:33:18.480 --> 00:33:23.320
+closer together at least right at the
+
+00:33:20.840 --> 00:33:24.799
+very beginning uh like layers of the
+
+00:33:23.320 --> 00:33:27.960
+model where it's kind of more important
+
+00:33:24.799 --> 00:33:30.000
+because you don't have it uh
+
+00:33:27.960 --> 00:33:31.600
+calculated from the
+
+00:33:30.000 --> 00:33:34.440
+previous layer
+
+00:33:31.600 --> 00:33:36.000
+so this is a basic idea I
+
+00:33:34.440 --> 00:33:37.960
+think the thing on the right is the most
+
+00:33:36.000 --> 00:33:39.320
+important thing to know here which is
+
+00:33:37.960 --> 00:33:41.840
+like this is the reason why they chose
+
+00:33:39.320 --> 00:33:45.360
+to do it that way um but that's the
+
+00:33:41.840 --> 00:33:49.440
+basic idea yeah probably I think you at this
+
+00:33:45.360 --> 00:33:52.240
+line being positional input like if it
+
+00:33:49.440 --> 00:33:52.240
+next to you you would
+
+00:33:53.840 --> 00:33:58.960
+multiply I just think why do we need to
+
+00:34:02.120 --> 00:34:06.720
+um so sorry which part were you talking
+
+00:34:19.720 --> 00:34:24.440
+about yeah so I'm going to talk about that
+
+00:34:22.280 --> 00:34:28.879
+in a second
+
+00:34:24.440 --> 00:34:28.879
+actually um any other
+
+00:34:30.520 --> 00:34:35.520
+okay so this is what is done um note
+
+00:34:33.879 --> 00:34:38.200
+that these are added right at the very
+
+00:34:35.520 --> 00:34:41.000
+beginning and then you pass this through
+
+00:34:38.200 --> 00:34:43.359
+every layer but basically at the very
+
+00:34:41.000 --> 00:34:45.919
+beginning layer at the very first layer
+
+00:34:43.359 --> 00:34:48.119
+by using these positional encodings you
+
+00:34:45.919 --> 00:34:50.399
+can kind of
+
+00:34:48.119 --> 00:34:52.560
+disambiguate you can disambiguate this
+
+00:34:50.399 --> 00:34:54.760
+case here and then after you've passed it
+
+00:34:52.560 --> 00:34:56.520
+through the first layer you're combining
+
+00:34:54.760 --> 00:34:58.440
+together information anyway so you can
+
+00:34:56.520 --> 00:35:03.480
+pull in information about the local
+
+00:34:58.440 --> 00:35:06.960
+context and now like
+
+00:35:03.480 --> 00:35:08.400
+it's essentially already handled for
+
+00:35:06.960 --> 00:35:11.040
+you another thing is if you have
+
+00:35:08.400 --> 00:35:12.480
+residual connections this gets passed um
+
+00:35:11.040 --> 00:35:14.440
+into the following layers which I'll
+
+00:35:12.480 --> 00:35:17.440
+talk about in a second
+
+00:35:14.440 --> 00:35:17.440
+so
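+
+A short sketch of those sinusoidal encodings, assuming d-dimensional embeddings with sines in the even dimensions and cosines in the odd ones:
+
+import torch
+
+def sinusoidal_encoding(max_len, d):
+    t = torch.arange(max_len).unsqueeze(1).float()  # positions, one per row
+    two_k = torch.arange(0, d, 2).float()           # the even dimension indices, i.e. 2k
+    omega = 1.0 / 10000 ** (two_k / d)              # omega_k = 1 / 10000^(2k/d)
+    pe = torch.zeros(max_len, d)
+    pe[:, 0::2] = torch.sin(t * omega)              # even dimensions get the sine
+    pe[:, 1::2] = torch.cos(t * omega)              # odd dimensions get the cosine
+    return pe  # added to the token embeddings right at the very beginning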
+
+00:35:17.640 --> 00:35:23.240
+um the second thing that you could do is
+
+00:35:19.839 --> 00:35:27.960
+learned encodings and learned encodings
+
+00:35:23.240 --> 00:35:30.880
+basically what they do is they um create
+
+00:35:27.960 --> 00:35:36.160
+a learnable uh embedding that you just
+
+00:35:30.880 --> 00:35:37.800
+add in and um this is super simple uh so
+
+00:35:36.160 --> 00:35:40.200
+it's
+
+00:35:37.800 --> 00:35:42.720
+like just like you learned the embedding
+
+00:35:40.200 --> 00:35:45.320
+for w big you learn the embedding for w
+
+00:35:42.720 --> 00:35:48.200
+plus two or uh plus
+
+00:35:45.320 --> 00:35:49.640
+six and this is simpler uh like you
+
+00:35:48.200 --> 00:35:51.800
+don't need to think about sines and
+
+00:35:49.640 --> 00:35:53.520
+cosines and stuff like that um it's
+
+00:35:51.800 --> 00:35:55.400
+also more flexible because the model can
+
+00:35:53.520 --> 00:35:56.960
+learn anything it needs to you know
+
+00:35:55.400 --> 00:35:59.640
+learn in order to do a good job of
+
+00:35:56.960 --> 00:36:01.520
+minimizing the loss but the
+
+00:35:59.640 --> 00:36:03.760
+biggest disadvantage is it makes it
+
+00:36:01.520 --> 00:36:06.480
+impossible to extrapolate to longer
+
+00:36:03.760 --> 00:36:08.599
+sequences than you saw at training time
+
+00:36:06.480 --> 00:36:12.880
+so uh because you have no learned
+
+00:36:08.599 --> 00:36:16.079
+embeddings for longer sequences it's
+
+00:36:12.880 --> 00:36:18.400
+at least in principle
+
+00:36:16.079 --> 00:36:20.119
+impossible to extrapolate to longer
+
+00:36:18.400 --> 00:36:21.960
+sequences unless you do some sort of
+
+00:36:20.119 --> 00:36:22.960
+heuristics and if you do heuristics
+
+00:36:21.960 --> 00:36:25.400
+you don't really know what's going to
+
+00:36:22.960 --> 00:36:27.640
+happen so um that's the disadvantage to
+
+00:36:25.400 --> 00:36:29.680
+doing it this way
+
+00:36:27.640 --> 00:36:31.319
+in contrast you know with this you just
+
+00:36:29.680 --> 00:36:33.160
+make K larger and calculate this
+
+00:36:31.319 --> 00:36:35.400
+deterministic function and you could you
+
+00:36:33.160 --> 00:36:38.160
+know theoretically extrapolate but
+
+00:36:35.400 --> 00:36:39.760
+empirically models even ones that use
+
+00:36:38.160 --> 00:36:41.359
+this sort of extrapolatable embedding
+
+00:36:39.760 --> 00:36:43.720
+don't do super well at extrapolating to
+
+00:36:41.359 --> 00:36:43.720
+longer
+sequences
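+
+The learned variant in sketch form, assuming PyTorch; the fixed max_len is exactly the extrapolation limit just described:
+
+import torch
+import torch.nn as nn
+
+class LearnedPositions(nn.Module):
+    def __init__(self, max_len, d):
+        super().__init__()
+        # one learnable vector per absolute position, trained like word embeddings
+        self.pos = nn.Embedding(max_len, d)
+
+    def forward(self, token_embeddings):
+        n = token_embeddings.shape[0]
+        positions = torch.arange(n, device=token_embeddings.device)
+        # fails with an IndexError for n > max_len: no embedding exists for unseen positions
+        return token_embeddings + self.pos(positions)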
+
+00:36:45.960 --> 00:36:50.960
+um so going back to the
+
+00:36:49.040 --> 00:36:52.920
+question um there's a distinction
+
+00:36:50.960 --> 00:36:56.040
+between absolute versus relative
+
+00:36:52.920 --> 00:36:57.680
+positional encodings and absolute
+
+00:36:56.040 --> 00:37:00.040
+positional encodings are like what I
+
+00:36:57.680 --> 00:37:05.200
+said before they're basically positional
+
+00:37:00.040 --> 00:37:08.240
+encodings where you um
+
+00:37:05.200 --> 00:37:10.800
+specifically add in an encoding at each
+
+00:37:08.240 --> 00:37:14.440
+position but you don't consider whether
+
+00:37:10.800 --> 00:37:16.480
+like one query vector is close to a key
+
+00:37:14.440 --> 00:37:18.079
+vector or far away from a key vector or
+
+00:37:16.480 --> 00:37:19.400
+anything like that you don't consider
+
+00:37:18.079 --> 00:37:21.240
+that
+
+00:37:19.400 --> 00:37:24.280
+directly
+
+00:37:21.240 --> 00:37:27.920
+um on the other hand relative positional
+
+00:37:24.280 --> 00:37:31.359
+encodings explicitly encode the relative
+
+00:37:27.920 --> 00:37:32.240
+position and so what this means is when
+
+00:37:31.359 --> 00:37:35.599
+you do
+
+00:37:32.240 --> 00:37:38.599
+attention um when you do attention it
+
+00:37:35.599 --> 00:37:40.760
+explicitly thinks about whether you know
+
+00:37:38.599 --> 00:37:43.640
+a particular embedding is not in
+
+00:37:40.760 --> 00:37:46.640
+position eight but whether
+
+00:37:43.640 --> 00:37:49.359
+the key embedding is
+
+00:37:46.640 --> 00:37:51.319
+like minus 5 from the query embedding or
+
+00:37:49.359 --> 00:37:54.119
+minus 8 from the query
+
+00:37:51.319 --> 00:37:56.280
+embedding and the first paper that did
+
+00:37:54.119 --> 00:37:57.680
+this they just learned relative uh
+
+00:37:56.280 --> 00:38:01.760
+positional
+
+00:37:57.680 --> 00:38:01.760
+encodings and they learned it
+
+00:38:02.520 --> 00:38:09.119
+um if I remember correctly they
+
+00:38:05.920 --> 00:38:09.119
+basically learned a
+
+00:38:09.160 --> 00:38:15.200
+scalar where you're centered at zero and
+
+00:38:12.800 --> 00:38:19.640
+then you have like
+
+00:38:15.200 --> 00:38:23.359
+um you have like minus and uh you have
+
+00:38:19.640 --> 00:38:26.640
+minus and plus uh a certain distance and
+
+00:38:23.359 --> 00:38:28.760
+then you also um cut this off at maybe
+
+00:38:26.640 --> 00:38:33.040
+like minus uh
+
+00:38:28.760 --> 00:38:35.200
+128 and plus 128 or something like this
+
+00:38:33.040 --> 00:38:36.599
+uh so you have a fixed length vector and
+
+00:38:35.200 --> 00:38:38.920
+anything that's farther away from that
+
+00:38:36.599 --> 00:38:43.240
+gets the same uh embedding
+
+00:38:38.920 --> 00:38:45.800
+basically um the problem with this is uh
+
+00:38:43.240 --> 00:38:47.800
+number one it adds learnable parameters
+
+00:38:45.800 --> 00:38:50.119
+number two it's a little bit more
+
+00:38:47.800 --> 00:38:53.520
+computationally uh expensive to apply
+
+00:38:50.119 --> 00:38:55.119
+this every time uh onto your attention
+
+00:38:53.520 --> 00:38:57.520
+matrix
+
+00:38:55.119 --> 00:38:59.079
+and uh because you need to apply this at
+
+00:38:57.520 --> 00:39:01.720
+every layer you need to apply this every
+
+00:38:59.079 --> 00:39:04.760
+time you do attention at every
+
+00:39:01.720 --> 00:39:07.560
+layer so instead there was a really
+
+00:39:04.760 --> 00:39:10.000
+clever idea uh called rotary positional
+
+00:39:07.560 --> 00:39:11.960
+encodings and rotary positional
+
+00:39:10.000 --> 00:39:14.400
+encodings are
+
+00:39:11.960 --> 00:39:19.280
+basically uh kind of like an absolute
+
+00:39:14.400 --> 00:39:22.079
+positional encoding um with a lot of
+
+00:39:19.280 --> 00:39:23.760
+the desirable qualities of relative
+
+00:39:22.079 --> 00:39:26.720
+positional
+
+00:39:23.760 --> 00:39:30.160
+encodings and their basic idea was this so
+
+00:39:26.720 --> 00:39:32.920
+their basic idea was that they wanted to
+
+00:39:30.160 --> 00:39:36.440
+um come up with something where you have
+
+00:39:32.920 --> 00:39:39.480
+an encoding function that takes
+
+00:39:36.440 --> 00:39:42.599
+in the query vector and its
+
+00:39:39.480 --> 00:39:44.079
+position and you have another encoding
+
+00:39:42.599 --> 00:39:46.079
+function that takes in the key
+
+00:39:44.079 --> 00:39:50.760
+vector and its
+
+00:39:46.079 --> 00:39:52.119
+position and the product of these two
+
+00:39:50.760 --> 00:39:54.960
+becomes
+
+00:39:52.119 --> 00:39:57.359
+another function that is a
+
+00:39:54.960 --> 00:40:00.720
+function only of
+
+00:39:57.359 --> 00:40:03.400
+the two vectors and the relative
+
+00:40:00.720 --> 00:40:05.599
+position and so you lose all information
+
+00:40:03.400 --> 00:40:07.640
+about the absolute position um you only
+
+00:40:05.599 --> 00:40:09.440
+have information about the relative
+
+00:40:07.640 --> 00:40:12.000
+position
+
+00:40:09.440 --> 00:40:15.079
+and this is trickier than it seems
+
+00:40:12.000 --> 00:40:16.920
+basically because like you
+
+00:40:15.079 --> 00:40:20.040
+need to have something where it's not
+
+00:40:16.920 --> 00:40:22.359
+possible to uh recover the absolute
+
+00:40:20.040 --> 00:40:24.480
+position because you want it
+
+00:40:22.359 --> 00:40:25.680
+to rely only on the relative position
+
+00:40:24.480 --> 00:40:27.359
+because that will allow it to
+
+00:40:25.680 --> 00:40:30.280
+extrapolate that will allow it to
+
+00:40:27.359 --> 00:40:33.040
+generalize well when you see
+
+00:40:30.280 --> 00:40:35.119
+new outputs so basically what they do is
+
+00:40:33.040 --> 00:40:37.839
00:40:37.839
+they do a lot of math uh that I'm not
+
+00:40:35.119 --> 00:40:42.680
+going to uh cover in a lot of detail
+
+00:40:37.839 --> 00:40:44.960
+here but by using uh imaginary numbers
+
+00:40:42.680 --> 00:40:48.319
+uh trigonometry and imaginary numbers
+
+00:40:44.960 --> 00:40:50.640
+you can essentially come up with a uh a
+
+00:40:48.319 --> 00:40:54.800
+thing where you have the query vectors
+
+00:40:50.640 --> 00:40:54.800
+and the key vectors I
+
+00:40:55.319 --> 00:40:58.599
+believe I
+
+00:40:59.079 --> 00:41:04.720
+think I might have that backwards I'll
+
+00:41:02.000 --> 00:41:07.040
+I'll have to check that um but
+
+00:41:04.720 --> 00:41:11.400
+basically you take the vectors one of
+
+00:41:07.040 --> 00:41:13.920
+the vectors and you add in um the cosine
+
+00:41:11.400 --> 00:41:16.319
+of m theta 1 where theta is a parameter
+
+00:41:13.920 --> 00:41:19.920
+similar to the omega parameter that we
+
+00:41:16.319 --> 00:41:22.440
+had before um m is the uh the time step
+
+00:41:19.920 --> 00:41:26.400
+the position in the sequence and then
+
+00:41:22.440 --> 00:41:27.760
+on the other side you have sine of m
+
+00:41:26.400 --> 00:41:33.599
+theta
+
+00:41:27.760 --> 00:41:37.760
+1 over here and then you swap around
+
+00:41:33.599 --> 00:41:41.000
+the order like with a minus uh and you invert
+
+00:41:37.760 --> 00:41:43.880
+the sign here uh sorry invert the
+
+00:41:41.000 --> 00:41:45.520
+embedding here and if you do this you
+
+00:41:43.880 --> 00:41:49.079
+can prove that essentially you get a
+
+00:41:45.520 --> 00:41:52.040
+function that has this property and so
+
+00:41:49.079 --> 00:41:53.720
+by doing this you're you're
+
+00:41:52.040 --> 00:41:54.960
+modifying these directly but
+
+00:41:53.720 --> 00:41:56.920
+you're getting some of the nice
+
+00:41:54.960 --> 00:41:58.680
+properties of relative positional encodings
+
+00:41:56.920 --> 00:42:00.520
+and you're also kind of guaranteed that
+
+00:41:58.680 --> 00:42:02.960
+this will extrapolate infinitely because
+
+00:42:00.520 --> 00:42:06.200
+you're removing all information about
+
+00:42:02.960 --> 00:42:07.680
+the well not entirely infinitely but you're
+
+00:42:06.200 --> 00:42:09.160
+guaranteed that this will extrapolate
+
+00:42:07.680 --> 00:42:11.000
+well because you're removing all of the
+
+00:42:09.160 --> 00:42:13.560
+information about the absolute position
+
+00:42:11.000 --> 00:42:16.520
+here um so this is what's actually used
+
+00:42:13.560 --> 00:42:19.119
+in llama um so this is what's used in
+
+00:42:16.520 --> 00:42:22.920
+llama and uh it has a good positive
+
+00:42:19.119 --> 00:42:22.920
+effect on fitting models in various settings
+
+00:42:24.559 --> 00:42:31.319
+yeah so
+
+00:42:27.599 --> 00:42:35.520
+the sentence embeddings look very
+
+00:42:31.319 --> 00:42:37.359
+similar is this more sensitive to tokens at
+
+00:42:35.520 --> 00:42:39.640
+the beginning of
+
+00:42:37.359 --> 00:42:40.839
+the I don't know if it's more sensitive
+
+00:42:39.640 --> 00:42:42.319
+to tokens at the beginning of the
+
+00:42:40.839 --> 00:42:45.160
+sentence and the end of the sentence
+
+00:42:42.319 --> 00:42:48.160
+partly because the the earlier ones
+
+00:42:45.160 --> 00:42:49.720
+don't look like I mean the ones at the
+
+00:42:48.160 --> 00:42:52.359
+beginning also look similar right like
+
+00:42:49.720 --> 00:42:55.839
+all of these values are the
+
+00:42:52.359 --> 00:42:58.599
+same uh like all of the values up here
+
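(Editor's sketch, not the lecture's code: one common way to implement the rotation described here, pairing up dimensions and rotating each pair by an angle m * theta_i that depends on the position m. Simplified to a single head with no batching.)

```python
import torch

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # x: (seq_len, dim). Each pair (x1, x2) is rotated by m * theta_i:
    #   x1' = x1 * cos(m * theta) - x2 * sin(m * theta)
    #   x2' = x2 * cos(m * theta) + x1 * sin(m * theta)
    seq_len, dim = x.shape
    theta = base ** (-torch.arange(0, dim, 2).float() / dim)  # (dim/2,)
    m = torch.arange(seq_len).float().unsqueeze(1)            # (seq_len, 1)
    cos, sin = torch.cos(m * theta), torch.sin(m * theta)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x2 * cos + x1 * sin
    return out

# Applied to queries and keys (not values) before computing attention;
# the dot product of two rotated vectors depends only on their offset:
q, k = torch.randn(16, 64), torch.randn(16, 64)
scores = rope(q) @ rope(k).T
```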
+00:42:55.839 --> 00:42:59.640
+are the same between one and two right
+
+00:42:58.599 --> 00:43:01.119
+so the ones at the beginning of the
+
+00:42:59.640 --> 00:43:02.400
+sentence also look similar the ones at
+
+00:43:01.119 --> 00:43:05.440
+the end of the sentence also look
+
+00:43:02.400 --> 00:43:08.720
+similar um if you have something at the
+
+00:43:05.440 --> 00:43:08.720
+edges and at
+
+00:43:10.240 --> 00:43:16.440
+one not really because the the things at
+
+00:43:13.119 --> 00:43:19.119
+the beginning look different right oh
+
+00:43:16.440 --> 00:43:19.119
+probably
+
+00:43:19.720 --> 00:43:25.319
+reading yeah so this is the
+
+00:43:22.800 --> 00:43:26.920
+position yeah yeah okay yeah sorry this
+
+00:43:25.319 --> 00:43:29.000
+is very small because they just grabbed
+
+00:43:26.920 --> 00:43:31.200
+it from this uh this blog post here but
+
+00:43:29.000 --> 00:43:35.720
+yeah this is the position and then this
+
+00:43:31.200 --> 00:43:35.720
+is the the embedding size
+
+00:43:38.960 --> 00:43:46.680
+yeah okay um yeah but this is really
+
+00:43:42.480 --> 00:43:49.280
+really important um this uh kind
+
+00:43:46.680 --> 00:43:50.760
+of change of the positional encodings
+
+00:43:49.280 --> 00:43:54.079
+and I'll talk a little bit about
+
+00:43:50.760 --> 00:43:57.280
+that at the very end yeah does rope not
+
+00:43:54.079 --> 00:44:00.839
+take any um sort of like maximum
+
+00:43:57.280 --> 00:44:04.599
+context length or anything like that rope um
+
+00:44:00.839 --> 00:44:07.200
+so this does not have a maximum context
+
+00:44:04.599 --> 00:44:09.880
+length this actually also doesn't have a
+
+00:44:07.200 --> 00:44:11.520
+maximum context length but rope
+
+00:44:09.880 --> 00:44:15.720
+extrapolates better because you
+
+00:44:11.520 --> 00:44:18.119
+basically in rope you entirely lose
+
+00:44:15.720 --> 00:44:20.880
+information about where you are in the
+
+00:44:18.119 --> 00:44:22.440
+um in the sequence whereas with the absolute
+
+00:44:20.880 --> 00:44:23.680
+positional encodings you still get the
+
+00:44:22.440 --> 00:44:25.359
+information about where you are in the
+
+00:44:23.680 --> 00:44:27.559
+sequence so the model can overfit to it
+
+00:44:25.359 --> 00:44:29.040
+better professor in this example when we
+
+00:44:27.559 --> 00:44:30.960
+want to sort of like apply this to a
+
+00:44:29.040 --> 00:44:36.119
+longer sequence we have to modify the K
+
+00:44:30.960 --> 00:44:38.920
+to get it to work you do not oh we
+
+00:44:36.119 --> 00:44:40.680
+don't yeah so this K is the size this is
+
+00:44:38.920 --> 00:44:43.480
+the size of the
+
+00:44:40.680 --> 00:44:46.640
+embedding okay so you can you can
+
+00:44:43.480 --> 00:44:49.599
+extrapolate by just increasing
+
+00:44:46.640 --> 00:44:51.520
+T beyond like even if you've never seen
+
+00:44:49.599 --> 00:44:53.680
+something about T if you increase T you
+
+00:44:51.520 --> 00:44:55.079
+can still calculate this theoretically
+
+00:44:53.680 --> 00:44:56.960
+which is not the case for the learned
+
+00:44:55.079 --> 00:44:59.880
+encoding so with the learned encoding you don't
+
+00:44:56.960 --> 00:44:59.880
+have any information
+
+00:45:03.280 --> 00:45:08.559
+about it okay cool so this is uh this is an
+
+00:45:07.040 --> 00:45:11.040
+important thing to know you'll also have
+
+00:45:08.559 --> 00:45:14.119
+to um implement this for the assignment
+
+00:45:11.040 --> 00:45:17.520
+I believe so good thing to pay attention
+
+00:45:14.119 --> 00:45:19.640
+to um okay next is layer normalization
+
+00:45:17.520 --> 
00:45:21.319 +and residual connections so layer + +00:45:19.640 --> 00:45:24.000 +normalization and residual connections + +00:45:21.319 --> 00:45:26.839 +are important for stabilizing um + +00:45:24.000 --> 00:45:29.960 +stabilizing training in + +00:45:26.839 --> 00:45:31.720 +Transformers and I talked before about + +00:45:29.960 --> 00:45:33.200 +rnns with gradients and training + +00:45:31.720 --> 00:45:35.720 +instability so in + +00:45:33.200 --> 00:45:39.240 +rnns um + +00:45:35.720 --> 00:45:41.559 +we uh saw how back propop uh we talked + +00:45:39.240 --> 00:45:44.280 +about how back propop can reduce the + +00:45:41.559 --> 00:45:48.240 +gradients the exact same thing would be + +00:45:44.280 --> 00:45:50.119 +the case for um Transformers and in fact + +00:45:48.240 --> 00:45:51.720 +there was a problem in the original + +00:45:50.119 --> 00:45:55.200 +formulation of the Transformer that + +00:45:51.720 --> 00:45:56.720 +caused this gradient uh Vanishing to + +00:45:55.200 --> 00:45:59.119 +occur + +00:45:56.720 --> 00:46:00.920 +and it uh this problem has been + +00:45:59.119 --> 00:46:02.880 +rectified in new newer versions of + +00:46:00.920 --> 00:46:04.640 +Transformers so I'll I'll talk a little + +00:46:02.880 --> 00:46:09.760 +bit about both of + +00:46:04.640 --> 00:46:13.599 +those um so because we're running this + +00:46:09.760 --> 00:46:15.640 +multiple times um you know we have eight + +00:46:13.599 --> 00:46:17.200 +layers of Transformers or 16 layers of + +00:46:15.640 --> 00:46:19.839 +Transformers or 12 layers of + +00:46:17.200 --> 00:46:22.000 +Transformers we do have gradient uh + +00:46:19.839 --> 00:46:25.240 +gradients disappearing at the beginning + +00:46:22.000 --> 00:46:27.960 +if we're not careful and so there's two + +00:46:25.240 --> 00:46:29.559 +things that do uh the first thing is + +00:46:27.960 --> 00:46:32.200 +layer normalization and what layer + +00:46:29.559 --> 00:46:34.359 +normalization does is this is not so + +00:46:32.200 --> 00:46:35.920 +much for like gradients disappearing + +00:46:34.359 --> 00:46:38.680 +it's more for preventing gradients from + +00:46:35.920 --> 00:46:40.119 +exploding or becoming very unstable and + +00:46:38.680 --> 00:46:42.599 +the way it works is it normalizes + +00:46:40.119 --> 00:46:45.319 +outputs to be within a consistent range + +00:46:42.599 --> 00:46:48.680 +uh preventing too much variance in the + +00:46:45.319 --> 00:46:52.359 +scale so uh the way layer Norm looks + +00:46:48.680 --> 00:46:54.040 +like is this and um it's not too + +00:46:52.359 --> 00:46:55.400 +complicated but it's a little bit of a + +00:46:54.040 --> 00:46:58.400 +complicated equation so I'll go through + +00:46:55.400 --> 00:47:01.119 +it one to the time the first thing is we + +00:46:58.400 --> 00:47:02.640 +take the mean of the vectors so we just + +00:47:01.119 --> 00:47:04.839 +add up all of the vectors and we divide + +00:47:02.640 --> 00:47:07.119 +by the number of vectors that we have + +00:47:04.839 --> 00:47:09.240 +here or sorry the number of elements in + +00:47:07.119 --> 00:47:10.960 +the vector that we have here the next + +00:47:09.240 --> 00:47:16.599 +thing is the standard deviation of the + +00:47:10.960 --> 00:47:18.520 +vector so we add up uh the value minus + +00:47:16.599 --> 00:47:21.000 +the mean squared and then take the + +00:47:18.520 --> 00:47:22.119 +square root of it just stand normal + +00:47:21.000 --> 00:47:26.000 +standard + +00:47:22.119 --> 00:47:29.480 +deviation um and so if we were 
just
+00:47:26.000 --> 00:47:31.559
+doing the vector mean and the vector
+
+00:47:29.480 --> 00:47:33.960
+standard deviation what we would be
+
+00:47:31.559 --> 00:47:35.440
+doing would be we would be normalizing
+
+00:47:33.960 --> 00:47:38.000
+all of the values in the vector to have
+
+00:47:35.440 --> 00:47:41.319
+zero mean and unit variance uh sorry
+
+00:47:38.000 --> 00:47:45.040
+zero mean and divided by the standard
+
+00:47:41.319 --> 00:47:48.079
+deviation however um layer normalization
+
+00:47:45.040 --> 00:47:50.240
+does two other things also so layer
+
+00:47:48.079 --> 00:47:54.280
+normalization adds in a
+
+00:47:50.240 --> 00:47:56.000
+bias and it multiplies by a gain and
+
+00:47:54.280 --> 00:47:58.760
+what adding in the bias and multiplying by
+
+00:47:56.000 --> 00:48:00.319
+the gain means is after we've normalized
+
+00:47:58.760 --> 00:48:04.160
+everything down to be kind of in a
+
+00:48:00.319 --> 00:48:08.319
+standard range we then move it out of
+
+00:48:04.160 --> 00:48:08.319
+the standard range so we're taking
+
+00:48:08.960 --> 00:48:13.119
+like this vector from over
+
+00:48:15.440 --> 00:48:24.640
+here we're normalizing it down so it's
+
+00:48:20.680 --> 00:48:27.359
+centered so this is using the the mean
+
+00:48:24.640 --> 00:48:30.440
+and the standard deviation scaling it
+
+00:48:27.359 --> 00:48:35.800
+down and then we're adding a
+
+00:48:30.440 --> 00:48:38.839
+bias and a gain so now we're moving it over
+
+00:48:35.800 --> 00:48:40.319
+to be in like a standard place so what
+
+00:48:38.839 --> 00:48:42.760
+what that means is like let's say we got
+
+00:48:40.319 --> 00:48:47.559
+a new vector let's say this is X1 now we
+
+00:48:42.760 --> 00:48:50.559
+got a new vector X2 and it's over
+
+00:48:47.559 --> 00:48:52.960
+here we would normalize it down and move
+
+00:48:50.559 --> 00:48:54.559
+it up here again so like basically all
+
+00:48:52.960 --> 00:48:56.920
+of our vectors will be in a consistent
+
+00:48:54.559 --> 00:48:58.680
+part of the space and what part of the
+
+00:48:56.920 --> 00:49:01.000
+space and how big the spread is will be
+
+00:48:58.680 --> 00:49:02.319
+determined by the bias and the gain so
+
+00:49:01.000 --> 00:49:04.920
+that that's essentially what's happening
+
+00:49:02.319 --> 00:49:07.599
+here and what that means is like every
+
+00:49:04.920 --> 00:49:10.400
+time you consume the output of layer
+
+00:49:07.599 --> 00:49:11.720
+norm of a layer normed layer you get
+
+00:49:10.400 --> 00:49:14.440
+something predictable you get something
+
+00:49:11.720 --> 00:49:16.040
+in a predictable part of the space so
+
+00:49:14.440 --> 00:49:18.880
+that's what it's doing um and this is
+
+00:49:16.040 --> 00:49:18.880
+good for training
+
+00:49:20.880 --> 00:49:26.319
+stability um any any questions about
+
+00:49:24.520 --> 00:49:28.799
+this
+
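(Editor's sketch, not the lecture's code: the layer norm equation just walked through, written out directly. The eps term is my own addition for numerical stability; real implementations such as torch.nn.LayerNorm include one too.)

```python
import torch

def layer_norm(x, gain, bias, eps=1e-5):
    # 1. mean over the elements of each vector
    mu = x.mean(dim=-1, keepdim=True)
    # 2. standard deviation of each vector
    sigma = x.std(dim=-1, unbiased=False, keepdim=True)
    # 3. normalize, then shift/scale with the learned bias and gain
    return gain * (x - mu) / (sigma + eps) + bias

x = torch.randn(4, 512)                    # four vectors of size 512
g, b = torch.ones(512), torch.zeros(512)   # learnable parameters in practice
out = layer_norm(x, g, b)
```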
+00:49:26.319 --> 00:49:31.400
+okay yeah you just like what's the
+
+00:49:28.799 --> 00:49:33.160
+difference between this and batch norm so the
+
+00:49:31.400 --> 00:49:34.680
+difference between this and batch norm
+
+00:49:33.160 --> 00:49:37.400
+this is actually explained really well
+
+00:49:34.680 --> 00:49:41.440
+in the layer norm paper but um what
+
+00:49:37.400 --> 00:49:45.319
+batch norm does is it normalizes
+
+00:49:41.440 --> 00:49:47.119
+not um not over the whole layer
+
+00:49:45.319 --> 00:49:48.760
+according to all of the elements of the
+
+00:49:47.119 --> 00:49:50.760
+vector but over the whole batch
+
+00:49:48.760 --> 00:49:55.799
+according to all of the elements in the
+
+00:49:50.760 --> 00:49:57.839
+batch and the reason why so batch norm I
+
+00:49:55.799 --> 00:50:00.760
+think a lot of people really didn't like
+
+00:49:57.839 --> 00:50:03.319
+it when it was really popular to be used
+
+00:50:00.760 --> 00:50:05.280
+because batch norm actually changes your
+
+00:50:03.319 --> 00:50:08.400
+statistics based on the other elements
+
+00:50:05.280 --> 00:50:09.839
+of the batch and also at inference time
+
+00:50:08.400 --> 00:50:11.680
+when you're doing something at inference
+
+00:50:09.839 --> 00:50:13.000
+time basically you don't have any other
+
+00:50:11.680 --> 00:50:14.400
+statistics from the other elements of
+
+00:50:13.000 --> 00:50:16.720
+the batch so you just have to do one and
+
+00:50:14.400 --> 00:50:18.520
+you can't do any normalization layer
+
+00:50:16.720 --> 00:50:20.559
+norm only depends on the current
+
+00:50:18.520 --> 00:50:21.920
+instance and because layer norm only
+
+00:50:20.559 --> 00:50:23.720
+depends on the current instance you
+
+00:50:21.920 --> 00:50:26.480
+don't need to worry about batches like
+
+00:50:23.720 --> 00:50:29.160
+every input and output is like constant
+
+00:50:26.480 --> 00:50:30.839
+no matter what else is in the batch so
+
+00:50:29.160 --> 00:50:34.520
+that's the basic
+
+00:50:30.839 --> 00:50:36.240
+difference um any other any other
+
+00:50:34.520 --> 00:50:40.119
+questions
+
+00:50:36.240 --> 00:50:42.280
+okay so there's also an improvement to
+
+00:50:40.119 --> 00:50:44.640
+layer norm called RMS norm or root mean
+
+00:50:42.280 --> 00:50:48.319
+square uh
+
+00:50:44.640 --> 00:50:50.839
+normalization this is basically just a
+
+00:50:48.319 --> 00:50:54.240
+simplification of layer norm and what they
+
+00:50:50.839 --> 00:50:56.920
+did is they removed the kind of mean
+
+00:50:54.240 --> 00:50:58.440
+normalization step so instead of moving
+
+00:50:56.920 --> 00:51:01.079
+everything into the middle into a
+
+00:50:58.440 --> 00:51:02.920
+different part of the space they're just
+
+00:51:01.079 --> 00:51:04.839
+um keeping things in the same part of
+
+00:51:02.920 --> 00:51:07.760
+the space but renormalizing like the
+
+00:51:04.839 --> 00:51:10.200
+gain uh renormalizing the spread between
+
+00:51:07.760 --> 00:51:11.760
+them uh so what you can see is like if
+
+00:51:10.200 --> 00:51:14.319
+you look back at layer normalization we
+
+00:51:11.760 --> 00:51:15.880
+were calculating the mean here we don't
+
+00:51:14.319 --> 00:51:18.240
+have any mean
+
+00:51:15.880 --> 00:51:21.079
+calculation um here we're subtracting
+
+00:51:18.240 --> 00:51:23.799
+the mean here there's no subtraction of
+
+00:51:21.079 --> 00:51:26.240
+the mean so that that's basically the
+
+00:51:23.799 --> 00:51:29.760
+difference between the two
+
+00:51:26.240 --> 00:51:32.440
+and it's not that RMS norm is any better
+
+00:51:29.760 --> 00:51:34.960
+really like it gives similar results to
+
+00:51:32.440 --> 00:51:38.119
+layer norm uh but it's faster and
+
+00:51:34.960 --> 00:51:40.119
+arguably not very much worse and so
+
+00:51:38.119 --> 00:51:42.920
+because of this uh this is used for
+
+00:51:40.119 --> 00:51:45.520
+efficiency and this is used um this is
+
+00:51:42.920 --> 00:51:45.520
+also used in
+
+00:51:47.640 --> 00:51:55.440
+llama uh and oh sorry also
+
+00:51:51.920 --> 00:51:57.119
+you remove the bias parameter and you
+
+00:51:55.440 --> 00:52:00.200
+keep only the gain parameter so that
+
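(Editor's sketch, not the lecture's code: RMSNorm as just described, with no mean subtraction and no bias, only a rescale by the root mean square plus a learned gain. eps is my own stability addition.)

```python
import torch

def rms_norm(x, gain, eps=1e-5):
    # Rescale by the root mean square of each vector; no centering step.
    rms = torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
    return gain * x / rms

x = torch.randn(4, 512)
out = rms_norm(x, torch.ones(512))   # one gain parameter per dimension
```

+00:51:57.119 --> 00:52:00.200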
+also reduces the number of + +00:52:00.799 --> 00:52:06.000 +parameters cool um any any questions + +00:52:03.920 --> 00:52:08.960 +I'll + +00:52:06.000 --> 00:52:10.319 +set okay um residual connections I + +00:52:08.960 --> 00:52:12.520 +talked about these a little bit last + +00:52:10.319 --> 00:52:13.720 +time so I'll go through them relatively + +00:52:12.520 --> 00:52:15.319 +quickly but they're basically an + +00:52:13.720 --> 00:52:20.280 +additive connection between the input + +00:52:15.319 --> 00:52:22.799 +and the output and so you um you take + +00:52:20.280 --> 00:52:26.559 +the input to multi head attention and + +00:52:22.799 --> 00:52:29.480 +you pass it into the output here + +00:52:26.559 --> 00:52:31.359 +um and it looks like this very very + +00:52:29.480 --> 00:52:33.280 +simple so no matter what function you're + +00:52:31.359 --> 00:52:36.200 +doing here you just add the um add the + +00:52:33.280 --> 00:52:38.960 +input into the function this prevents uh + +00:52:36.200 --> 00:52:40.760 +Vanishing gradients and it allows you to + +00:52:38.960 --> 00:52:42.720 +learn the difference from the input so + +00:52:40.760 --> 00:52:46.119 +instead of learning how the input should + +00:52:42.720 --> 00:52:48.839 +be matched M uh should be mapped into + +00:52:46.119 --> 00:52:50.839 +the output you learn what difference + +00:52:48.839 --> 00:52:53.640 +should I apply to the + +00:52:50.839 --> 00:52:57.400 +outputs so here's an interesting quiz + +00:52:53.640 --> 00:53:00.960 +there's a very big implication for + +00:52:57.400 --> 00:53:03.119 +attention multi attention uh anybody + +00:53:00.960 --> 00:53:05.480 +think what implication for multi + +00:53:03.119 --> 00:53:08.839 +attention + +00:53:05.480 --> 00:53:11.760 +here yeah because we now have the + +00:53:08.839 --> 00:53:14.720 +residual connection it sort of De + +00:53:11.760 --> 00:53:16.559 +prioritizes looking at itself looks at + +00:53:14.720 --> 00:53:20.839 +the surrounding to understand what do I + +00:53:16.559 --> 00:53:24.880 +have to add as Contex yeah exactly so um + +00:53:20.839 --> 00:53:28.040 +uh basically it de prioritizes attending + +00:53:24.880 --> 00:53:29.920 +to yourself because you get yourself for + +00:53:28.040 --> 00:53:31.319 +free through the residual connection you + +00:53:29.920 --> 00:53:32.760 +get the information from yourself for + +00:53:31.319 --> 00:53:34.880 +free so you just need to pull in the + +00:53:32.760 --> 00:53:37.079 +other information that's useful for + +00:53:34.880 --> 00:53:40.760 +contextualizing the current factors and + +00:53:37.079 --> 00:53:44.280 +you can actually see how this actually + +00:53:40.760 --> 00:53:46.400 +happens if we go back and look at our + +00:53:44.280 --> 00:53:48.480 +visualization you'll notice that there's + +00:53:46.400 --> 00:53:50.319 +only one attention head that's attending + +00:53:48.480 --> 00:53:52.440 +to itself and all of the other attention + +00:53:50.319 --> 00:53:53.920 +heads are not attending to itself and + +00:53:52.440 --> 00:53:55.319 +this is precisely because you have the + +00:53:53.920 --> 00:53:56.599 +residual connections if we didn't have + +00:53:55.319 --> 00:53:58.680 +the residual connections it would have + +00:53:56.599 --> 00:54:00.760 +to attend to itself heavily and then + +00:53:58.680 --> 00:54:04.559 +pull in like all the other information + +00:54:00.760 --> 00:54:05.920 +so uh that's why you see that um + +00:54:04.559 --> 00:54:08.119 +Behavior + +00:54:05.920 --> 
00:54:11.359
+there cool I didn't expect somebody to
+
+00:54:08.119 --> 00:54:14.799
+answer that so quickly so thank
+
+00:54:11.359 --> 00:54:18.119
+you another really important improvement
+
+00:54:14.799 --> 00:54:20.359
+to uh another really important
+
+00:54:18.119 --> 00:54:23.480
+improvement to the Transformer is
+
+00:54:20.359 --> 00:54:25.640
+post- versus pre-layer norm so the original
+
+00:54:23.480 --> 00:54:29.599
+conception of the Transformer
+
+00:54:25.640 --> 00:54:32.319
+uh basically had uh
+
+00:54:29.599 --> 00:54:33.720
+this over here post layer norms so what
+
+00:54:32.319 --> 00:54:35.200
+you would do is you would run multi-head
+
+00:54:33.720 --> 00:54:37.599
+attention then you'd have layer norm
+
+00:54:35.200 --> 00:54:38.920
+after it uh then you have the feed
+
+00:54:37.599 --> 00:54:43.280
+forward network then you'd have layer
+
+00:54:38.920 --> 00:54:46.359
+norm after it the problem with this is
+
+00:54:43.280 --> 00:54:49.319
+this is kind of breaking the residual
+
+00:54:46.359 --> 00:54:51.839
+connection right you see we have this
+
+00:54:49.319 --> 00:54:53.760
+residual connection which is gray here
+
+00:54:51.839 --> 00:54:56.920
+and then you have a layer norm in the
+
+00:54:53.760 --> 00:54:58.359
+middle of this residual connection and
+
+00:54:56.920 --> 00:54:59.720
+so what this is doing is this is
+
+00:54:58.359 --> 00:55:01.760
+actually hurting your gradient
+
+00:54:59.720 --> 00:55:04.400
+propagation right because you
+
+00:55:01.760 --> 00:55:06.240
+have a function other
+
+00:55:04.400 --> 00:55:09.319
+than the identity right in the
+
+00:55:06.240 --> 00:55:12.599
+middle of the layers here and that's bad
+
+00:55:09.319 --> 00:55:14.799
+for propagating across many layers so a
+
+00:55:12.599 --> 00:55:17.319
+modification to this is pre-layer norm
+
+00:55:14.799 --> 00:55:20.359
+where basically layer norm is applied
+
+00:55:17.319 --> 00:55:22.000
+previously to all of the uh like
+
+00:55:20.359 --> 00:55:25.200
+multi-head attention and feed-forward
+
+00:55:22.000 --> 00:55:27.839
+layers which gives us a like direct
+
+00:55:25.200 --> 00:55:29.280
+residual connection like this um all the
+
+00:55:27.839 --> 00:55:31.599
+way from the beginning to the end and
+
+00:55:29.280 --> 00:55:33.280
+that improves gradient propagation so this
+
+00:55:31.599 --> 00:55:34.640
+is another big thing that has improved
+
+00:55:33.280 --> 00:55:36.520
+the training of Transformers and made
+
+00:55:34.640 --> 00:55:38.079
+them more stable and other things like
+
+00:55:36.520 --> 00:55:41.720
+this
+
+00:55:38.079 --> 00:55:44.599
+yeah can you elaborate more on like why
+
+00:55:41.720 --> 00:55:48.760
+layer norm between the layers is worse for
+
+00:55:44.599 --> 00:55:50.760
+gradient propagation yeah sure so basically
+
+00:55:48.760 --> 00:55:52.720
+anything other than the identity is bad
+
+00:55:50.760 --> 00:55:55.079
+for gradient propagation because the
+
+00:55:52.720 --> 00:55:56.319
+identity function or addition of some
+
+00:55:55.079 --> 00:55:57.720
+other piece of information doesn't
+
+00:55:56.319 --> 00:55:59.839
+change the gradients that flow back
+
+00:55:57.720 --> 00:56:01.760
+through the network but anything other
+
+00:55:59.839 --> 00:56:04.200
+than that does right it either makes
+
+00:56:01.760 --> 00:56:07.520
+them smaller or bigger or modifies them
+
+00:56:04.200 --> 00:56:11.400
+in some way how does layer norm modify them
+
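(Editor's sketch, not the lecture's code: the two block layouts being discussed, schematically. attn, ffn, ln1, ln2 stand in for the sublayers; the point is where the norm sits relative to the residual path, which is what the question here is about.)

```python
def post_ln_block(x, attn, ffn, ln1, ln2):
    # Original Transformer: the norm sits inside the residual path,
    # so gradients must pass through it at every layer.
    x = ln1(x + attn(x))
    x = ln2(x + ffn(x))
    return x

def pre_ln_block(x, attn, ffn, ln1, ln2):
    # Pre-LN: the residual path is a pure identity from input to output;
    # only the branches see a layer norm.
    x = x + attn(ln1(x))
    x = x + ffn(ln2(x))
    return x
```

+00:56:07.520 --> 00:56:13.839
+layer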
Norm modifies them by the + +00:56:11.400 --> 00:56:16.440 +standard deviation so like let's say the + +00:56:13.839 --> 00:56:19.760 +standard deviation in the layer is quite + +00:56:16.440 --> 00:56:23.440 +large and your gain and especially if + +00:56:19.760 --> 00:56:25.400 +your gain is is small um then that would + +00:56:23.440 --> 00:56:26.640 +mean that you were dividing every time + +00:56:25.400 --> 00:56:28.000 +by the standard deviation which would + +00:56:26.640 --> 00:56:32.559 +make the gradient + +00:56:28.000 --> 00:56:34.480 +smaller so um it yeah it's pretty like + +00:56:32.559 --> 00:56:37.760 +straightforward + +00:56:34.480 --> 00:56:37.760 +actually um + +00:56:49.599 --> 00:56:52.599 +yeah + +00:56:53.119 --> 00:56:59.200 +yes so you you're basically right like + +00:56:56.760 --> 00:57:01.839 +if we apply something like k h then the + +00:56:59.200 --> 00:57:03.319 +value the gradient would disappear I'm + +00:57:01.839 --> 00:57:05.119 +actually going to talk about activation + +00:57:03.319 --> 00:57:07.079 +functions and usually we use activation + +00:57:05.119 --> 00:57:10.079 +functions that don't have that problem + +00:57:07.079 --> 00:57:12.480 +quite as much but um the other thing to + +00:57:10.079 --> 00:57:15.079 +point out here is actually the residual + +00:57:12.480 --> 00:57:18.000 +connection is going all the way up um + +00:57:15.079 --> 00:57:19.000 +and it's not like the feed cor network + +00:57:18.000 --> 00:57:21.480 +is being + +00:57:19.000 --> 00:57:26.160 +applied like outside of the path of the + +00:57:21.480 --> 00:57:28.400 +residual function so um + +00:57:26.160 --> 00:57:31.400 +the essentially the gradients won't be + +00:57:28.400 --> 00:57:35.559 +Vanishing because the P4 network is not + +00:57:31.400 --> 00:57:35.559 +blocking like the res + +00:57:35.599 --> 00:57:42.720 +from um does that make sense another way + +00:57:39.079 --> 00:57:44.839 +to put it is um this will be like the + +00:57:42.720 --> 00:57:47.799 +tan H will be inside this function but + +00:57:44.839 --> 00:57:51.400 +you're separately adding in X Out + +00:57:47.799 --> 00:57:51.400 +outside that t h so you don't be + +00:57:52.119 --> 00:57:56.680 +scre cool um any other + +00:57:55.720 --> 00:57:59.960 +any other + +00:57:56.680 --> 00:58:03.119 +things okay great um so this is also + +00:57:59.960 --> 00:58:07.280 +really important this uh causes + +00:58:03.119 --> 00:58:08.559 +the um the models to work better so next + +00:58:07.280 --> 00:58:12.200 +is feed forward + +00:58:08.559 --> 00:58:13.680 +layers so the feed forward layers here + +00:58:12.200 --> 00:58:15.359 +um what they do is they extract + +00:58:13.680 --> 00:58:17.160 +combination features from the attended + +00:58:15.359 --> 00:58:18.160 +output basically the the feed forward + +00:58:17.160 --> 00:58:22.440 +network is + +00:58:18.160 --> 00:58:23.280 +applied independently to each Vector in + +00:58:22.440 --> 00:58:26.960 +the + +00:58:23.280 --> 00:58:30.160 +sequence um so like if we have our + +00:58:26.960 --> 00:58:33.319 +Vector uh if we have our Vector here we + +00:58:30.160 --> 00:58:36.760 +apply it like this um like weight one + +00:58:33.319 --> 00:58:38.400 +and B1 weight 2 and B2 actually um it's + +00:58:36.760 --> 00:58:42.319 +pretty common nowadays to remove the + +00:58:38.400 --> 00:58:44.640 +bias also uh mostly because it's just + +00:58:42.319 --> 00:58:47.400 +extra parameters and not useful and it + +00:58:44.640 --> 00:58:50.160 +can be more um 
it can lead to some
+
+00:58:47.400 --> 00:58:51.839
+degree of instability in training so
+
+00:58:50.160 --> 00:58:53.960
+you'll often see linear layers that have
+
+00:58:51.839 --> 00:58:55.760
+the bias off uh and it's just because
+
+00:58:53.960 --> 00:58:58.079
+it's not necessary to learn the network
+
+00:58:55.760 --> 00:59:02.119
+well but anyway this is what it looks
+
+00:58:58.079 --> 00:59:05.880
+like f here is a nonlinearity of some
+
+00:59:02.119 --> 00:59:08.640
+variety uh so it essentially looks like
+
+00:59:05.880 --> 00:59:12.119
+this usually the feed forward network
+
+00:59:08.640 --> 00:59:15.079
+in Transformers uh upscales to a very
+
+00:59:12.119 --> 00:59:18.039
+large vector to extract lots of features
+
+00:59:15.079 --> 00:59:20.480
+so each one of these each one of these
+
+00:59:18.039 --> 00:59:21.799
+elements in here is kind of a feature
+
+00:59:20.480 --> 00:59:23.480
+and a lot of people when they do
+
+00:59:21.799 --> 00:59:25.000
+interpretation of Transformer models
+
+00:59:23.480 --> 00:59:27.599
+they actually look at these features
+
+00:59:25.000 --> 00:59:30.240
+because they tend to correspond more
+
+00:59:27.599 --> 00:59:32.319
+directly with kind of the information
+
+00:59:30.240 --> 00:59:34.799
+that we would expect to see like um for
+
+00:59:32.319 --> 00:59:37.000
+example when people look at individual
+
+00:59:34.799 --> 00:59:38.839
+memorized facts in Transformers like who
+
+00:59:37.000 --> 00:59:40.920
+is the president of the United States or
+
+00:59:38.839 --> 00:59:43.440
+something they usually look at the
+
+00:59:40.920 --> 00:59:45.280
+vectors uh in
+
+00:59:43.440 --> 00:59:47.880
+here
+
+00:59:45.280 --> 00:59:49.440
+um some activation functions that are
+
+00:59:47.880 --> 00:59:53.119
+used in Transformers the original
+
+00:59:49.440 --> 00:59:57.359
+Transformer used a ReLU um so the ReLU
+
+00:59:53.119 --> 00:59:59.880
+looks like max of zero and x um I asked
+
+00:59:57.359 --> 01:00:02.480
+ChatGPT to draw a figure for me and it
+
+00:59:59.880 --> 01:00:04.200
+did a pretty good job of this I guess so
+
+01:00:02.480 --> 01:00:09.400
+this is what it uh this is what it looks
+
+01:00:04.200 --> 01:00:12.240
+like um the ReLU is zero below an input
+
+01:00:09.400 --> 01:00:15.640
+of zero and uh the identity greater than
+
+01:00:12.240 --> 01:00:17.760
+an input of zero um the problem with
+
+01:00:15.640 --> 01:00:20.280
+this though is anytime something is less
+
+01:00:17.760 --> 01:00:22.119
+than zero you get a zero gradient so it
+
+01:00:20.280 --> 01:00:26.720
+it causes
+
+01:00:22.119 --> 01:00:29.680
+issues so an alternative that's used uh
+
+01:00:26.720 --> 01:00:33.640
+recently is something called Swish or
+
+01:00:29.680 --> 01:00:36.200
+SiLU for um sigmoid linear unit and
+
+01:00:33.640 --> 01:00:40.200
+basically it looks like this it's x
+
+01:00:36.200 --> 01:00:43.880
+times sigmoid of beta times x where beta is
+
+01:00:40.200 --> 01:00:46.000
+often set to one and it looks a lot like
+
+01:00:43.880 --> 01:00:47.480
+a ReLU it looks very similar to a ReLU
+
+01:00:46.000 --> 01:00:50.160
+but it doesn't have a zero gradient
+
+01:00:47.480 --> 01:00:52.839
+anywhere so you can still um if it gets
+
+01:00:50.160 --> 01:00:55.799
+to be very negative you have like a
+
+01:00:52.839 --> 01:00:57.440
+light push you have a light push towards
+
+01:00:55.799 --> 01:00:59.160
+the middle so you have a chance to
+
+01:00:57.440 --> 01:01:01.760
+recover and get things closer to the middle
+
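(Editor's sketch, not the lecture's code: the two activations just described. Note the SiLU gradient is nonzero everywhere, which is the recovery property mentioned.)

```python
import torch

def relu(x):
    return torch.clamp(x, min=0.0)        # zero gradient for all x < 0

def silu(x, beta=1.0):
    # x * sigmoid(beta * x), with beta usually set to one
    return x * torch.sigmoid(beta * x)
```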
+01:00:59.160 --> 01:01:03.400
+so uh empirically this seems to
+
+01:01:01.760 --> 01:01:05.799
+work pretty well and this is also uh
+
+01:01:03.400 --> 01:01:05.799
+used in
+
+01:01:07.480 --> 01:01:12.720
+llama cool um any questions about these
+
+01:01:10.920 --> 01:01:14.880
+there's of course a ton of other
+
+01:01:12.720 --> 01:01:16.119
+activation functions but I am talking
+
+01:01:14.880 --> 01:01:18.200
+mostly about the ones that people are
+
+01:01:16.119 --> 01:01:20.480
+actually using
+
+01:01:18.200 --> 01:01:22.960
+optimize it uh usually you just set it to
+
+01:01:20.480 --> 01:01:24.760
+one or set it to some you you could
+
+01:01:22.960 --> 01:01:27.799
+hyperparameter optimize over it but I
+
+01:01:24.760 --> 01:01:27.799
+think it doesn't make a huge
+
+01:01:28.640 --> 01:01:36.640
+difference yeah okay cool um next is
+
+01:01:33.559 --> 01:01:38.720
+optimization tricks for Transformers so
+
+01:01:36.640 --> 01:01:40.799
+Transformers are powerful but very
+
+01:01:38.720 --> 01:01:44.440
+fickle um
+
+01:01:40.799 --> 01:01:47.039
+so uh
+
+01:01:44.440 --> 01:01:48.480
+Transformers at least when they started
+
+01:01:47.039 --> 01:01:51.279
+out and we didn't have stable training
+
+01:01:48.480 --> 01:01:53.119
+recipes for them tended to be very uh
+
+01:01:51.279 --> 01:01:56.359
+like people tried pretty hard to
+
+01:01:53.119 --> 01:01:58.839
+optimize them but they uh uh they were
+
+01:01:56.359 --> 01:02:01.200
+difficult to optimize one example of
+
+01:01:58.839 --> 01:02:02.520
+this uh that that is really great it
+
+01:02:01.200 --> 01:02:03.640
+will make you feel a lot better if
+
+01:02:02.520 --> 01:02:08.160
+you're training things and they're not
+
+01:02:03.640 --> 01:02:10.960
+working very well is um Meta's old log
+
+01:02:08.160 --> 01:02:13.520
+book of how they trained a 175 billion
+
+01:02:10.960 --> 01:02:14.799
+parameter model and you'll see all of
+
+01:02:13.520 --> 01:02:16.160
+the problems that they had while they
+
+01:02:14.799 --> 01:02:17.920
+were training their model despite the
+
+01:02:16.160 --> 01:02:20.079
+fact that they're kind of pros at doing
+
+01:02:17.920 --> 01:02:22.240
+this um and that includes things like
+
+01:02:20.079 --> 01:02:24.799
+their machines going down and their like
+
+01:02:22.240 --> 01:02:27.599
+hardware engineers having to go and restart
+
+01:02:24.799 --> 01:02:30.079
+their machines and um their loss
+
+01:02:27.599 --> 01:02:32.319
+diverging and having to roll back things
+
+01:02:30.079 --> 01:02:34.119
+manually and other stuff like this so I
+
+01:02:32.319 --> 01:02:35.960
+I really like this I'm really happy that
+
+01:02:34.119 --> 01:02:39.520
+they released this for us all to learn
+
+01:02:35.960 --> 01:02:42.880
+from um but yeah you can take a look at
+
+01:02:39.520 --> 01:02:45.680
+this um so some things that people do to
+
+01:02:42.880 --> 01:02:48.640
+stabilize training of Transformer models
+
+01:02:45.680 --> 01:02:51.079
+are swap out the optimizer uh do the
+
+01:02:48.640 --> 01:02:52.799
+sorts of restarts that I talked about
+
+01:02:51.079 --> 01:02:55.839
+and um and other things like this so I'm
+
+01:02:52.799 --> 01:02:57.960
+going to go through those very quickly
+
+01:02:55.839 --> 01:03:00.319
+so the first thing is optimizers um
+
+01:02:57.960 --> 01:03:04.000
+previously what we've talked about is
+
+01:03:00.319 --> 01:03:06.599
+SGD um so SGD updates in the direction
+
+01:03:04.000 --> 01:03:09.240
+of reducing loss Adam which
I also
+talked about adds a momentum uh
+
+01:03:09.240 --> 01:03:14.359
+sorry that should be momentum term
+
+01:03:12.279 --> 01:03:16.799
+and normalizes by the standard deviation
+
+01:03:14.359 --> 01:03:20.039
+of the outputs to kind of upweight
+
+01:03:16.799 --> 01:03:21.920
+infrequently updated
+
+01:03:20.039 --> 01:03:24.359
+parameters a new thing that was
+
+01:03:21.920 --> 01:03:25.160
+introduced by Vaswani et al. when they
+
+01:03:24.359 --> 01:03:27.799
+proposed
+
+01:03:25.160 --> 01:03:30.960
+Transformers was uh a learning rate
+
+01:03:27.799 --> 01:03:35.000
+increase and decrease and the way this
+
+01:03:30.960 --> 01:03:37.839
+works is they gradually increase the
+
+01:03:35.000 --> 01:03:40.039
+learning rate until you get to a set
+
+01:03:37.839 --> 01:03:43.839
+number of warm-up steps in theirs they
+
+01:03:40.039 --> 01:03:46.920
+did 4,000 warm-up steps um and then they
+
+01:03:43.839 --> 01:03:49.160
+gradually decrease it
+
+01:03:46.920 --> 01:03:52.559
+um
+
+01:03:49.160 --> 01:03:57.279
+and it looks like this
+
+01:03:52.559 --> 01:04:00.680
+recently uh as far as I understand um
+
+01:03:57.279 --> 01:04:03.160
+you can actually do a bit better without
+
+01:04:00.680 --> 01:04:07.359
+doing this warmup uh as long as you're
+
+01:04:03.160 --> 01:04:10.000
+using pre-layer norm uh so I think
+
+01:04:07.359 --> 01:04:11.559
+the warmup is still used pretty widely
+
+01:04:10.000 --> 01:04:13.839
+but it's not absolutely necessary
+
+01:04:11.559 --> 01:04:16.559
+anymore with the newer training recipes
+
+01:04:13.839 --> 01:04:18.279
+but it's something to be aware of also
+
+01:04:16.559 --> 01:04:20.920
+sometimes people do linear learning rate
+
+01:04:18.279 --> 01:04:23.079
+decay instead of this kind of like sloped
+
+01:04:20.920 --> 01:04:25.000
+learning rate decay so there's a bunch
+
+01:04:23.079 --> 01:04:27.079
+of recipes for this I'm not going to go
+
+01:04:25.000 --> 01:04:27.960
+into all of them but just be aware that
+
+01:04:27.079 --> 01:04:31.680
+they
+
+01:04:27.960 --> 01:04:34.640
+exist um another thing is instead of
+
+01:04:31.680 --> 01:04:37.559
+straight up Adam uh recently people have
+
+01:04:34.640 --> 01:04:39.640
+been using AdamW and so what AdamW
+
+01:04:37.559 --> 01:04:42.440
+does is it does uh weight
+
+01:04:39.640 --> 01:04:46.520
+decay and what weight decay is is it
+
+01:04:42.440 --> 01:04:49.359
+like gradually decreases your weights uh
+
+01:04:46.520 --> 01:04:50.920
+towards zero and the reason why you
+
+01:04:49.359 --> 01:04:54.319
+do that is it's like basically an
+
+01:04:50.920 --> 01:04:58.119
+approximation of normalization uh
+
+01:04:54.319 --> 01:04:59.599
+sorry regularization of models so it it
+
+01:04:58.119 --> 01:05:04.319
+has an effect of preventing the model
+
+01:04:59.599 --> 01:05:06.799
+from overfitting um AdamW is kind of
+
+01:05:04.319 --> 01:05:08.240
+a you don't need to know all the details
+
+01:05:06.799 --> 01:05:11.319
+if you're just using it but it's
+
+01:05:08.240 --> 01:05:13.559
+basically a correction of weight decay
+
+01:05:11.319 --> 01:05:15.079
+specifically considering the fact that
+
+01:05:13.559 --> 01:05:17.480
+Adam is using momentum in this
+
+01:05:15.079 --> 01:05:20.599
+normalization so that it actually
+
+01:05:17.480 --> 01:05:23.319
+corresponds to proper regularization
+
+01:05:20.599 --> 01:05:26.240
+terms
+
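(Editor's sketch: the warmup-then-decay schedule from Vaswani et al., as a function of the step count; hooking it up via LambdaLR is my own suggestion, not from the lecture.)

```python
def transformer_lr(step: int, d_model: int = 512, warmup: int = 4000) -> float:
    # Linear warmup for `warmup` steps, then decay like 1/sqrt(step).
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

# e.g. paired with AdamW (weight_decay is the decoupled decay term):
#   opt = torch.optim.AdamW(model.parameters(), lr=1.0, weight_decay=0.1)
#   sched = torch.optim.lr_scheduler.LambdaLR(opt, transformer_lr)
```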
+01:05:23.319 --> 01:05:29.000
+so if you're just using AdamW out
+of the box as an optimizer um that's all
+
+01:05:26.240 --> 01:05:30.279
+you need to know but actually um sorry
+
+01:05:29.000 --> 01:05:31.480
+never mind for the assignment you're
+
+01:05:30.279 --> 01:05:33.599
+actually going to have to implement
+
+01:05:31.480 --> 01:05:35.079
+something related to that so you do
+
+01:05:33.599 --> 01:05:37.480
+actually need to know it and look into
+
+01:05:35.079 --> 01:05:38.680
+it I'll maybe cover the details in a
+
+01:05:37.480 --> 01:05:40.520
+lecture
+
+01:05:38.680 --> 01:05:42.920
+but okay
+
+01:05:40.520 --> 01:05:46.160
+cool another thing is low precision
+
+01:05:42.920 --> 01:05:47.640
+training so low precision training is uh
+
+01:05:46.160 --> 01:05:49.760
+something that's necessary where you're
+
+01:05:47.640 --> 01:05:52.760
+training very large models or
+
+01:05:49.760 --> 01:05:55.960
+large-ish models on fewer
+
+01:05:52.760 --> 01:05:59.960
+GPUs um
+
+01:05:55.960 --> 01:06:02.319
+so training at full 32-bit
+
+01:05:59.960 --> 01:06:05.079
+precision can be costly so it's pretty
+
+01:06:02.319 --> 01:06:06.960
+common to train at for example 16-bit
+
+01:06:05.079 --> 01:06:08.440
+precision um especially if you're
+
+01:06:06.960 --> 01:06:11.359
+training all of the parameters of the
+
+01:06:08.440 --> 01:06:15.440
+models and there's kind of two uh
+
+01:06:11.359 --> 01:06:19.039
+alternatives for this the first one is
+
+01:06:15.440 --> 01:06:21.760
+fp16 and that's the standard uh 16-bit
+
+01:06:19.039 --> 01:06:25.760
+floating point numbers that are used by
+
+01:06:21.760 --> 01:06:27.520
+most computers and most CPUs and these
+
+01:06:25.760 --> 01:06:30.359
+uh floating point numbers they allocate
+
+01:06:27.520 --> 01:06:32.920
+one bit for the sign five bits for the
+
+01:06:30.359 --> 01:06:35.440
+exponent and 10 bits for the fractional
+
+01:06:32.920 --> 01:06:39.599
+component so they have relatively
+
+01:06:35.440 --> 01:06:41.960
+precise fractions and exponents with a
+
+01:06:39.599 --> 01:06:43.880
+relatively small range so they can't
+
+01:06:41.960 --> 01:06:45.440
+express things with very large or very
+
+01:06:43.880 --> 01:06:48.359
+small
+
+01:06:45.440 --> 01:06:51.960
+exponents there's uh something called
+
+01:06:48.359 --> 01:06:54.160
+bfloat16 and in bfloat16 uh the b stands for
+
+01:06:51.960 --> 01:06:56.160
+brain like Google Brain uh because
+
+01:06:54.160 --> 01:06:57.920
+that's where they invented this and
+
+01:06:56.160 --> 01:07:00.440
+basically what it does is it increases
+
+01:06:57.920 --> 01:07:03.319
+the number of uh
+
+01:07:00.440 --> 01:07:04.920
+bits for the exponent to eight and it
+
+01:07:03.319 --> 01:07:07.400
+decreases the number of bits for the
+
+01:07:04.920 --> 01:07:09.640
+fraction to seven so that allows you to
+
+01:07:07.400 --> 01:07:12.880
+express a wider range of values uh at a
+
+01:07:09.640 --> 01:07:14.960
+lower precision essentially and this is
+
+01:07:12.880 --> 01:07:17.000
+much much more stable with respect to
+
+01:07:14.960 --> 01:07:20.000
+training uh because you can handle very
+
+01:07:17.000 --> 01:07:21.440
+small numbers and very large um numbers
+
+01:07:20.000 --> 01:07:23.160
+better despite the fact that you're
+
+01:07:21.440 --> 01:07:25.480
+losing a little bit of precision on the
+
+01:07:23.160 --> 01:07:27.720
+fractions so this is pretty essential
+
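(Editor's sketch, not the lecture's code: comparing the ranges of the two 16-bit formats with PyTorch; the example value 70000 is mine.)

```python
import torch

for dtype in (torch.float16, torch.bfloat16):
    info = torch.finfo(dtype)
    print(dtype, "max:", info.max, "smallest normal:", info.tiny)

x = torch.tensor(70000.0)
print(x.to(torch.float16))    # inf: overflows fp16's max of ~65504
print(x.to(torch.bfloat16))   # ~69888: representable, but coarser
```

+01:07:25.480 --> 01:07:29.440
+and I would recommend that no matter
+
+01:07:27.720 --> 01:07:32.119
+what you're doing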
if you're doing 16 bit + +01:07:29.440 --> 01:07:33.799 +you would uh you would use this instead + +01:07:32.119 --> 01:07:35.880 +and Hardware support for this is pretty + +01:07:33.799 --> 01:07:37.319 +good now like Nvidia gpus and other + +01:07:35.880 --> 01:07:39.799 +things like that all support it ppar + +01:07:37.319 --> 01:07:39.799 +supports it + +01:07:41.319 --> 01:07:44.680 +really um another thing that you should + +01:07:43.480 --> 01:07:46.920 +be aware of especially if you're + +01:07:44.680 --> 01:07:48.160 +training very large models is uh + +01:07:46.920 --> 01:07:51.079 +checkpointing and + +01:07:48.160 --> 01:07:53.440 +resets so um even through best efforts + +01:07:51.079 --> 01:07:57.279 +training can uh go south it can have + +01:07:53.440 --> 01:07:58.839 +problems so what do you do um the first + +01:07:57.279 --> 01:08:02.319 +thing that you can do is you can monitor + +01:07:58.839 --> 01:08:04.520 +for possible issues and a common way to + +01:08:02.319 --> 01:08:05.960 +do this is by mod monitoring the norm of + +01:08:04.520 --> 01:08:08.680 +the gradients and so I pulled this + +01:08:05.960 --> 01:08:11.839 +directly from the op uh the opt log book + +01:08:08.680 --> 01:08:13.079 +that I uh posted before the the thing + +01:08:11.839 --> 01:08:15.200 +that meta did when they were training + +01:08:13.079 --> 01:08:17.839 +their models and here you can see you're + +01:08:15.200 --> 01:08:20.480 +monitoring the norm of the gradients and + +01:08:17.839 --> 01:08:23.520 +suddenly like in the middle of training + +01:08:20.480 --> 01:08:24.839 +your gradient Norm just goes up by a lot + +01:08:23.520 --> 01:08:27.239 +or + +01:08:24.839 --> 01:08:29.880 +and this is an indicator of a problem + +01:08:27.239 --> 01:08:33.080 +but the interesting thing about this is + +01:08:29.880 --> 01:08:35.199 +this will Spike and then after it spiked + +01:08:33.080 --> 01:08:37.480 +you can see that the perplexity of the + +01:08:35.199 --> 01:08:39.400 +model is going down after the spike it + +01:08:37.480 --> 01:08:42.239 +continues to go down for a little bit + +01:08:39.400 --> 01:08:44.839 +but then it starts going up and so + +01:08:42.239 --> 01:08:46.560 +basically once it started going up now + +01:08:44.839 --> 01:08:48.400 +your model is kind of in like a bad + +01:08:46.560 --> 01:08:50.080 +space it's in a bad space of the + +01:08:48.400 --> 01:08:52.279 +parameter space and it will just + +01:08:50.080 --> 01:08:54.640 +continue being in a bad space until it + +01:08:52.279 --> 01:08:56.319 +diverges uh but it's hard to diagnose + +01:08:54.640 --> 01:08:58.679 +immediately other than through things + +01:08:56.319 --> 01:09:00.600 +like the gradient Norm so monitoring the + +01:08:58.679 --> 01:09:01.920 +gradient Norm can be helpful this is + +01:09:00.600 --> 01:09:03.759 +especially important if you're training + +01:09:01.920 --> 01:09:05.319 +very large models uh if you're training + +01:09:03.759 --> 01:09:07.839 +smaller models it's not as big of a + +01:09:05.319 --> 01:09:09.640 +problem but this is uh this is something + +01:09:07.839 --> 01:09:13.600 +to pay attention + +01:09:09.640 --> 01:09:16.319 +to um if training crashes what can you + +01:09:13.600 --> 01:09:18.319 +do so a very common thing to do is to + +01:09:16.319 --> 01:09:21.000 +roll back to a previous checkpoint like + +01:09:18.319 --> 01:09:22.719 +save out checkpoints periodically um + +01:09:21.000 --> 01:09:24.600 +roll back not to write before the + +01:09:22.719 --> 
01:09:27.120 +gradient spiked but roll back to you + +01:09:24.600 --> 01:09:29.960 +know like 100 steps before the gradient + +01:09:27.120 --> 01:09:31.080 +spiked shuffle your training data set or + +01:09:29.960 --> 01:09:33.520 +jump to a different part of your + +01:09:31.080 --> 01:09:36.120 +training data set and resume so this is + +01:09:33.520 --> 01:09:38.520 +a very like hacky thing to do I guess it + +01:09:36.120 --> 01:09:40.600 +it seems you know but by doing this + +01:09:38.520 --> 01:09:42.159 +you're injecting some Randomness in the + +01:09:40.600 --> 01:09:44.080 +process by looking at different data and + +01:09:42.159 --> 01:09:45.880 +that can cause your model training to + +01:09:44.080 --> 01:09:47.560 +stabilize um there are even some + +01:09:45.880 --> 01:09:49.880 +platforms that do this automatically + +01:09:47.560 --> 01:09:52.120 +like I think the Mosaic ml platform does + +01:09:49.880 --> 01:09:55.199 +this automatically and and fixes this + +01:09:52.120 --> 01:09:56.960 +for you so + +01:09:55.199 --> 01:10:01.280 +another thing though is you should also + +01:09:56.960 --> 01:10:04.159 +be checking your code um and ideally if + +01:10:01.280 --> 01:10:08.679 +you have really solid code that doesn't + +01:10:04.159 --> 01:10:12.320 +have any sort of like any sort of + +01:10:08.679 --> 01:10:14.199 +dangerous functions in it and also um + +01:10:12.320 --> 01:10:16.159 +your learning rate is set appr + +01:10:14.199 --> 01:10:18.040 +appropriately this happens much much + +01:10:16.159 --> 01:10:20.920 +less so if your model training is + +01:10:18.040 --> 01:10:22.159 +spiking all the time then this can be an + +01:10:20.920 --> 01:10:24.159 +indicator that you have a problem and + +01:10:22.159 --> 01:10:26.840 +just to give an example like + +01:10:24.159 --> 01:10:29.760 +let's say you're taking an + +01:10:26.840 --> 01:10:32.640 +exponent um let's say you're taking the + +01:10:29.760 --> 01:10:34.520 +log of something where you're pretty + +01:10:32.640 --> 01:10:37.719 +sure that this should be + +01:10:34.520 --> 01:10:38.920 +positive um you're taking the log of + +01:10:37.719 --> 01:10:41.520 +something where you're pretty sure that + +01:10:38.920 --> 01:10:43.760 +this should be positive but in fact it's + +01:10:41.520 --> 01:10:45.880 +getting very close to zero some of the + +01:10:43.760 --> 01:10:47.480 +time so if it's getting very close to + +01:10:45.880 --> 01:10:50.320 +zero some of the time you'll get a huge + +01:10:47.480 --> 01:10:52.600 +gradient because like log + +01:10:50.320 --> 01:10:53.880 +Z has an infinite gradient and things + +01:10:52.600 --> 01:10:56.920 +that are very close to zero have + +01:10:53.880 --> 01:10:58.280 +something very close to gradi so usually + +01:10:56.920 --> 01:11:00.640 +if you're seeing these sorts of spikes + +01:10:58.280 --> 01:11:02.880 +there's a reason for them uh like this + +01:11:00.640 --> 01:11:06.560 +so you can also try to diagnose and make + +01:11:02.880 --> 01:11:10.239 +trading more stable um this is kind of a + +01:11:06.560 --> 01:11:11.800 +like you this is a good thing to look at + +01:11:10.239 --> 01:11:14.360 +but there's a lot of like experience + +01:11:11.800 --> 01:11:16.040 +that goes into this so just going in and + +01:11:14.360 --> 01:11:18.640 +diagnosing and digging into the code is + +01:11:16.040 --> 01:11:22.199 +a + +01:11:18.640 --> 01:11:25.560 +that cool um any questions uh any + +01:11:22.199 --> 01:11:25.560 +questions about this + 
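(Editor's sketch, not the lecture's code: a common way to monitor, and bound, the gradient norm in a training loop, in the spirit of the monitoring just described. The spike threshold is my own placeholder.)

```python
import torch

def clip_and_report_grad_norm(model, max_norm=1.0):
    # clip_grad_norm_ returns the total norm before clipping, which is
    # the quantity plotted in logbooks like Meta's OPT notes.
    return torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)

# in the loop, after loss.backward():
#   grad_norm = clip_and_report_grad_norm(model)
#   if grad_norm > SPIKE_THRESHOLD:
#       consider rolling back to an earlier checkpoint
```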
+01:11:26.960 --> 01:11:31.840
+I think a lot of this is like lived
+
+01:11:28.640 --> 01:11:34.239
+knowledge so just you know try it and uh
+
+01:11:31.840 --> 01:11:37.040
+and if you have problems ask uh me or
+
+01:11:34.239 --> 01:11:38.239
+the TAs or people so the final thing I'd
+
+01:11:37.040 --> 01:11:40.000
+like to talk about is comparing
+
+01:11:38.239 --> 01:11:42.120
+Transformer architectures so I talked
+
+01:11:40.000 --> 01:11:43.639
+about a lot of design decisions I'm not
+
+01:11:42.120 --> 01:11:45.360
+going to talk about every single model
+
+01:11:43.639 --> 01:11:47.120
+today because I want to do that a little
+
+01:11:45.360 --> 01:11:49.159
+bit later after we've introduced more
+
+01:11:47.120 --> 01:11:52.120
+concepts but I would like to at least
+
+01:11:49.159 --> 01:11:54.760
+compare the Vaswani et al. uh paper and
+
+01:11:52.120 --> 01:11:56.159
+llama and if we look at some of the
+
+01:11:54.760 --> 01:11:57.880
+differences between them you know
+
+01:11:56.159 --> 01:12:01.400
+they're both using Transformers they're
+
+01:11:57.880 --> 01:12:03.960
+both doing uh a lot of things
+
+01:12:01.400 --> 01:12:07.320
+similarly but um some of the differences
+
+01:12:03.960 --> 01:12:09.520
+are what is the norm position um
+
+01:12:07.320 --> 01:12:13.639
+Vaswani et al. is doing post-norm llama is doing
+
+01:12:09.520 --> 01:12:16.639
+pre-norm what is the norm type Vaswani is
+
+01:12:13.639 --> 01:12:19.199
+doing layer norm llama is doing RMS norm
+
+01:12:16.639 --> 01:12:22.440
+what nonlinearity are they using ReLU
+
+01:12:19.199 --> 01:12:24.000
+versus SiLU and what positional encoding
+
+01:12:22.440 --> 01:12:28.159
+is sinusoidal versus
+
+01:12:24.000 --> 01:12:30.639
+rope and you might be asking me you
+
+01:12:28.159 --> 01:12:33.560
+might be thinking like well how much do
+
+01:12:30.639 --> 01:12:35.880
+I care about this anyway I mean like it
+
+01:12:33.560 --> 01:12:37.719
+you know might not be might not be super
+
+01:12:35.880 --> 01:12:40.520
+important but there was actually a
+
+01:12:37.719 --> 01:12:42.880
+really nice paper by uh Albert Gu who
+
+01:12:40.520 --> 01:12:45.040
+is an assistant professor in MLD where
+
+01:12:42.880 --> 01:12:47.080
+they were proposing a new architecture
+
+01:12:45.040 --> 01:12:50.199
+but one of kind of the Easter eggs in
+
+01:12:47.080 --> 01:12:52.080
+this paper is this comparison here um
+
+01:12:50.199 --> 01:12:55.239
+and in this comparison they basically
+
+01:12:52.080 --> 01:12:56.400
+compare the Vaswani et al. original
+
+01:12:55.239 --> 01:12:59.920
+Transformer
+
+01:12:56.400 --> 01:13:02.400
+architecture and the llama style like
+
+01:12:59.920 --> 01:13:06.400
+Transformer++ like good Transformer
+
+01:13:02.400 --> 01:13:08.840
+architecture and they compare the
+
+01:13:06.400 --> 01:13:11.080
+perplexity or actually this is log scale
+
+01:13:08.840 --> 01:13:12.320
+perplexity so it's basically like
+
+01:13:11.080 --> 01:13:15.480
+negative log
+
+01:13:12.320 --> 01:13:17.400
+likelihood oh no sorry this is
+
+01:13:15.480 --> 01:13:20.639
+perplexity but it's on the log scale so
+
+01:13:17.400 --> 01:13:22.520
+yeah this is actually perplexity um and then
+
+01:13:20.639 --> 01:13:24.159
+they compare the perplexity based on the
+
+01:13:22.520 --> 01:13:26.159
+number of training flops
+
+01:13:24.159 --> 01:13:28.960
+and so if you look at the yellow
+
+01:13:26.159 --> 01:13:32.920
+transformer and the orange uh
+
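(Editor's summary, not from the lecture slides: the four design axes just listed, as plain Python dicts; labels are mine.)

```python
vaswani_2017 = dict(norm_position="post-norm", norm_type="LayerNorm",
                    nonlinearity="ReLU", pos_encoding="sinusoidal")
llama = dict(norm_position="pre-norm", norm_type="RMSNorm",
             nonlinearity="SiLU", pos_encoding="RoPE")
```

+01:13:28.960 --> 01:13:36.120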
+Transformer Plus+ you can actually see + +01:13:32.920 --> 01:13:39.750 +that it takes 10 times more + +01:13:36.120 --> 01:13:41.639 +flops to achieve a + +01:13:39.750 --> 01:13:45.480 +[Music] + +01:13:41.639 --> 01:13:47.600 +approximately similar uh it can take 10 + +01:13:45.480 --> 01:13:50.920 +times more flops to ACH achieve an + +01:13:47.600 --> 01:13:52.560 +approximately similar result with the + +01:13:50.920 --> 01:13:54.560 +old architecture compared to the new + +01:13:52.560 --> 01:13:55.679 +architecture so this is like really + +01:13:54.560 --> 01:13:59.280 +really important right you want your + +01:13:55.679 --> 01:14:01.639 +training to be 10 times faster so um you + +01:13:59.280 --> 01:14:03.120 +can see that like a lot of people were + +01:14:01.639 --> 01:14:04.800 +saying like scale is all you need and + +01:14:03.120 --> 01:14:06.040 +architecture engineering isn't important + +01:14:04.800 --> 01:14:08.719 +or things like that but it turns out + +01:14:06.040 --> 01:14:12.080 +that architecture engineering is kind of + +01:14:08.719 --> 01:14:13.840 +kind of important so uh like a lot of + +01:14:12.080 --> 01:14:15.639 +the advances we've made in the past five + +01:14:13.840 --> 01:14:17.560 +years or seven years with respect to + +01:14:15.639 --> 01:14:20.159 +that are actually making a big + +01:14:17.560 --> 01:14:22.360 +difference cool so I'll I'll leave it at + +01:14:20.159 --> 01:14:22.360 +that diff --git a/CMU Advanced NLP 2024 (6) Generation Algorithms/CMU Advanced NLP 2024 (6) Generation Algorithms.mp4 b/CMU Advanced NLP 2024 (6) Generation Algorithms/CMU Advanced NLP 2024 (6) Generation Algorithms.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b1411597f707ff9db59ff0bb9f552dfaf4cb0690 --- /dev/null +++ b/CMU Advanced NLP 2024 (6) Generation Algorithms/CMU Advanced NLP 2024 (6) Generation Algorithms.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29a88878a1888777a04a02c4d430d2133af7fd1703664de2fae840d8da8c64e0 +size 78842134 diff --git a/CMU Advanced NLP 2024 (6) Generation Algorithms/metadata.json b/CMU Advanced NLP 2024 (6) Generation Algorithms/metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..61c3642704c29fbbfc10158f43853e3ba4d5e55d --- /dev/null +++ b/CMU Advanced NLP 2024 (6) Generation Algorithms/metadata.json @@ -0,0 +1,4 @@ +{ + "url": "https://www.youtube.com/watch?v=96MMXDA7F74", + "title": "CMU Advanced NLP 2024 (6) Generation Algorithms" +} \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (6) Generation Algorithms/transcript.srt b/CMU Advanced NLP 2024 (6) Generation Algorithms/transcript.srt new file mode 100644 index 0000000000000000000000000000000000000000..bd1755f4521ac6304a817f1314b3fc875af5ecf4 --- /dev/null +++ b/CMU Advanced NLP 2024 (6) Generation Algorithms/transcript.srt @@ -0,0 +1,8087 @@ +1 +00:00:00,399 --> 00:00:04,720 +great um yeah so today we're going to be + +2 +00:00:03,320 --> 00:00:07,040 +talking a little bit about generation + +3 +00:00:04,720 --> 00:00:08,639 +algorithms um this will be sort of a + +4 +00:00:07,040 --> 00:00:10,160 +tour through some of the most common + +5 +00:00:08,639 --> 00:00:12,080 +methods and we're going to talk a little + +6 +00:00:10,160 --> 00:00:13,480 +bit about the theory behind them as well + +7 +00:00:12,080 --> 00:00:15,080 +um if you're looking at the slides on + +8 +00:00:13,480 --> 00:00:18,359 +the website these might be ever so + +9 +00:00:15,080 --> 00:00:20,000 
+slightly different um but yeah I'll try + +10 +00:00:18,359 --> 00:00:21,640 +to stop at each section boundary for + +11 +00:00:20,000 --> 00:00:23,840 +questions also feel free to sort of + +12 +00:00:21,640 --> 00:00:25,720 +interrupt at any point for + +13 +00:00:23,840 --> 00:00:27,720 +clarifications so we're starting off + +14 +00:00:25,720 --> 00:00:29,560 +today with some great news um let's say + +15 +00:00:27,720 --> 00:00:31,199 +that you have some friend who maybe owns + +16 +00:00:29,560 --> 00:00:34,800 +a giant tech company and they've gifted + +17 +00:00:31,199 --> 00:00:36,480 +you this absolutely massive new model M + +18 +00:00:34,800 --> 00:00:38,079 +um it's a great model it's pre-trained + +19 +00:00:36,480 --> 00:00:40,879 +with the latest architecture it's + +20 +00:00:38,079 --> 00:00:42,920 +pre-trained on um trillions of tokens of + +21 +00:00:40,879 --> 00:00:44,520 +text it's got seven billion parameters + +22 +00:00:42,920 --> 00:00:46,399 +it looks like a really promising new + +23 +00:00:44,520 --> 00:00:48,399 +model you know it's the top of all these + +24 +00:00:46,399 --> 00:00:50,320 +leaderboards um but if you actually take + +25 +00:00:48,399 --> 00:00:52,520 +your new model M and you sort of open up + +26 +00:00:50,320 --> 00:00:53,719 +this box and kind of Shake It Out maybe + +27 +00:00:52,520 --> 00:00:55,239 +from last class you know a little bit + +28 +00:00:53,719 --> 00:00:57,000 +architecturally what this model might + +29 +00:00:55,239 --> 00:00:58,239 +look like but if you actually kind of + +30 +00:00:57,000 --> 00:01:00,320 +take a closer look at it from a + +31 +00:00:58,239 --> 00:01:01,719 +different angle what you see is that m + +32 +00:01:00,320 --> 00:01:04,920 +is actually just a conditional + +33 +00:01:01,719 --> 00:01:07,200 +probability distribution um you put some + +34 +00:01:04,920 --> 00:01:09,680 +input X into your model and you get some + +35 +00:01:07,200 --> 00:01:10,680 +probability out for any given sequence + +36 +00:01:09,680 --> 00:01:13,360 +that you're sort of interested in + +37 +00:01:10,680 --> 00:01:14,960 +evaluating right um and in particular M + +38 +00:01:13,360 --> 00:01:17,560 +gives you a probability distribution + +39 +00:01:14,960 --> 00:01:19,439 +over all tokens in its vocabulary to + +40 +00:01:17,560 --> 00:01:21,040 +predict like what token you would output + +41 +00:01:19,439 --> 00:01:24,840 +next right and so this is what this + +42 +00:01:21,040 --> 00:01:26,880 +equation says um given some input X and + +43 +00:01:24,840 --> 00:01:29,520 +everything that you've predicted so far + +44 +00:01:26,880 --> 00:01:32,399 +you get the probability of the next + +45 +00:01:29,520 --> 00:01:33,600 +token in YJ and if you multiply this out + +46 +00:01:32,399 --> 00:01:34,840 +over all the probabilities in your + +47 +00:01:33,600 --> 00:01:37,159 +sequence you can calculate the + +48 +00:01:34,840 --> 00:01:41,240 +probability of any output y given your + +49 +00:01:37,159 --> 00:01:42,640 +input X so what this like super fancy + +50 +00:01:41,240 --> 00:01:44,119 +model that you spend a lot of money to + +51 +00:01:42,640 --> 00:01:46,280 +train is really just a conditional + +52 +00:01:44,119 --> 00:01:47,920 +probability distribution um but this + +53 +00:01:46,280 --> 00:01:49,600 +turns out to be okay because you can use + +54 +00:01:47,920 --> 00:01:51,920 +a conditional probability distribution + +55 +00:01:49,600 --> 00:01:54,399 +to do sort of any task that we're really + +56 +00:01:51,920 
57
00:01:54,399 --> 00:01:58,680
+task right so by changing what you

58
00:01:56,719 --> 00:02:01,360
+consider your input X and your output y

59
00:01:58,680 --> 00:02:03,560
+to be you can get outputs from this

60
00:02:01,360 --> 00:02:06,479
+model for things like translation for

61
00:02:03,560 --> 00:02:08,720
+summarization for reasoning tasks um just

62
00:02:06,479 --> 00:02:10,520
+by sort of changing what you consider

63
00:02:08,720 --> 00:02:12,760
+your inputs and outputs in this

64
00:02:10,520 --> 00:02:14,239
+setting but there's sort of both good

65
00:02:12,760 --> 00:02:15,920
+and bad things about your model being a

66
00:02:14,239 --> 00:02:17,120
+probability distribution instead of just

67
00:02:15,920 --> 00:02:20,599
+an oracle that gives you sort of a

68
00:02:17,120 --> 00:02:22,080
+single answer for every input um one

69
00:02:20,599 --> 00:02:24,480
+kind of nice thing about this

70
00:02:22,080 --> 00:02:26,080
+distribution um is that you can get at

71
00:02:24,480 --> 00:02:27,720
+an idea of something like confidence

72
00:02:26,080 --> 00:02:30,120
+right if you give your model the input 2

73
00:02:27,720 --> 00:02:32,480
+plus 2 equals and almost all the

74
00:02:30,120 --> 00:02:34,200
+probability mass is on the token of four

75
00:02:32,480 --> 00:02:35,760
+you can say like the model predicts with

76
00:02:34,200 --> 00:02:38,319
+pretty high confidence that 2 plus 2

77
00:02:35,760 --> 00:02:39,480
+equals four um versus if you give it

78
00:02:38,319 --> 00:02:40,959
+something that's maybe a little more

79
00:02:39,480 --> 00:02:43,120
+open-ended like you ask it to predict

80
00:02:40,959 --> 00:02:44,640
+Graham's favorite color and you see this

81
00:02:43,120 --> 00:02:47,040
+distribution that's sort of a lot

82
00:02:44,640 --> 00:02:48,440
+flatter you know the most likely output

83
00:02:47,040 --> 00:02:49,720
+is green but maybe we don't have a lot

84
00:02:48,440 --> 00:02:51,560
+of confidence that that's the correct

85
00:02:49,720 --> 00:02:53,040
+answer um this is really closely tied

86
00:02:51,560 --> 00:02:55,200
+into the idea of calibration which you

87
00:02:53,040 --> 00:02:58,879
+guys talked about um I guess a couple of

88
00:02:55,200 --> 00:03:00,640
+classes ago now the flip side of this

89
00:02:58,879 --> 00:03:03,680
+though is that you know notice that for

90
00:03:00,640 --> 00:03:06,760
+this case like 2 plus 2 equals 4 not all of

91
00:03:03,680 --> 00:03:08,519
+the probability mass is on four um and

92
00:03:06,760 --> 00:03:09,720
+so models that are conditional

93
00:03:08,519 --> 00:03:11,560
+probability distributions can

94
00:03:09,720 --> 00:03:13,560
+hallucinate right um pretty much no

95
00:03:11,560 --> 00:03:15,799
+matter what you do there's going to be

96
00:03:13,560 --> 00:03:17,680
+some nonzero probability to some output

97
00:03:15,799 --> 00:03:19,920
+that's incorrect or

98
00:03:17,680 --> 00:03:21,239
+undesirable um in some cases maybe even

99
00:03:19,920 --> 00:03:23,760
+offensive something that you don't want

100
00:03:21,239 --> 00:03:25,280
+the model to output um and this is sort

101
00:03:23,760 --> 00:03:27,840
+of an artifact of the way these models

102
00:03:25,280 --> 00:03:29,280
+are trained and there's some great work

103
00:03:27,840
--> 00:03:31,400
+kind of more on the theory side here

104
00:03:29,280 --> 00:03:32,840
+that shows that this is actually true

105
00:03:31,400 --> 00:03:35,120
+even if everything in your input

106
00:03:32,840 --> 00:03:36,920
+training data is sort of correct and

107
00:03:35,120 --> 00:03:38,439
+factual and doesn't have any errors

108
00:03:36,920 --> 00:03:41,200
+you'll still wind up with a situation

109
00:03:38,439 --> 00:03:44,480
+where some nonzero probability mass is

110
00:03:41,200 --> 00:03:47,000
+on some outputs that are undesirable or

111
00:03:44,480 --> 00:03:50,120
+hallucinatory for sort of most inputs

112
00:03:47,000 --> 00:03:52,159
+that you care about evaluating so if we

113
00:03:50,120 --> 00:03:55,079
+have these issues how do we actually get

114
00:03:52,159 --> 00:03:56,519
+a good output out of the model um and to

115
00:03:55,079 --> 00:03:58,640
+do that we're first going to talk about

116
00:03:56,519 --> 00:04:00,079
+some sampling methods um but I want to

117
00:03:58,640 --> 00:04:01,879
+pause here in case there are any

118
00:04:00,079 --> 00:04:04,159
+questions on this idea of a model as a

119
00:04:01,879 --> 00:04:04,159
+conditional

120
00:04:05,040 --> 00:04:11,680
+distribution great so we can jump right

121
00:04:07,519 --> 00:04:13,560
+in so we have this model right we know

122
00:04:11,680 --> 00:04:15,959
+at each step at each token we might want

123
00:04:13,560 --> 00:04:17,919
+to decode the distribution of likelihood

124
00:04:15,959 --> 00:04:18,959
+over all vocabulary tokens right this

125
00:04:17,919 --> 00:04:21,680
+conditional distribution we've been

126
00:04:18,959 --> 00:04:24,240
+talking about um for the next time step

127
00:04:21,680 --> 00:04:26,400
+and what we want out of this is a good

128
00:04:24,240 --> 00:04:28,000
+output um for some definition of good

129
00:04:26,400 --> 00:04:30,919
+that we can sort of develop as we go

130
00:04:28,000 --> 00:04:32,479
+here so maybe the natural first thing to

131
00:04:30,919 --> 00:04:34,880
+try is we have a probability

132
00:04:32,479 --> 00:04:36,600
+distribution can we just sample from it

133
00:04:34,880 --> 00:04:39,600
+right and this is something called

134
00:04:36,600 --> 00:04:41,639
+ancestral sampling so at each time step

135
00:04:39,600 --> 00:04:43,560
+we're going to draw a token from this

136
00:04:41,639 --> 00:04:45,039
+distribution sort of according to its

137
00:04:43,560 --> 00:04:47,199
+relative probability right so if

138
00:04:45,039 --> 00:04:48,639
+something has twice as much probability

139
00:04:47,199 --> 00:04:51,280
+mass according to the model we'll draw

140
00:04:48,639 --> 00:04:54,000
+it twice as often um and we can sample

141
00:04:51,280 --> 00:04:55,560
+from this distribution at each time step

142
00:04:54,000 --> 00:04:58,080
+and this is sort of a

143
00:04:55,560 --> 00:05:00,199
+nice setup um we get exact samples from

144
00:04:58,080 --> 00:05:02,639
+the model distribution so using this

145
00:05:00,199 --> 00:05:04,479
+setup if you imagine like

146
00:05:02,639 --> 00:05:06,680
+drawing an almost infinite number of

147
00:05:04,479 --> 00:05:08,320
+samples like a ridiculously large number

148
00:05:06,680 --> 00:05:10,160
+and you look at their probabilities

149
00:05:08,320 --> 00:05:11,840
+you'd sort of
get something from this

150
00:05:10,160 --> 00:05:13,039
+distribution with exactly the

151
00:05:11,840 --> 00:05:15,720
+probability that the real model

152
00:05:13,039 --> 00:05:17,280
+distribution is giving you um so this is

153
00:05:15,720 --> 00:05:19,039
+great this gives us an exact sample from

154
00:05:17,280 --> 00:05:21,400
+the model this seems to be exactly what

155
00:05:19,039 --> 00:05:22,880
+we want um but you can guess probably by

156
00:05:21,400 --> 00:05:24,639
+the fact that we're only like 10 minutes

157
00:05:22,880 --> 00:05:27,000
+into class here this is not really the

158
00:05:24,639 --> 00:05:28,280
+end of the story um and there's actually

159
00:05:27,000 --> 00:05:30,800
+a couple of problems with sampling

160
00:05:28,280 --> 00:05:32,560
+directly from our model distribution
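A minimal sketch of ancestral sampling, assuming a hypothetical `model.next_token_distribution` helper that returns the conditional next-token distribution as a dict:

```python
import random

def ancestral_sample(model, prompt, max_len=100):
    """Draw one token at a time from p(next | prompt + output so far)."""
    output = []
    for _ in range(max_len):
        dist = model.next_token_distribution(prompt, output)
        tokens, probs = zip(*dist.items())
        # random.choices draws proportionally to the weights, so a token
        # with twice the probability mass is drawn twice as often
        token = random.choices(tokens, weights=probs, k=1)[0]
        if token == "<eos>":
            break
        output.append(token)
    return output
```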
161
00:05:30,800 --> 00:05:35,280
+the one that we're really going to focus

162
00:05:32,560 --> 00:05:37,919
+on first here is this idea of a long

163
00:05:35,280 --> 00:05:41,400
+tail so a model like Llama and maybe our

164
00:05:37,919 --> 00:05:43,639
+new model M um has 32,000 vocabulary

165
00:05:41,400 --> 00:05:46,280
+tokens and you can imagine maybe out of

166
00:05:43,639 --> 00:05:48,000
+those tokens there might be 1,000 or even

167
00:05:46,280 --> 00:05:49,720
+2,000 of those tokens that are sort of a

168
00:05:48,000 --> 00:05:51,919
+reasonable next thing to predict for a

169
00:05:49,720 --> 00:05:53,479
+really open-ended task right but there's

170
00:05:51,919 --> 00:05:55,440
+going to be all kinds of things in that

171
00:05:53,479 --> 00:05:57,039
+distribution um that are maybe like

172
00:05:55,440 --> 00:05:58,440
+punctuation there may be tokens that

173
00:05:57,039 --> 00:06:00,280
+won't actually lead to the correct

174
00:05:58,440 --> 00:06:01,840
+answer like there's a lot of things in

175
00:06:00,280 --> 00:06:04,560
+this distribution that would be all

176
00:06:01,840 --> 00:06:06,160
+really low likelihood and this is fine

177
00:06:04,560 --> 00:06:08,759
+these things just get low probability

178
00:06:06,160 --> 00:06:11,039
+mass but the problem is if you give sort

179
00:06:08,759 --> 00:06:13,639
+of a small amount of probability mass to

180
00:06:11,039 --> 00:06:16,599
+30,000 different things that mass will

181
00:06:13,639 --> 00:06:19,360
+add up pretty quickly um and to see this

182
00:06:16,599 --> 00:06:20,360
+we have sort of this illustration here

183
00:06:19,360 --> 00:06:21,560
+um I don't know if you can see the

184
00:06:20,360 --> 00:06:23,280
+difference between the green and the

185
00:06:21,560 --> 00:06:25,720
+yellow but I've also drawn a little bar

186
00:06:23,280 --> 00:06:27,800
+between them this is a really long-tailed

187
00:06:25,720 --> 00:06:29,720
+distribution and the green part of the

188
00:06:27,800 --> 00:06:31,960
+distribution which is a lot of tokens

189
00:06:29,720 --> 00:06:34,000
+with high likelihood has 50% of the

190
00:06:31,960 --> 00:06:35,560
+total probability the yellow part which

191
00:06:34,000 --> 00:06:37,360
+is a lot of things that are all

192
00:06:35,560 --> 00:06:40,280
+individually not super likely is the

193
00:06:37,360 --> 00:06:41,720
+other 50% of the probability and so what

194
00:06:40,280 --> 00:06:44,360
+that means is if you're doing something

195
00:06:41,720 --> 00:06:46,120
+like ancestral sampling 50% of the time

196
00:06:44,360 --> 00:06:49,160
+you'll be sampling something really

197
00:06:46,120 --> 00:06:51,520
+unlikely from this long tail um that

198
00:06:49,160 --> 00:06:53,759
+seems sort of not like what we want

199
00:06:51,520 --> 00:06:56,080
+right um so is there anything we can do

200
00:06:53,759 --> 00:06:58,080
+about this and the obvious first solution

201
00:06:56,080 --> 00:06:59,400
+here is can we just cut off that tail

202
00:06:58,080 --> 00:07:01,680
+like if we know these tokens are not

203
00:06:59,400 --> 00:07:03,039
+super likely can we just ignore them and

204
00:07:01,680 --> 00:07:05,039
+there's a couple of different ways to do

205
00:07:03,039 --> 00:07:07,919
+that um the first of these is something

206
00:07:05,039 --> 00:07:10,080
+called top-k sampling where we say okay

207
00:07:07,919 --> 00:07:12,479
+you know maybe we think there are 10

208
00:07:10,080 --> 00:07:14,000
+reasonable like outputs right maybe

209
00:07:12,479 --> 00:07:17,280
+we'll just sample from the 10 most

210
00:07:14,000 --> 00:07:19,759
+probable tokens um here maybe we say if

211
00:07:17,280 --> 00:07:21,479
+we want to do top-6 sampling we'll

212
00:07:19,759 --> 00:07:23,919
+sample from just the six most probable

213
00:07:21,479 --> 00:07:26,240
+tokens and so in this example you can

214
00:07:23,919 --> 00:07:27,680
+see we originally had 10 tokens and

215
00:07:26,240 --> 00:07:30,560
+we're going to sample from just the blue

216
00:07:27,680 --> 00:07:32,919
+ones just the six most likely tokens

217
00:07:30,560 --> 00:07:34,360
+um in this example this distribution is

218
00:07:32,919 --> 00:07:37,280
+pretty flat there's a lot of things that

219
00:07:34,360 --> 00:07:40,120
+are like kind of likely right so that

220
00:07:37,280 --> 00:07:43,000
+those six tokens are only 68% of the

221
00:07:40,120 --> 00:07:45,360
+total probability mass um if we go like

222
00:07:43,000 --> 00:07:47,240
+one time step further here we might have

223
00:07:45,360 --> 00:07:49,360
+a distribution that's a lot peakier most

224
00:07:47,240 --> 00:07:51,759
+of the mass is on just a single token

225
00:07:49,360 --> 00:07:53,919
+and so sampling from just the top six

226
00:07:51,759 --> 00:07:56,400
+tokens actually captures 99% of the

227
00:07:53,919 --> 00:07:58,360
+probability mass maybe we say that seems

228
00:07:56,400 --> 00:08:01,199
+a little excessive right we don't really

229
00:07:58,360 --> 00:08:03,400
+need um maybe all of these tokens that

230
00:08:01,199 --> 00:08:05,479
+are all kind of low probability maybe we

231
00:08:03,400 --> 00:08:07,000
+just want to sort of sample from the top

232
00:08:05,479 --> 00:08:08,080
+half of our distribution or something or

233
00:08:07,000 --> 00:08:10,840
+the top

234
00:08:08,080 --> 00:08:12,919
+90% um so instead of choosing a top

235
00:08:10,840 --> 00:08:15,560
+number of tokens to sample from you

236
00:08:12,919 --> 00:08:17,400
+could choose a top amount of probability

237
00:08:15,560 --> 00:08:20,000
+and this is something called top-p or

238
00:08:17,400 --> 00:08:21,520
+nucleus sampling so p here is the amount

239
00:08:20,000 --> 00:08:24,039
+of probability from your distribution

240
00:08:21,520 --> 00:08:26,639
+you want to consider so if you decide

241
00:08:24,039 --> 00:08:29,280
+your p is
about like 94% of the

242
00:08:26,639 --> 00:08:31,639
+probability mass in this first

243
00:08:29,280 --> 00:08:33,719
+example here would choose almost all of

244
00:08:31,639 --> 00:08:35,440
+the tokens you keep adding tokens in

245
00:08:33,719 --> 00:08:37,159
+until you reach an amount of total

246
00:08:35,440 --> 00:08:39,479
+probability that's about

247
00:08:37,159 --> 00:08:40,880
+0.94 but then when you get to the second

248
00:08:39,479 --> 00:08:43,240
+step where you have a couple of really

249
00:08:40,880 --> 00:08:45,959
+highly probable tokens you'd only need a

250
00:08:43,240 --> 00:08:47,959
+couple of tokens to add up to 0.94 or

251
00:08:45,959 --> 00:08:50,320
+even higher than 0.94 and so you would

252
00:08:47,959 --> 00:08:52,200
+just sample from a smaller set of tokens

253
00:08:50,320 --> 00:08:54,600
+so in top-k sampling the total amount of

254
00:08:52,200 --> 00:08:56,560
+probability you're sampling from can move

255
00:08:54,600 --> 00:08:58,120
+around in top-p sampling the total

256
00:08:56,560 --> 00:08:59,839
+number of tokens you're sampling from

257
00:08:58,120 --> 00:09:01,959
+might change

258
00:08:59,839 --> 00:09:04,760
+um but maybe we sort of don't want to

259
00:09:01,959 --> 00:09:07,279
+impose a strong constraint like we want

260
00:09:04,760 --> 00:09:09,279
+like 94% here maybe just what we really

261
00:09:07,279 --> 00:09:11,040
+care about is saying that we're not

262
00:09:09,279 --> 00:09:14,000
+going to sample anything that's really

263
00:09:11,040 --> 00:09:16,800
+really unlikely right another way of

264
00:09:14,000 --> 00:09:18,560
+doing this is called epsilon sampling

265
00:09:16,800 --> 00:09:20,519
+where we just sample tokens that have at

266
00:09:18,560 --> 00:09:22,920
+least some minimum amount of probability

267
00:09:20,519 --> 00:09:24,720
+to them right so maybe we just want

268
00:09:22,920 --> 00:09:29,519
+tokens that have probability of at least

269
00:09:24,720 --> 00:09:31,240
+0.05 here um in this first um example

270
00:09:29,519 --> 00:09:32,640
+everything has at least some reasonable

271
00:09:31,240 --> 00:09:34,240
+amount of probability so we're actually

272
00:09:32,640 --> 00:09:36,240
+going to sample from our full

273
00:09:34,240 --> 00:09:37,720
+distribution and then in the second

274
00:09:36,240 --> 00:09:39,279
+example when we have a lot of things

275
00:09:37,720 --> 00:09:41,160
+that are really unlikely we'll only

276
00:09:39,279 --> 00:09:43,800
+sample from sort of the more likely part

277
00:09:41,160 --> 00:09:45,240
+of the distribution um so all three of

278
00:09:43,800 --> 00:09:47,000
+these methods are sort of different ways

279
00:09:45,240 --> 00:09:49,399
+of trying to cut off the long tail using

280
00:09:47,000 --> 00:09:51,480
+sort of different
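A sketch of the three tail-cutting rules just described (top-k, top-p/nucleus, and epsilon sampling) on a toy distribution; all numbers are invented for illustration:

```python
import numpy as np

# Toy next-token distribution.
probs = np.array([0.30, 0.22, 0.18, 0.12, 0.08, 0.05, 0.03, 0.02])

def renormalize(mask, probs):
    """Zero out the cut tokens and rescale the rest to sum to one."""
    filtered = np.where(mask, probs, 0.0)
    return filtered / filtered.sum()

def top_k(probs, k=6):
    # keep only the k highest-probability tokens
    threshold = np.sort(probs)[::-1][k - 1]
    return renormalize(probs >= threshold, probs)

def top_p(probs, p=0.94):
    # keep the smallest set of most-probable tokens whose cumulative
    # mass reaches p
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    mask = np.zeros_like(probs, dtype=bool)
    mask[order[:cutoff]] = True
    return renormalize(mask, probs)

def epsilon_filter(probs, eps=0.05):
    # keep every token with at least eps probability on its own
    return renormalize(probs >= eps, probs)
```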
281
00:09:49,399 --> 00:09:53,000
+characteristics the tail of the

282
00:09:51,480 --> 00:09:55,680
+distribution though isn't the only thing

283
00:09:53,000 --> 00:09:58,000
+we could choose to modify um we could

284
00:09:55,680 --> 00:09:59,880
+also choose to modify this sort of

285
00:09:58,000 --> 00:10:02,120
+peakiness of the distribution

286
00:09:59,880 --> 00:10:03,880
+so if you look here at the middle of

287
00:10:02,120 --> 00:10:06,600
+these diagrams say this is your original

288
00:10:03,880 --> 00:10:08,519
+distribution over next tokens and maybe

289
00:10:06,600 --> 00:10:11,040
+you want to modify some properties of

290
00:10:08,519 --> 00:10:12,640
+this distribution like you say I want an

291
00:10:11,040 --> 00:10:14,200
+output that's really diverse and

292
00:10:12,640 --> 00:10:15,680
+interesting and open-ended like maybe

293
00:10:14,200 --> 00:10:17,920
+this is something like story generation

294
00:10:15,680 --> 00:10:20,120
+where you want to have sort of a lot of

295
00:10:17,920 --> 00:10:21,279
+maybe surprising things in your output

296
00:10:20,120 --> 00:10:23,480
+you could say I want to sort of

297
00:10:21,279 --> 00:10:26,440
+distribute my probability mass more over

298
00:10:23,480 --> 00:10:28,399
+the token space and you can do this um

299
00:10:26,440 --> 00:10:32,720
+by sort of flattening this distribution

300
00:10:28,399 --> 00:10:34,240
+like you see on the right here um

301
00:10:32,720 --> 00:10:36,800
+where now there's sort of more

302
00:10:34,240 --> 00:10:39,040
+probability mass spread over this um

303
00:10:36,800 --> 00:10:40,320
+like wider set of tokens you could also

304
00:10:39,040 --> 00:10:42,720
+say the opposite right you could say

305
00:10:40,320 --> 00:10:44,120
+maybe I'm doing something like math

306
00:10:42,720 --> 00:10:45,519
+where there shouldn't really be a lot of

307
00:10:44,120 --> 00:10:47,800
+correct answers there should be really

308
00:10:45,519 --> 00:10:50,399
+only one or maybe only like a few

309
00:10:47,800 --> 00:10:52,320
+potential reasonable next answers and so

310
00:10:50,399 --> 00:10:54,160
+you can make your distribution peakier or

311
00:10:52,320 --> 00:10:56,639
+sharper so that more of the probability

312
00:10:54,160 --> 00:11:00,200
+mass is on the things at the very top um

313
00:10:56,639 --> 00:11:02,000
+the way you do this is you modify your

314
00:11:00,200 --> 00:11:04,320
+logits your outputs of the last layer of

315
00:11:02,000 --> 00:11:06,399
+the model before you apply softmax so when

316
00:11:04,320 --> 00:11:08,360
+you're predicting you get your outputs

317
00:11:06,399 --> 00:11:10,040
+of the last layer of the model and then

318
00:11:08,360 --> 00:11:11,560
+you apply softmax which turns those

319
00:11:10,040 --> 00:11:15,240
+outputs into a distribution right they

320
00:11:11,560 --> 00:11:17,399
+all sum up the mass over all

321
00:11:15,240 --> 00:11:18,839
+vocabulary tokens sums to one and so

322
00:11:17,399 --> 00:11:21,920
+that is sort of a distribution you could

323
00:11:18,839 --> 00:11:23,519
+sample from if you divide those logits

324
00:11:21,920 --> 00:11:26,000
+by some number before you apply that

325
00:11:23,519 --> 00:11:27,880
+softmax you can make that distribution

326
00:11:26,000 --> 00:11:30,760
+flatter by using a number greater than

327
00:11:27,880 --> 00:11:32,440
+one or peakier by using a number less than

328
00:11:30,760 --> 00:11:35,079
+one and this type of parameter

329
00:11:32,440 --> 00:11:36,839
+is called temperature um you can apply

330
00:11:35,079 --> 00:11:38,480
+this with any of the other methods for

331
00:11:36,839 --> 00:11:40,279
+sort of cutting off the long tail but

332
00:11:38,480 --> 00:11:41,920
+what people will often do is just apply

333
00:11:40,279 --> 00:11:43,639
+a temperature and then sample from that

334
00:11:41,920 --> 00:11:45,320
+distribution and that's what we call

335
00:11:43,639 --> 00:11:48,720
+temperature

336
00:11:45,320 --> 00:11:49,920
+sampling so these I think most of you
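A sketch of the temperature trick just described, with toy logits:

```python
import numpy as np

logits = np.array([4.0, 2.5, 2.0, 0.5])  # toy pre-softmax outputs

def temperature_softmax(logits, T=1.0):
    # dividing the logits by T > 1 flattens the distribution,
    # T < 1 makes it peakier; T = 1 recovers the plain softmax
    z = logits / T
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

print(temperature_softmax(logits, T=1.0))  # original peakiness
print(temperature_softmax(logits, T=2.0))  # flatter, more diverse
print(temperature_softmax(logits, T=0.5))  # sharper, more deterministic
```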
337
00:11:48,720 --> 00:11:51,320
+might already have been at least a

338
00:11:49,920 --> 00:11:53,000
+little bit familiar with some of these

339
00:11:51,320 --> 00:11:56,079
+methods I want to touch briefly on a

340
00:11:53,000 --> 00:11:58,160
+couple of other ideas for modifying this

341
00:11:56,079 --> 00:11:59,680
+distribution maybe some more complex and

342
00:11:58,160 --> 00:12:01,839
+more recent ideas and the one that I

343
00:11:59,680 --> 00:12:04,279
+want to talk about in more detail is

344
00:12:01,839 --> 00:12:05,399
+something called contrastive decoding so

345
00:12:04,279 --> 00:12:07,360
+the idea here is that we could

346
00:12:05,399 --> 00:12:10,800
+incorporate some extra information at

347
00:12:07,360 --> 00:12:12,760
+decoding time um using some other

348
00:12:10,800 --> 00:12:15,320
+distribution some other data or in this

349
00:12:12,760 --> 00:12:17,320
+case some other model so if you've ever

350
00:12:15,320 --> 00:12:19,240
+played around with a really like

351
00:12:17,320 --> 00:12:21,800
+relatively small language model maybe

352
00:12:19,240 --> 00:12:23,320
+something like GPT-2 small um you

353
00:12:21,800 --> 00:12:26,560
+probably noticed you try to give it some

354
00:12:23,320 --> 00:12:28,240
+inputs and maybe it degenerates into

355
00:12:26,560 --> 00:12:30,160
+just repeating the same sequence over

356
00:12:28,240 --> 00:12:31,720
+and over maybe it gives you outputs that

357
00:12:30,160 --> 00:12:33,399
+are just completely incorrect like you

358
00:12:31,720 --> 00:12:35,320
+ask it a factual question and it gets it

359
00:12:33,399 --> 00:12:37,120
+wrong um and you don't see those

360
00:12:35,320 --> 00:12:39,519
+problems if you look at sort of a larger

361
00:12:37,120 --> 00:12:41,399
+model that's trained on more data so the

362
00:12:39,519 --> 00:12:43,199
+question here is can you use what that

363
00:12:41,399 --> 00:12:46,480
+smaller model is getting wrong to make

364
00:12:43,199 --> 00:12:49,120
+your larger model even better um and the

365
00:12:46,480 --> 00:12:51,360
+way we do this is by sort of the

366
00:12:49,120 --> 00:12:52,880
+intuition that if the smaller model

367
00:12:51,360 --> 00:12:55,079
+doesn't have a lot of probability on

368
00:12:52,880 --> 00:12:57,160
+some answer but the larger model

369
00:12:55,079 --> 00:12:58,519
+does it's likely because that larger

370
00:12:57,160 --> 00:13:02,279
+model has learned something that the

371
00:12:58,519 --> 00:13:04,000
+smaller model didn't know and so here we

372
00:13:02,279 --> 00:13:06,199
+modify the probability distribution

373
00:13:04,000 --> 00:13:08,199
+coming out of the larger model to choose

374
00:13:06,199 --> 00:13:11,120
+outputs that that model thinks are very

375
00:13:08,199 --> 00:13:12,600
+likely and the amateur or the weaker

376
00:13:11,120 --> 00:13:15,480
+model thinks are not

377
00:13:12,600 --> 00:13:20,000
+likely so in this example here from

378
00:13:15,480 --> 00:13:22,560
+their paper um if you have sort of an

379
00:13:20,000 --> 00:13:27,199
+input like Barack Obama was born in

380
00:13:22,560 --> 00:13:29,720
+Hawaii he was born in L um the smaller
+381
00:13:27,199 --> 00:13:31,360
+model would often do something like

382
00:13:29,720 --> 00:13:35,399
+start repeating and actually if you

383
00:13:31,360 --> 00:13:36,720
+sample sort of naively from the um

384
00:13:35,399 --> 00:13:38,560
+larger model you can wind up in these

385
00:13:36,720 --> 00:13:40,000
+situations as well right so if you just

386
00:13:38,560 --> 00:13:41,959
+choose the most likely thing at each

387
00:13:40,000 --> 00:13:43,399
+step you wind up in this loop where it's

388
00:13:41,959 --> 00:13:45,560
+like he was born in Hawaii he was born

389
00:13:43,399 --> 00:13:48,199
+in Hawaii he was born in Hawaii um and

390
00:13:45,560 --> 00:13:51,320
+this is behavior we generally don't want

391
00:13:48,199 --> 00:13:52,680
+um if you do something like nucleus or

392
00:13:51,320 --> 00:13:53,720
+top-p sampling you can wind up with

393
00:13:52,680 --> 00:13:55,880
+things that are actually completely

394
00:13:53,720 --> 00:13:58,839
+incorrect like he was born in Washington

395
00:13:55,880 --> 00:14:01,480
+DC um but if you use contrastive

396
00:13:58,839 --> 00:14:04,120
+decoding you take the outputs coming out

397
00:14:01,480 --> 00:14:05,720
+of your expert model here and you

398
00:14:04,120 --> 00:14:07,680
+subtract out the probabilities coming

399
00:14:05,720 --> 00:14:10,160
+out of the weaker model and you can wind

400
00:14:07,680 --> 00:14:11,880
+up with things that the higher model the

401
00:14:10,160 --> 00:14:13,759
+stronger model ascribed probability to

402
00:14:11,880 --> 00:14:15,480
+but the weaker model did not likely

403
00:14:13,759 --> 00:14:16,920
+because these are sort of facts that the

404
00:14:15,480 --> 00:14:18,959
+larger model knows that the smaller

405
00:14:16,920 --> 00:14:20,800
+model does not so here we actually get

406
00:14:18,959 --> 00:14:23,199
+the year Barack Obama was born which is

407
00:14:20,800 --> 00:14:25,800
+maybe a fact that the larger model knows

408
00:14:23,199 --> 00:14:27,639
+and the smaller model didn't know um and

409
00:14:25,800 --> 00:14:29,759
+so this is just one of sort of a broad

410
00:14:27,639 --> 00:14:32,560
+class of methods where you use external

411
00:14:29,759 --> 00:14:35,199
+information to improve your decoding by

412
00:14:32,560 --> 00:14:38,720
+modifying this distribution at each

413
00:14:35,199 --> 00:14:40,720
+step um those are sort of a brief tour of

414
00:14:38,720 --> 00:14:43,920
+a couple of different sampling methods

415
00:14:40,720 --> 00:14:43,920
+before we move into search
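A rough sketch of the contrastive decoding idea: score tokens by expert minus amateur log-probabilities, restricted to tokens the expert already finds plausible. The plausibility cutoff `alpha` follows the general recipe of the contrastive decoding paper, not its exact implementation; the numbers below are invented:

```python
import numpy as np

def contrastive_scores(expert_probs, amateur_probs, alpha=0.1):
    expert_probs = np.asarray(expert_probs)
    amateur_probs = np.asarray(amateur_probs)
    # only consider tokens within alpha * max of the expert's distribution
    plausible = expert_probs >= alpha * expert_probs.max()
    return np.where(
        plausible,
        np.log(expert_probs) - np.log(amateur_probs),
        -np.inf,  # never pick tokens the expert finds implausible
    )  # decode greedily (or with beam search) over these scores

# A token both models love ("Hawaii") loses to one only the expert
# knows ("1961"):
expert = [0.55, 0.40, 0.05]    # p over ["Hawaii", "1961", "banana"]
amateur = [0.80, 0.05, 0.15]
print(contrastive_scores(expert, amateur).argmax())  # -> 1 ("1961")
```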
416
00:14:44,600 --> 00:14:50,440
+yeah

417
00:14:46,279 --> 00:14:54,880
+yeah is it going to improve upon just

418
00:14:50,440 --> 00:14:57,240
+the yeah it generally does um and the

419
00:14:54,880 --> 00:14:59,800
+intuition for why this might be I think

420
00:14:57,240 --> 00:15:01,680
+is that there are sort of these

421
00:14:59,800 --> 00:15:04,560
+degenerate cases like just repeating

422
00:15:01,680 --> 00:15:06,120
+over and over that both the expert and

423
00:15:04,560 --> 00:15:09,000
+the weak model would give relatively

424
00:15:06,120 --> 00:15:10,880
+high probability to um maybe the expert

425
00:15:09,000 --> 00:15:13,199
+model is like slightly less likely to do

426
00:15:10,880 --> 00:15:14,959
+these things but it's still like sort of

427
00:15:13,199 --> 00:15:16,639
+an easy case for the model to learn and

428
00:15:14,959 --> 00:15:18,120
+so both of those models will have high

429
00:15:16,639 --> 00:15:20,079
+probability for those things but the

430
00:15:18,120 --> 00:15:21,800
+things that are genuinely like good

431
00:15:20,079 --> 00:15:23,880
+outputs that only the expert would get

432
00:15:21,800 --> 00:15:25,519
+right those will have low probability

433
00:15:23,880 --> 00:15:27,600
+under the weak model and so you're sort

434
00:15:25,519 --> 00:15:30,880
+of subtracting out all the degenerate

435
00:15:27,600 --> 00:15:33,759
+behaviors and keeping the really good

436
00:15:30,880 --> 00:15:35,240
+outputs if you're generating a longer

437
00:15:33,759 --> 00:15:37,440
+sequence with

438
00:15:35,240 --> 00:15:40,759
+contrastive decoding how do you know which steps

439
00:15:37,440 --> 00:15:45,120
+you want to bring out yeah this is a

440
00:15:40,759 --> 00:15:48,560
+great question so for this particular

441
00:15:45,120 --> 00:15:50,560
+case oh yeah sorry so this was if you're

442
00:15:48,560 --> 00:15:52,279
+doing contrastive decoding over a really

443
00:15:50,560 --> 00:15:54,399
+long sequence like when do you choose to

444
00:15:52,279 --> 00:15:55,800
+bring in the expert right and for

445
00:15:54,399 --> 00:15:58,600
+contrastive decoding we're actually

446
00:15:55,800 --> 00:16:00,759
+going to do this at every individual

447
00:15:58,600 --> 00:16:02,440
+time step so we're going to use the

448
00:16:00,759 --> 00:16:04,800
+expert model to decode and we're going

449
00:16:02,440 --> 00:16:07,000
+to bring in the amateur to sort of

450
00:16:04,800 --> 00:16:09,079
+subtract out probabilities at each next

451
00:16:07,000 --> 00:16:10,399
+token prediction um you don't have to do

452
00:16:09,079 --> 00:16:12,800
+that I think that's that's what they do

453
00:16:10,399 --> 00:16:15,000
+in the paper um you could also decide to

454
00:16:12,800 --> 00:16:16,680
+only do this sort of if you have high

455
00:16:15,000 --> 00:16:19,639
+uncertainty or something if you don't

456
00:16:16,680 --> 00:16:22,639
+have a really sharp probability

457
00:16:19,639 --> 00:16:22,639
+distribution

458
00:16:23,160 --> 00:16:28,160
+yeah yeah how weak should the weak

459
00:16:25,399 --> 00:16:30,199
+predictor be um in the paper what

460
00:16:28,160 --> 00:16:31,600
+they look at is actually not a huge

461
00:16:30,199 --> 00:16:34,560
+difference between the two models so you

462
00:16:31,600 --> 00:16:35,800
+can see here this is GPT-2 XL and small

463
00:16:34,560 --> 00:16:37,319
+so there's a difference in parameter

464
00:16:35,800 --> 00:16:39,519
+counts and like a bit of a difference in

465
00:16:37,319 --> 00:16:42,160
+data I think here but these are actually

466
00:16:39,519 --> 00:16:44,959
+not like GPT-2 XL is certainly not like a

467
00:16:42,160 --> 00:16:48,399
+super strong model now um I think they

468
00:16:44,959 --> 00:16:50,920
+try a couple of different settings and

469
00:16:48,399 --> 00:16:52,319
+the general intuition I think if I'm

470
00:16:50,920 --> 00:16:54,880
+remembering it correctly is that you

471
00:16:52,319 --> 00:16:56,319
+want a model that's not like so close in

472
00:16:54,880 --> 00:16:58,000
+performance to your expert that you're

473
00:16:56,319 --> 00:16:59,839
+basically just
subtracting out useful

474
00:16:58,000 --> 00:17:02,240
+things but you also don't want a model

475
00:16:59,839 --> 00:17:03,519
+that's like so degenerate that it

476
00:17:02,240 --> 00:17:04,959
+hasn't learned anything useful about

477
00:17:03,519 --> 00:17:06,839
+your task at all so I think it might

478
00:17:04,959 --> 00:17:09,600
+depend on what task you're looking

479
00:17:06,839 --> 00:17:12,919
+at

480
00:17:09,600 --> 00:17:14,559
+yes this is for inference um so actually

481
00:17:12,919 --> 00:17:17,640
+everything we look at today will not

482
00:17:14,559 --> 00:17:17,640
+require any tuning of the

483
00:17:19,360 --> 00:17:26,559
+model okay cool so now we're going to

484
00:17:24,000 --> 00:17:30,039
+step into sort of a slightly different

485
00:17:26,559 --> 00:17:31,280
+um set of strategies here which is maybe

486
00:17:30,039 --> 00:17:33,039
+we don't just want something from the

487
00:17:31,280 --> 00:17:35,160
+model distribution or something from a

488
00:17:33,039 --> 00:17:37,760
+modified distribution maybe we actually

489
00:17:35,160 --> 00:17:39,840
+just want the quote unquote best thing

490
00:17:37,760 --> 00:17:42,960
+the single most likely output given our

491
00:17:39,840 --> 00:17:45,200
+input right and here this would be the y

492
00:17:42,960 --> 00:17:48,039
+hat the single sequence that

493
00:17:45,200 --> 00:17:51,919
+has the highest score p of y given x

494
00:17:48,039 --> 00:17:54,240
+for the x that we gave the model um this

495
00:17:51,919 --> 00:17:56,000
+section is called mode-seeking

496
00:17:54,240 --> 00:17:58,039
+search because this is the mode of the

497
00:17:56,000 --> 00:18:00,440
+distribution over outputs if you sampled

498
00:17:58,039 --> 00:18:01,760
+a huge huge number of times and you

499
00:18:00,440 --> 00:18:04,720
+looked at the single most likely

500
00:18:01,760 --> 00:18:06,720
+sequence you got it would be this y hat

501
00:18:04,720 --> 00:18:09,280
+and so how do we find this

502
00:18:06,720 --> 00:18:11,600
+thing well one idea is we know the

503
00:18:09,280 --> 00:18:13,159
+distribution at each individual step

504
00:18:11,600 --> 00:18:16,000
+can we just pick the most likely thing

505
00:18:13,159 --> 00:18:18,960
+from that distribution and so in greedy

506
00:18:16,000 --> 00:18:21,080
+decoding we take the argmax the single

507
00:18:18,960 --> 00:18:22,720
+highest probability token at each step

508
00:18:21,080 --> 00:18:24,840
+and we continue generating until the

509
00:18:22,720 --> 00:18:26,600
+single highest

510
00:18:24,840 --> 00:18:28,840
+probability token is the stop token

511
00:18:26,600 --> 00:18:31,559
+right the end of sequence token
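A minimal sketch of greedy decoding, reusing the same hypothetical `model.next_token_distribution` interface as the sampling sketch above:

```python
def greedy_decode(model, prompt, max_len=100):
    output = []
    for _ in range(max_len):
        dist = model.next_token_distribution(prompt, output)
        # argmax over the next-token distribution at every step
        token = max(dist, key=dist.get)
        if token == "<eos>":   # stop once end-of-sequence is most likely
            break
        output.append(token)
    return output
```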
512
00:18:28,840 --> 00:18:33,400
+um for an individual token right if we

513
00:18:31,559 --> 00:18:35,559
+only want a single token output this is

514
00:18:33,400 --> 00:18:38,320
+exactly what we want this is the single

515
00:18:35,559 --> 00:18:40,400
+most likely output um and that's great

516
00:18:38,320 --> 00:18:44,000
+but if we're looking at something that

517
00:18:40,400 --> 00:18:45,120
+is maybe several tokens long are we

518
00:18:44,000 --> 00:18:47,360
+actually going to get the highest

519
00:18:45,120 --> 00:18:49,720
+probability thing and if you kind of

520
00:18:47,360 --> 00:18:52,159
+squint at this you can see that maybe we

521
00:18:49,720 --> 00:18:54,120
+have a problem here where the highest

522
00:18:52,159 --> 00:18:56,320
+probability sequence that you get from

523
00:18:54,120 --> 00:18:58,039
+multiplying across multiple steps

524
00:18:56,320 --> 00:18:59,559
+doesn't necessarily start with the token

525
00:18:58,039 --> 00:19:01,600
+that was highest probability at time

526
00:18:59,559 --> 00:19:03,200
+step one right maybe if you're doing

527
00:19:01,600 --> 00:19:04,720
+something like unconditional generation

528
00:19:03,200 --> 00:19:06,720
+the highest probability token at time

529
00:19:04,720 --> 00:19:08,360
+step one is always 'the' but there could

530
00:19:06,720 --> 00:19:09,919
+be a really probable sentence that just

531
00:19:08,360 --> 00:19:11,480
+doesn't happen to start with the

532
00:19:09,919 --> 00:19:12,720
+word 'the' and you would never find it

533
00:19:11,480 --> 00:19:15,080
+using greedy

534
00:19:12,720 --> 00:19:17,360
+decoding so this isn't going to give us

535
00:19:15,080 --> 00:19:19,799
+the highest probability output over a

536
00:19:17,360 --> 00:19:22,000
+sequence that's more than one token long

537
00:19:19,799 --> 00:19:23,360
+can we do anything better to try to find

538
00:19:22,000 --> 00:19:25,640
+this um

539
00:19:23,360 --> 00:19:27,559
+output and here we get into sort of one

540
00:19:25,640 --> 00:19:29,520
+of the most popular decoding methods the

541
00:19:27,559 --> 00:19:32,600
+one that you maybe heard of before which

542
00:19:29,520 --> 00:19:35,080
+is beam search the idea here is that we

543
00:19:32,600 --> 00:19:36,559
+don't want to miss a high probability

544
00:19:35,080 --> 00:19:38,880
+token that's hidden behind a lower

545
00:19:36,559 --> 00:19:40,200
+probability prefix so we want to kind of

546
00:19:38,880 --> 00:19:42,000
+search through a couple of different

547
00:19:40,200 --> 00:19:43,760
+options so that we don't discard

548
00:19:42,000 --> 00:19:47,120
+something too early that might have high

549
00:19:43,760 --> 00:19:49,360
+probability um later on in generation

550
00:19:47,120 --> 00:19:50,919
+and this is a type of breadth-first search

551
00:19:49,360 --> 00:19:53,200
+so we're going to look at a wide variety

552
00:19:50,919 --> 00:19:54,600
+of options at a given time step we're

553
00:19:53,200 --> 00:19:55,600
+going to pick some set of them to

554
00:19:54,600 --> 00:19:57,120
+continue and then we're going to look at

555
00:19:55,600 --> 00:19:58,919
+a wide variety of options for the next

556
00:19:57,120 --> 00:19:59,960
+time step instead of generating all the

557
00:19:58,919 --> 00:20:02,200
+way through a sequence and then

558
00:19:59,960 --> 00:20:04,320
+generating all the way through another

559
00:20:02,200 --> 00:20:05,760
+sequence um and how this works is we're

560
00:20:04,320 --> 00:20:07,559
+going to pick sort of a number of

561
00:20:05,760 --> 00:20:09,400
+candidates we'd like to explore a beam

562
00:20:07,559 --> 00:20:11,039
+width so in this example we're going to

563
00:20:09,400 --> 00:20:12,799
+pick three and we're going to say all

564
00:20:11,039 --> 00:20:15,480
+right here are maybe three options for

565
00:20:12,799 --> 00:20:17,640
+time step one and if we pick each of

566
00:20:15,480 --> 00:20:19,760
+those three options what would be the

567
+00:20:17,640 --> 00:20:21,799
+three most likely things for time step

568
00:20:19,760 --> 00:20:23,200
+two right rather than choosing just the

569
00:20:21,799 --> 00:20:24,520
+single most likely thing in greedy

570
00:20:23,200 --> 00:20:26,960
+decoding we're going to pick three

571
00:20:24,520 --> 00:20:29,120
+options and so now we have three options

572
00:20:26,960 --> 00:20:32,559
+for time step one three options for time

573
00:20:29,120 --> 00:20:34,280
+step two we now have nine options um

574
00:20:32,559 --> 00:20:36,320
+here right three options and then three

575
00:20:34,280 --> 00:20:37,679
+more for each of these and we don't want

576
00:20:36,320 --> 00:20:40,159
+to continue doing this because this is

577
00:20:37,679 --> 00:20:41,960
+going to sort of combinatorially explode so

578
00:20:40,159 --> 00:20:44,080
+we need to choose some subset of these

579
00:20:41,960 --> 00:20:45,880
+to continue with and the way we do that

580
00:20:44,080 --> 00:20:47,799
+is we look at the probability over this

581
00:20:45,880 --> 00:20:49,240
+two token sequence and we choose the two

582
00:20:47,799 --> 00:20:51,520
+that have the highest probability

583
00:20:49,240 --> 00:20:53,400
+overall so in this instance we've chosen

584
00:20:51,520 --> 00:20:55,679
+sort of one thing from this first group

585
00:20:53,400 --> 00:20:57,760
+and two things from the second group and

586
00:20:55,679 --> 00:20:59,760
+now we're back down to three hypotheses

587
00:20:57,760 --> 00:21:02,120
+each now two tokens long and we'll

588
00:20:59,760 --> 00:21:04,000
+continue generating to time step three

589
00:21:02,120 --> 00:21:05,600
+we'll get nine options we'll prune it back

590
00:21:04,000 --> 00:21:07,760
+down to three and we'll continue until

591
00:21:05,600 --> 00:21:09,159
+the end of generation where we now have

592
00:21:07,760 --> 00:21:10,679
+three sequences and we'll just pick the

593
00:21:09,159 --> 00:21:14,000
+one that's highest probability out of

594
00:21:10,679 --> 00:21:15,679
+those three to return um this is not

595
00:21:14,000 --> 00:21:17,360
+guaranteed to get you the highest

596
00:21:15,679 --> 00:21:18,480
+probability thing right you still have

597
00:21:17,360 --> 00:21:20,039
+this risk that you could be sort of

598
00:21:18,480 --> 00:21:22,279
+pruning out something that's high

599
00:21:20,039 --> 00:21:24,159
+probability but in general this sort of

600
00:21:22,279 --> 00:21:26,600
+works um much better than greedy

601
00:21:24,159 --> 00:21:28,520
+decoding and this is if you have a

602
00:21:26,600 --> 00:21:31,120
+language model and you're sort of not sure

603
00:21:28,520 --> 00:21:32,440
+what um decoding method it's using odds

604
00:21:31,120 --> 00:21:34,200
+are pretty good it's either beam search

605
00:21:32,440 --> 00:21:37,120
+or temperature sampling right this is

606
00:21:34,200 --> 00:21:40,039
+very effective this is used um pretty

607
00:21:37,120 --> 00:21:41,760
+broadly there are however some issues
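Before those issues, a sketch of the beam search loop just described (expand each beam, score the multi-token prefixes by cumulative log-probability, prune back to the beam width), again on the hypothetical model interface:

```python
import math

def beam_search(model, prompt, beam_width=3, max_len=50):
    # each beam is (token list, cumulative log-probability)
    beams = [([], 0.0)]
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens and tokens[-1] == "<eos>":
                candidates.append((tokens, score))  # finished beam
                continue
            dist = model.next_token_distribution(prompt, tokens)
            # expand each beam with its beam_width most likely tokens
            for tok in sorted(dist, key=dist.get, reverse=True)[:beam_width]:
                candidates.append((tokens + [tok],
                                   score + math.log(dist[tok])))
        # prune: keep only the beam_width best prefixes overall
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        if all(t and t[-1] == "<eos>" for t, _ in beams):
            break
    return beams[0][0]  # return the highest-scoring hypothesis
```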
608
00:21:40,039 --> 00:21:43,760
+with beam search and one of the biggest

609
00:21:41,760 --> 00:21:46,159
+ones is that when you're doing this

610
00:21:43,760 --> 00:21:47,679
+maximum likelihood sampling

611
00:21:46,159 --> 00:21:50,080
+or using sampling to search for something

612
00:21:47,679 --> 00:21:51,760
+that's very high likelihood um you

613
00:21:50,080 --> 00:21:53,679
+really sacrifice a lot of diversity in

614
00:21:51,760 --> 00:21:55,320
+your outputs and in particular you could

615
00:21:53,679 --> 00:21:57,279
+wind up at the end of beam search with

616
00:21:55,320 --> 00:21:58,919
+three different outputs to choose from

617
00:21:57,279 --> 00:22:00,120
+that are all pretty much the same

618
00:21:58,919 --> 00:22:02,640
+like they're slightly different token

619
00:22:00,120 --> 00:22:04,559
+sequences but they look very similar and

620
00:22:02,640 --> 00:22:07,480
+so maybe you want to get sort of a

621
00:22:04,559 --> 00:22:08,919
+more diverse set um there's a couple of

622
00:22:07,480 --> 00:22:10,640
+different methods in this category I'm

623
00:22:08,919 --> 00:22:12,679
+going to very briefly shout out two of

624
00:22:10,640 --> 00:22:14,200
+them um but the idea here is to sort of

625
00:22:12,679 --> 00:22:16,440
+reintroduce some of the benefits of

626
00:22:14,200 --> 00:22:19,120
+sampling while still doing this kind of

627
00:22:16,440 --> 00:22:20,919
+search for high probability things um

628
00:22:19,120 --> 00:22:22,600
+diverse beam search is one of these

629
00:22:20,919 --> 00:22:25,520
+methods and here the idea is that we

630
00:22:22,600 --> 00:22:27,279
+want to modify that scoring step when we

631
00:22:25,520 --> 00:22:28,600
+choose which three out of our nine beams

632
00:22:27,279 --> 00:22:30,200
+we want to continue

633
00:22:28,600 --> 00:22:32,000
+to avoid choosing things that are really

634
00:22:30,200 --> 00:22:34,320
+really close to each other right so

635
00:22:32,000 --> 00:22:36,039
+maybe our highest probability thing is

636
00:22:34,320 --> 00:22:37,559
+some sequence A and then if we look at

637
00:22:36,039 --> 00:22:39,520
+the other sequences there's one that's

638
00:22:37,559 --> 00:22:41,279
+pretty high probability but very similar

639
00:22:39,520 --> 00:22:43,600
+to that sequence and there's one that's

640
00:22:41,279 --> 00:22:45,320
+like slightly lower probability but very

641
00:22:43,600 --> 00:22:47,200
+different and so maybe we would choose a

642
00:22:45,320 --> 00:22:49,679
+sequence that is a little lower

643
00:22:47,200 --> 00:22:51,760
+probability to maximize diversity in our

644
00:22:49,679 --> 00:22:53,799
+set to try to get like sort of a wider

645
00:22:51,760 --> 00:22:56,200
+range of options to choose from later in

646
00:22:53,799 --> 00:22:58,200
+generation so this modifies the scoring

647
00:22:56,200 --> 00:23:00,120
+to not just take into account likelihood

648
00:22:58,200 --> 00:23:03,200
+but also similarity to other

649
00:23:00,120 --> 00:23:05,400
+beams another option down this path is

650
00:23:03,200 --> 00:23:07,640
+stochastic beam search where we're going

651
00:23:05,400 --> 00:23:09,279
+to keep the scoring the same but rather

652
00:23:07,640 --> 00:23:11,679
+than choosing just the top three most

653
00:23:09,279 --> 00:23:13,279
+likely tokens to expand out each beam

654
00:23:11,679 --> 00:23:15,200
+we're actually going to sample from some

655
00:23:13,279 --> 00:23:17,000
+distribution and you could sample from

656
00:23:15,200 --> 00:23:18,760
+the model distribution directly using

657
00:23:17,000 --> 00:23:20,200
+ancestral sampling or you could use any

658
00:23:18,760 --> 00:23:22,679
+of our sampling methods we
talked about

659
00:23:20,200 --> 00:23:24,200
+in the last section to do this and the

660
00:23:22,679 --> 00:23:25,799
+idea here is sort of similar to

661
00:23:24,200 --> 00:23:29,279
+diverse beam search we want to get sort

662
00:23:25,799 --> 00:23:31,240
+of a wider exploration of our model's

663
00:23:29,279 --> 00:23:33,520
+like output space you know we want to

664
00:23:31,240 --> 00:23:35,360
+sort of explore more things instead of

665
00:23:33,520 --> 00:23:36,760
+just winding up with a bunch of

666
00:23:35,360 --> 00:23:39,679
+outputs that look very similar at the

667
00:23:36,760 --> 00:23:41,120
+end of beam search um if folks are

668
00:23:39,679 --> 00:23:43,679
+interested in these I think these are

669
00:23:41,120 --> 00:23:46,159
+both linked on the website um the

670
00:23:43,679 --> 00:23:48,679
+papers that both of these ideas came

671
00:23:46,159 --> 00:23:51,480
+from
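A sketch of the stochastic beam search expansion step just described: sample distinct continuations without replacement instead of taking each beam's top k. The actual paper uses a Gumbel-top-k formulation; this is a simplified stand-in with hypothetical names:

```python
import math
import random

def stochastic_expand(beam_tokens, beam_logprob, dist, k=3):
    """Sample k distinct next tokens for one beam, weighted by `dist`."""
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    candidates = []
    for _ in range(min(k, len(tokens))):
        tok = random.choices(tokens, weights=weights, k=1)[0]
        i = tokens.index(tok)
        tokens.pop(i)
        weights.pop(i)  # without replacement: never draw the same token twice
        candidates.append((beam_tokens + [tok],
                           beam_logprob + math.log(dist[tok])))
    return candidates  # scored and pruned like ordinary beam search
```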
672
00:23:48,679 --> 00:23:54,400
+yes um for stochastic

673
00:23:51,480 --> 00:23:57,039
+beam search the sampling probability takes into

674
00:23:54,400 --> 00:23:59,039
+account the current path that we already

675
00:23:57,039 --> 00:24:02,000
+traveled okay

676
00:23:59,039 --> 00:24:04,320
+yeah exactly so it's this um like

677
00:24:02,000 --> 00:24:05,640
+selection step here but we're instead of

678
00:24:04,320 --> 00:24:07,760
+just doing greedy selection we're going

679
00:24:05,640 --> 00:24:11,760
+to do

680
00:24:07,760 --> 00:24:17,520
+sampling yes my question was on the T

681
00:24:11,760 --> 00:24:23,200
+yeah like you for something super simple

682
00:24:17,520 --> 00:24:26,520
+like if both of them have a high are you

683
00:24:23,200 --> 00:24:28,120
+like yeah so you would if it has a

684
00:24:26,520 --> 00:24:30,080
+really high probability under both

685
00:24:28,120 --> 00:24:32,880
+models it would have a lower probability

686
00:24:30,080 --> 00:24:35,080
+after doing this sort of contrastive

687
00:24:32,880 --> 00:24:36,600
+decoding right so if the if the smaller

688
00:24:35,080 --> 00:24:38,799
+model's really good at your task this

689
00:24:36,600 --> 00:24:40,960
+might not work very

690
00:24:38,799 --> 00:24:43,360
+well yeah I think in the paper they're

691
00:24:40,960 --> 00:24:45,320
+generally evaluating on these sort of

692
00:24:43,360 --> 00:24:48,279
+like open-ended generation tasks I bet

693
00:24:45,320 --> 00:24:51,279
+this works a lot worse for

694
00:24:48,279 --> 00:24:51,279
+now

695
00:24:56,760 --> 00:24:59,760
+yes

696
00:25:02,440 --> 00:25:08,120
+you yeah this is a great question um and

697
00:25:05,960 --> 00:25:11,559
+so the question is how do we measure

698
00:25:08,120 --> 00:25:14,120
+similar beams um you can sort of define

699
00:25:11,559 --> 00:25:15,559
+any kind of similarity function you like

700
00:25:14,120 --> 00:25:17,520
+here um anything that you'd use to

701
00:25:15,559 --> 00:25:20,440
+evaluate like how similar something is

702
00:25:17,520 --> 00:25:22,360
+to a gold reference right um I think in

703
00:25:20,440 --> 00:25:25,039
+the original diverse beam search they do

704
00:25:22,360 --> 00:25:27,760
+this by looking at like exact token

705
00:25:25,039 --> 00:25:30,640
+match across the two right like if these

706
00:25:27,760 --> 00:25:33,880
+beams are the same in all but one of the

707
00:25:30,640 --> 00:25:35,600
+tokens or they have like you know 50% of

708
00:25:33,880 --> 00:25:37,120
+the tokens are shared across the beams

709
00:25:35,600 --> 00:25:38,559
+and maybe these are really similar and

710
00:25:37,120 --> 00:25:40,559
+they should try to choose two things

711
00:25:38,559 --> 00:25:42,600
+that are different um but you could swap

712
00:25:40,559 --> 00:25:46,200
+that out for any

713
00:25:42,600 --> 00:25:49,440
+metric yes so

714
00:25:46,200 --> 00:25:50,960
+the there's kind of like a that's happening

715
00:25:49,440 --> 00:25:53,360
+at

716
00:25:50,960 --> 00:25:55,000
+every for the stochastic beam search

717
00:25:53,360 --> 00:25:57,720
+there's like a shering what do you mean

718
00:25:55,000 --> 00:26:00,520
+by a shepher so it says modify the next

719
00:25:57,720 --> 00:26:03,000
+search selection because they're like um

720
00:26:00,520 --> 00:26:06,919
+it is searching at a different space and

721
00:26:03,000 --> 00:26:09,679
+it's not searching within the same 3D

722
00:26:06,919 --> 00:26:14,080
+space is it searching in a different space

723
00:26:09,679 --> 00:26:15,799
+yeah so it's um in the same probability

724
00:26:14,080 --> 00:26:18,399
+distribution but it'll see a different

725
00:26:15,799 --> 00:26:20,840
+part of the distribution so when you're

726
00:26:18,399 --> 00:26:22,640
+doing the greedy search you'll only ever

727
00:26:20,840 --> 00:26:24,559
+look at the top three tokens in the next

728
00:26:22,640 --> 00:26:27,120
+token distribution because you're just

729
00:26:24,559 --> 00:26:29,840
+selecting like the maximums um but in

730
00:26:27,120 --> 00:26:31,360
+sampling you could you could get the

731
00:26:29,840 --> 00:26:32,880
+same tokens right if they're really high

732
00:26:31,360 --> 00:26:35,720
+likelihood but you could also sample

733
00:26:32,880 --> 00:26:38,399
+something that's further down in the

734
00:26:35,720 --> 00:26:42,760
+distribution yeah as a followup to that

735
00:26:38,399 --> 00:26:44,880
+like in uh our sampling we take into

736
00:26:42,760 --> 00:26:46,960
+account the probability of the prefix

737
00:26:44,880 --> 00:26:50,679
+like the current hypothesis right

738
00:26:46,960 --> 00:26:51,760
+because otherwise it is the same as just

739
00:26:50,679 --> 00:26:54,279
+uh

740
00:26:51,760 --> 00:26:57,159
+in yeah so in the sampling we're taking

741
00:26:54,279 --> 00:27:00,120
+into account the previous the prefix

742
00:26:57,159 --> 00:27:02,600
+yeah so so it we will take into account

743
00:27:00,120 --> 00:27:06,200
+the prefix but this sampling mechanism

744
00:27:02,600 --> 00:27:08,320
+here could be ancestral sampling um the

745
00:27:06,200 --> 00:27:10,480
+only the difference here is that we're

746
00:27:08,320 --> 00:27:12,600
+also doing a sort of search step on top

747
00:27:10,480 --> 00:27:14,679
+of that to choose the maximum likelihood

748
00:27:12,600 --> 00:27:18,080
+things across multiple

749
00:27:14,679 --> 00:27:20,559
+beams another important thing um is you

750
00:27:18,080 --> 00:27:22,279
+sample without replacement and so

751
00:27:20,559 --> 00:27:24,120
+normally you sample with replacement and

752
00:27:22,279 --> 00:27:25,840
+you might get exactly the same thing but

753
00:27:24,120 --> 00:27:28,000
+when you're doing stochastic beam search you

754
00:27:25,840 --> 00:27:30,240
+sample
without replacement so you get

755
00:27:28,000 --> 00:27:33,279
+like three ones according to the

756
00:27:30,240 --> 00:27:36,080
+probability but they're guaranteed to be

757
00:27:33,279 --> 00:27:37,799
+different right so beam search like one

758
00:27:36,080 --> 00:27:39,559
+of the characteristics of beam search is

759
00:27:37,799 --> 00:27:41,640
+you always get three different things

760
00:27:39,559 --> 00:27:44,240
+because you're picking the three top

761
00:27:41,640 --> 00:27:45,760
+when you do sampling uh like stochastic

762
00:27:44,240 --> 00:27:47,399
+beam search you get three different

763
00:27:45,760 --> 00:27:49,440
+things they're not guaranteed to be the

764
00:27:47,399 --> 00:27:51,760
+top they could be distributed according

765
00:27:49,440 --> 00:27:54,360
+to the prob distribution but they're

766
00:27:51,760 --> 00:27:55,840
+guaranteed to be different so um you can take a look at

767
00:27:54,360 --> 00:27:58,039
+the paper for more details of exactly

768
00:27:55,840 --> 00:28:00,159
+how it looks but that that's

769
00:27:58,039 --> 00:28:03,039
+so then is the main difference that

770
00:28:00,159 --> 00:28:05,120
+compared to temperature sampling that we have n

771
00:28:03,039 --> 00:28:08,519
+options that we're keeping track of instead of

772
00:28:05,120 --> 00:28:10,320
+going with only one and

773
00:28:08,519 --> 00:28:11,200
+you can't yeah you can't sample the same

774
00:28:10,320 --> 00:28:14,960
+thing

775
00:28:11,200 --> 00:28:16,919
+right yeah so just uh to repeat for the recording

776
00:28:14,960 --> 00:28:19,159
+is that n options we're keeping track of

777
00:28:16,919 --> 00:28:22,240
+and they're all going to be unique token

778
00:28:19,159 --> 00:28:24,240
+sequences at least um you can actually

779
00:28:22,240 --> 00:28:26,200
+get the same output sequence from two

780
00:28:24,240 --> 00:28:28,120
+different token sequences if you tokenize

781
00:28:26,200 --> 00:28:32,360
+slightly differently um but these will

782
00:28:28,120 --> 00:28:37,840
+always be unique tokens

783
00:28:32,360 --> 00:28:39,279
+so that was sort of like a

784
00:28:37,840 --> 00:28:41,320
+set of methods that we've developed to

785
00:28:39,279 --> 00:28:43,600
+try to find the most probable sequence

786
00:28:41,320 --> 00:28:44,480
+out of the model um but in the next

787
00:28:43,600 --> 00:28:46,039
+section here we're going to sort of

788
00:28:44,480 --> 00:28:50,240
+think about whether that's actually what

789
00:28:46,039 --> 00:28:51,679
+we want to do at all um so what is like

790
00:28:50,240 --> 00:28:54,240
+is do we really want the highest

791
00:28:51,679 --> 00:28:56,880
+probability thing um we know that

792
00:28:54,240 --> 00:28:58,600
+outputs with really low probability tend

793
00:28:56,880 --> 00:29:00,640
+to be really like worse than outputs

794
00:28:58,600 --> 00:29:03,240
+with high probability right maybe I'm

795
00:29:00,640 --> 00:29:05,840
+trying to predict like what the next

796
00:29:03,240 --> 00:29:08,640
+sentence should be after the cat saw the

797
00:29:05,840 --> 00:29:11,240
+dog right the cat sat down is way higher

798
00:29:08,640 --> 00:29:12,559
+probability than the cat grew wings and

799
00:29:11,240 --> 00:29:14,039
+at least with the cats I've met that

800
00:29:12,559 --> 00:29:15,679
+sounds pretty that sounds pretty much

801
+00:29:14,039 --> 00:29:19,559
+right right like this is a much better

802
00:29:15,679 --> 00:29:21,720
+output than the cat grew wings but if you

803
00:29:19,559 --> 00:29:24,159
+look at just the outputs with relatively

804
00:29:21,720 --> 00:29:25,960
+high probability it's sort of less clear

805
00:29:24,159 --> 00:29:27,880
+that this defines an exact ranking

806
00:29:25,960 --> 00:29:30,559
+between those outputs right

807
00:29:27,880 --> 00:29:32,600
+is the cat sat down necessarily better

808
00:29:30,559 --> 00:29:34,519
+than the cat ran away these both seem

809
00:29:32,600 --> 00:29:35,720
+like pretty reasonable outputs to me

810
00:29:34,519 --> 00:29:40,200
+even though one of them is slightly

811
00:29:35,720 --> 00:29:42,799
+higher probability and so do we

812
00:29:40,200 --> 00:29:45,240
+really like necessarily need to recover

813
00:29:42,799 --> 00:29:47,200
+the cat sat down um and this gets a

814
00:29:45,240 --> 00:29:49,399
+little a little more complicated still

815
00:29:47,200 --> 00:29:51,120
+if we look at sort of a range of outputs

816
00:29:49,399 --> 00:29:53,120
+so say there's sort of six outputs that

817
00:29:51,120 --> 00:29:55,240
+our model could give us um and here

818
00:29:53,120 --> 00:29:57,559
+we're looking at sort of full sequences

819
00:29:55,240 --> 00:30:00,120
+not individual tokens just for clarity

820
00:29:57,559 --> 00:30:02,640
+so maybe our outputs in order of

821
00:30:00,120 --> 00:30:05,840
+probability are the cat sat down it ran

822
00:30:02,640 --> 00:30:08,240
+away it sprinted off it got out of there

823
00:30:05,840 --> 00:30:09,720
+it's very small and it grew wings right

824
00:30:08,240 --> 00:30:11,440
+so we're definitely sure that the cat

825
00:30:09,720 --> 00:30:13,159
+sat down is a better output than the cat

826
00:30:11,440 --> 00:30:15,360
+grew wings and if we're doing a mode

827
00:30:13,159 --> 00:30:17,600
+seeking search we would find that as our

828
00:30:15,360 --> 00:30:19,440
+most likely thing if we're if we you

829
00:30:17,600 --> 00:30:21,440
+know do a good job searching and we'd

830
00:30:19,440 --> 00:30:23,519
+return that as our output but if you

831
00:30:21,440 --> 00:30:25,919
+look at the rest of this distribution

832
00:30:23,519 --> 00:30:27,880
+you see that there's actually a whole

833
00:30:25,919 --> 00:30:29,240
+set of outputs after that all say

834
00:30:27,880 --> 00:30:31,720
+something that kind of means the cat

835
00:30:29,240 --> 00:30:33,480
+left the area right it's just that this

836
00:30:31,720 --> 00:30:35,200
+probability is split over these three

837
00:30:33,480 --> 00:30:37,080
+different generations and if you

838
00:30:35,200 --> 00:30:39,120
+actually add up the probability mass of

839
00:30:37,080 --> 00:30:40,880
+all three of these sequences this is

840
00:30:39,120 --> 00:30:42,919
+double the probability mass of the cat

841
00:30:40,880 --> 00:30:44,360
+sat down but because none of these

842
00:30:42,919 --> 00:30:45,960
+individual sequences is higher

843
00:30:44,360 --> 00:30:47,399
+probability if you're doing mode-seeking

844
00:30:45,960 --> 00:30:50,640
+search you wouldn't you wouldn't be able

845
00:30:47,399 --> 00:30:52,480
+to see this effect right so do we really

846
00:30:50,640 --> 00:30:53,760
+want to return the cat sat down or do we

847
00:30:52,480
849
+00:30:55,200 --> 00:30:59,200
+the question then is like if it's

850
+00:30:57,559 --> 00:31:03,120
+not probability that makes an output

851
+00:30:59,200 --> 00:31:04,679
+good what is it so we have this one

852
+00:31:03,120 --> 00:31:06,039
+output that's really high probability

853
+00:31:04,679 --> 00:31:09,000
+but it's very different from everything

854
+00:31:06,039 --> 00:31:10,720
+else in our set and then we have a

855
+00:31:09,000 --> 00:31:13,200
+couple of outputs that are all pretty

856
+00:31:10,720 --> 00:31:15,080
+high probability and similar to a bunch

857
+00:31:13,200 --> 00:31:17,840
+of other relatively high probability

858
+00:31:15,080 --> 00:31:19,720
+things so maybe it's sort of less risky

859
+00:31:17,840 --> 00:31:21,399
+to return one of these right a thing

860
+00:31:19,720 --> 00:31:23,200
+that's higher probability but different

861
+00:31:21,399 --> 00:31:24,600
+than everything else could be different

862
+00:31:23,200 --> 00:31:26,840
+because it's way better or it could be

863
+00:31:24,600 --> 00:31:29,000
+different because it's way worse um

864
+00:31:26,840 --> 00:31:31,120
+another way to think about this is you

865
+00:31:29,000 --> 00:31:32,600
+know maybe if you and your friends were

866
+00:31:31,120 --> 00:31:34,200
+cheating on a test which you shouldn't

867
+00:31:32,600 --> 00:31:35,480
+do but if you were going to do it and

868
+00:31:34,200 --> 00:31:37,519
+all of your friends sent you their

869
+00:31:35,480 --> 00:31:39,240
+answers um maybe one of your friends has

870
+00:31:37,519 --> 00:31:40,960
+a slightly higher score in the class

871
+00:31:39,240 --> 00:31:42,519
+than everyone else but they said the

872
+00:31:40,960 --> 00:31:44,480
+answer was answer a and everyone else

873
+00:31:42,519 --> 00:31:45,799
+said the answer was B right you still

874
+00:31:44,480 --> 00:31:48,480
+might go with the answer that everyone

875
+00:31:45,799 --> 00:31:50,679
+else said because it

876
+00:31:48,480 --> 00:31:52,679
+sort of feels less risky like maybe

877
+00:31:50,679 --> 00:31:54,440
+everyone else got that

878
+00:31:52,679 --> 00:31:55,880
+answer and so your one friend could be

879
+00:31:54,440 --> 00:31:56,919
+right when everyone else is wrong or

880
+00:31:55,880 --> 00:31:59,679
+they could have made a mistake that no

881
+00:31:56,919 --> 00:32:01,240
+one else is making so this is sort of

882
+00:31:59,679 --> 00:32:03,519
+the same concept right we want an output

883
+00:32:01,240 --> 00:32:06,320
+that's relatively high probability but

884
+00:32:03,519 --> 00:32:09,399
+also relatively low

885
+00:32:06,320 --> 00:32:11,320
+risk and so here maybe if we were using

886
+00:32:09,399 --> 00:32:13,679
+this criterion we'd return the cat ran

887
+00:32:11,320 --> 00:32:14,720
+away as our sort of

888
+00:32:13,679 --> 00:32:16,720
+single

889
+00:32:14,720 --> 00:32:19,440
+output so how do you find something

890
+00:32:16,720 --> 00:32:21,000
+that's high probability and low risk

891
+00:32:19,440 --> 00:32:22,480
+there's sort of two questions here right

892
+00:32:21,000 --> 00:32:24,399
+we have to figure out how to estimate

893
+00:32:22,480 --> 00:32:26,120
+probability and if we're looking at a

894
+00:32:24,399 --> 00:32:28,519
+set of outputs like the six we saw

895
+00:32:26,120 --> 00:32:29,880
+before maybe we can just do this by

896
+00:32:28,519 --> 00:32:31,720
+counting right we could enumerate

897
+00:32:29,880 --> 00:32:34,000
+everything from the model and get exact

898
+00:32:31,720 --> 00:32:35,200
+probability or we could take a sample

899
+00:32:34,000 --> 00:32:38,080
+from the model and just look at

900
+00:32:35,200 --> 00:32:40,200
+probabilities in that set and

901
+00:32:38,080 --> 00:32:41,840
+from that sample um sort of one

902
+00:32:40,200 --> 00:32:43,559
+reasonable thing to do is just count

903
+00:32:41,840 --> 00:32:45,320
+frequency right if something's in our

904
+00:32:43,559 --> 00:32:47,919
+sample twice as often we just say it's

905
+00:32:45,320 --> 00:32:49,799
+twice as frequent or it's twice as

906
+00:32:47,919 --> 00:32:52,880
+probable um this is something called

907
+00:32:49,799 --> 00:32:54,440
+Monte Carlo sampling if you do this um

908
+00:32:52,880 --> 00:32:56,039
+enough times like if you sample an

909
+00:32:54,440 --> 00:32:58,279
+infinite set this would give you

910
+00:32:56,039 --> 00:33:00,880
+exactly the model distribution um

911
+00:32:58,279 --> 00:33:02,840
+but for the sort of reasonable size sets

912
+00:33:00,880 --> 00:33:04,200
+we're working with maybe like 100

913
+00:33:02,840 --> 00:33:06,320
+samples this gives us a sort of

914
+00:33:04,200 --> 00:33:09,440
+reasonable approximation for

915
+00:33:06,320 --> 00:33:10,840
+what we need to do here at least so

916
+00:33:09,440 --> 00:33:12,000
+we're just going to take a sample to get

917
+00:33:10,840 --> 00:33:13,440
+probability and we're just going to

918
+00:33:12,000 --> 00:33:15,519
+count things in that sample to see how

919
+00:33:13,440 --> 00:33:17,320
+likely things are that doesn't seem too

920
+00:33:15,519 --> 00:33:20,080
+bad how do we estimate

921
+00:33:17,320 --> 00:33:21,679
+risk the idea here is that we have a

922
+00:33:20,080 --> 00:33:24,080
+bunch of other things in this set of

923
+00:33:21,679 --> 00:33:26,080
+outputs and we can treat those as sort

924
+00:33:24,080 --> 00:33:27,880
+of like pseudo-references right we can

925
+00:33:26,080 --> 00:33:29,840
+evaluate agreement between the thing

926
+00:33:27,880 --> 00:33:31,519
+we're looking at and each of those other

927
+00:33:29,840 --> 00:33:33,480
+references and this is sort of the same

928
+00:33:31,519 --> 00:33:35,519
+idea of calculating similarity in

929
+00:33:33,480 --> 00:33:37,159
+diverse beam search we're going to use

930
+00:33:35,519 --> 00:33:39,639
+some kind of metric to compare how

931
+00:33:37,159 --> 00:33:41,279
+similar these things are um this metric

932
+00:33:39,639 --> 00:33:43,080
+could be anything you use downstream it

933
+00:33:41,279 --> 00:33:44,840
+could be like an n-gram overlap metric

934
+00:33:43,080 --> 00:33:48,600
+like ROUGE or BLEU or it could also be

935
+00:33:44,840 --> 00:33:51,120
+something um neural or semantic like um

936
+00:33:48,600 --> 00:33:54,799
+something like BERTScore or

937
+00:33:51,120 --> 00:33:56,600
+BARTScore and so this concept um is a type

938
+00:33:54,799 --> 00:33:57,919
+of decoding called minimum Bayes risk

939
+00:33:56,600 --> 00:33:59,600
+decoding
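The objective being described is roughly: pick the y in the sample set S that maximizes the sum over all y' in S of metric(y, y'). Here is a minimal sketch, assuming the samples are already drawn, and using unigram overlap as a toy stand-in for a real metric like ROUGE or BERTScore:

```python
from collections import Counter

def overlap(a: str, b: str) -> float:
    """Toy similarity metric: unigram overlap (stand-in for ROUGE etc.)."""
    ca, cb = Counter(a.split()), Counter(b.split())
    return sum((ca & cb).values()) / max(len(a.split()), len(b.split()), 1)

def mbr_decode(samples: list[str]) -> str:
    """Return the sample most similar, on average, to all other samples.
    Because duplicates stay in the list, a frequent (high-probability)
    output contributes many terms: Monte Carlo weighting by counting."""
    return max(samples, key=lambda y: sum(overlap(y, o) for o in samples))
```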
940
+00:33:57,919 --> 00:34:01,840
+and what this equation captures is

941
+00:33:59,600 --> 00:34:03,919
+exactly the intuition that we were um

942
+00:34:01,840 --> 00:34:06,600
+sort of talking about just a slide ago

943
+00:34:03,919 --> 00:34:08,159
+where we're going to choose something

944
+00:34:06,600 --> 00:34:09,919
+that is low risk which means it's

945
+00:34:08,159 --> 00:34:11,960
+similar to a lot of other things in this

946
+00:34:09,919 --> 00:34:12,800
+set of outputs we've sampled and we're

947
+00:34:11,960 --> 00:34:14,800
+going to choose something that's

948
+00:34:12,800 --> 00:34:17,560
+relatively high probability which means

949
+00:34:14,800 --> 00:34:19,159
+that sort of when we sum up over this if

950
+00:34:17,560 --> 00:34:21,399
+something occurs in our set a bunch of

951
+00:34:19,159 --> 00:34:23,320
+times it's going to have pretty strong

952
+00:34:21,399 --> 00:34:25,800
+weight in picking which um of these

953
+00:34:23,320 --> 00:34:27,000
+outputs are similar right if sort of

954
+00:34:25,800 --> 00:34:28,399
+there's one thing in the set that

955
+00:34:27,000 --> 00:34:29,919
+appears a bunch of times it's going to

956
+00:34:28,399 --> 00:34:32,040
+have a strong influence on which thing

957
+00:34:29,919 --> 00:34:34,119
+we pick and so that kind of captures

958
+00:34:32,040 --> 00:34:38,520
+high probability in this

959
+00:34:34,119 --> 00:34:41,119
+setting so to see how this works we can

960
+00:34:38,520 --> 00:34:44,639
+look at an example um in

961
+00:34:41,119 --> 00:34:47,399
+summarization so we choose some metric

962
+00:34:44,639 --> 00:34:49,639
+maybe we choose um ROUGE which is an

963
+00:34:47,399 --> 00:34:51,399
+n-gram overlap metric for summarization

964
+00:34:49,639 --> 00:34:52,879
+and we say we're going to sample 100

965
+00:34:51,399 --> 00:34:55,960
+things and we're going to use this

966
+00:34:52,879 --> 00:35:00,359
+equation to choose the one that has the

967
+00:34:55,960 --> 00:35:03,960
+sort of lowest risk according to MBR

968
+00:35:00,359 --> 00:35:06,480
+um so if we do that and we look at this

969
+00:35:03,960 --> 00:35:07,560
+sort of table of results here um you can

970
+00:35:06,480 --> 00:35:09,680
+see that this

971
+00:35:07,560 --> 00:35:11,320
+outperforms the other sampling methods

972
+00:35:09,680 --> 00:35:13,720
+that we've looked at before so greedy

973
+00:35:11,320 --> 00:35:15,640
+decoding here is just taking the

974
+00:35:13,720 --> 00:35:18,760
+single most likely thing at each step

975
+00:35:15,640 --> 00:35:21,800
+beam search here is beam search with five or

976
+00:35:18,760 --> 00:35:24,359
+10 beams and DBS is the diverse beam

977
+00:35:21,800 --> 00:35:27,040
+search we were talking about um if we

978
+00:35:24,359 --> 00:35:29,440
+use minimum Bayes risk and we use ROUGE

979
+00:35:27,040 --> 00:35:31,240
+as the sort of determiner of similarity

980
+00:35:29,440 --> 00:35:32,680
+we do way better across all of our

981
+00:35:31,240 --> 00:35:33,960
+metrics but we especially do really well

982
+00:35:32,680 --> 00:35:36,680
+at ROUGE because that's sort of the

983
+00:35:33,960 --> 00:35:38,119
+metric that we've been using to evaluate

984
+00:35:36,680 --> 00:35:40,240
+and then if we swap this out for other

985
+00:35:38,119 --> 00:35:43,599
+metrics you still see a performance

986
+00:35:40,240 --> 00:35:46,440
+improvement over these um search methods

987
+00:35:43,599 --> 00:35:48,119
+here um what's the
sort of catch here

988
+00:35:46,440 --> 00:35:49,920
+the catch here is that MBR requires you

989
+00:35:48,119 --> 00:35:51,599
+to sample a hundred things and so this

990
+00:35:49,920 --> 00:35:54,760
+is a lot more expensive it's a lot

991
+00:35:51,599 --> 00:35:54,760
+slower at inference

992
+00:35:54,800 --> 00:35:58,800
+time um yes

993
+00:36:04,200 --> 00:36:10,040
+yes a great question why does the beam

994
+00:36:07,000 --> 00:36:14,000
+search with more beams perform worse um

995
+00:36:10,040 --> 00:36:16,720
+this is a relatively well-known

996
+00:36:14,000 --> 00:36:19,359
+phenomenon called the curse of beam search

997
+00:36:16,720 --> 00:36:21,640
+which is, oh we actually lost your mic so you

998
+00:36:19,359 --> 00:36:24,599
+can take the mic and we can speak okay yeah so this

999
+00:36:21,640 --> 00:36:26,079
+is called the curse of beam search um and

1000
+00:36:24,599 --> 00:36:27,760
+the idea here is that beam search is

1001
+00:36:26,079 --> 00:36:29,359
+like an approximate search right so if you

1002
+00:36:27,760 --> 00:36:31,200
+add more beams you should be doing

1003
+00:36:29,359 --> 00:36:33,319
+better and better at finding the maximum

1004
+00:36:31,200 --> 00:36:34,800
+likelihood thing and generally you are

1005
+00:36:33,319 --> 00:36:37,160
+you get something that is higher

1006
+00:36:34,800 --> 00:36:39,160
+probability but as you add more beams

1007
+00:36:37,160 --> 00:36:42,319
+you also often get something that does

1008
+00:36:39,160 --> 00:36:42,319
+worse on your downstream

1009
+00:36:44,160 --> 00:36:47,560
+metrics back up

1010
+00:36:54,240 --> 00:36:58,680
+there is that back online

1011
+00:36:59,119 --> 00:37:06,520
+yeah is that back is that any louder no

1012
+00:37:03,520 --> 00:37:06,520
+it

1013
+00:37:07,000 --> 00:37:12,640
+question oh there we go is that better

1014
+00:37:09,599 --> 00:37:13,760
+great um yeah so why does this

1015
+00:37:12,640 --> 00:37:16,040
+happen right why do you get something

1016
+00:37:13,760 --> 00:37:18,560
+that's higher likelihood but um lower

1017
+00:37:16,040 --> 00:37:22,040
+performance downstream um and this is

1018
+00:37:18,560 --> 00:37:24,000
+like another sort of degeneracy of beam

1019
+00:37:22,040 --> 00:37:25,680
+search this idea that the thing

1020
+00:37:24,000 --> 00:37:27,440
+that is the absolute highest likelihood

1021
+00:37:25,680 --> 00:37:28,599
+in your distribution might not actually

1022
+00:37:27,440 --> 00:37:31,079
+be what you want

1023
+00:37:28,599 --> 00:37:33,960
+downstream um this is sort of one of the

1024
+00:37:31,079 --> 00:37:35,200
+other things that people use to motivate

1025
+00:37:33,960 --> 00:37:37,599
+why you might want to do something like

1026
+00:37:35,200 --> 00:37:39,400
+MBR instead um and there's a great paper

1027
+00:37:37,599 --> 00:37:41,640
+about this problem called The Inadequacy

1028
+00:37:39,400 --> 00:37:43,680
+of the Mode because beam search is

1029
+00:37:41,640 --> 00:37:45,520
+looking for the mode of the

1030
+00:37:43,680 --> 00:37:47,880
+distribution well one other thing I'd

1031
+00:37:45,520 --> 00:37:49,680
+like to mention is it also goes together

1032
+00:37:47,880 --> 00:37:51,119
+with how you train your models because

1033
+00:37:49,680 --> 00:37:53,760
+most of our models are trained using

1034
+00:37:51,119 --> 00:37:57,079
+maximum likelihood maximum likelihood

1035
+00:37:53,760 --> 00:37:59,040
+isn't explicitly maximizing our ability

1036
+00:37:57,079 --> 00:38:01,079
+to get the best answer it's explicitly

1037
+00:37:59,040 --> 00:38:05,720
+maximizing our ability to estimate the

1038
+00:38:01,079 --> 00:38:10,160
+distribution of answers so if I

1039
+00:38:05,720 --> 00:38:13,040
+say um if you said like what is

1040
+00:38:10,160 --> 00:38:15,839
+your favorite hobby or something like

1041
+00:38:13,040 --> 00:38:17,680
+that uh what is your favorite hobby in a

1042
+00:38:15,839 --> 00:38:19,280
+dialogue system often it'll answer I

1043
+00:38:17,680 --> 00:38:22,400
+don't know or something like that

1044
+00:38:19,280 --> 00:38:24,920
+because you know that's

1045
+00:38:22,400 --> 00:38:26,599
+more likely than answering any specific

1046
+00:38:24,920 --> 00:38:29,240
+hobby like it's more likely than

1047
+00:38:26,599 --> 00:38:32,119
+answering basketball bowling you know

1048
+00:38:29,240 --> 00:38:35,040
+whatever else because you have many many

1049
+00:38:32,119 --> 00:38:36,560
+different options and so like especially

1050
+00:38:35,040 --> 00:38:39,880
+if it's something that's a little bit

1051
+00:38:36,560 --> 00:38:42,160
+more complicated it will avoid

1052
+00:38:39,880 --> 00:38:44,680
+answering that and in particular it ends

1053
+00:38:42,160 --> 00:38:47,240
+up answering very short things for

1054
+00:38:44,680 --> 00:38:49,280
+example um or sometimes it ends up

1055
+00:38:47,240 --> 00:38:51,160
+repeating itself over and over again

1056
+00:38:49,280 --> 00:38:53,240
+or things like that so it also goes

1057
+00:38:51,160 --> 00:38:57,760
+together with like the training of the

1058
+00:38:53,240 --> 00:38:59,359
+model yeah and this is

1059
+00:38:57,760 --> 00:39:01,079
+still a problem in modern

1060
+00:38:59,359 --> 00:39:02,560
+systems so if you actually look at the

1061
+00:39:01,079 --> 00:39:03,839
+single like if you could enumerate

1062
+00:39:02,560 --> 00:39:05,680
+everything and see the single most

1063
+00:39:03,839 --> 00:39:07,520
+likely sequence it's often the empty

1064
+00:39:05,680 --> 00:39:10,920
+sequence just not outputting anything at

1065
+00:39:07,520 --> 00:39:12,640
+all um and so if that's your true mode

1066
+00:39:10,920 --> 00:39:16,119
+of the distribution then doing better at

1067
+00:39:12,640 --> 00:39:16,119
+mode seeking is not always like

1068
+00:39:16,599 --> 00:39:19,599
+helpful

1069
+00:39:25,440 --> 00:39:32,960
+yes could this be influenced by the

1070
+00:39:28,200 --> 00:39:32,960
+confidence problem like um how

1071
+00:39:37,560 --> 00:39:41,079
+[inaudible]

1072
+00:39:49,760 --> 00:39:53,599
+[inaudible]

1073
+00:39:51,010 --> 00:39:57,280
+[inaudible]

1074
+00:39:53,599 --> 00:39:59,760
+right I think I see

1075
+00:39:57,280 --> 00:40:02,000
+what you're saying which is that like

1076
+00:39:59,760 --> 00:40:04,200
+the confidence gives you the

1077
+00:40:02,000 --> 00:40:06,680
+confidence of like a single exact

1078
+00:40:04,200 --> 00:40:11,000
+sequence right not the like actual sort

1079
+00:40:06,680 --> 00:40:13,200
+of semantic space and so yeah if you

1080
+00:40:11,000 --> 00:40:14,920
+looked at

1081
+00:40:13,200 --> 00:40:17,000
+just the probability scores you get the

1082
+00:40:14,920 --> 00:40:18,520
+probability of an exact string when what

1083
+00:40:17,000 --> 00:40:20,119
+you really actually care about with

1084
+00:40:18,520 --> 00:40:22,319
+confidence is the probability of sort of

1085
+00:40:20,119 --> 00:40:23,800
+like things that mean the same thing

1086
+00:40:22,319 --> 00:40:25,359
+yeah this is um part of why like

1087
+00:40:23,800 --> 00:40:28,359
+calibration is really hard for long

1088
+00:40:25,359 --> 00:40:28,359
+sequences

1089
+00:40:30,720 --> 00:40:37,319
+great so we're going to touch sort of

1090
+00:40:34,359 --> 00:40:39,520
+briefly on a couple of other things that

1091
+00:40:37,319 --> 00:40:40,920
+aren't sort of always explicitly

1092
+00:40:39,520 --> 00:40:42,480
+described in this framework but that you

1093
+00:40:40,920 --> 00:40:45,040
+can think of as variants of minimum

1094
+00:40:42,480 --> 00:40:46,960
+Bayes risk um and if you're interested

1095
+00:40:45,040 --> 00:40:49,560
+in this analysis um I think as Graham

1096
+00:40:46,960 --> 00:40:51,800
+mentioned earlier um Alex Z is a first

1097
+00:40:49,560 --> 00:40:53,680
+year MLT and I wrote a paper about this

1098
+00:40:51,800 --> 00:40:57,839
+um which you can check out if you're

1099
+00:40:53,680 --> 00:41:01,200
+interested so the um two that I really

1100
+00:40:57,839 --> 00:41:03,800
+want to touch on here are other sort of

1101
+00:41:01,200 --> 00:41:05,240
+inference-time things you can consider

1102
+00:41:03,800 --> 00:41:07,520
+which might look a little bit different

1103
+00:41:05,240 --> 00:41:09,480
+at first blush um the first of these is

1104
+00:41:07,520 --> 00:41:11,680
+output ensembling so say you have

1105
+00:41:09,480 --> 00:41:13,240
+multiple different models and you get

1106
+00:41:11,680 --> 00:41:15,480
+outputs from all of them and now you

1107
+00:41:13,240 --> 00:41:19,560
+need to choose a best output among that

1108
+00:41:15,480 --> 00:41:21,599
+set um one of the sort of common ways to

1109
+00:41:19,560 --> 00:41:24,480
+do this is to compare like an embedding

1110
+00:41:21,599 --> 00:41:25,920
+similarity across models like does model

1111
+00:41:24,480 --> 00:41:27,560
+one think these two things are really

1112
+00:41:25,920 --> 00:41:28,880
+similar does model two think these two

1113
+00:41:27,560 --> 00:41:32,599
+things are really similar and try to

1114
+00:41:28,880 --> 00:41:34,680
+choose something that um has really

1115
+00:41:32,599 --> 00:41:37,319
+high similarity with a lot of other

1116
+00:41:34,680 --> 00:41:39,200
+outputs um of course now that we've just

1117
+00:41:37,319 --> 00:41:41,440
+recently been talking about MBR you can

1118
+00:41:39,200 --> 00:41:44,920
+probably see that this

1119
+00:41:41,440 --> 00:41:46,280
+is um the same general formulation just

1120
+00:41:44,920 --> 00:41:47,880
+rather than summing over a set of

1121
+00:41:46,280 --> 00:41:49,520
+outputs from a single model now you're

1122
+00:41:47,880 --> 00:41:52,160
+looking at outputs over a whole set of

1123
+00:41:49,520 --> 00:41:54,640
+models um so some types of ensembling

1124
+00:41:52,160 --> 00:41:57,319
+fall into this category of minimum Bayes

1125
+00:41:54,640 --> 00:42:00,680
+risk methods another thing in this

1126
+00:41:57,319 --> 00:42:03,280
+category is a um sort of recent decoding

1127
+00:42:00,680 --> 00:42:06,079
+method called self-consistency and the

1128
+00:42:03,280 --> 00:42:08,200
+idea here is that you want to do

1129
+00:42:06,079 --> 00:42:09,359
+something like mathematical reasoning

1130
+00:42:08,200 --> 00:42:10,599
+and you really care about getting the

1131
+00:42:09,359 --> 00:42:12,000
+final answer right but you don't

1132
+00:42:10,599 --> 00:42:15,000
+necessarily care about getting all of

1133
+00:42:12,000 --> 00:42:18,079
+the reasoning steps in between right

1134
+00:42:15,000 --> 00:42:19,520
+so you prompt the model for an answer um

1135
+00:42:18,079 --> 00:42:20,800
+using something like chain of thought

1136
+00:42:19,520 --> 00:42:22,680
+right you ask it to sort of talk through

1137
+00:42:20,800 --> 00:42:26,440
+the steps it's going to do and then give

1138
+00:42:22,680 --> 00:42:28,599
+you a final answer um you sample many

1139
+00:42:26,440 --> 00:42:30,400
+outputs using this and then you completely

1140
+00:42:28,599 --> 00:42:32,200
+throw away the chains of thought um and

1141
+00:42:30,400 --> 00:42:35,359
+you just take the answer from each

1142
+00:42:32,200 --> 00:42:37,640
+output um you have that set of answers

1143
+00:42:35,359 --> 00:42:38,960
+maybe you have like 20 30 100 answers

1144
+00:42:37,640 --> 00:42:40,000
+you just return the one that was most

1145
+00:42:38,960 --> 00:42:43,720
+frequently

1146
+00:42:40,000 --> 00:42:46,119
+generated um what this is doing is a

1147
+00:42:43,720 --> 00:42:48,800
+type of MBR where the metric that you

1148
+00:42:46,119 --> 00:42:51,160
+actually care about is exact match of

1149
+00:42:48,800 --> 00:42:51,839
+this answer right ignoring the rest of

1150
+00:42:51,160 --> 00:42:54,079
+the

1151
+00:42:51,839 --> 00:42:55,800
+generation um and so here we have sort

1152
+00:42:54,079 --> 00:42:56,839
+of the same intuition that we want an

1153
+00:42:55,800 --> 00:42:59,160
+output

1154
+00:42:56,839 --> 00:43:01,520
+that is high probability right we're

1155
+00:42:59,160 --> 00:43:03,359
+getting it generated a lot but also low

1156
+00:43:01,520 --> 00:43:06,079
+risk not a lot of the other outputs in

1157
+00:43:03,359 --> 00:43:08,440
+our set disagree with this answer
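A sketch of that procedure, where `sample_fn` stands in for one sampled chain-of-thought generation and the "Answer:" convention is a hypothetical way of marking the final answer:

```python
from collections import Counter

def extract_answer(output: str) -> str:
    # Hypothetical convention: the model ends its output with "Answer: <x>".
    return output.rsplit("Answer:", 1)[-1].strip()

def self_consistency(prompt: str, sample_fn, n: int = 40) -> str:
    """Sample n chain-of-thought outputs, throw the reasoning away,
    and majority-vote over just the final answers (MBR with an
    exact-match metric on the answer)."""
    answers = [extract_answer(sample_fn(prompt)) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```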
1158
+00:43:06,079 --> 00:43:10,359
+so those are a couple of

1159
+00:43:08,440 --> 00:43:11,920
+different variants of methods where

1160
+00:43:10,359 --> 00:43:13,880
+we're sort of sampling a wide set of

1161
+00:43:11,920 --> 00:43:17,359
+sequences and trying to choose the best

1162
+00:43:13,880 --> 00:43:20,960
+one um MBR is one type of

1163
+00:43:17,359 --> 00:43:22,680
+sort of sequence set reranking method um

1164
+00:43:20,960 --> 00:43:24,760
+you could do other things to rerank sets

1165
+00:43:22,680 --> 00:43:27,400
+as well but this is sort of one

1166
+00:43:24,760 --> 00:43:30,359
+representative class of these uh

1167
+00:43:27,400 --> 00:43:32,280
+of these methods before we get

1168
+00:43:30,359 --> 00:43:35,200
+into constrained generation those are sort

1169
+00:43:32,280 --> 00:43:37,000
+of the three broad categories of

1170
+00:43:35,200 --> 00:43:39,480
+inference methods we'll discuss which is

1171
+00:43:37,000 --> 00:43:41,680
+sort of sampling from some distribution

1172
+00:43:39,480 --> 00:43:45,040
+searching over some space of

1173
+00:43:41,680 --> 00:43:47,400
+distributions and doing some kind of um

1174
+00:43:45,040 --> 00:43:48,559
+analysis over a set of samples to choose

1175
+00:43:47,400 --> 00:43:51,359
+which ones to

1176
+00:43:48,559 --> 00:43:52,559
+return um does anyone have any questions

1177
+00:43:51,359 --> 00:43:55,079
+at this

1178
+00:43:52,559 --> 00:44:00,680
+point

1179
+00:43:55,079 --> 00:44:00,680
+[inaudible]

1180
+00:44:05,800 --> 00:44:12,760
+yeah like why is averaging model

1181
+00:44:08,359 --> 00:44:16,400
+weights not MBR um I think it's not MBR

1182
+00:44:12,760 --> 00:44:18,559
+because um the key thing that I

1183
+00:44:16,400 --> 00:44:20,880
+think really makes a method MBR is this

1184
+00:44:18,559 --> 00:44:22,480
+concept of comparing between multiple um

1185
+00:44:20,880 --> 00:44:24,880
+sort of pseudo-

1186
+00:44:22,480 --> 00:44:26,839
+references um and there you don't have

1187
+00:44:24,880 --> 00:44:28,359
+the same thing like if you average model

1188
+00:44:26,839 --> 00:44:32,440
+weights you wind up with a single output at

1189
+00:44:28,359 --> 00:44:34,040
+the end that maybe is like using

1190
+00:44:32,440 --> 00:44:35,800
+information from these two model

1191
+00:44:34,040 --> 00:44:38,240
+distributions that you've sort of smushed

1192
+00:44:35,800 --> 00:44:41,160
+together um but it's not the same

1193
+00:44:38,240 --> 00:44:44,720
+concept of like comparing against pseudo-

1194
+00:44:41,160 --> 00:44:44,720
+references or ranking in a

1195
+00:44:48,920 --> 00:44:55,599
+set right so now this

1196
+00:44:52,720 --> 00:44:57,559
+was a wide variety of methods to try to

1197
+00:44:55,599 --> 00:44:59,040
+find an output that's just sort of good

1198
+00:44:57,559 --> 00:45:01,440
+right we want an output that is

1199
+00:44:59,040 --> 00:45:03,480
+nice out of our model um but now we'd

1200
+00:45:01,440 --> 00:45:05,880
+like to maybe impose a few additional

1201
+00:45:03,480 --> 00:45:08,280
+constraints so say I'm asking our model

1202
+00:45:05,880 --> 00:45:10,720
+for some hobbies I could use to stay

1203
+00:45:08,280 --> 00:45:11,920
+in shape and no matter what I

1204
+00:45:10,720 --> 00:45:14,160
+don't want the model to recommend

1205
+00:45:11,920 --> 00:45:16,880
+climbing like I just don't want this

1206
+00:45:14,160 --> 00:45:18,400
+as an option I've tried it I'm not a fan

1207
+00:45:16,880 --> 00:45:21,240
+um how do I get the model to stop

1208
+00:45:18,400 --> 00:45:22,760
+suggesting climbing to me and if you've

1209
+00:45:21,240 --> 00:45:24,559
+sort of played around with some of the

1210
+00:45:22,760 --> 00:45:26,200
+more recent LLMs you'd say maybe this is

1211
+00:45:24,559 --> 00:45:27,480
+easy right you just tell the model the

1212
+00:45:26,200 --> 00:45:30,160
+instruction that you don't want to talk

1213
+00:45:27,480 --> 00:45:31,640
+about climbing and having talked to Bard

1214
+00:45:30,160 --> 00:45:33,640
+recently I can tell you unfortunately

1215
+00:45:31,640 --> 00:45:34,800
+that it's not that easy so I tell the

1216
+00:45:33,640 --> 00:45:36,599
+model I don't want to talk about

1217
+00:45:34,800 --> 00:45:38,000
+climbing it does okay for a little bit

1218
+00:45:36,599 --> 00:45:40,920
+and then it's like all right but maybe

1219
+00:45:38,000 --> 00:45:42,359
+you want to try rope climbing um and so

1220
+00:45:40,920 --> 00:45:44,559
+we could continue trying to give instructions

1221
+00:45:42,359 --> 00:45:46,200
+to our model but maybe there's sort of a
1222
+00:45:44,559 --> 00:45:49,079
+way to impose this constraint on the

1223
+00:45:46,200 --> 00:45:50,680
+decoding side instead and so I'd say all

1224
+00:45:49,079 --> 00:45:52,960
+right I'm going to do something dramatic

1225
+00:45:50,680 --> 00:45:54,440
+right I know I can manipulate the

1226
+00:45:52,960 --> 00:45:56,200
+probability distribution I'm just going

1227
+00:45:54,440 --> 00:45:57,920
+to set the probability of climbing to be

1228
+00:45:56,200 --> 00:46:00,440
+zero I don't want to see this token like

1229
+00:45:57,920 --> 00:46:02,640
+I'm completely over it um and this

1230
+00:46:00,440 --> 00:46:04,839
+is sort of nice in some sense because

1231
+00:46:02,640 --> 00:46:06,720
+this is pretty easy to do um remember

1232
+00:46:04,839 --> 00:46:08,440
+we're doing a softmax over the outputs

1233
+00:46:06,720 --> 00:46:10,599
+to choose this probability distribution

1234
+00:46:08,440 --> 00:46:12,400
+and so if we add a huge negative number

1235
+00:46:10,599 --> 00:46:14,160
+to the logit for climbing before we do

1236
+00:46:12,400 --> 00:46:15,520
+this softmax its probability is going to

1237
+00:46:14,160 --> 00:46:18,640
+be basically zero and we're never going

1238
+00:46:15,520 --> 00:46:20,240
+to see it as an output um
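A minimal sketch of that trick in PyTorch; `banned_ids` (the token ids for "climbing" under some tokenizer) is assumed to be given:

```python
import torch

def ban_tokens(logits: torch.Tensor, banned_ids: list[int]) -> torch.Tensor:
    """Add a huge negative number to the logits of banned tokens before
    the softmax, so their probability becomes (effectively) zero."""
    logits = logits.clone()
    logits[..., banned_ids] = float("-inf")  # softmax of -inf is exactly 0
    return logits

# At each decoding step:
# probs = torch.softmax(ban_tokens(logits, banned_ids), dim=-1)
```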
1239
+00:46:18,640 --> 00:46:22,480
+but this doesn't seem like a perfect solution

1240
+00:46:20,240 --> 00:46:24,400
+right because you know what if the model

1241
+00:46:22,480 --> 00:46:26,160
+recommends bouldering to me do I have to

1242
+00:46:24,400 --> 00:46:28,599
+write like a sort of a list of every

1243
+00:46:26,160 --> 00:46:30,599
+possible climbing synonym in the world

1244
+00:46:28,599 --> 00:46:32,079
+um what if there's sort of an allowable

1245
+00:46:30,599 --> 00:46:33,920
+way to use this token like I want the

1246
+00:46:32,079 --> 00:46:35,319
+model to suggest hiking because climbing

1247
+00:46:33,920 --> 00:46:37,480
+up a mountain to see a good view is

1248
+00:46:35,319 --> 00:46:38,720
+relaxing but that's a use of the word

1249
+00:46:37,480 --> 00:46:41,400
+climbing and we just said that we can't

1250
+00:46:38,720 --> 00:46:43,520
+use the word climbing um or what if we

1251
+00:46:41,400 --> 00:46:45,480
+sort of generate other related terms

1252
+00:46:43,520 --> 00:46:47,520
+before we get to the restricted term

1253
+00:46:45,480 --> 00:46:49,359
+like the model starts suggesting maybe

1254
+00:46:47,520 --> 00:46:51,480
+you can work out by going to an indoor

1255
+00:46:49,359 --> 00:46:52,920
+rock blank and then what are we going to

1256
+00:46:51,480 --> 00:46:54,800
+say there we can't say rock

1257
+00:46:52,920 --> 00:46:57,079
+climbing so maybe the model suggests

1258
+00:46:54,800 --> 00:46:58,640
+rock collecting is a

1259
+00:46:57,079 --> 00:47:01,400
+hobby to stay in shape and that doesn't

1260
+00:46:58,640 --> 00:47:03,480
+sound good either um you could continue

1261
+00:47:01,400 --> 00:47:05,640
+like sort of engineering more and more

1262
+00:47:03,480 --> 00:47:06,599
+complicated rules here but maybe we

1263
+00:47:05,640 --> 00:47:08,760
+could do something that's a little

1264
+00:47:06,599 --> 00:47:10,559
+simpler so what if I just sample a bunch

1265
+00:47:08,760 --> 00:47:11,920
+of outputs from the model and then I

1266
+00:47:10,559 --> 00:47:14,359
+check if they're about climbing and I

1267
+00:47:11,920 --> 00:47:16,280
+get rid of them if they are right um

1268
+00:47:14,359 --> 00:47:18,200
+this sort of has the advantage that it's

1269
+00:47:16,280 --> 00:47:19,599
+pretty easy to check after the fact if

1270
+00:47:18,200 --> 00:47:22,480
+the sequence has satisfied this

1271
+00:47:19,599 --> 00:47:24,400
+constraint you know we could train some

1272
+00:47:22,480 --> 00:47:26,200
+smaller model to guess if the topic of a

1273
+00:47:24,400 --> 00:47:27,960
+sentence is about climbing or could check

1274
+00:47:26,200 --> 00:47:30,040
+for keywords we could have a friend

1275
+00:47:27,960 --> 00:47:31,359
+who's willing to see this content like

1276
+00:47:30,040 --> 00:47:33,040
+filter through it and throw everything

1277
+00:47:31,359 --> 00:47:36,480
+out that is

1278
+00:47:33,040 --> 00:47:38,280
+about climbing but if this model um

1279
+00:47:36,480 --> 00:47:40,119
+ascribes really high likelihood to this

1280
+00:47:38,280 --> 00:47:42,559
+like if this model was trained on you

1281
+00:47:40,119 --> 00:47:44,760
+know data from CS PhD students this

1282
+00:47:42,559 --> 00:47:46,240
+could be an extremely high likelihood

1283
+00:47:44,760 --> 00:47:48,319
+suggestion and so we might need to

1284
+00:47:46,240 --> 00:47:49,839
+regenerate hundreds or thousands of

1285
+00:47:48,319 --> 00:47:52,559
+sequences to find something that's not

1286
+00:47:49,839 --> 00:47:55,240
+about climbing um and that feels a little

1287
+00:47:52,559 --> 00:47:56,920
+bit inefficient right so is there

1288
+00:47:55,240 --> 00:47:59,040
+something that we can do that's a little

1289
+00:47:56,920 --> 00:48:01,599
+bit better than that
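The sample-then-filter idea just described might look like the sketch below, where `sample_fn` and the `is_about_climbing` classifier are hypothetical stand-ins:

```python
def rejection_filter(prompt, sample_fn, is_about_climbing, n_samples=100):
    """Draw outputs and discard any the classifier flags. If the model
    assigns high likelihood to climbing, most samples get thrown away,
    which is exactly the inefficiency described above."""
    kept = []
    for _ in range(n_samples):
        y = sample_fn(prompt)
        if not is_about_climbing(y):   # e.g., a small topic classifier
            kept.append(y)
    return kept
```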
1290
+00:47:59,040 --> 00:48:03,200
+well really we'd like to guess at some point

1291
+00:48:01,599 --> 00:48:05,200
+during our generation if the sequence is

1292
+00:48:03,200 --> 00:48:08,000
+going to be about climbing and maybe like

1293
+00:48:05,200 --> 00:48:10,640
+recalibrate or you know we could even

1294
+00:48:08,000 --> 00:48:12,079
+restart or sort of shape our generations

1295
+00:48:10,640 --> 00:48:14,520
+so that we don't wind up with a sequence

1296
+00:48:12,079 --> 00:48:16,319
+that's about climbing in the first place

1297
+00:48:14,520 --> 00:48:19,359
+um one of the methods that we'll discuss

1298
+00:48:16,319 --> 00:48:20,920
+to do this is a method called FUDGE um

1299
+00:48:19,359 --> 00:48:22,800
+and unfortunately in their paper they

1300
+00:48:20,920 --> 00:48:24,240
+don't have the same anti-climbing bias I

1301
+00:48:22,800 --> 00:48:27,000
+do so this example is actually about

1302
+00:48:24,240 --> 00:48:29,000
+formality instead um the idea here is

1303
+00:48:27,000 --> 00:48:32,079
+that we want a sequence output of the

1304
+00:48:29,000 --> 00:48:34,079
+model that sort of satisfies this

1305
+00:48:32,079 --> 00:48:36,079
+constraint of being formal and the way

1306
+00:48:34,079 --> 00:48:39,960
+we're going to do this is at each step

1307
+00:48:36,079 --> 00:48:41,640
+of prediction we get the outputs of what

1308
+00:48:39,960 --> 00:48:44,160
+the model predicts is the next token

1309
+00:48:41,640 --> 00:48:47,319
+right this sort of distribution here in

1310
+00:48:44,160 --> 00:48:49,760
+blue and we also have some second

1311
+00:48:47,319 --> 00:48:52,079
+distribution which says given sort of

1312
+00:48:49,760 --> 00:48:54,480
+what we have so far how likely is this

1313
+00:48:52,079 --> 00:48:56,920
+to be a formal sentence at the end right

1314
+00:48:54,480 --> 00:48:58,880
+does a sentence that starts do you want

1315
+00:48:56,920 --> 00:49:01,200
+have a high likelihood of being formal

1316
+00:48:58,880 --> 00:49:04,559
+versus a sentence that starts do you

1317
+00:49:01,200 --> 00:49:07,200
+prefer and so this sort of guess at what

1318
+00:49:04,559 --> 00:49:09,520
+will be formal at the end of the um

1319
+00:49:07,200 --> 00:49:10,960
+generation will put high likelihood on

1320
+00:49:09,520 --> 00:49:13,599
+things that result in really formal

1321
+00:49:10,960 --> 00:49:15,880
+sentences like do you prefer

1322
+00:49:13,599 --> 00:49:17,200
+whereas the original model might

1323
+00:49:15,880 --> 00:49:19,440
+have higher likelihood on things that

1324
+00:49:17,200 --> 00:49:22,559
+are maybe more commonly said like do you

1325
+00:49:19,440 --> 00:49:24,319
+want um so we combine these two

1326
+00:49:22,559 --> 00:49:26,280
+distributions you can just multiply them

1327
+00:49:24,319 --> 00:49:29,079
+together and then we sample from this

1328
+00:49:26,280 --> 00:49:30,520
+modified distribution which now has some

1329
+00:49:29,079 --> 00:49:32,359
+sort of high weight on things that the

1330
+00:49:30,520 --> 00:49:33,559
+model thinks are likely but also takes

1331
+00:49:32,359 --> 00:49:35,960
+into account the likelihood of

1332
+00:49:33,559 --> 00:49:38,240
+satisfying a constraint um this is

1333
+00:49:35,960 --> 00:49:40,640
+another sort of method of modifying our

1334
+00:49:38,240 --> 00:49:42,520
+sampling distribution um with some

1335
+00:49:40,640 --> 00:49:44,520
+external information here and so this

1336
+00:49:42,520 --> 00:49:47,440
+results in sequences that wind up being

1337
+00:49:44,520 --> 00:49:48,799
+sort of more likely to be formal without

1338
+00:49:47,440 --> 00:49:50,280
+having to sample a whole bunch of

1339
+00:49:48,799 --> 00:49:52,880
+sentences and reject the ones that we

1340
+00:49:50,280 --> 00:49:54,720
+think don't satisfy this constraint
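The combination step being described, sketched in PyTorch; `formal_prob` stands in for the future discriminator's per-token estimate of P(sequence ends up formal | prefix + that token):

```python
import torch

def fudge_step(lm_logits: torch.Tensor,
               formal_prob: torch.Tensor) -> torch.Tensor:
    """Multiply the LM's next-token distribution by the discriminator's
    probability that the finished sequence satisfies the constraint,
    then renormalize and sample from the result."""
    p_lm = torch.softmax(lm_logits, dim=-1)
    combined = p_lm * formal_prob          # elementwise over the vocabulary
    return combined / combined.sum(-1, keepdim=True)

# next_token = torch.multinomial(fudge_step(lm_logits, formal_prob), 1)
```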
1341
+00:49:52,880 --> 00:49:57,119
+so how do we get sort of a guess of what

1342
+00:49:54,720 --> 00:49:58,839
+will be formal at the end of generation

1343
+00:49:57,119 --> 00:50:01,319
+um this is where the name FUDGE comes

1344
+00:49:58,839 --> 00:50:03,319
+from the FUD stands for future

1345
+00:50:01,319 --> 00:50:06,640
+discriminator and so what they do is

1346
+00:50:03,319 --> 00:50:08,920
+they train a model on prefixes to guess

1347
+00:50:06,640 --> 00:50:10,400
+whether that sequence will be formal um

1348
+00:50:08,920 --> 00:50:12,040
+you can do this if you have a bunch of

1349
+00:50:10,400 --> 00:50:15,319
+data that's sort of sorted into formal

1350
+00:50:12,040 --> 00:50:17,720
+and not formal right every um sort of

1351
+00:50:15,319 --> 00:50:20,119
+prefix of a sentence in the formal

1352
+00:50:17,720 --> 00:50:21,480
+category is a training example right you

1353
+00:50:20,119 --> 00:50:23,720
+know a sentence that starts do you

1354
+00:50:21,480 --> 00:50:27,599
+prefer you can chop off each token to

1355
+00:50:23,720 --> 00:50:29,920
+get sort of a um set of prefixes

1356
+00:50:27,599 --> 00:50:31,160
+of sequences that have the label formal

1357
+00:50:29,920 --> 00:50:33,559
+and you can do the same thing to your

1358
+00:50:31,160 --> 00:50:34,920
+informal set and train a discriminator

1359
+00:50:33,559 --> 00:50:36,559
+to choose between them to say like

1360
+00:50:34,920 --> 00:50:38,400
+what's the probability that the sentence

1361
+00:50:36,559 --> 00:50:41,160
+will belong to the formal set when we

1362
+00:50:38,400 --> 00:50:43,319
+finish and so this idea of sort of

1363
+00:50:41,160 --> 00:50:44,359
+trying to guess at a given decoding step

1364
+00:50:43,319 --> 00:50:49,480
+if we're going to wind up with our

1365
+00:50:44,359 --> 00:50:50,799
+constraints satisfied at the end um is a

1366
+00:50:49,480 --> 00:50:53,000
+sort of key way to do constrained

1367
+00:50:50,799 --> 00:50:56,000
+decoding um and one that we'll return to

1368
+00:50:53,000 --> 00:50:58,280
+in just a couple slides here

1369
+00:50:56,000 --> 00:51:00,440
+I want to touch on something

1370
+00:50:58,280 --> 00:51:03,079
+slightly different which is that maybe

1371
+00:51:00,440 --> 00:51:04,599
+one of the constraints we care about is

1372
+00:51:03,079 --> 00:51:07,319
+something a little more nebulous like we

1373
+00:51:04,599 --> 00:51:09,160
+want to match human preference um the

1374
+00:51:07,319 --> 00:51:12,079
+way that we usually accomplish this

1375
+00:51:09,160 --> 00:51:14,920
+constraint is a little bit different

1376
+00:51:12,079 --> 00:51:16,040
+right um this we'd usually do through

1377
+00:51:14,920 --> 00:51:18,839
+like reinforcement learning from

1378
+00:51:16,040 --> 00:51:21,559
+human feedback um and so we take sort of

1379
+00:51:18,839 --> 00:51:24,960
+our original model distribution and we

1380
+00:51:21,559 --> 00:51:27,960
+take a sort of really tight

1381
+00:51:24,960 --> 00:51:30,200
+distribution of evidence that says like

1382
+00:51:27,960 --> 00:51:31,680
+um this model says that this sequence is

1383
+00:51:30,200 --> 00:51:33,960
+really high reward this sequence is

1384
+00:51:31,680 --> 00:51:35,640
+really low reward and we try to sort of

1385
+00:51:33,960 --> 00:51:38,200
+combine them somehow through training so

1386
+00:51:35,640 --> 00:51:41,240
+we get a new model that is um quote

1387
+00:51:38,200 --> 00:51:43,240
+unquote aligned in that it has like a

1388
+00:51:41,240 --> 00:51:45,280
+higher likelihood of giving us things

1389
+00:51:43,240 --> 00:51:48,640
+that have really high reward according

1390
+00:51:45,280 --> 00:51:51,319
+to our reward distribution um you can

1391
+00:51:48,640 --> 00:51:53,599
+view this though as a type of Bayesian

1392
+00:51:51,319 --> 00:51:55,119
+inference and so what this means is the

1393
+00:51:53,599 --> 00:51:57,440
+distribution that we really want to get

1394
+00:51:55,119 --> 00:51:59,880
+at the end is a distribution that

1395
+00:51:57,440 --> 00:52:03,160
+combines our original model's

1396
+00:51:59,880 --> 00:52:05,680
+distribution and some idea of like how

1397
+00:52:03,160 --> 00:52:08,480
+likely we are to satisfy the reward

1398
+00:52:05,680 --> 00:52:10,720
+right um this we do through

1399
+00:52:08,480 --> 00:52:12,359
+reinforcement learning but if we sort of

1400
+00:52:10,720 --> 00:52:14,480
+know what these two distributions look

1401
+00:52:12,359 --> 00:52:16,119
+like we've just been talking about

1402
+00:52:14,480 --> 00:52:17,680
+a lot of methods that modify the

1403
+00:52:16,119 -->
00:52:20,119
+original model's distribution with

1404
+00:52:17,680 --> 00:52:21,880
+external information it seems like maybe

1405
+00:52:20,119 --> 00:52:24,760
+we could just add that external

1406
+00:52:21,880 --> 00:52:26,200
+information in at decoding time to get

1407
+00:52:24,760 --> 00:52:29,040
+some of the same

1408
+00:52:26,200 --> 00:52:31,040
+effects um and it turns out you can do

1409
+00:52:29,040 --> 00:52:32,799
+exactly this so this is a paper from

1410
+00:52:31,040 --> 00:52:36,680
+last year called reward augmented

1411
+00:52:32,799 --> 00:52:39,079
+decoding and the idea here is sort of um

1412
+00:52:36,680 --> 00:52:41,839
+in the same conceptual class as FUDGE

1413
+00:52:39,079 --> 00:52:44,079
+but instead of um predicting whether

1414
+00:52:41,839 --> 00:52:46,079
+we're likely to satisfy the constraint

1415
+00:52:44,079 --> 00:52:47,599
+we're predicting how much reward we

1416
+00:52:46,079 --> 00:52:49,880
+think that sequence will have at the end

1417
+00:52:47,599 --> 00:52:52,599
+of generation so we take our original

1418
+00:52:49,880 --> 00:52:54,839
+model without doing any RLHF and we get

1419
+00:52:52,599 --> 00:52:58,160
+the output we get the predictions for

1420
+00:52:54,839 --> 00:52:59,400
+the next token and then we use a model

1421
+00:52:58,160 --> 00:53:02,359
+that's been trained to predict the

1422
+00:52:59,400 --> 00:53:05,040
+likely reward given some prefix like a

1423
+00:53:02,359 --> 00:53:06,720
+future discriminator and we calculate

1424
+00:53:05,040 --> 00:53:08,200
+the likely reward if we pick each of

1425
+00:53:06,720 --> 00:53:09,799
+those tokens and then we use the

1426
+00:53:08,200 --> 00:53:12,319
+combination of those two distributions

1427
+00:53:09,799 --> 00:53:13,720
+to choose what to decode next um and

1428
+00:53:12,319 --> 00:53:16,000
+this sort of gives you some of the

1429
+00:53:13,720 --> 00:53:18,440
+benefits of RLHF without actually having

1430
+00:53:16,000 --> 00:53:21,200
+to do reinforcement learning so it's a

1431
+00:53:18,440 --> 00:53:23,160
+way of treating like aligning to human

1432
+00:53:21,200 --> 00:53:26,839
+feedback as just another constraint that

1433
+00:53:23,160 --> 00:53:30,400
+you can impose at decoding time
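A sketch of that idea; combining in logit space with a weight `beta` is an assumption here, not necessarily the paper's exact formulation:

```python
import torch

def rad_step(lm_logits: torch.Tensor,
             reward_estimates: torch.Tensor,
             beta: float = 1.0) -> torch.Tensor:
    """Re-weight the next-token distribution by a prefix reward model's
    estimate of the eventual reward if each candidate token is chosen.
    `reward_estimates` holds one scalar per vocabulary item."""
    return torch.softmax(lm_logits + beta * reward_estimates, dim=-1)
```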
1434
+00:53:26,839 --> 00:53:32,319
+so those were sort of a subset of the

1435
+00:53:30,400 --> 00:53:34,280
+um constrained decoding strategies that

1436
+00:53:32,319 --> 00:53:35,799
+people use um before we get into the

1437
+00:53:34,280 --> 00:53:38,400
+human-in-the-loop stuff are there any

1438
+00:53:35,799 --> 00:53:38,400
+questions on

1439
+00:53:39,040 --> 00:53:43,599
+this yes for

1440
+00:53:44,960 --> 00:53:48,319
+the do you have

1441
+00:53:52,799 --> 00:53:57,440
+to

1442
+00:53:55,640 --> 00:54:00,000
+right so for the discriminator do you

1443
+00:53:57,440 --> 00:54:01,440
+need to train one for every constraint

1444
+00:54:00,000 --> 00:54:02,920
+and you do yeah so you need to have some

1445
+00:54:01,440 --> 00:54:05,319
+set of data that satisfies your

1446
+00:54:02,920 --> 00:54:08,200
+constraint and some set of data that

1447
+00:54:05,319 --> 00:54:10,200
+doesn't before you can enforce a new

1448
+00:54:08,200 --> 00:54:12,040
+constraint an alternative might be

1449
+00:54:10,200 --> 00:54:16,400
+like in the paper that's what they did

1450
+00:54:12,040 --> 00:54:18,359
+but an alternative might be just to

1451
+00:54:16,400 --> 00:54:20,880
+train a discriminator to determine

1452
+00:54:18,359 --> 00:54:23,359
+whether any constraint was violated so

1453
+00:54:20,880 --> 00:54:25,599
+if you have 100 constraints you could do

1454
+00:54:23,359 --> 00:54:26,880
+a binary predictor about whether any

1455
+00:54:25,599 --> 00:54:29,040
+constraint is violated and that would

1456
+00:54:26,880 --> 00:54:30,559
+also be

1457
+00:54:29,040 --> 00:54:34,079
+sufficient but if you wanted to add a

1458
+00:54:30,559 --> 00:54:34,079
+new constraint you'd still have to

1459
+00:54:35,160 --> 00:54:41,319
+retrain or you'd have to retrain

1460
+00:54:38,119 --> 00:54:43,119
+yes, the reason that this is sort of

1461
+00:54:41,319 --> 00:54:45,240
+relatively reasonable to do is that this

1462
+00:54:43,119 --> 00:54:46,960
+determination of if a constraint is

1463
+00:54:45,240 --> 00:54:48,520
+likely to be violated is sort of a

1464
+00:54:46,960 --> 00:54:50,520
+lighter-weight or an easier task to

1465
+00:54:48,520 --> 00:54:52,079
+learn you can use a relatively small

1466
+00:54:50,520 --> 00:54:53,680
+model for this versus like your big

1467
+00:54:52,079 --> 00:54:55,920
+model that has to be able to

1468
+00:54:53,680 --> 00:54:58,400
+predict the next token for any sequence

1469
+00:54:55,920 --> 00:55:00,760
+yeah another like

1470
+00:54:58,400 --> 00:55:01,520
+interesting thing is if you think about

1471
+00:55:00,760 --> 00:55:04,119
+it normally you're predicting with your

1472
+00:55:01,520 --> 00:55:06,359
+big

1473
+00:55:04,119 --> 00:55:09,680
+softmax like this over all of your

1474
+00:55:06,359 --> 00:55:11,920
+vocabulary you can even use the same

1475
+00:55:09,680 --> 00:55:13,359
+representations here to predict with a

1476
+00:55:11,920 --> 00:55:14,559
+binary classifier uh whether the

1477
+00:55:13,359 --> 00:55:17,240
+constraint is violated let's say you

1478
+00:55:14,559 --> 00:55:19,240
+have 100

1479
+00:55:17,240 --> 00:55:21,520
+constraints this is still a vector of

1480
+00:55:19,240 --> 00:55:26,240
+size 100 compared to your vector of size

1481
+00:55:21,520 --> 00:55:28,280
+32,000 that you're using for Llama right

1482
+00:55:26,240 --> 00:55:32,799
+so it's not like this adds a lot the training

1483
+00:55:28,280 --> 00:55:32,799
+would cost some time but it adds very

little like inference time I guess
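A sketch of that comment: a small head on top of the LM's hidden state whose output is one violation probability per constraint (the sizes below are illustrative, not from the lecture):

```python
import torch
import torch.nn as nn

class ConstraintHeads(nn.Module):
    """Predict, from the LM's current hidden state, the probability that
    the finished sequence will violate each of n_constraints constraints.
    A 100-way head is tiny next to a 32,000-way vocabulary softmax."""
    def __init__(self, hidden_size: int = 4096, n_constraints: int = 100):
        super().__init__()
        self.head = nn.Linear(hidden_size, n_constraints)

    def forward(self, hidden_state: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(hidden_state))  # per-constraint P(violate)
```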
1484
+00:55:33,440 --> 00:55:38,960
+[inaudible]

1485
+00:55:35,880 --> 00:55:41,400
+so when you do the constraint do you

1486
+00:55:38,960 --> 00:55:43,160
+use like a more general

1487
+00:55:41,400 --> 00:55:44,680
+like do

1488
+00:55:43,160 --> 00:55:48,160
+[inaudible]

1489
+00:55:44,680 --> 00:55:50,799
+or I guess like in that constraint

1490
+00:55:48,160 --> 00:55:50,799
+can you add

1491
+00:55:52,559 --> 00:55:57,000
+like, is there

1492
+00:55:57,880 --> 00:56:00,720
+like is there a way to generalize your

1493
+00:55:59,400 --> 00:56:04,760
+constraint would be like don't talk

1494
+00:56:00,720 --> 00:56:07,039
+about this whole set of hobbies um you

1495
+00:56:04,760 --> 00:56:08,960
+could do that by training a

1496
+00:56:07,039 --> 00:56:10,400
+discriminator um by training one

1497
+00:56:08,960 --> 00:56:12,359
+discriminator that considers all of

1498
+00:56:10,400 --> 00:56:15,119
+those or by training like a hundred

1499
+00:56:12,359 --> 00:56:17,559
+different discriminators and then um

1500
+00:56:15,119 --> 00:56:19,520
+sort of taking like the maximum score

1501
+00:56:17,559 --> 00:56:21,240
+from any of them right like you want

1502
+00:56:19,520 --> 00:56:23,240
+to be able to exclude all of

1503
+00:56:21,240 --> 00:56:27,799
+these things so you consider if any of

1504
+00:56:23,240 --> 00:56:30,720
+them are violated yeah and for um reward

1505
+00:56:27,799 --> 00:56:32,839
+augmented decoding how do we sort of

1506
+00:56:30,720 --> 00:56:36,039
+like train that reward model or does that

1507
+00:56:32,839 --> 00:56:38,400
+just come from the previously done RLHF

1508
+00:56:36,039 --> 00:56:41,079
+data that they store from there and then

1509
+00:56:38,400 --> 00:56:44,119
+you sort of like train another

1510
+00:56:41,079 --> 00:56:47,880
+discriminator but this one

1511
+00:56:44,119 --> 00:56:50,799
+not sure I fully understand yeah so how do

1512
+00:56:47,880 --> 00:56:52,920
+we get the reward model here this is

1513
+00:56:50,799 --> 00:56:55,280
+we can use the same data that we'd use

1514
+00:56:52,920 --> 00:56:58,000
+for RLHF but we need a slightly different

1515
+00:56:55,280 --> 00:57:01,119
+model so for RLHF we'll train a reward

1516
+00:56:58,000 --> 00:57:02,599
+model over full sequences right and here

1517
+00:57:01,119 --> 00:57:05,280
+we need to do the same trick where we

1518
+00:57:02,599 --> 00:57:07,280
+sort of look at just prefixes and try to

1519
+00:57:05,280 --> 00:57:09,640
+guess the reward downstream but if we

1520
+00:57:07,280 --> 00:57:12,440
+already have preference data then

1521
+00:57:09,640 --> 00:57:15,119
+we have um like we have a data

1522
+00:57:12,440 --> 00:57:16,720
+source to do this with I think if I'm

1523
+00:57:15,119 --> 00:57:19,240
+remembering correctly they also had a

1524
+00:57:16,720 --> 00:57:20,920
+couple more sort of tricks for data

1525
+00:57:19,240 --> 00:57:22,640
+augmentation to get this to work this is

1526
+00:57:20,920 --> 00:57:25,720
+sort of like a non-trivial thing to

1527
+00:57:22,640 --> 00:57:28,039
+figure out um because like reward is

1528
+00:57:25,720 --> 00:57:30,200
+generally a sequence-level

1529
+00:57:28,039 --> 00:57:32,280
+attribute and also if you don't know

1530
+00:57:30,200 --> 00:57:34,160
+very much about RLHF we're going to cover

1531
+00:57:32,280 --> 00:57:36,400
+that in a future class so don't worry if

1532
+00:57:34,160 --> 00:57:37,880
+this is new yeah sorry to jump ahead a

1533
+00:57:36,400 --> 00:57:39,880
+little no no

1534
+00:57:37,880 --> 00:57:43,640
+worries

1535
+00:57:39,880 --> 00:57:47,240
+yeah applications of this like why would we

1536
+00:57:43,640 --> 00:57:49,640
+be doing this to ensure it could be like

1537
+00:57:47,240 --> 00:57:52,839
+our LLM would want to highlight certain

1538
+00:57:49,640 --> 00:57:53,799
+qualities like we want our agents to be

1539
+00:57:52,839 --> 00:57:55,960
+more

1540
+00:57:53,799 --> 00:57:57,839
+empathetic is there

1541
+00:57:55,960 --> 00:57:59,440
+something yeah like what are the real

1542
+00:57:57,839 --> 00:58:01,280
+world applications like could we use

1543
+00:57:59,440 --> 00:58:03,680
+this to make LLMs more empathetic or

1544
+00:58:01,280 --> 00:58:06,359
+something yeah any real attribute

1545
+00:58:03,680 --> 00:58:08,000
+that you can sort of collect like

1546
+00:58:06,359 --> 00:58:09,839
+positive
and negative data for you could

1547
+00:58:08,000 --> 00:58:12,200
+do this kind of constraint for I think

1548
+00:58:09,839 --> 00:58:15,119
+the ones you see most commonly are

1549
+00:58:12,200 --> 00:58:16,480
+the human preference and then like

1550
+00:58:15,119 --> 00:58:18,839
+negative constraints like you don't want

1551
+00:58:16,480 --> 00:58:20,000
+your model to generate offensive content

1552
+00:58:18,839 --> 00:58:21,839
+and if you can build like a good

1553
+00:58:20,000 --> 00:58:23,319
+discriminator for is a sentence going in

1554
+00:58:21,839 --> 00:58:26,160
+a really offensive direction you can

1555
+00:58:23,319 --> 00:58:28,440
+kind of stop it from generating

1556
+00:58:26,160 --> 00:58:30,480
+yeah would it be a good idea if you

1557
+00:58:28,440 --> 00:58:33,760
+generate a bunch of samples and ask the

1558
+00:58:30,480 --> 00:58:35,480
+model itself whether it violates the

1559
+00:58:33,760 --> 00:58:37,319
+constraint yeah you could do that for sure could

1560
+00:58:35,480 --> 00:58:38,920
+you ask like could you generate a bunch

1561
+00:58:37,319 --> 00:58:42,440
+of samples and ask the model if it

1562
+00:58:38,920 --> 00:58:44,720
+violates the constraint um this is also

1563
+00:58:42,440 --> 00:58:47,119
+a type of sort of sample and then rerank

1564
+00:58:44,720 --> 00:58:52,319
+strategy um but yeah this would be sort

1565
+00:58:47,119 --> 00:58:54,000
+of a more um clever like less

1566
+00:58:52,319 --> 00:58:55,559
+heavyweight version of this checking if

1567
+00:58:54,000 --> 00:58:57,319
+it's about climbing right you'd

1568
+00:58:55,559 --> 00:58:58,520
+like ask the model if it violated the

1569
+00:58:57,319 --> 00:59:00,160
+constraint and if it's a good enough

1570
+00:58:58,520 --> 00:59:02,480
+model it could probably do that pretty

1571
+00:59:00,160 --> 00:59:05,160
+well I suppose in that case you don't

1572
+00:59:02,480 --> 00:59:08,160
+have to train anything yeah yeah and

1573
+00:59:05,160 --> 00:59:10,359
+this is sort of a general like the

1574
+00:59:08,160 --> 00:59:12,240
+generating text that like satisfies a

1575
+00:59:10,359 --> 00:59:14,079
+constraint is harder than checking if a

1576
+00:59:12,240 --> 00:59:16,280
+text satisfies a constraint so even if

1577
+00:59:14,079 --> 00:59:17,880
+the model isn't good about like not

1578
+00:59:16,280 --> 00:59:19,440
+generating text about climbing when you

1579
+00:59:17,880 --> 00:59:20,520
+tell it to it might be able to tell if

1580
+00:59:19,440 --> 00:59:23,640
+text is

1581
+00:59:20,520 --> 00:59:26,640
+about climbing yeah yeah so how do

1582
+00:59:23,640 --> 00:59:26,640
+you

1583
+00:59:28,400 --> 00:59:32,359
+have different

1584
+00:59:32,920 --> 00:59:36,319
+different you have

1585
+00:59:36,599 --> 00:59:42,119
+to yeah like how do you collect the data

1586
+00:59:38,839 --> 00:59:45,720
+to train this discriminator um generally

1587
+00:59:42,119 --> 00:59:47,160
+you're going to look to

1588
+00:59:45,720 --> 00:59:48,720
+see if there are data sets that already

1589
+00:59:47,160 --> 00:59:50,160
+capture this attribute or you could

1590
+00:59:48,720 --> 00:59:51,599
+sort of write heuristics to try to

1591
+00:59:50,160 --> 00:59:53,839
+recover it if it's an attribute that not

1592
+00:59:51,599 --> 00:59:55,480
+a lot of other people care about like

1593
+00:59:53,839 --> 00:59:58,280
+you could write
your heuristic to check

1594
+00:59:55,480 --> 01:00:00,160
+if text is about climbing for instance

1595
+00:59:58,280 --> 01:00:02,359
+um and then try to recover noisy

1596
+01:00:00,160 --> 01:00:04,200
+samples of data that are or are not about

1597
+01:00:02,359 --> 01:00:05,559
+climbing maybe you could scrape a

1598
+01:00:04,200 --> 01:00:07,000
+climbing forum and then scrape like a

1599
+01:00:05,559 --> 01:00:09,079
+hiking forum and use the difference

1600
+01:00:07,000 --> 01:00:10,319
+between them um but for a lot of tasks

1601
+01:00:09,079 --> 01:00:11,760
+there's actually pretty good data sets

1602
+01:00:10,319 --> 01:00:14,400
+already out there for this so there's

1603
+01:00:11,760 --> 01:00:17,480
+like there's a lot of style transfer

1604
+01:00:14,400 --> 01:00:20,200
+tasks that are like go from informal to

1605
+01:00:17,480 --> 01:00:22,240
+formal or go from this to that or like

1606
+01:00:20,200 --> 01:00:24,039
+make this text [inaudible] and

1607
+01:00:22,240 --> 01:00:26,559
+you can find like data from those

1608
+01:00:24,039 --> 01:00:26,559
+sources

1609
+01:00:26,799 --> 01:00:31,599
+we never like talked about RLHF yet but I'm

1610
+01:00:29,520 --> 01:00:34,520
+really curious with like reward augmented

1611
+01:00:31,599 --> 01:00:38,039
+decoding whether it would perform better

1612
+01:00:34,520 --> 01:00:39,079
+than like fine-tuning with RLHF it's certainly

1613
+01:00:38,039 --> 01:00:42,720
+more

1614
+01:00:39,079 --> 01:00:45,039
+efficient but I think this is a

1615
+01:00:42,720 --> 01:00:49,760
+comparison they make in their paper but

1616
+01:00:45,039 --> 01:00:52,520
+I don't remember their numbers yeah um in

1617
+01:00:49,760 --> 01:00:55,280
+general there's sort of a tradeoff like you

1618
+01:00:52,520 --> 01:00:57,039
+can pay a one-time kind of heavy cost to

1619
+01:00:55,280 --> 01:00:58,880
+fine-tune or you can pay costs at

1620
+01:00:57,039 --> 01:01:01,160
+inference time every time to make

1621
+01:00:58,880 --> 01:01:03,880
+your model better in any of

1622
+01:01:01,160 --> 01:01:06,160
+these ways and depending on how much

1623
+01:01:03,880 --> 01:01:09,119
+inference you're planning to do like one or

1624
+01:01:06,160 --> 01:01:09,119
+the other of these could be

1625
+01:01:11,240 --> 01:01:16,400
+better

1626
+01:01:12,839 --> 01:01:19,200
+great so now we're going to talk about

1627
+01:01:16,400 --> 01:01:21,160
+sort of methods for introducing human

1628
+01:01:19,200 --> 01:01:22,680
+interaction into the decoding process

1629
+01:01:21,160 --> 01:01:25,240
+and everything we've looked at so far

1630
+01:01:22,680 --> 01:01:26,920
+has been very sort of black box kind

1631
+01:01:25,240 --> 01:01:28,920
+of hands off right like you give the

1632
+01:01:26,920 --> 01:01:30,640
+model some input maybe we do some kind

1633
+01:01:28,920 --> 01:01:33,640
+of manipulation on the decoding side you

1634
+01:01:30,640 --> 01:01:37,160
+get one output back right um but in a

1635
+01:01:33,640 --> 01:01:38,920
+lot of situations where maybe you have

1636
+01:01:37,160 --> 01:01:40,960
+some high-risk application and you need

1637
+01:01:38,920 --> 01:01:42,640
+somebody to be consistently monitoring

1638
+01:01:40,960 --> 01:01:43,799
+and maybe intervening or you're doing

1639
+01:01:42,640 --> 01:01:46,359
+something where you want to do some kind

1640
+01:01:43,799 --> 01:01:47,960 +of human AI collaboration um and you + +1641 +01:01:46,359 --> 01:01:49,160 +want to be able to go back and forth or + +1642 +01:01:47,960 --> 01:01:50,960 +you want to have a conversation with the + +1643 +01:01:49,160 --> 01:01:53,480 +model what you're actually doing is sort + +1644 +01:01:50,960 --> 01:01:54,960 +of a series of decodings with human + +1645 +01:01:53,480 --> 01:01:56,319 +intervention in between + +1646 +01:01:54,960 --> 01:01:58,640 +um and I'm going to talk about a couple + +1647 +01:01:56,319 --> 01:02:00,760 +of these strategies briefly I think if + +1648 +01:01:58,640 --> 01:02:02,200 +you've used sort of a modern llm you're + +1649 +01:02:00,760 --> 01:02:04,440 +probably familiar with at least a few of + +1650 +01:02:02,200 --> 01:02:06,720 +them already um we'll sort of put names + +1651 +01:02:04,440 --> 01:02:08,359 +to each of them and the set of examples + +1652 +01:02:06,720 --> 01:02:10,880 +that we're running with here are from a + +1653 +01:02:08,359 --> 01:02:13,880 +paper called wordcraft which is about um + +1654 +01:02:10,880 --> 01:02:15,480 +story generation with llm assistants but + +1655 +01:02:13,880 --> 01:02:17,559 +these can also be applied sort of more + +1656 +01:02:15,480 --> 01:02:20,319 +generally to any kind of task where + +1657 +01:02:17,559 --> 01:02:23,799 +you'd want to go back and forth with a + +1658 +01:02:20,319 --> 01:02:25,319 +model um the sort of easiest or maybe + +1659 +01:02:23,799 --> 01:02:27,599 +simplest place to start here is just + +1660 +01:02:25,319 --> 01:02:29,760 +with interleaving text right you can + +1661 +01:02:27,599 --> 01:02:31,400 +choose when the model starts and stops + +1662 +01:02:29,760 --> 01:02:33,720 +decoding and you can choose when a human + +1663 +01:02:31,400 --> 01:02:34,920 +is writing text in between and you can + +1664 +01:02:33,720 --> 01:02:36,680 +condition your model in sort of a + +1665 +01:02:34,920 --> 01:02:39,240 +mixture of human and model generated + +1666 +01:02:36,680 --> 01:02:41,279 +text to choose what to continue next um + +1667 +01:02:39,240 --> 01:02:43,680 +you can also do something like have the + +1668 +01:02:41,279 --> 01:02:45,319 +model generate a set of text edit that + +1669 +01:02:43,680 --> 01:02:47,119 +text in some way maybe the human is + +1670 +01:02:45,319 --> 01:02:48,640 +imposing some really subtle constraint + +1671 +01:02:47,119 --> 01:02:50,559 +like I want it to sound like my writing + +1672 +01:02:48,640 --> 01:02:52,200 +style we don't have a discriminator for + +1673 +01:02:50,559 --> 01:02:54,119 +this but the human can sort of modify + +1674 +01:02:52,200 --> 01:02:55,680 +the text and then continue generating + +1675 +01:02:54,119 --> 01:02:57,160 +from that point and that will influence + +1676 +01:02:55,680 --> 01:03:01,160 +the style of the text that continues + +1677 +01:02:57,160 --> 01:03:03,240 +being generative um a this case here is + +1678 +01:03:01,160 --> 01:03:04,720 +sort of a you're writing a story + +1679 +01:03:03,240 --> 01:03:06,520 +together and so you're going back and + +1680 +01:03:04,720 --> 01:03:07,799 +forth and editing the text like that but + +1681 +01:03:06,520 --> 01:03:10,319 +you can also think of any kind of + +1682 +01:03:07,799 --> 01:03:11,920 +conversation with a model as the same + +1683 +01:03:10,319 --> 01:03:15,319 +kind of interleaving of text right the + +1684 +01:03:11,920 --> 01:03:17,000 +model gives some um text you provide + +1685 +01:03:15,319 --> 01:03:18,599 
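A minimal sketch of this interleaving loop, using the Hugging Face transformers API with GPT-2 purely as a small stand-in model: the model continues the text, the human edits or extends it, and the mixed text becomes the next conditioning context.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

story = "Once upon a time"
for _ in range(3):
    # The model continues from the mixed human+model context so far.
    ids = tok(story, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=20, do_sample=True, top_p=0.9,
                         pad_token_id=tok.eos_token_id)
    story = tok.decode(out[0], skip_special_tokens=True)
    # The human writes (or edits) before the model's next turn.
    story += " " + input("your turn: ")
print(story)
```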
some text you go back and forth on like + +1686 +01:03:17,000 --> 01:03:20,480 +who's providing the text that conditions + +1687 +01:03:18,599 --> 01:03:23,039 +the + +1688 +01:03:20,480 --> 01:03:24,880 +model you also might want to do things + +1689 +01:03:23,039 --> 01:03:26,760 +like more fine-grained replace +so here the person has highlighted some + +1690 +01:03:24,880 --> 01:03:28,559 +text and said like make this more + +1691 +01:03:26,760 --> 01:03:31,640 +descriptive or shorten this to two words + +1692 +01:03:28,559 --> 01:03:33,960 +or maybe you want some additional + +1693 +01:03:31,640 --> 01:03:36,079 +constraint like can this be happier can + +1694 +01:03:33,960 --> 01:03:38,520 +this be sad like change the ending or + +1695 +01:03:36,079 --> 01:03:40,960 +something um you can accomplish this in + +1696 +01:03:38,520 --> 01:03:43,760 +a variety of ways um here this is done + +1697 +01:03:40,960 --> 01:03:45,799 +through input manipulation so you prompt + +1698 +01:03:43,760 --> 01:03:47,680 +your model differently with different + +1699 +01:03:45,799 --> 01:03:50,359 +constraints you can also do this with an + +1700 +01:03:47,680 --> 01:03:52,200 +actual modeling change like if you want + +1701 +01:03:50,359 --> 01:03:54,440 +some kind of infilling model um + +1702 +01:03:52,200 --> 01:03:56,119 +particularly for things like code this + +1703 +01:03:54,440 --> 01:03:57,720 +can be helpful so you want context from + +1704 +01:03:56,119 --> 01:04:01,119 +left and right sides um or you can do + +1705 +01:03:57,720 --> 01:04:02,440 +this with the decoding changes that we + +1706 +01:04:01,119 --> 01:04:03,799 +talked about in the previous section + +1707 +01:04:02,440 --> 01:04:05,960 +right you could add a discriminator for + +1708 +01:04:03,799 --> 01:04:07,799 +descriptiveness of text or you could do + +1709 +01:04:05,960 --> 01:04:09,680 +some kind of sampling ranking method to + +1710 +01:04:07,799 --> 01:04:11,680 +recover a more descriptive + +1711 +01:04:09,680 --> 01:04:13,880 +output another thing that's very common + +1712 +01:04:11,680 --> 01:04:16,640 +in this space is sampling and reranking + +1713 +01:04:13,880 --> 01:04:17,960 +methods where the human is the one + +1714 +01:04:16,640 --> 01:04:20,839 +choosing what to return right so in + +1715 +01:04:17,960 --> 01:04:23,640 +wordcraft you see a set of choices and + +1716 +01:04:20,839 --> 01:04:25,960 +you can choose text to insert but more + +1717 +01:04:23,640 --> 01:04:28,200 +commonly in something like um ChatGPT + +1718 +01:04:25,960 --> 01:04:30,720 +or Bard you see this little option to + +1719 +01:04:28,200 --> 01:04:33,160 +regenerate text right you as the human + +1720 +01:04:30,720 --> 01:04:34,880 +can reject the text and say like no I + +1721 +01:04:33,160 --> 01:04:36,160 +don't like this give me a different + +1722 +01:04:34,880 --> 01:04:38,680 +output and this is also sort of a way of + +1723 +01:04:36,160 --> 01:04:41,359 +controlling decoding um just by doing it + +1724 +01:04:38,680 --> 01:04:44,079 +on a human rather than an algorithmic + +1725 +01:04:41,359 --> 01:04:46,319 +level of course you don't necessarily + +1726 +01:04:44,079 --> 01:04:49,279 +need a human in here and so um some + +1727 +01:04:46,319 --> 01:04:51,200 +recent work has looked at functionally + +1728 +01:04:49,279 --> 01:04:52,960 +using models to make these decisions + +1729 +01:04:51,200 --> 01:04:55,799 +instead um this is a prompting paper + +1730 +01:04:52,960 --> 
01:05:00,359 +called tree of thought which was sort of + +1732 +01:04:57,480 --> 01:05:02,279 +very popular on Twitter last summer um + +1733 +01:05:00,359 --> 01:05:06,119 +and the idea here is that you're going + +1734 +01:05:02,279 --> 01:05:08,480 +to generate um several smaller sequences + +1735 +01:05:06,119 --> 01:05:11,200 +um like a couple of sentences a + +1736 +01:05:08,480 --> 01:05:13,160 +reasoning step or a thought in the paper + +1737 +01:05:11,200 --> 01:05:14,839 +and you're going to use a model to + +1738 +01:05:13,160 --> 01:05:16,839 +choose which ones to continue and you + +1739 +01:05:14,839 --> 01:05:19,000 +can do different sort of constraints + +1740 +01:05:16,839 --> 01:05:21,960 +here like I want to sort of rank this + +1741 +01:05:19,000 --> 01:05:25,079 +set of three or maybe I want to predict + +1742 +01:05:21,960 --> 01:05:26,839 +if any in this set is wrong like is this + +1743 +01:05:25,079 --> 01:05:29,400 +a good reasoning step and if the model + +1744 +01:05:26,839 --> 01:05:32,240 +says no you no longer continue that but + +1745 +01:05:29,400 --> 01:05:33,559 +the idea here is through prompting + +1746 +01:05:32,240 --> 01:05:35,640 +really achieving something that's sort + +1747 +01:05:33,559 --> 01:05:38,960 +of if you squint at it looks a lot like + +1748 +01:05:35,640 --> 01:05:41,279 +beam search right instead of doing a um + +1749 +01:05:38,960 --> 01:05:43,160 +like token level thing and making a + +1750 +01:05:41,279 --> 01:05:45,079 +decision based on likelihood you're + +1751 +01:05:43,160 --> 01:05:47,880 +generating sort of several sentences at + +1752 +01:05:45,079 --> 01:05:50,599 +a time and making a decision based on + +1753 +01:05:47,880 --> 01:05:52,359 +this model's feedback right this signal + +1754 +01:05:50,599 --> 01:05:53,799 +from an external source which here is a + +1755 +01:05:52,359 --> 01:05:55,279 +model but could also be a human if + +1756 +01:05:53,799 --> 01:05:57,920 +you're willing to sort of wait + +1757 +01:05:55,279 --> 01:06:01,559 +around for them to make the decision and + +1758 +01:05:57,920 --> 01:06:03,839 +so this is a way of sort of giving + +1759 +01:06:01,559 --> 01:06:06,640 +feedback on a broader level than single + +1760 +01:06:03,839 --> 01:06:09,079 +tokens um to guide a decoding process to + +1761 +01:06:06,640 --> 01:06:09,079 +a final + +1762 +01:06:09,839 --> 01:06:15,079 +output so the last couple of things we'll + +1763 +01:06:12,760 --> 01:06:17,520 +talk about here are sort of practical + +1764 +01:06:15,079 --> 01:06:19,839 +considerations speed choosing decoding + +1765 +01:06:17,520 --> 01:06:22,599 +methods um but I can take any questions + +1766 +01:06:19,839 --> 01:06:22,599 +before that + +1767 +01:06:23,000 --> 01:06:26,000 +to + +1768 +01:06:26,760 --> 01:06:32,920 +great so how do you make this fast and + +1769 +01:06:30,359 --> 01:06:34,920 +in particular if you've ever tried to + +1770 +01:06:32,920 --> 01:06:36,920 +sort of benchmark performance of a model + +1771 +01:06:34,920 --> 01:06:38,720 +what you realize pretty quickly is that + +1772 +01:06:36,920 --> 01:06:40,720 +the vast majority of time is actually + +1773 +01:06:38,720 --> 01:06:43,440 +spent in decoding you have to generate + +1774 +01:06:40,720 --> 01:06:45,319 +one token at a time you have to sort of + +1775 +01:06:43,440 --> 01:06:46,920 +pass that back through the model to get + +1776 +01:06:45,319 --> 01:06:51,279 +conditioning to generate the next token
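A very rough sketch of the tree-of-thought-style loop described above: propose a few candidate reasoning steps, score them with an external judge (a model in the paper, possibly a human), keep the best, and repeat. propose_thoughts and judge are hypothetical stand-ins for LLM calls.

```python
def propose_thoughts(state, k=3):
    # Stand-in for prompting a model for k candidate next reasoning steps.
    return [f"{state} -> option {i}" for i in range(k)]

def judge(thought):
    # Stand-in for a model (or human) scoring whether a step looks good.
    return float(len(thought) % 7)

def tree_of_thought(problem, depth=3, beam=2):
    frontier = [problem]
    for _ in range(depth):
        candidates = [t for s in frontier for t in propose_thoughts(s)]
        # Like beam search, but pruning on judge feedback rather than
        # token-level likelihood.
        frontier = sorted(candidates, key=judge, reverse=True)[:beam]
    return frontier

print(tree_of_thought("make 24 from 4 4 6 8"))
```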
+1777 +01:06:46,920 --> 01:06:53,599 +and so +this is um generally fairly slow + +1778 +01:06:51,279 --> 01:06:54,839 +um this is sort of a a major impediment + +1779 +01:06:53,599 --> 01:06:56,359 +if you're trying to do something like a + +1780 +01:06:54,839 --> 01:06:57,839 +streaming application where you want or + +1781 +01:06:56,359 --> 01:06:59,559 +a chat application where you don't want + +1782 +01:06:57,839 --> 01:07:03,599 +the person to be waiting around for an + +1783 +01:06:59,559 --> 01:07:06,799 +answer um one way to do this is a method + +1784 +01:07:03,599 --> 01:07:09,160 +called speculative decoding and this is a + +1785 +01:07:06,799 --> 01:07:12,599 +method where you're using a smaller + +1786 +01:07:09,160 --> 01:07:14,039 +model um not like in contrastive + +1787 +01:07:12,599 --> 01:07:16,240 +decoding right there we're using a smaller + +1788 +01:07:14,039 --> 01:07:17,559 +model to decide what not to generate but + +1789 +01:07:16,240 --> 01:07:20,119 +here we're using a smaller model to + +1790 +01:07:17,559 --> 01:07:21,880 +decide what to generate um and the + +1791 +01:07:20,119 --> 01:07:24,960 +idea here is that most of these tokens + +1792 +01:07:21,880 --> 01:07:26,480 +are maybe not super hard to decide it's + +1793 +01:07:24,960 --> 01:07:27,400 +just that occasionally the bigger model + +1794 +01:07:26,480 --> 01:07:30,240 +might want to go in a different + +1795 +01:07:27,400 --> 01:07:32,920 +direction so these green tokens here are + +1796 +01:07:30,240 --> 01:07:35,160 +generated by a smaller model our amateur + +1797 +01:07:32,920 --> 01:07:37,079 +model here and the larger model acts + +1798 +01:07:35,160 --> 01:07:39,960 +largely as a verifier and what it does + +1799 +01:07:37,079 --> 01:07:43,000 +is it checks if the output so far is + +1800 +01:07:39,960 --> 01:07:44,920 +going in a direction that's sort of + +1801 +01:07:43,000 --> 01:07:46,400 +in distribution for the big model like + +1802 +01:07:44,920 --> 01:07:49,240 +something that's within the realm of + +1803 +01:07:46,400 --> 01:07:50,720 +what it might sample and there's sort of + +1804 +01:07:49,240 --> 01:07:52,400 +an involved discussion in this paper of + +1805 +01:07:50,720 --> 01:07:55,200 +how you determine if something is in + +1806 +01:07:52,400 --> 01:07:58,000 +distribution um so here the smaller + +1807 +01:07:55,200 --> 01:08:00,240 +model generates like five or six tokens + +1808 +01:07:58,000 --> 01:08:02,559 +that the larger model says okay this + +1809 +01:08:00,240 --> 01:08:03,680 +looks great until it hits a token that + +1810 +01:08:02,559 --> 01:08:06,079 +the larger model would not have + +1811 +01:08:03,680 --> 01:08:07,920 +generated in that circumstance and then + +1812 +01:08:06,079 --> 01:08:10,279 +the larger model rejects that token and + +1813 +01:08:07,920 --> 01:08:13,000 +generates a different token instead so + +1814 +01:08:10,279 --> 01:08:15,440 +you can see here each of these red and + +1815 +01:08:13,000 --> 01:08:17,600 +then blue sections is where the larger + +1816 +01:08:15,440 --> 01:08:19,400 +model has rejected something and has to + +1817 +01:08:17,600 --> 01:08:21,920 +actually autoregressively decode a + +1818 +01:08:19,400 --> 01:08:24,199 +single token by contrast if you were + +1819 +01:08:21,920 --> 01:08:27,359 +doing regular decoding at each + +1820 +01:08:24,199 --> 01:08:28,799 +individual token in this sequence the um + +1821 +01:08:27,359 --> 01:08:31,640 +larger model would have had to make the + +1822 +01:08:28,799 --> 01:08:35,359 +full forward pass to decode a +token so
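A toy sketch of the speculative accept/reject rule for a single drafted token over a three-word vocabulary; real implementations verify a whole drafted block with one forward pass of the large model, and the distributions below are made-up numbers.

```python
import random

p_small = {"Hawaii": 0.6, "1961": 0.1, ".": 0.3}  # draft (amateur) model
p_large = {"Hawaii": 0.2, "1961": 0.7, ".": 0.1}  # target (verifier) model

drafted = "Hawaii"
# Accept the drafted token with probability min(1, p_large / p_small),
# which overall preserves the large model's distribution.
if random.random() < min(1.0, p_large[drafted] / p_small[drafted]):
    print("keep drafted token:", drafted)
else:
    # On rejection, resample from the residual (p_large - p_small), clipped at 0.
    residual = {t: max(p_large[t] - p_small[t], 0.0) for t in p_large}
    z = sum(residual.values())
    r, acc = random.random() * z, 0.0
    for t, w in residual.items():
        acc += w
        if r <= acc:
            print("resampled token:", t)
            break
```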
+1823 +01:08:31,640 --> 01:08:37,359 +here rather than doing maybe what + +1824 +01:08:35,359 --> 01:08:39,239 +probably like 20ish decoding steps to + +1825 +01:08:37,359 --> 01:08:41,560 +get this full sequence the larger model + +1826 +01:08:39,239 --> 01:08:43,040 +has done about eight decoding steps and + +1827 +01:08:41,560 --> 01:08:47,560 +everything else is able to sort of + +1828 +01:08:43,040 --> 01:08:49,759 +verify a block of tokens at once um this + +1829 +01:08:47,560 --> 01:08:51,400 +sort of idea of like using a smaller + +1830 +01:08:49,759 --> 01:08:54,120 +model as an approximation is pretty + +1831 +01:08:51,400 --> 01:08:55,839 +powerful um and there's some great um + +1832 +01:08:54,120 --> 01:08:58,159 +follow-up work on speculative decoding and + +1833 +01:08:55,839 --> 01:08:59,000 +sort of ways to do this faster or with + +1834 +01:08:58,159 --> 01:09:01,520 +stronger + +1835 +01:08:59,000 --> 01:09:04,839 +guarantees um but this general concept + +1836 +01:09:01,520 --> 01:09:06,920 +is I would bet probably how models like + +1837 +01:09:04,839 --> 01:09:09,080 +um part of how models like ChatGPT or + +1838 +01:09:06,920 --> 01:09:11,159 +Bard are sort of generating text so + +1839 +01:09:09,080 --> 01:09:13,120 +quickly um there's another element here + +1840 +01:09:11,159 --> 01:09:16,159 +which is like the model architecture + +1841 +01:09:13,120 --> 01:09:17,679 +being sparse but I think that um if you + +1842 +01:09:16,159 --> 01:09:19,920 +folks talk about mixture of experts we + +1843 +01:09:17,679 --> 01:09:22,880 +might get into that + +1844 +01:09:19,920 --> 01:09:26,080 +later um how do you do this kind of fast + +1845 +01:09:22,880 --> 01:09:27,679 +inference um libraries like vLLM will + +1846 +01:09:26,080 --> 01:09:29,440 +implement things I think implement + +1847 +01:09:27,679 --> 01:09:32,199 +speculative decoding and implement sort + +1848 +01:09:29,440 --> 01:09:34,400 +of hardware level tricks like choosing + +1849 +01:09:32,199 --> 01:09:37,799 +which attention um weights to cache where + +1850 +01:09:34,400 --> 01:09:39,199 +to do faster inference um there's also + +1851 +01:09:37,799 --> 01:09:40,799 +great libraries for doing things like + +1852 +01:09:39,199 --> 01:09:42,679 +constrained decoding so things like + +1853 +01:09:40,799 --> 01:09:45,520 +outlines will let you set constraints + +1854 +01:09:42,679 --> 01:09:46,960 +like I want my outputs to all be JSON + +1855 +01:09:45,520 --> 01:09:48,640 +and it will impose additional + +1856 +01:09:46,960 --> 01:09:50,839 +constraints during decoding to ensure + +1857 +01:09:48,640 --> 01:09:52,279 +that that happens and then pretty much + +1858 +01:09:50,839 --> 01:09:53,960 +anything in these first couple of + +1859 +01:09:52,279 --> 01:09:56,560 +sections we talked about um like + +1860 +01:09:53,960 --> 01:09:58,440 +sampling mode seeking search and + +1861 +01:09:56,560 --> 01:10:00,400 +sometimes MBR will also be implemented + +1862 +01:09:58,440 --> 01:10:05,080 +in pretty much any library you use for + +1863 +01:10:00,400 --> 01:10:07,679 +models like Hugging Face fairseq or + +1864 +01:10:05,080 --> 01:10:10,000 +JAX so to kind of take a step back + +1865 +01:10:07,679 --> 01:10:12,520 +here as we get to the end of class + +1866 +01:10:10,000 --> 01:10:15,640 +um there's really two broad categories + +1867 +01:10:12,520 --> 01:10:17,679 +of methods that we talked about today um + +1868 +01:10:15,640 --> 01:10:20,360 +given our initial distribution from the + +1869
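As a sketch of what these library knobs look like in practice, here is the Hugging Face transformers generate API with per-step distribution manipulations (temperature, top-p) plus a simple sequence-level rerank by model log-probability; GPT-2 is just a small stand-in model, and a discriminator or MBR utility could replace the rerank score.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
ids = tok("The best way to learn NLP is", return_tensors="pt").input_ids

# Per-step manipulation: temperature + nucleus sampling, several samples.
outs = model.generate(ids, do_sample=True, temperature=0.7, top_p=0.9,
                      max_new_tokens=30, num_return_sequences=4,
                      return_dict_in_generate=True, output_scores=True,
                      pad_token_id=tok.eos_token_id)

# Sequence-level selection: rerank the samples by total log-probability.
scores = model.compute_transition_scores(outs.sequences, outs.scores,
                                         normalize_logits=True)
best = scores.sum(dim=1).argmax()
print(tok.decode(outs.sequences[best], skip_special_tokens=True))
```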
+01:10:17,679 --> 01:10:22,600 +model for a next token given our our + +1870 +01:10:20,360 --> 01:10:24,920 +input we can do two kind of different + +1871 +01:10:22,600 --> 01:10:26,400 +things we can each individual decoding + +1872 +01:10:24,920 --> 01:10:28,360 +step choose some kind of function to + +1873 +01:10:26,400 --> 01:10:30,280 +manipulate this distribution and this + +1874 +01:10:28,360 --> 01:10:32,280 +could be something like short like + +1875 +01:10:30,280 --> 01:10:33,960 +cutting off the long tail like modifying + +1876 +01:10:32,280 --> 01:10:36,239 +the temperature or adding external + +1877 +01:10:33,960 --> 01:10:38,400 +information from another model or from a + +1878 +01:10:36,239 --> 01:10:41,480 +discriminator model + +1879 +01:10:38,400 --> 01:10:43,159 +right or we can over a larger part of + +1880 +01:10:41,480 --> 01:10:45,120 +the decoding process choose some + +1881 +01:10:43,159 --> 01:10:47,120 +function to choose between sequences and + +1882 +01:10:45,120 --> 01:10:49,199 +this could be like choosing between next + +1883 +01:10:47,120 --> 01:10:51,679 +tokens in beam search when we pruning + +1884 +01:10:49,199 --> 01:10:53,120 +beams this could be choosing from Full + +1885 +01:10:51,679 --> 01:10:56,760 +sequences when we're doing something + +1886 +01:10:53,120 --> 01:10:58,040 +like MB r or sample and rerank methods + +1887 +01:10:56,760 --> 01:11:00,239 +um and you can do these two things in + +1888 +01:10:58,040 --> 01:11:01,440 +parallel right you can choose like a + +1889 +01:11:00,239 --> 01:11:03,159 +different function to manipulate the + +1890 +01:11:01,440 --> 01:11:04,760 +next token distribution and then some + +1891 +01:11:03,159 --> 01:11:06,199 +sort of like broader thing to choose + +1892 +01:11:04,760 --> 01:11:08,280 +what you do with the full sequences you + +1893 +01:11:06,199 --> 01:11:09,920 +get out of that distribution um but + +1894 +01:11:08,280 --> 01:11:12,040 +there are sort of these two broad + +1895 +01:11:09,920 --> 01:11:14,880 +categories of + +1896 +01:11:12,040 --> 01:11:17,440 +decoding so what should you take away + +1897 +01:11:14,880 --> 01:11:19,400 +from this um I think a couple of things + +1898 +01:11:17,440 --> 01:11:21,000 +you decoding methods can be really + +1899 +01:11:19,400 --> 01:11:23,040 +powerful to control features of your + +1900 +01:11:21,000 --> 01:11:25,040 +output if you want to impose particular + +1901 +01:11:23,040 --> 01:11:26,679 +constraints if you want to factor in + +1902 +01:11:25,040 --> 01:11:27,960 +reward function or factor in a data + +1903 +01:11:26,679 --> 01:11:31,800 +source that you maybe didn't have at + +1904 +01:11:27,960 --> 01:11:34,239 +training time um and to some extent you + +1905 +01:11:31,800 --> 01:11:36,120 +can do a more expensive decoding method + +1906 +01:11:34,239 --> 01:11:37,520 +to compensate for a worse model or to + +1907 +01:11:36,120 --> 01:11:39,080 +compensate for a model that hasn't been + +1908 +01:11:37,520 --> 01:11:42,480 +trained to do exactly the thing you want + +1909 +01:11:39,080 --> 01:11:44,800 +it to do um of course you can't you know + +1910 +01:11:42,480 --> 01:11:47,679 +use this to make gpt2 small as good as + +1911 +01:11:44,800 --> 01:11:49,840 +gp4 but you can sort of for some points + +1912 +01:11:47,679 --> 01:11:51,679 +in the middle spend more um computed + +1913 +01:11:49,840 --> 01:11:53,159 +inference time to pay for not spending + +1914 +01:11:51,679 --> 01:11:55,639 +as much computed training time and + +1915 
+01:11:53,159 --> 01:11:57,440 +particularly if you don't have access to + +1916 +01:11:55,639 --> 01:11:59,400 +the kind of giant gpus you might need to + +1917 +01:11:57,440 --> 01:12:01,840 +continue fine-tuning your model this can + +1918 +01:11:59,400 --> 01:12:05,679 +be a really powerful + +1919 +01:12:01,840 --> 01:12:07,800 +alternative um yeah so say like you're + +1920 +01:12:05,679 --> 01:12:12,560 +building like something in production + +1921 +01:12:07,800 --> 01:12:15,920 +right people usually do um sort of like + +1922 +01:12:12,560 --> 01:12:18,760 +that you know inference before training to see + +1923 +01:12:15,920 --> 01:12:21,840 +if it's going to work and do + +1924 +01:12:18,760 --> 01:12:25,080 +that like try to see like if you have a + +1925 +01:12:21,840 --> 01:12:26,800 +model that you can do some kind of + +1926 +01:12:25,080 --> 01:12:29,199 +expensive decoding method for to get + +1927 +01:12:26,800 --> 01:12:31,120 +good outputs is it then worth + +1928 +01:12:29,199 --> 01:12:34,000 +training that model right um there's + +1929 +01:12:31,120 --> 01:12:36,560 +some great recent work on like training + +1930 +01:12:34,000 --> 01:12:39,400 +models to produce the same kind of + +1931 +01:12:36,560 --> 01:12:40,760 +outputs you get out of MBR without um + +1932 +01:12:39,400 --> 01:12:43,239 +actually doing a really expensive + +1933 +01:12:40,760 --> 01:12:45,600 +inference step so at some level like yeah + +1934 +01:12:43,239 --> 01:12:48,120 +you can decide like this model is good + +1935 +01:12:45,600 --> 01:12:49,920 +enough with its expensive method we can + +1936 +01:12:48,120 --> 01:12:50,920 +try to make it cheaper by spending more + +1937 +01:12:49,920 --> 01:12:53,960 +money on + +1938 +01:12:50,920 --> 01:12:55,520 +fine-tuning um but that's not it's not like + +1939 +01:12:53,960 --> 01:12:57,320 +necessarily guaranteed that that will + +1940 +01:12:55,520 --> 01:13:00,679 +be the case + +1941 +01:12:57,320 --> 01:13:03,040 +okay um the methods that we looked at + +1942 +01:13:00,679 --> 01:13:06,199 +have these sort of trade-offs in quality + +1943 +01:13:03,040 --> 01:13:07,960 +in diversity and in inference speed so + +1944 +01:13:06,199 --> 01:13:10,320 +sampling from your model directly is + +1945 +01:13:07,960 --> 01:13:13,120 +pretty fast to do you get really diverse + +1946 +01:13:10,320 --> 01:13:14,960 +outputs but it tends to be lower quality + +1947 +01:13:13,120 --> 01:13:16,320 +um whereas more restricted sampling + +1948 +01:13:14,960 --> 01:13:18,520 +these sort of mode seeking search + +1949 +01:13:16,320 --> 01:13:20,639 +methods tend to be higher quality but + +1950 +01:13:18,520 --> 01:13:21,880 +you get less diverse outputs and + +1951 +01:13:20,639 --> 01:13:23,560 +that's why we have these methods like + +1952 +01:13:21,880 --> 01:13:26,719 +diverse and stochastic beam search to + +1953 +01:13:23,560 --> 01:13:28,760 +counter this a bit um and then methods + +1954 +01:13:26,719 --> 01:13:30,400 +like MBR or other sample and rerank + +1955 +01:13:28,760 --> 01:13:32,679 +methods tend to be very high quality + +1956 +01:13:30,400 --> 01:13:34,280 +outputs but you pay for this with much + +1957 +01:13:32,679 --> 01:13:36,520 +slower inference + +1958 +01:13:34,280 --> 01:13:38,679 +time um but if I can kind of convince + +1959 +01:13:36,520 --> 01:13:41,560 +you of anything today I think it would + +1960 +01:13:38,679 --> 01:13:43,600 +be this which is that the decoding + +1961 +01:13:41,560 --> 01:13:45,600 +method you
choose for your model has a + +1962 +01:13:43,600 --> 01:13:47,960 +really strong impact on performance + +1963 +01:13:45,600 --> 01:13:49,520 +Downstream um you can get radically + +1964 +01:13:47,960 --> 01:13:51,239 +different results out of the same model + +1965 +01:13:49,520 --> 01:13:52,639 +without doing any additional training + +1966 +01:13:51,239 --> 01:13:55,120 +just by choosing the different decoding + +1967 +01:13:52,639 --> 01:13:57,880 +method that you might want to try and so + +1968 +01:13:55,120 --> 01:13:59,679 +when you sort of let your libraries pick + +1969 +01:13:57,880 --> 01:14:01,159 +a quote unquote like sensible default + +1970 +01:13:59,679 --> 01:14:03,760 +you can leave a lot of performance on + +1971 +01:14:01,159 --> 01:14:06,480 +the train on the table so I encourage + +1972 +01:14:03,760 --> 01:14:08,199 +you folks that if if you're um deploying + +1973 +01:14:06,480 --> 01:14:09,760 +models in production or if you're doing + +1974 +01:14:08,199 --> 01:14:10,840 +research or you know maybe look at your + +1975 +01:14:09,760 --> 01:14:13,280 +outputs and your model has some + +1976 +01:14:10,840 --> 01:14:15,320 +undesirable behaviors to consider if the + +1977 +01:14:13,280 --> 01:14:17,800 +decoding method you're using is imposing + +1978 +01:14:15,320 --> 01:14:20,000 +some kind of Intuition or some kind of + +1979 +01:14:17,800 --> 01:14:21,840 +inductive bias and if you can alter that + +1980 +01:14:20,000 --> 01:14:24,239 +to get some of these behaviors without + +1981 +01:14:21,840 --> 01:14:26,320 +resorting to additional training + +1982 +01:14:24,239 --> 01:14:28,719 +um and that's sort of the end I can take + +1983 +01:14:26,320 --> 01:14:28,719 +any other + +1984 +01:14:34,320 --> 01:14:38,719 +questions okay um yeah I guess we don't + +1985 +01:14:37,199 --> 01:14:41,360 +have any questions we can take questions + +1986 +01:14:38,719 --> 01:14:45,560 +up here um one one thing I'd like to + +1987 +01:14:41,360 --> 01:14:47,679 +point out also is that um I I love the + +1988 +01:14:45,560 --> 01:14:50,760 +final thing that Amanda said here + +1989 +01:14:47,679 --> 01:14:54,199 +another thing is that my impression from + +1990 +01:14:50,760 --> 01:14:56,400 +dealing with things is that it's a lot + +1991 +01:14:54,199 --> 01:14:58,159 +easier to predict the effect of + +1992 +01:14:56,400 --> 01:14:59,920 +inference time decoding time + +1993 +01:14:58,159 --> 01:15:01,120 +manipulations than it is to predict the + +1994 +01:14:59,920 --> 01:15:04,239 +effect of + +1995 +01:15:01,120 --> 01:15:07,480 +like um fine-tuning or something like + +1996 +01:15:04,239 --> 01:15:11,040 +this like just to give a an + +1997 +01:15:07,480 --> 01:15:12,480 +example beam search with the maximum + +1998 +01:15:11,040 --> 01:15:15,199 +likelihood trained model tends to + +1999 +01:15:12,480 --> 01:15:16,719 +generate things that are shorter um + +2000 +01:15:15,199 --> 01:15:18,040 +whereas greedy decoding tends to + +2001 +01:15:16,719 --> 01:15:19,639 +generate things that are longer and + +2002 +01:15:18,040 --> 01:15:22,000 +repeat more often and stuff like that + +2003 +01:15:19,639 --> 01:15:25,920 +and if you try a few methods like this + +2004 +01:15:22,000 --> 01:15:28,920 +you'll quickly find these kind of qus of + +2005 +01:15:25,920 --> 01:15:31,320 +each of the methods and so by forming a + +2006 +01:15:28,920 --> 01:15:32,719 +good intuition of this you will also + +2007 +01:15:31,320 --> 01:15:34,000 +know how to fix these problems when you + 
+2008 +01:15:32,719 --> 01:15:35,600 +see them it's like oh my model's + +2009 +01:15:34,000 --> 01:15:37,320 +repeating itself a lot maybe I shouldn't + +2010 +01:15:35,600 --> 01:15:38,679 +be using grey search I should be + +2011 +01:15:37,320 --> 01:15:41,199 +switching over to something else or + +2012 +01:15:38,679 --> 01:15:43,320 +something like that so um this is a good + +2013 +01:15:41,199 --> 01:15:45,880 +thing to know and play around with yeah + +2014 +01:15:43,320 --> 01:15:47,239 +and I think pretty underutilized too um + +2015 +01:15:45,880 --> 01:15:48,880 +a lot of folks will not think about a + +2016 +01:15:47,239 --> 01:15:50,920 +decoding method to fix their problem + +2017 +01:15:48,880 --> 01:15:52,280 +even if like your model might actually + +2018 +01:15:50,920 --> 01:15:53,760 +be perfectly fine under a different + +2019 +01:15:52,280 --> 01:15:56,000 +decoding strategy + +2020 +01:15:53,760 --> 01:15:58,320 +great okay thanks a lot everyone you can + +2021 +01:15:56,000 --> 01:15:58,320 +uh + +2022 +01:16:02,280 --> 01:16:05,280 +finish \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (6) Generation Algorithms/transcript.vtt b/CMU Advanced NLP 2024 (6) Generation Algorithms/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..92a5059af4505b99f02e47a6799043a15445e383 --- /dev/null +++ b/CMU Advanced NLP 2024 (6) Generation Algorithms/transcript.vtt @@ -0,0 +1,6067 @@ +WEBVTT + +00:00:00.399 --> 00:00:04.720 +great um yeah so today we're going to be + +00:00:03.320 --> 00:00:07.040 +talking a little bit about generation + +00:00:04.720 --> 00:00:08.639 +algorithms um this will be sort of a + +00:00:07.040 --> 00:00:10.160 +tour through some of the most common + +00:00:08.639 --> 00:00:12.080 +methods and we're going to talk a little + +00:00:10.160 --> 00:00:13.480 +bit about the theory behind them as well + +00:00:12.080 --> 00:00:15.080 +um if you're looking at the slides on + +00:00:13.480 --> 00:00:18.359 +the website these might be ever so + +00:00:15.080 --> 00:00:20.000 +slightly different um but yeah I'll try + +00:00:18.359 --> 00:00:21.640 +to stop at each section boundary for + +00:00:20.000 --> 00:00:23.840 +questions also feel free to sort of + +00:00:21.640 --> 00:00:25.720 +interrupt at any point for + +00:00:23.840 --> 00:00:27.720 +clarifications so we're starting off + +00:00:25.720 --> 00:00:29.560 +today with some great news um let's say + +00:00:27.720 --> 00:00:31.199 +that you have some friend who maybe owns + +00:00:29.560 --> 00:00:34.800 +a giant tech company and they've gifted + +00:00:31.199 --> 00:00:36.480 +you this absolutely massive new model M + +00:00:34.800 --> 00:00:38.079 +um it's a great model it's pre-trained + +00:00:36.480 --> 00:00:40.879 +with the latest architecture it's + +00:00:38.079 --> 00:00:42.920 +pre-trained on um trillions of tokens of + +00:00:40.879 --> 00:00:44.520 +text it's got seven billion parameters + +00:00:42.920 --> 00:00:46.399 +it looks like a really promising new + +00:00:44.520 --> 00:00:48.399 +model you know it's the top of all these + +00:00:46.399 --> 00:00:50.320 +leaderboards um but if you actually take + +00:00:48.399 --> 00:00:52.520 +your new model M and you sort of open up + +00:00:50.320 --> 00:00:53.719 +this box and kind of Shake It Out maybe + +00:00:52.520 --> 00:00:55.239 +from last class you know a little bit + +00:00:53.719 --> 00:00:57.000 +architecturally what this model might + +00:00:55.239 --> 00:00:58.239 +look like but if you 
actually kind of + +00:00:57.000 --> 00:01:00.320 +take a closer look at it from a + +00:00:58.239 --> 00:01:01.719 +different angle what you see is that m + +00:01:00.320 --> 00:01:04.920 +is actually just a conditional + +00:01:01.719 --> 00:01:07.200 +probability distribution um you put some + +00:01:04.920 --> 00:01:09.680 +input X into your model and you get some + +00:01:07.200 --> 00:01:10.680 +probability out for any given sequence + +00:01:09.680 --> 00:01:13.360 +that you're sort of interested in + +00:01:10.680 --> 00:01:14.960 +evaluating right um and in particular M + +00:01:13.360 --> 00:01:17.560 +gives you a probability distribution + +00:01:14.960 --> 00:01:19.439 +over all tokens in its vocabulary to + +00:01:17.560 --> 00:01:21.040 +predict like what token you would output + +00:01:19.439 --> 00:01:24.840 +next right and so this is what this + +00:01:21.040 --> 00:01:26.880 +equation says um given some input X and + +00:01:24.840 --> 00:01:29.520 +everything that you've predicted so far + +00:01:26.880 --> 00:01:32.399 +you get the probability of the next + +00:01:29.520 --> 00:01:33.600 +token in YJ and if you multiply this out + +00:01:32.399 --> 00:01:34.840 +over all the probabilities in your + +00:01:33.600 --> 00:01:37.159 +sequence you can calculate the + +00:01:34.840 --> 00:01:41.240 +probability of any output y given your + +00:01:37.159 --> 00:01:42.640 +input X so what this like super fancy + +00:01:41.240 --> 00:01:44.119 +model that you spend a lot of money to + +00:01:42.640 --> 00:01:46.280 +train is really just a conditional + +00:01:44.119 --> 00:01:47.920 +probability distribution um but this + +00:01:46.280 --> 00:01:49.600 +turns out to be okay because you can use + +00:01:47.920 --> 00:01:51.920 +a conditional probability distribution + +00:01:49.600 --> 00:01:54.399 +to do sort of any task that we're really + +00:01:51.920 --> 00:01:56.719 +interested in in NLP um pretty much any + +00:01:54.399 --> 00:01:58.680 +task right so by changing what you + +00:01:56.719 --> 00:02:01.360 +consider your input X and your output y + +00:01:58.680 --> 00:02:03.560 +to be you can can get outputs from this + +00:02:01.360 --> 00:02:06.479 +model for things like translation for + +00:02:03.560 --> 00:02:08.720 +summarization for reasoning Tas um just + +00:02:06.479 --> 00:02:10.520 +by sort of changing what you consider + +00:02:08.720 --> 00:02:12.760 +your inputs and outputs in this + +00:02:10.520 --> 00:02:14.239 +setting but there's sort of both good + +00:02:12.760 --> 00:02:15.920 +and bad things about your model being a + +00:02:14.239 --> 00:02:17.120 +probability distribution instead of just + +00:02:15.920 --> 00:02:20.599 +an oracle that gives you sort of a + +00:02:17.120 --> 00:02:22.080 +single answer for every input um one + +00:02:20.599 --> 00:02:24.480 +kind of nice thing about this + +00:02:22.080 --> 00:02:26.080 +distribution um is that you can get at + +00:02:24.480 --> 00:02:27.720 +an idea of something like confidence + +00:02:26.080 --> 00:02:30.120 +right if you give your model the input 2 + +00:02:27.720 --> 00:02:32.480 +plus 2 equals and almost all the + +00:02:30.120 --> 00:02:34.200 +probability mass is on the token of four + +00:02:32.480 --> 00:02:35.760 +you can say like the model predicts with + +00:02:34.200 --> 00:02:38.319 +pretty high confidence that 2 plus 2 + +00:02:35.760 --> 00:02:39.480 +equals four um versus if you give it + +00:02:38.319 --> 00:02:40.959 +something that's maybe a little more + +00:02:39.480 --> 
00:02:43.120 +open-ended like you ask it to predict + +00:02:40.959 --> 00:02:44.640 +Graham's favorite color and you see this + +00:02:43.120 --> 00:02:47.040 +distribution that's sort of a lot + +00:02:44.640 --> 00:02:48.440 +flatter you know the most likely output + +00:02:47.040 --> 00:02:49.720 +is green but maybe we don't have a lot + +00:02:48.440 --> 00:02:51.560 +of confidence that that's the correct + +00:02:49.720 --> 00:02:53.040 +answer um this is really closely tied + +00:02:51.560 --> 00:02:55.200 +into the idea of calibration which you + +00:02:53.040 --> 00:02:58.879 +guys talked about um I guess a couple of + +00:02:55.200 --> 00:03:00.640 +classes ago now the flip side of this + +00:02:58.879 --> 00:03:03.680 +though is that you know Noti that for + +00:03:00.640 --> 00:03:06.760 +this case like 2 plus 2al 4 not all of + +00:03:03.680 --> 00:03:08.519 +the probability mass is on four um and + +00:03:06.760 --> 00:03:09.720 +so models that are conditional + +00:03:08.519 --> 00:03:11.560 +probability distributions can + +00:03:09.720 --> 00:03:13.560 +hallucinate right um pretty much no + +00:03:11.560 --> 00:03:15.799 +matter what you do there's going to be + +00:03:13.560 --> 00:03:17.680 +some nonzero probability to some output + +00:03:15.799 --> 00:03:19.920 +that's incorrect or + +00:03:17.680 --> 00:03:21.239 +undesirable um in some cases maybe even + +00:03:19.920 --> 00:03:23.760 +offensive something that you don't want + +00:03:21.239 --> 00:03:25.280 +the model to Output um and this is sort + +00:03:23.760 --> 00:03:27.840 +of an artifact of the way these models + +00:03:25.280 --> 00:03:29.280 +are trained if there's some great work + +00:03:27.840 --> 00:03:31.400 +kind of more on the theory side here + +00:03:29.280 --> 00:03:32.840 +that shows that this is actually true + +00:03:31.400 --> 00:03:35.120 +even if everything in your input + +00:03:32.840 --> 00:03:36.920 +training data is sort of correct and + +00:03:35.120 --> 00:03:38.439 +factual and doesn't have any errors + +00:03:36.920 --> 00:03:41.200 +you'll still wind up with a situation + +00:03:38.439 --> 00:03:44.480 +where some nonzero probability mass is + +00:03:41.200 --> 00:03:47.000 +on some outputs that are undesirable or + +00:03:44.480 --> 00:03:50.120 +hallucinatory for sort of most inputs + +00:03:47.000 --> 00:03:52.159 +that you care about evaluating so if we + +00:03:50.120 --> 00:03:55.079 +have these issues how do we actually get + +00:03:52.159 --> 00:03:56.519 +a good output out of the model um and to + +00:03:55.079 --> 00:03:58.640 +do that we're first going to talk about + +00:03:56.519 --> 00:04:00.079 +some sampling methods um but I want to + +00:03:58.640 --> 00:04:01.879 +pause here in case there are of any + +00:04:00.079 --> 00:04:04.159 +questions on this idea of a model is a + +00:04:01.879 --> 00:04:04.159 +conditional + +00:04:05.040 --> 00:04:11.680 +distribution great so we can jump right + +00:04:07.519 --> 00:04:13.560 +in so we have this model right we know + +00:04:11.680 --> 00:04:15.959 +at each step at each token we might want + +00:04:13.560 --> 00:04:17.919 +to decode the distribution of likelihood + +00:04:15.959 --> 00:04:18.959 +over all vocabulary tokens right this + +00:04:17.919 --> 00:04:21.680 +conditional distribution we've been + +00:04:18.959 --> 00:04:24.240 +talking about um for the next time step + +00:04:21.680 --> 00:04:26.400 +and what we want out of this is a good + +00:04:24.240 --> 00:04:28.000 +output um for some definition of good + 
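Concretely, the chain-rule factorization described above can be used to score any candidate output, as in this sketch (GPT-2 stands in for the gifted model M, and the prompt tokenization is assumed to be a prefix of the full tokenization):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

x, y = "2 + 2 =", " 4"
ids = tok(x + y, return_tensors="pt").input_ids
with torch.no_grad():
    logp = model(ids).logits.log_softmax(-1)

# log P(y | x) = sum over y's tokens of log P(token | everything before it)
n_x = tok(x, return_tensors="pt").input_ids.shape[1]
total = sum(logp[0, i - 1, ids[0, i]] for i in range(n_x, ids.shape[1]))
print("log P(y|x) =", float(total))
```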
+00:04:26.400 --> 00:04:30.919 +that we can sort of develop as we go + +00:04:28.000 --> 00:04:32.479 +here so maybe the natural first thing to + +00:04:30.919 --> 00:04:34.880 +try is we have a probability + +00:04:32.479 --> 00:04:36.600 +distribution can we just sample from it + +00:04:34.880 --> 00:04:39.600 +right and this is something called + +00:04:36.600 --> 00:04:41.639 +ancestral sampling so at each time step + +00:04:39.600 --> 00:04:43.560 +we're going to draw a token from this + +00:04:41.639 --> 00:04:45.039 +distribution sort of according to its + +00:04:43.560 --> 00:04:47.199 +relative probability right so if + +00:04:45.039 --> 00:04:48.639 +something has twice as much probability + +00:04:47.199 --> 00:04:51.280 +Mass according to the model we'll draw + +00:04:48.639 --> 00:04:54.000 +it twice as often um and we can sample + +00:04:51.280 --> 00:04:55.560 +from this distribution at each time step + +00:04:54.000 --> 00:04:58.080 +and this is sort of this is sort of a + +00:04:55.560 --> 00:05:00.199 +nice setup um we get exact samples from + +00:04:58.080 --> 00:05:02.639 +the model distribution so using the + +00:05:00.199 --> 00:05:04.479 +setup if you can you imagine like + +00:05:02.639 --> 00:05:06.680 +drawing an almost infinite number of + +00:05:04.479 --> 00:05:08.320 +samples like a ridiculously large number + +00:05:06.680 --> 00:05:10.160 +and you look at their probabilities + +00:05:08.320 --> 00:05:11.840 +you'd sort of get something from this + +00:05:10.160 --> 00:05:13.039 +distribution with exactly the + +00:05:11.840 --> 00:05:15.720 +probability that the real model + +00:05:13.039 --> 00:05:17.280 +distribution is given you um so this is + +00:05:15.720 --> 00:05:19.039 +great this gives us an exact sample from + +00:05:17.280 --> 00:05:21.400 +the model this seems to be exactly what + +00:05:19.039 --> 00:05:22.880 +we want um but you can guess probably by + +00:05:21.400 --> 00:05:24.639 +the fact that we're only like 10 minutes + +00:05:22.880 --> 00:05:27.000 +into class here this is not really the + +00:05:24.639 --> 00:05:28.280 +end of the story um and there's actually + +00:05:27.000 --> 00:05:30.800 +a couple of problems with sampling + +00:05:28.280 --> 00:05:32.560 +directly from our model distribu + +00:05:30.800 --> 00:05:35.280 +the one that we're really going to focus + +00:05:32.560 --> 00:05:37.919 +on first here is this idea of a long + +00:05:35.280 --> 00:05:41.400 +tail so a model like llama and maybe our + +00:05:37.919 --> 00:05:43.639 +new model M um has 32,000 vocabulary + +00:05:41.400 --> 00:05:46.280 +tokens and you can imagine maybe out of + +00:05:43.639 --> 00:05:48.000 +those tokens there might be one or even + +00:05:46.280 --> 00:05:49.720 +2,000 of those tokens that are sort of a + +00:05:48.000 --> 00:05:51.919 +reasonable next thing to predict for a + +00:05:49.720 --> 00:05:53.479 +really open-ended task right but there's + +00:05:51.919 --> 00:05:55.440 +going to be all kinds of things in that + +00:05:53.479 --> 00:05:57.039 +distribution um that are maybe like + +00:05:55.440 --> 00:05:58.440 +punctuation there maybe tokens that + +00:05:57.039 --> 00:06:00.280 +won't actually lead to the correct + +00:05:58.440 --> 00:06:01.840 +answer like there's a lot of things in + +00:06:00.280 --> 00:06:04.560 +this distribution that would be all + +00:06:01.840 --> 00:06:06.160 +really low likelihood and this is fine + +00:06:04.560 --> 00:06:08.759 +these things just get low probability + +00:06:06.160 --> 
00:06:11.039 +Mass but the problem is if you give sort + +00:06:08.759 --> 00:06:13.639 +of a small amount of probability Mass to + +00:06:11.039 --> 00:06:16.599 +30,000 different things that mass will + +00:06:13.639 --> 00:06:19.360 +add up pretty quickly um and to see this + +00:06:16.599 --> 00:06:20.360 +we have sort of this illustration here + +00:06:19.360 --> 00:06:21.560 +um I don't know if you can see the + +00:06:20.360 --> 00:06:23.280 +difference between the green and the + +00:06:21.560 --> 00:06:25.720 +yellow but I've also drawn a little bar + +00:06:23.280 --> 00:06:27.800 +between them this is a really longtailed + +00:06:25.720 --> 00:06:29.720 +distribution and the green part of the + +00:06:27.800 --> 00:06:31.960 +distribution which is a lot of tokens + +00:06:29.720 --> 00:06:34.000 +with high likelihood has 50% of the + +00:06:31.960 --> 00:06:35.560 +total probability the Yellow Part which + +00:06:34.000 --> 00:06:37.360 +is all a lot of things that are all + +00:06:35.560 --> 00:06:40.280 +individually not super likely is the + +00:06:37.360 --> 00:06:41.720 +other 50% of the probability and so what + +00:06:40.280 --> 00:06:44.360 +that means is if you're doing something + +00:06:41.720 --> 00:06:46.120 +like ancestral sampling 50% of the time + +00:06:44.360 --> 00:06:49.160 +you'll be sampling something really + +00:06:46.120 --> 00:06:51.520 +unlikely from this long tail um that + +00:06:49.160 --> 00:06:53.759 +seems sort of not like what we want + +00:06:51.520 --> 00:06:56.080 +right um so is there anything we can do + +00:06:53.759 --> 00:06:58.080 +about this and the obvious for solution + +00:06:56.080 --> 00:06:59.400 +here is can we just cut off that tail + +00:06:58.080 --> 00:07:01.680 +like if we know these tokens are not + +00:06:59.400 --> 00:07:03.039 +super likely can we just ignore them and + +00:07:01.680 --> 00:07:05.039 +there's a couple of different ways to do + +00:07:03.039 --> 00:07:07.919 +that um the first of these is something + +00:07:05.039 --> 00:07:10.080 +called topk sampling where we say okay + +00:07:07.919 --> 00:07:12.479 +you know maybe we think there are 10 + +00:07:10.080 --> 00:07:14.000 +reasonable like outputs is right maybe + +00:07:12.479 --> 00:07:17.280 +we'll just sample from the 10 most + +00:07:14.000 --> 00:07:19.759 +probable tokens um here maybe we say if + +00:07:17.280 --> 00:07:21.479 +we want to pick top six sampling we'll + +00:07:19.759 --> 00:07:23.919 +sample from just the six most probable + +00:07:21.479 --> 00:07:26.240 +tokens and so in this example you can + +00:07:23.919 --> 00:07:27.680 +see we originally had 10 tokens and + +00:07:26.240 --> 00:07:30.560 +we're going to sample from just the blue + +00:07:27.680 --> 00:07:32.919 +ones just the six most likely tokens + +00:07:30.560 --> 00:07:34.360 +um in this example this distribution is + +00:07:32.919 --> 00:07:37.280 +pretty flat there's a lot of things that + +00:07:34.360 --> 00:07:40.120 +are like kind of likely right so that + +00:07:37.280 --> 00:07:43.000 +those six tokens are only 68% of the + +00:07:40.120 --> 00:07:45.360 +total probability Mass um if we go like + +00:07:43.000 --> 00:07:47.240 +one time step further here we might have + +00:07:45.360 --> 00:07:49.360 +a distribution that's a lot peier most + +00:07:47.240 --> 00:07:51.759 +of the mass is on just a single token + +00:07:49.360 --> 00:07:53.919 +and so sampling from just the top six + +00:07:51.759 --> 00:07:56.400 +tokens actually captures 99% of the + 
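A minimal sketch of top-k over a toy next-token distribution (made-up probabilities): keep only the k most probable tokens, renormalize the kept mass, then sample.

```python
import torch

probs = torch.tensor([0.30, 0.20, 0.15, 0.12, 0.10, 0.06, 0.04, 0.02, 0.007, 0.003])
k = 6
top = torch.topk(probs, k)
renorm = top.values / top.values.sum()      # renormalize the kept mass
choice = top.indices[torch.multinomial(renorm, 1)]
print("sampled token id:", int(choice))
```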
+00:07:53.919 --> 00:07:58.360 +probability mes maybe we say that seems + +00:07:56.400 --> 00:08:01.199 +a little excessive right we don't really + +00:07:58.360 --> 00:08:03.400 +need um maybe all of these tokens that + +00:08:01.199 --> 00:08:05.479 +are all kind of low probability maybe we + +00:08:03.400 --> 00:08:07.000 +just want to sort of sample from the top + +00:08:05.479 --> 00:08:08.080 +half of our distribution or something or + +00:08:07.000 --> 00:08:10.840 +the top + +00:08:08.080 --> 00:08:12.919 +90% um so instead of choosing a top + +00:08:10.840 --> 00:08:15.560 +number of tokens to sample from you + +00:08:12.919 --> 00:08:17.400 +could choose a top amount of probability + +00:08:15.560 --> 00:08:20.000 +and this is something called top P or + +00:08:17.400 --> 00:08:21.520 +nucleus sampling so P here is the amount + +00:08:20.000 --> 00:08:24.039 +of probability from your distribution + +00:08:21.520 --> 00:08:26.639 +you want to consider so if you decide + +00:08:24.039 --> 00:08:29.280 +your p is about like 94% of the + +00:08:26.639 --> 00:08:31.639 +probability Mass you in this first examp + +00:08:29.280 --> 00:08:33.719 +example here would choose almost all of + +00:08:31.639 --> 00:08:35.440 +the tokens you keep adding tokens in + +00:08:33.719 --> 00:08:37.159 +until you reach an amount of total + +00:08:35.440 --> 00:08:39.479 +probability that's about + +00:08:37.159 --> 00:08:40.880 +094 but then when you get to the Second + +00:08:39.479 --> 00:08:43.240 +Step where you have a couple of really + +00:08:40.880 --> 00:08:45.959 +highly probable tokens you'd only need a + +00:08:43.240 --> 00:08:47.959 +couple of tokens to add up to 094 or + +00:08:45.959 --> 00:08:50.320 +even higher than 0.94 and so you would + +00:08:47.959 --> 00:08:52.200 +just sample from a smaller set of tokens + +00:08:50.320 --> 00:08:54.600 +so in top K sampling the total amount of + +00:08:52.200 --> 00:08:56.560 +probability your sampling from can move + +00:08:54.600 --> 00:08:58.120 +around in top P sampling the total + +00:08:56.560 --> 00:08:59.839 +number of tokens you're sampling from + +00:08:58.120 --> 00:09:01.959 +might change + +00:08:59.839 --> 00:09:04.760 +um but maybe we sort of don't want to + +00:09:01.959 --> 00:09:07.279 +impose a strong constraint like we want + +00:09:04.760 --> 00:09:09.279 +like 94% here maybe just what we really + +00:09:07.279 --> 00:09:11.040 +care about is saying that we're not + +00:09:09.279 --> 00:09:14.000 +going to sample anything that's really + +00:09:11.040 --> 00:09:16.800 +really unlikely right another way of + +00:09:14.000 --> 00:09:18.560 +doing this is called Epsilon sampling + +00:09:16.800 --> 00:09:20.519 +where we just sample tokens that have at + +00:09:18.560 --> 00:09:22.920 +least some minimum amount of probability + +00:09:20.519 --> 00:09:24.720 +to them right so maybe we just want + +00:09:22.920 --> 00:09:29.519 +tokens that have probability of at least + +00:09:24.720 --> 00:09:31.240 +0.05 here um in this first um example + +00:09:29.519 --> 00:09:32.640 +everything has at least some reasonable + +00:09:31.240 --> 00:09:34.240 +amount of probability so we're actually + +00:09:32.640 --> 00:09:36.240 +going to sample from our full + +00:09:34.240 --> 00:09:37.720 +distribution and then in the second + +00:09:36.240 --> 00:09:39.279 +example when we have a lot of things + +00:09:37.720 --> 00:09:41.160 +that are really unlikely we'll only + +00:09:39.279 --> 00:09:43.800 +sample from sort of the more likely 
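Sketches of top-p (nucleus) and epsilon sampling on the same toy distribution as before: keep the smallest set of top tokens whose mass reaches p, or keep every token with probability at least epsilon, then renormalize and sample.

```python
import torch

probs = torch.tensor([0.30, 0.20, 0.15, 0.12, 0.10, 0.06, 0.04, 0.02, 0.007, 0.003])

def top_p_mask(probs, p=0.94):
    sorted_p, idx = probs.sort(descending=True)
    # Keep tokens until cumulative mass reaches p, including the one that crosses it.
    keep = sorted_p.cumsum(0) - sorted_p < p
    mask = torch.zeros_like(probs, dtype=torch.bool)
    mask[idx[keep]] = True
    return mask

def epsilon_mask(probs, eps=0.05):
    # Keep only tokens with at least eps probability.
    return probs >= eps

for mask in (top_p_mask(probs), epsilon_mask(probs)):
    kept = probs * mask
    print("sampled token id:", torch.multinomial(kept / kept.sum(), 1).item())
```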
part + +00:09:41.160 --> 00:09:45.240 +of the distribution um so all three of + +00:09:43.800 --> 00:09:47.000 +these methods are sort of different ways + +00:09:45.240 --> 00:09:49.399 +of trying to cut off the long tail using + +00:09:47.000 --> 00:09:51.480 +sort of different + +00:09:49.399 --> 00:09:53.000 +characteristics the tail of the + +00:09:51.480 --> 00:09:55.680 +distribution though isn't the only thing + +00:09:53.000 --> 00:09:58.000 +we could choose to modify um we could + +00:09:55.680 --> 00:09:59.880 +also choose to modify this sort of + +00:09:58.000 --> 00:10:02.120 +peakiness of the distribution +so if you look here at the middle of + +00:09:59.880 --> 00:10:03.880 +these diagrams say this is your original + +00:10:02.120 --> 00:10:06.600 +distribution over next tokens and maybe + +00:10:03.880 --> 00:10:08.519 +you want to modify some properties of + +00:10:06.600 --> 00:10:11.040 +this distribution like you say I want an + +00:10:08.519 --> 00:10:12.640 +output that's really diverse and + +00:10:11.040 --> 00:10:14.200 +interesting and open-ended like maybe + +00:10:12.640 --> 00:10:15.680 +this is something like story generation + +00:10:14.200 --> 00:10:17.920 +where you want to have sort of a lot of + +00:10:15.680 --> 00:10:20.120 +maybe surprising things in your output + +00:10:17.920 --> 00:10:21.279 +you could say I want to sort of + +00:10:20.120 --> 00:10:23.480 +distribute my probability mass more over + +00:10:21.279 --> 00:10:26.440 +the token space and you can do this um + +00:10:23.480 --> 00:10:28.399 +by sort of flattening this distribution + +00:10:26.440 --> 00:10:32.720 +like you see on the right here um + +00:10:28.399 --> 00:10:34.240 +where now there's sort of more + +00:10:32.720 --> 00:10:36.800 +probability mass spread over this um + +00:10:34.240 --> 00:10:39.040 +like wider set of tokens you could also + +00:10:36.800 --> 00:10:40.320 +say the opposite right you could say + +00:10:39.040 --> 00:10:42.720 +maybe I'm doing something like math + +00:10:40.320 --> 00:10:44.120 +where there shouldn't really be a lot of + +00:10:42.720 --> 00:10:45.519 +correct answers there should be really + +00:10:44.120 --> 00:10:47.800 +only one or maybe only like a few + +00:10:45.519 --> 00:10:50.399 +potential reasonable next answers and so + +00:10:47.800 --> 00:10:52.320 +you can make your distribution peakier or + +00:10:50.399 --> 00:10:54.160 +sharper so that more of the probability + +00:10:52.320 --> 00:10:56.639 +mass is on the things at the very top um + +00:10:54.160 --> 00:11:00.200 +the way you do this is you modify your + +00:10:56.639 --> 00:11:02.000 +logits your outputs of the last layer + +00:11:00.200 --> 00:11:04.320 +of the model before you apply softmax so when + +00:11:02.000 --> 00:11:06.399 +you're predicting you get your outputs + +00:11:04.320 --> 00:11:08.360 +of the last layer of the model and then + +00:11:06.399 --> 00:11:10.040 +you apply softmax which turns those + +00:11:08.360 --> 00:11:11.560 +outputs into a distribution right they + +00:11:10.040 --> 00:11:15.240 +all sum up the um like mass over all + +00:11:11.560 --> 00:11:17.399 +vocabulary tokens sums to one and so + +00:11:15.240 --> 00:11:18.839 +that is sort of a distribution you could + +00:11:17.399 --> 00:11:21.920 +sample from if you divide those logits + +00:11:18.839 --> 00:11:23.519 +by some number before you apply that + +00:11:21.920 --> 00:11:26.000 +softmax you can make that distribution + +00:11:23.519 --> 00:11:27.880 +flatter by using a number greater than + +00:11:26.000 --> 00:11:30.760 +one or peakier by using a number less than + +00:11:27.880 --> 00:11:32.440 +one and this is this type of parameter + +00:11:30.760 --> 00:11:35.079 +is called temperature um you can apply + +00:11:32.440 --> 00:11:36.839 +this with any of the other methods for + +00:11:35.079 --> 00:11:38.480 +sort of cutting off the long tail but + +00:11:36.839 --> 00:11:40.279 +what people will often do is just apply + +00:11:38.480 --> 00:11:41.920 +a temperature and then sample from that + +00:11:40.279 --> 00:11:43.639 +distribution and that's what we call + +00:11:41.920 --> 00:11:45.320 +temperature + +00:11:43.639 --> 00:11:48.720 +sampling so these I think most of you + +00:11:45.320 --> 00:11:49.920 +might already have been at least a + +00:11:48.720 --> 00:11:51.320 +little bit familiar with some of these + +00:11:49.920 --> 00:11:53.000 +methods I want to touch briefly on a + +00:11:51.320 --> 00:11:56.079 +couple of other ideas for modifying this + +00:11:53.000 --> 00:11:58.160 +distribution maybe some more complex and + +00:11:56.079 --> 00:11:59.680 +more recent ideas and the one that I + +00:11:58.160 --> 00:12:01.839 +want to talk about in more detail is + +00:11:59.680 --> 00:12:04.279 +something called contrastive decoding so + +00:12:01.839 --> 00:12:05.399 +the idea here is that we could + +00:12:04.279 --> 00:12:07.360 +incorporate some extra information at + +00:12:05.399 --> 00:12:10.800 +decoding time um using some other
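The temperature trick just described, in a few lines on toy logits: divide by T before the softmax, so T greater than 1 flattens the distribution and T less than 1 sharpens it.

```python
import torch

logits = torch.tensor([3.0, 2.0, 1.0, 0.0])
for T in (0.5, 1.0, 2.0):
    # T < 1 sharpens the distribution, T > 1 flattens it.
    print(T, torch.softmax(logits / T, dim=0).tolist())
```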
00:11:30.760 +flatter by using a number greater than + +00:11:27.880 --> 00:11:32.440 +one or peier by using a number less than + +00:11:30.760 --> 00:11:35.079 +one and this is this type of parameter + +00:11:32.440 --> 00:11:36.839 +is called temperature um you can apply + +00:11:35.079 --> 00:11:38.480 +this with any of the other methods for + +00:11:36.839 --> 00:11:40.279 +sort of cutting off the long tail but + +00:11:38.480 --> 00:11:41.920 +what people will often do is just apply + +00:11:40.279 --> 00:11:43.639 +a temperature and then sample from that + +00:11:41.920 --> 00:11:45.320 +distribution and that's what we call + +00:11:43.639 --> 00:11:48.720 +temperature + +00:11:45.320 --> 00:11:49.920 +sampling so these I think most of you + +00:11:48.720 --> 00:11:51.320 +might already have been at least a + +00:11:49.920 --> 00:11:53.000 +little bit familiar with some of these + +00:11:51.320 --> 00:11:56.079 +methods I want to touch briefly on a + +00:11:53.000 --> 00:11:58.160 +couple of other ideas for modifying this + +00:11:56.079 --> 00:11:59.680 +distribution maybe some more complex and + +00:11:58.160 --> 00:12:01.839 +more recent ideas and the one that I + +00:11:59.680 --> 00:12:04.279 +want to talk about in more detail is + +00:12:01.839 --> 00:12:05.399 +something called contrastive decoding so + +00:12:04.279 --> 00:12:07.360 +the idea here is that we could + +00:12:05.399 --> 00:12:10.800 +incorporate some extra information at + +00:12:07.360 --> 00:12:12.760 +decoding time um using some other + +00:12:10.800 --> 00:12:15.320 +distribution some other data or in this + +00:12:12.760 --> 00:12:17.320 +case some other model so if you've ever + +00:12:15.320 --> 00:12:19.240 +played around with a really like + +00:12:17.320 --> 00:12:21.800 +relatively small language model maybe + +00:12:19.240 --> 00:12:23.320 +something like gbt2 small um You + +00:12:21.800 --> 00:12:26.560 +probably noticed you try to give it some + +00:12:23.320 --> 00:12:28.240 +inputs and maybe it degenerates into + +00:12:26.560 --> 00:12:30.160 +just repeating the same sequence over + +00:12:28.240 --> 00:12:31.720 +and over maybe it gives you outputs that + +00:12:30.160 --> 00:12:33.399 +are just completely incorrect like you + +00:12:31.720 --> 00:12:35.320 +ask it a factual question and it gets it + +00:12:33.399 --> 00:12:37.120 +wrong um and you don't see those + +00:12:35.320 --> 00:12:39.519 +problems if you look at sort of a larger + +00:12:37.120 --> 00:12:41.399 +model that's trained on more data so the + +00:12:39.519 --> 00:12:43.199 +question here is can you use what that + +00:12:41.399 --> 00:12:46.480 +smaller model is getting wrong to make + +00:12:43.199 --> 00:12:49.120 +your larger model even better um and the + +00:12:46.480 --> 00:12:51.360 +way we do this is by sort of the + +00:12:49.120 --> 00:12:52.880 +intuition that if the smaller model + +00:12:51.360 --> 00:12:55.079 +doesn't have a lot of probability on + +00:12:52.880 --> 00:12:57.160 +some answer but the the larger model + +00:12:55.079 --> 00:12:58.519 +does it's likely because that larger + +00:12:57.160 --> 00:13:02.279 +model has learned something with the + +00:12:58.519 --> 00:13:04.000 +smaller model didn't know and so here we + +00:13:02.279 --> 00:13:06.199 +modify the probability distribution + +00:13:04.000 --> 00:13:08.199 +coming out of the larger model to choose + +00:13:06.199 --> 00:13:11.120 +outputs that that model thinks are very + +00:13:08.199 --> 00:13:12.600 +likely and the amateur or the 
the weaker + +00:13:11.120 --> 00:13:15.480 +model thinks are not + +00:13:12.600 --> 00:13:20.000 +likely so in this example here from + +00:13:15.480 --> 00:13:22.560 +their paper um if you have sort of a + +00:13:20.000 --> 00:13:27.199 +input like Barack Obama was born in + +00:13:22.560 --> 00:13:29.720 +Hawaii he was born in L um the smaller + +00:13:27.199 --> 00:13:31.360 +model would often do something like + +00:13:29.720 --> 00:13:35.399 +start repeating and actually if you + +00:13:31.360 --> 00:13:36.720 +sample sort of naively from the um + +00:13:35.399 --> 00:13:38.560 +larger model you can wind up in these + +00:13:36.720 --> 00:13:40.000 +situations as well right so if you just + +00:13:38.560 --> 00:13:41.959 +choose the most likely thing at each + +00:13:40.000 --> 00:13:43.399 +step you wind up in this Loop where it's + +00:13:41.959 --> 00:13:45.560 +like he was born in Hawaii he was born + +00:13:43.399 --> 00:13:48.199 +in Hawaii he was born in Hawaii um and + +00:13:45.560 --> 00:13:51.320 +this is behavior we generally don't want + +00:13:48.199 --> 00:13:52.680 +um if you do something like nucleus or + +00:13:51.320 --> 00:13:53.720 +top PE sampling you can wind up with + +00:13:52.680 --> 00:13:55.880 +things that are actually completely + +00:13:53.720 --> 00:13:58.839 +incorrect like he was born in Washington + +00:13:55.880 --> 00:14:01.480 +DC um but if you use contrastive + +00:13:58.839 --> 00:14:04.120 +decoding you take the outputs coming out + +00:14:01.480 --> 00:14:05.720 +of your expert model here and you + +00:14:04.120 --> 00:14:07.680 +subtract out the probabilities coming + +00:14:05.720 --> 00:14:10.160 +out of the weaker model and you can wind + +00:14:07.680 --> 00:14:11.880 +up with things that the higher model the + +00:14:10.160 --> 00:14:13.759 +stronger model ascribed probability to + +00:14:11.880 --> 00:14:15.480 +but the weaker model did not likely + +00:14:13.759 --> 00:14:16.920 +because these are sort of facts that the + +00:14:15.480 --> 00:14:18.959 +larger model knows that the smaller + +00:14:16.920 --> 00:14:20.800 +model does not so here we actually get + +00:14:18.959 --> 00:14:23.199 +the year Barack Obama was born which is + +00:14:20.800 --> 00:14:25.800 +maybe a fact that the larger model knows + +00:14:23.199 --> 00:14:27.639 +and the smaller model didn't know um and + +00:14:25.800 --> 00:14:29.759 +so this is just one of sort of a broad + +00:14:27.639 --> 00:14:32.560 +class of methods where you use external + +00:14:29.759 --> 00:14:35.199 +information to improve your decoding by + +00:14:32.560 --> 00:14:38.720 +modifying this distribution at each + +00:14:35.199 --> 00:14:40.720 +set um those are sort of a brief tour of + +00:14:38.720 --> 00:14:43.920 +a couple of different sampling methods + +00:14:40.720 --> 00:14:43.920 +before we move into search + +00:14:44.600 --> 00:14:50.440 +yeah + +00:14:46.279 --> 00:14:54.880 +yeah is it going to improve upon just + +00:14:50.440 --> 00:14:57.240 +the yeah it generally does um and the + +00:14:54.880 --> 00:14:59.800 +intuition for why this might be I think + +00:14:57.240 --> 00:15:01.680 +is that there are sort of these + +00:14:59.800 --> 00:15:04.560 +degenerate cases like just repeating + +00:15:01.680 --> 00:15:06.120 +over and over that both the expert and + +00:15:04.560 --> 00:15:09.000 +the weak model would give relatively + +00:15:06.120 --> 00:15:10.880 +high probability to um maybe the expert + +00:15:09.000 --> 00:15:13.199 +model is like slightly less 
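A toy sketch of the contrastive decoding score at one step, with made-up probabilities: prefer tokens where the expert's log-probability most exceeds the amateur's, restricted (as in the paper's plausibility constraint) to tokens the expert itself finds reasonably likely.

```python
import torch

p_expert = torch.tensor([0.40, 0.35, 0.20, 0.05])   # e.g., GPT-2 XL
p_amateur = torch.tensor([0.45, 0.40, 0.05, 0.10])  # e.g., GPT-2 small

# Only consider tokens with expert probability within a factor alpha
# of the expert's most likely token.
alpha = 0.1
plausible = p_expert >= alpha * p_expert.max()

score = p_expert.log() - p_amateur.log()
score[~plausible] = float("-inf")
print("chosen token id:", int(score.argmax()))  # token 2: expert knows it, amateur doesn't
```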
likely to do these things, but it's still an easy case for the model to learn, so both of those models will have high probability for those things. But the things that are genuinely good outputs that only the expert would get right, those will have low probability under the weak model, and so you're subtracting out all the degenerate behaviors and keeping the really good outputs.

[00:15:33] Question: if you're generating a longer sequence with contrastive decoding, how do you know at which steps to bring it in? Yeah, this is a great question. So this was: if you're doing contrastive decoding over a really long sequence, when do you choose to bring in the expert? For contrastive decoding, we're actually going to do this at every individual time step: we use the expert model to decode, and we bring in the amateur to subtract out probabilities at each next-token prediction. You don't have to do that (I think that's what they do in the paper); you could also decide to only do this if you have high uncertainty or something, if you don't have a really sharp probability distribution.

[00:16:23] Question: how weak should the weak predictor be? In the paper, what they look at is actually not a huge difference between the two models. You can see here this is GPT-2 XL and small, so there's a difference in parameter counts, and a bit of a difference in data, I think, but GPT-2 XL is certainly not a super strong model now. I think they try a couple of different settings, and the general intuition, if I'm remembering it correctly, is that you want a model that's not so close in performance to your expert that you're basically just subtracting out useful things, but you also don't want a model
that's so degenerate that it hasn't learned anything useful about your task at all. So I think it might depend on what task you're looking at.

[00:17:09] Question: does this require training the model? This is for inference; actually, everything we look at today will not require any training of the model.

[00:17:19] Okay, cool. So now we're going to step into a slightly different set of strategies, which is: maybe we don't just want something from the model distribution, or something from a modified distribution; maybe we actually just want the quote-unquote best thing, the single most likely output given our input. Here this would be the ŷ, the single sequence that has the highest score p(y|x) for the x that we gave the model. This section is called mode-seeking search, because this is the mode of the distribution over outputs: if you sampled a huge, huge number of times and looked at the single most likely sequence you got, it would be this ŷ. So how do we find this thing?

[00:18:06] Well, one idea is: we know the distribution at each individual step, so can we just pick the most likely thing from that distribution? In greedy decoding, we take the argmax, the single highest-probability token, at each step, and we continue generating until the single highest-probability token is the stop token, the end-of-sequence token. For an individual token, this is exactly what we want: if we only want a single-token output, this is the single most likely output, and that's great. But if we're looking at something that is maybe several tokens long, are we actually going to get the highest-probability thing? And if you kind of squint at this, you can see that maybe we have a problem here, where the highest-probability sequence that you get from multiplying across multiple steps doesn't necessarily start with the token
that was highest probability at time step one. Maybe if you're doing something like unconditional generation, the highest-probability token at time step one is always "the", but there could be a really probable sentence that just doesn't happen to start with the word "the", and you would never find it using greedy decoding. So this isn't going to give us the highest-probability output over a sequence that's more than one token long. Can we do anything better to try to find this output?

[00:19:25] And here we get into one of the most popular decoding methods, the one you've maybe heard of before, which is beam search. The idea here is that we don't want to miss a high-probability token that's hidden behind a lower-probability prefix, so we want to search through a couple of different options, so that we don't discard something too early that might have high probability later on in generation. This is a type of breadth-first search: we look at a wide variety of options at a given time step, pick some set of them to continue, and then look at a wide variety of options for the next time step, instead of generating all the way through one sequence and then all the way through another sequence. How this works is we pick a number of candidates we'd like to explore, a beam width; in this example we're going to pick three. And we say, all right, here are three options for time step one; if we pick each of those three options, what would be the three most likely things for time step two? Rather than choosing just the single most likely thing as in greedy decoding, we pick three options. So now we have three options for time step one and three options for time step two; we now have nine options here, three options and then three
more for each of these. We don't want to continue doing this, because it's going to combinatorially explode, so we need to choose some subset of these to continue with. The way we do that is we look at the probability over this two-token sequence, and we choose the three that have the highest probability overall. So in this instance we've chosen one thing from the first group and two things from the second group, and now we're back down to three hypotheses, each now two tokens long. We'll continue generating to time step three, get nine options, prune it back down to three, and continue until the end of generation, where we now have three sequences, and we just pick the one that's highest probability out of those three to return. This is not guaranteed to get you the highest-probability thing: you still have the risk that you could be pruning out something that's high probability. But in general this works much better than greedy decoding, and if you have a language model and you're not sure what decoding method it's using, odds are pretty good it's either beam search or temperature sampling. This is very effective and used pretty broadly.

[00:21:40] There are, however, some issues with beam search, and one of the biggest ones is that when you're doing this search for something that's very high likelihood, you really sacrifice a lot of diversity in your outputs. In particular, you could wind up at the end of beam search with three different outputs to choose from that are all pretty much the same: they're slightly different token sequences, but they look very similar. And so maybe you want to get a more diverse set. There are a couple of different methods in this category.
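A minimal sketch of that expand-and-prune loop; `step_logprobs` is an assumed stand-in for a model call returning next-token log-probabilities, and the end-of-sequence handling is simplified:

```python
import numpy as np

EOS = 0  # assumed id of the end-of-sequence token

def beam_search(step_logprobs, beam_width=3, max_len=20):
    """step_logprobs(prefix) -> np.ndarray of next-token log-probs.

    Scores a prefix by the sum of its token log-probs, keeping the
    beam_width highest-scoring prefixes alive at each step."""
    beams = [([], 0.0)]           # (token prefix, total log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            logp = step_logprobs(prefix)
            # expand each live beam with its top beam_width tokens
            for tok in np.argsort(logp)[-beam_width:]:
                candidates.append((prefix + [int(tok)], score + float(logp[tok])))
        # prune the up-to-width^2 candidates back down to beam_width
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in candidates[:beam_width]:
            (finished if prefix[-1] == EOS else beams).append((prefix, score))
        if not beams:
            break
    finished.extend(beams)
    return max(finished, key=lambda c: c[1])  # best hypothesis found
```

Greedy decoding falls out as the beam_width=1 special case of this loop.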
[00:22:08] I'm going to very briefly shout out two of them. The idea here is to reintroduce some of the benefits of sampling while still doing this kind of search for high-probability things. Diverse beam search is one of these methods, and here the idea is that we want to modify that scoring step, when we choose which three out of our nine beams to continue, to avoid choosing things that are really close to each other. So maybe our highest-probability thing is some sequence A, and if we look at the other sequences, there's one that's pretty high probability but very similar to that sequence, and there's one that's slightly lower probability but very different. Maybe we would choose the sequence that is a little lower probability, to maximize diversity in our set, to try to get a wider range of options to choose from later in generation. So this modifies the scoring to take into account not just likelihood but also similarity to the other beams.

[00:23:03] Another option down this path is stochastic beam search, where we keep the scoring the same, but rather than choosing just the top three most likely tokens to expand out each beam, we actually sample from some distribution. You could sample from the model distribution directly using ancestral sampling, or you could use any of the sampling methods we talked about in the last section. The idea here is similar to diverse beam search: we want a wider exploration of our model's output space, to explore more things instead of just winding up with a bunch of outputs that look very similar at the end of beam search. If folks are interested in these, I think the papers that both of these ideas came from are linked on the website.
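A sketch of the diversity-penalized selection idea; `token_overlap` is one simple stand-in for a similarity function (the actual diverse beam search paper works with grouped beams and a Hamming-diversity term, so this is a simplification), and the penalty weight is an assumed knob:

```python
def token_overlap(a, b):
    # fraction of shared tokens, a crude stand-in for n-gram overlap
    sa, sb = set(a), set(b)
    return len(sa & sb) / max(len(sa | sb), 1)

def pick_diverse(candidates, k=3, similarity=token_overlap, penalty=1.0):
    # candidates: list of (sequence, log_prob); greedily pick k,
    # penalizing each candidate by its similarity to earlier picks
    chosen = []
    pool = list(candidates)
    for _ in range(k):
        def score(cand):
            seq, logp = cand
            sim = max((similarity(seq, c[0]) for c in chosen), default=0.0)
            return logp - penalty * sim
        best = max(pool, key=score)
        chosen.append(best)
        pool.remove(best)
    return chosen
```

With penalty=0 this degenerates to ordinary beam pruning; larger penalties trade likelihood for spread.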
[00:23:48] Question: for stochastic beam search, does the sampling probability take into account the current path that we've already traveled? Yeah, exactly. So it's this selection step here, but instead of just doing greedy selection, we're going to do sampling.

[00:24:07] Question (back on contrastive decoding): for something super simple, if both models put high probability on it, are you still downweighting it? Yeah, so if it has a really high probability under both models, it would have a lower probability after doing this contrastive decoding. So if the smaller model is really good at your task, this might not work very well. I think in the paper they're generally evaluating on these open-ended generation tasks; I bet this works a lot worse for tasks the smaller model is already good at.

[00:25:02] Question: how do we measure similar beams? Yeah, this is a great question. You can define any kind of similarity function you like here, anything you'd use to evaluate how similar something is to a gold reference. I think in the original diverse beam search they do this by looking at exact token match across the two: if these beams are the same in all but one of the tokens, or if, you know, 50% of the tokens are shared across the beams, maybe these are really similar and we should try to choose two things that are different. But you could swap that out for any metric.

[00:25:46] Question (partially inaudible): for the stochastic beam search, it says it modifies the next-step selection; is it searching in a different space? Yeah, so it's the same probability distribution, but it'll see a different part of the distribution. So when you're
doing the greedy selection, you'll only ever look at the top three tokens in the next-token distribution, because you're just selecting the maximums. But in sampling, you could get the same tokens if they're really high likelihood, but you could also sample something that's further down in the distribution.

[00:26:38] Question, as a follow-up to that: in the sampling, do we take into account the probability of the prefix, the current hypothesis? Because otherwise it's the same as just sampling. Yeah, so in the sampling we're taking into account the prefix, and this sampling mechanism here could be ancestral sampling; the difference is that we're also doing a search step on top of that, to choose the maximum-likelihood things across multiple beams.

[00:27:17] [Instructor comment:] Another important thing is that you sample without replacement. Normally you sample with replacement, and you might get exactly the same thing twice; but when you're doing stochastic beam search you sample without replacement, so you get, say, three things according to the probability, but they're guaranteed to be different. One of the characteristics of beam search is you always get three different things, because you're picking the top three. When you do sampling like stochastic beam search, you also get three different things; they're not guaranteed to be the top, they're distributed according to the probability distribution, but they're guaranteed to be different. You can take a look at the paper for more details of exactly how it works.
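A tiny illustration of that without-replacement point, assuming NumPy and a toy distribution; real stochastic beam search does this over whole prefixes (the paper uses a Gumbel-top-k construction), so this is only the one-step intuition:

```python
import numpy as np

rng = np.random.default_rng(0)
probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])  # toy next-token distribution

# with replacement: duplicates are possible, e.g. token 0 drawn twice
with_rep = rng.choice(len(probs), size=3, p=probs, replace=True)

# without replacement: three *distinct* tokens, still favoring high p
without_rep = rng.choice(len(probs), size=3, p=probs, replace=False)
print(with_rep, without_rep)
```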
[00:27:58] Question: so then is the main difference, compared to temperature sampling, that we have n options we're keeping track of, instead of going with only one, and that you can't sample the same thing twice? Right, yeah. So, just to repeat for the recording: it's that there are n options we're keeping track of, and they're all going to be unique token sequences, at least. You can actually get the same output sequence from two different token sequences if you tokenize slightly differently, but these will always be unique token sequences.

[00:28:32] So that was a set of methods that we've developed to try to find the most probable sequence out of the model. In the next section, we're going to think about whether that's actually what we want to do at all. Do we really want the highest-probability thing? We know that outputs with really low probability tend to be worse than outputs with high probability. Maybe I'm trying to predict what the next sentence should be after "the cat saw the dog": "the cat sat down" is way higher probability than "the cat grew wings", and at least with the cats I've met, that sounds pretty much right; this is a much better output than "the cat grew wings". But if you look at just the outputs with relatively high probability, it's less clear that this defines an exact ranking between those outputs. Is "the cat sat down" necessarily better than "the cat ran away"? These both seem like pretty reasonable outputs to me, even though one of them is slightly higher probability. So do we really need to recover "the cat sat down"?

[00:29:47] This gets a little more complicated still if we look at a range of outputs. Say there are six outputs our model could give us, and here we're looking at full sequences, not individual tokens, just for clarity. Maybe our outputs, in order of probability, are: "the cat sat down", "it ran away", "it sprinted off", "it got out of there", "it's very small", and "it grew wings". So we're definitely sure that "the cat
sat down" is a better output than "the cat grew wings", and if we're doing a mode-seeking search, we would find that as our most likely thing (if we do a good job searching) and return it as our output. But if you look at the rest of this distribution, you see that there's actually a whole set of outputs after that that all say something that kind of means "the cat left the area"; it's just that this probability is split over three different generations. If you add up the probability mass of all three of these sequences, it's double the probability mass of "the cat sat down", but because none of these individual sequences is higher probability, if you're doing mode-seeking search you wouldn't be able to see this effect. So do we really want to return "the cat sat down", or do we want to return something that means the cat left the area?

[00:30:57] The question then is: if it's not probability that makes an output good, what is it? We have this one output that's really high probability but very different from everything else in our set, and then we have a couple of outputs that are all pretty high probability and similar to a bunch of other relatively high-probability things. So maybe it's less risky to return one of these. A thing that's higher probability but different from everything else could be different because it's way better, or it could be different because it's way worse.
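To make the mass-splitting point concrete with made-up numbers (these probabilities are illustrative, not from the lecture slides):

```python
# illustrative, assumed probabilities for the six outputs
probs = {
    "the cat sat down": 0.15,     # the mode
    "it ran away": 0.12,          # these three all mean
    "it sprinted off": 0.10,      # "the cat left the area"
    "it got out of there": 0.08,
}
left_mass = (probs["it ran away"] + probs["it sprinted off"]
             + probs["it got out of there"])
print(left_mass)  # 0.30: double the mode's 0.15, yet mode-seeking
                  # search would still return "the cat sat down"
```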
[00:31:29] Another way to think about this: maybe if you and your friends were cheating on a test, which you shouldn't do, but if you were going to do it, and all of your friends sent you their answers. Maybe one of your friends has a slightly higher score in the class than everyone else, but they said the answer was A and everyone else said the answer was B. You still might go with the answer that everyone else said, because it feels less risky: your one friend could be right when everyone else is wrong, or they could have made a mistake that no one else is making. So this is the same concept: we want an output that's relatively high probability but also relatively low risk. And so here, maybe if we were using this criterion, we'd return "the cat ran away" as our single output.

[00:32:16] So how do you find something that's high probability and low risk? There are two questions here: we have to figure out how to estimate probability, and how to estimate risk. If we're looking at a set of outputs like the six we saw before, maybe we can estimate probability just by counting. We could take a sample from the model and just look at frequencies in that sample: if something's in our sample twice as often, we just say it's twice as frequent, or twice as probable. This is called Monte Carlo sampling; if you did this an infinite number of times, it would give you exactly the model distribution, and for the reasonably sized sets we're working with, maybe around 100 samples, it gives us a reasonable approximation for what we need here. So we're just going to take a sample and count things in that sample to see how likely they are. That doesn't seem too bad.
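A tiny sketch of that counting idea; `sample_sequence` is an assumed stand-in for drawing one output from the model:

```python
from collections import Counter

def estimate_probs(sample_sequence, n=100):
    # draw n outputs and use relative frequency as the
    # Monte Carlo estimate of each sequence's probability
    counts = Counter(sample_sequence() for _ in range(n))
    return {seq: c / n for seq, c in counts.items()}
```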
[00:33:17] How do we estimate risk? The idea here is that we have a bunch of other things in this set of outputs, and we can treat those as pseudo-references: we can evaluate agreement between the thing we're looking at and each of those other references. This is the same idea as calculating similarity in diverse beam search: we use some kind of metric to compare how similar these things are. The metric could be anything you use downstream; it could be an n-gram overlap metric like ROUGE or BLEU, or it could be something neural or semantic, like BERTScore or BARTScore.

[00:33:51] This concept is a type of decoding called minimum Bayes risk (MBR) decoding, and what this equation captures (roughly, ŷ = argmax over y in S of the sum over y′ in S of sim(y, y′), with S a sample from the model) is exactly the intuition from a slide ago. We choose something that is low risk, which means it's similar to a lot of other things in this set of outputs we've sampled, and we choose something that's relatively high probability, which means that when we sum over the set, if something occurs in our sample a bunch of times, it has pretty strong weight in picking which of these outputs wins. If there's one thing in the set that appears many times, it has a strong influence on which thing we pick, and that's what captures high probability in this setting.

[00:34:38] To see how this works, we can look at an example in summarization. We choose some metric, maybe ROUGE, which is an n-gram overlap metric for summarization, and we say we're going to sample 100 things and use this equation to choose the one that has the lowest estimated risk according to MBR.
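A minimal MBR sketch over a list of sampled outputs, reusing sample frequency as the probability estimate; `similarity` is a stand-in for whatever metric you care about downstream (ROUGE, BERTScore, etc.):

```python
def mbr_decode(samples, similarity):
    # samples: list of generated sequences; duplicates matter, since
    # a sequence sampled often gets more weight as a pseudo-reference
    # similarity(a, b) -> higher means more similar
    def expected_gain(y):
        return sum(similarity(y, other) for other in samples)
    return max(set(samples), key=expected_gain)

# toy usage with exact-match similarity:
outs = ["A", "B", "B", "C"]
print(mbr_decode(outs, lambda a, b: float(a == b)))  # -> "B"
```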
[00:35:03] If we do that and look at this table of results, you can see that this outperforms the other methods we've looked at before. Greedy decoding here is just taking the single most likely thing at each step; BS is beam search with five or ten beams; and DBS is the diverse beam search we were talking about. If we use minimum Bayes risk with ROUGE as the determiner of similarity, we do way better across all of our metrics, and we do especially well on ROUGE, because that's the metric we've been using to select. If we swap this out for other metrics, you still see a performance improvement over these search methods. What's the catch? The catch is that MBR requires you to sample a hundred things, so it's a lot more expensive; it's a lot slower at inference time.

[00:36:04] Question: why does beam search with more beams perform worse? This is a relatively well-known phenomenon called the curse of beam search. [Mic issues.] Beam search is an approximate search, so if you add more beams, you should be doing better and better at finding the maximum-likelihood thing, and generally you are: you get something that is higher probability. But as you add more beams, you also often get something that does worse on your downstream metrics.

[00:37:12] Why does this happen? Why do you get something that's higher likelihood but lower performance downstream? This is another sort of degeneracy of beam search: the thing that is the absolute highest likelihood under your distribution might not actually be what you want downstream. This is one of the other things people use to motivate why you might want to do something like MBR instead. And
there's a great paper about this problem called "the inadequacy of the mode", because beam search is looking for the mode of the distribution.

[00:37:45] [Instructor comment:] One other thing I'd like to mention is that it also goes together with how you train your models, because most of our models are trained using maximum likelihood, and maximum likelihood isn't explicitly maximizing our ability to get the best answer; it's explicitly maximizing our ability to estimate the distribution of answers. So if you said "what is your favorite hobby?" to a dialogue system, often it'll answer "I don't know" or something like that, because that's more likely than answering any specific hobby: it's more likely than answering basketball, bowling, whatever else, because there are many, many different options. Especially if it's something a little more complicated, it will avoid answering, and in particular it ends up answering very short things, or sometimes it ends up repeating itself over and over again, and things like that. So it also goes together with the training of the model.

[00:38:57] Yeah, and this is still a problem in modern systems: if you could actually enumerate everything and see the single most likely sequence, it's often the empty sequence, just not outputting anything at all. And if that's your true mode of the distribution, then doing better at mode-seeking is not always helpful.

[00:39:25] Question (partially inaudible): could this be influenced by the confidence problem? Right, I think I see what you're saying, which is that the confidence gives you the confidence of a single exact
sequence, not the actual semantic space. So if you look at just the probability scores, you get the probability of an exact string, when what you really care about with confidence is the probability of the set of things that mean the same thing. Yeah, this is part of why calibration is really hard for long sequences.

[00:40:30] Great. So we're going to touch briefly on a couple of other things that aren't always explicitly described in this framework, but that you can think of as variants of minimum Bayes risk. If you're interested in this analysis, as Graham mentioned earlier, Alex Z., who is a first-year MLT, and I wrote a paper about this, which you can check out if you're interested.

[00:40:57] The two that I really want to touch on here are other inference-time things you can consider, which might look a little different at first blush. The first of these is output ensembling: say you have multiple different models, you get outputs from all of them, and now you need to choose a best output among that set. One of the common ways to do this is to compare something like an embedding similarity across models: does model one think these two things are really similar, does model two think these two things are really similar, and try to choose something that has really high similarity with a lot of the other outputs. Now that we've just been talking about MBR, you can probably see that this is the same general formulation; rather than summing over a set of outputs from a single model, you're now looking at outputs over a whole set of models. So some types of ensembling fall into this category of minimum Bayes risk methods.

[00:41:57] Another thing in this category is a recent decoding method called
self-consistency. The idea here is that you want to do something like mathematical reasoning, and you really care about getting the final answer right, but you don't necessarily care about getting all of the reasoning steps in between. So you prompt the model for an answer using something like chain of thought: you ask it to talk through the steps it's going to do and then give you a final answer. You sample many outputs this way, then you completely throw away the chains of thought and just take the answer from each output. You have that set of answers, maybe 20, 30, 100 answers, and you just return the one that was most frequently generated. What this is doing is a type of MBR where the metric you actually care about is exact match on this answer, ignoring the rest of the generation. And so here we have the same intuition: we want an output that is high probability (we're getting it generated a lot) but also low risk (not a lot of the other outputs in our set disagree with this answer).
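A minimal self-consistency sketch; `generate_cot` and `extract_answer` are assumed stand-ins for sampling one chain-of-thought generation and for pulling out its final answer:

```python
from collections import Counter

def self_consistency(generate_cot, extract_answer, n=40):
    # sample n chain-of-thought generations, keep only the final
    # answers, and return the majority-vote answer: MBR with an
    # exact-match metric on the answer span
    answers = [extract_answer(generate_cot()) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```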
+yeah that a model + +00:44:05.800 --> 00:44:12.760 +cannot yeah like why is averaging model + +00:44:08.359 --> 00:44:16.400 +weights not MBR um I think it's not MBR + +00:44:12.760 --> 00:44:18.559 +because the two um the key thing that I + +00:44:16.400 --> 00:44:20.880 +think really makes a method MBR is this + +00:44:18.559 --> 00:44:22.480 +concept of comparing between multiple um + +00:44:20.880 --> 00:44:24.880 +sort of pseudo + +00:44:22.480 --> 00:44:26.839 +references um and there you don't have + +00:44:24.880 --> 00:44:28.359 +the same like you aage model way can you + +00:44:26.839 --> 00:44:32.440 +wind up with sort of a single output on + +00:44:28.359 --> 00:44:34.040 +the end that maybe is like using like + +00:44:32.440 --> 00:44:35.800 +information from these two model + +00:44:34.040 --> 00:44:38.240 +distributions that you've sort of smush + +00:44:35.800 --> 00:44:41.160 +together um but it's not the same + +00:44:38.240 --> 00:44:44.720 +concept of like comparing against pseudo + +00:44:41.160 --> 00:44:44.720 +references or ranking in a + +00:44:48.920 --> 00:44:55.599 +set right so now this is sort of a this + +00:44:52.720 --> 00:44:57.559 +was a wide variety of methods to try to + +00:44:55.599 --> 00:44:59.040 +find an output that's just sort of good + +00:44:57.559 --> 00:45:01.440 +right we want an output that that is + +00:44:59.040 --> 00:45:03.480 +nice out of our model um but now we'd + +00:45:01.440 --> 00:45:05.880 +like to maybe enclose a few additional + +00:45:03.480 --> 00:45:08.280 +constraints so say I'm asking our model + +00:45:05.880 --> 00:45:10.720 +for some Hobbies I could use to stay in + +00:45:08.280 --> 00:45:11.920 +to stay in shape and no matter what I + +00:45:10.720 --> 00:45:14.160 +don't want the model to recommend + +00:45:11.920 --> 00:45:16.880 +climbing like I I just I don't want this + +00:45:14.160 --> 00:45:18.400 +as an option I've tried it I'm not a fan + +00:45:16.880 --> 00:45:21.240 +um how do I get the model to stop + +00:45:18.400 --> 00:45:22.760 +suggesting climbing to me and if you've + +00:45:21.240 --> 00:45:24.559 +sort of played around with some of the + +00:45:22.760 --> 00:45:26.200 +more recent llms you'd say maybe this is + +00:45:24.559 --> 00:45:27.480 +easy right you just tell the model the + +00:45:26.200 --> 00:45:30.160 +instruction that you don't want to talk + +00:45:27.480 --> 00:45:31.640 +about climbing and having talked to Bard + +00:45:30.160 --> 00:45:33.640 +recently I can tell you unfortunately + +00:45:31.640 --> 00:45:34.800 +that it's not that easy so I tell the + +00:45:33.640 --> 00:45:36.599 +model I don't want to talk about + +00:45:34.800 --> 00:45:38.000 +climbing it does okay for a little bit + +00:45:36.599 --> 00:45:40.920 +and then it's like all right but maybe + +00:45:38.000 --> 00:45:42.359 +you want to try rap climbing um and so + +00:45:40.920 --> 00:45:44.559 +we could continue trying to instruction + +00:45:42.359 --> 00:45:46.200 +to our model but maybe there's sort of a + +00:45:44.559 --> 00:45:49.079 +way to impose this constraint on the + +00:45:46.200 --> 00:45:50.680 +decoding side instead and so I'd say all + +00:45:49.079 --> 00:45:52.960 +right I'm going to do something dramatic + +00:45:50.680 --> 00:45:54.440 +right I know I can manipulate the + +00:45:52.960 --> 00:45:56.200 +probability distribution I'm just going + +00:45:54.440 --> 00:45:57.920 +to set the probability of climbing to be + +00:45:56.200 --> 00:46:00.440 +zero I don't want to see this token 
like, I'm completely over it. This is nice in some sense, because it's pretty easy to do: remember, we're doing a softmax over the outputs to get this probability distribution, so if we add a huge negative number to the logit for "climbing" before we do the softmax, its probability is going to be basically zero, and we're never going to see it as an output. But this doesn't seem like a perfect solution, right? What if the model recommends bouldering to me? Do I have to write a list of every possible climbing synonym in the world? What if there's an allowable way to use this token, like I want the model to suggest hiking, because climbing up a mountain to see a good view is relaxing, but that's a use of the word "climbing", and we just said we can't use the word "climbing"? Or what if we generate other related terms before we get to the restricted term? Like, the model starts suggesting "maybe you can work out by going to an indoor rock ___", and then what are we going to say? We can't say "climbing", so maybe the model suggests "rock collecting" as a hobby to stay in shape, and that doesn't sound good either. You could continue engineering more and more complicated rules here, but maybe we could do something a little simpler.

[00:47:06] What if I just sample a bunch of outputs from the model, and then I check if they're about climbing, and get rid of them if they are? This has the advantage that it's pretty easy to check after the fact whether the sequence satisfied the constraint: we could train some smaller model to guess if the topic of a sentence is climbing, we could check for keywords, we could have a friend who's willing to filter through this content and throw out everything that is about climbing.
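A sketch of both tricks just described, banning a token id by masking its logit and rejection-filtering whole samples; `banned_ids`, `sample_once`, and `violates_constraint` are assumed stand-ins:

```python
import numpy as np

def ban_tokens(logits, banned_ids):
    # add a huge negative number to banned logits before the softmax,
    # so their post-softmax probability is effectively zero
    masked = logits.copy()
    masked[list(banned_ids)] = -1e9
    return masked

def rejection_filter(sample_once, violates_constraint, max_tries=1000):
    # keep regenerating until a sample passes the post-hoc check;
    # this may need very many tries if the model loves the banned topic
    for _ in range(max_tries):
        out = sample_once()
        if not violates_constraint(out):
            return out
    return None  # give up
```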
+00:46:18.640 --> 00:46:22.480
+but this doesn't seem like a perfect solution
+
+00:46:20.240 --> 00:46:24.400
+right because you know what if the model
+
+00:46:22.480 --> 00:46:26.160
+recommends bouldering to me do I have to
+
+00:46:24.400 --> 00:46:28.599
+write like a sort of a list of every
+
+00:46:26.160 --> 00:46:30.599
+possible climbing synonym in the world
+
+00:46:28.599 --> 00:46:32.079
+um what if there's sort of an allowable
+
+00:46:30.599 --> 00:46:33.920
+way to use this token like I want the
+
+00:46:32.079 --> 00:46:35.319
+model to suggest hiking because climbing
+
+00:46:33.920 --> 00:46:37.480
+up a mountain to see a good view is
+
+00:46:35.319 --> 00:46:38.720
+relaxing but that's a use of the word
+
+00:46:37.480 --> 00:46:41.400
+climbing and we just said that we can't
+
+00:46:38.720 --> 00:46:43.520
+use the word climbing um or what if we
+
+00:46:41.400 --> 00:46:45.480
+sort of generate other related terms
+
+00:46:43.520 --> 00:46:47.520
+before we get to the restricted term
+
+00:46:45.480 --> 00:46:49.359
+like the model starts suggesting maybe
+
+00:46:47.520 --> 00:46:51.480
+you can work out by going to an indoor
+
+00:46:49.359 --> 00:46:52.920
+rock blank and then what are we going to
+
+00:46:51.480 --> 00:46:54.800
+say there we can't say rock
+
+00:46:52.920 --> 00:46:57.079
+climbing so maybe the model suggests
+
+00:46:54.800 --> 00:46:58.640
+rock collecting is a
+
+00:46:57.079 --> 00:47:01.400
+hobby to stay in shape and that doesn't
+
+00:46:58.640 --> 00:47:03.480
+sound good either um you could continue
+
+00:47:01.400 --> 00:47:05.640
+like sort of engineering more and more
+
+00:47:03.480 --> 00:47:06.599
+complicated rules here but maybe we
+
+00:47:05.640 --> 00:47:08.760
+could do something that's a little
+
+00:47:06.599 --> 00:47:10.559
+simpler so what if I just sample a bunch
+
+00:47:08.760 --> 00:47:11.920
+of outputs from the model and then I
+
+00:47:10.559 --> 00:47:14.359
+check if they're about climbing and I
+
+00:47:11.920 --> 00:47:16.280
+get rid of them if they are right um
+
+00:47:14.359 --> 00:47:18.200
+this sort of has the advantage that it's
+
+00:47:16.280 --> 00:47:19.599
+pretty easy to check after the fact if
+
+00:47:18.200 --> 00:47:22.480
+the sequence has satisfied this
+
+00:47:19.599 --> 00:47:24.400
+constraint you know we could train some
+
+00:47:22.480 --> 00:47:26.200
+smaller model to guess if the topic of a
+
+00:47:24.400 --> 00:47:27.960
+sentence is about climbing we could check
+
+00:47:26.200 --> 00:47:30.040
+for keywords we could have a friend
+
+00:47:27.960 --> 00:47:31.359
+who's willing to see this content like
+
+00:47:30.040 --> 00:47:33.040
+filter through it and throw everything
+
+00:47:31.359 --> 00:47:36.480
+out that's not about climbing that is
+
+00:47:33.040 --> 00:47:38.280
+about climbing but if this model um
+
+00:47:36.480 --> 00:47:40.119
+ascribes really high likelihood to this
+
+00:47:38.280 --> 00:47:42.559
+like if this model was trained on you
+
+00:47:40.119 --> 00:47:44.760
+know data from CS PhD students this
+
+00:47:42.559 --> 00:47:46.240
+could be an extremely high likelihood
+
+00:47:44.760 --> 00:47:48.319
+suggestion and so we might need to
+
+00:47:46.240 --> 00:47:49.839
+regenerate hundreds or thousands of
+
+00:47:48.319 --> 00:47:52.559
+sequences to find something that's not
+
+00:47:49.839 --> 00:47:55.240
+about climbing um and that feels a little
+
+00:47:52.559 --> 00:47:56.920
+bit inefficient right so is there
+
+00:47:55.240 --> 00:47:59.040
+something that we can do that's a little
+
+00:47:56.920 --> 00:48:01.599
+bit better than that well really we'd
+
+00:47:59.040 --> 00:48:03.200
+like to guess at some point during our
+
+00:48:01.599 --> 00:48:05.200
+generation if the sequence is going to
+
+00:48:03.200 --> 00:48:08.000
+be about climbing and maybe like
+
+00:48:05.200 --> 00:48:10.640
+recalibrate or you know we could even
+
+00:48:08.000 --> 00:48:12.079
+restart or sort of shape our generations
+
+00:48:10.640 --> 00:48:14.520
+so that we don't wind up with a sequence
+
+00:48:12.079 --> 00:48:16.319
+that's about climbing in the first place
+
+00:48:14.520 --> 00:48:19.359
+um one of the methods that we'll discuss
+
+00:48:16.319 --> 00:48:20.920
+to do this is a method called FUDGE um
+
+00:48:19.359 --> 00:48:22.800
+and unfortunately in their paper they
+
+00:48:20.920 --> 00:48:24.240
+don't have the same anti-climbing bias I
+
+00:48:22.800 --> 00:48:27.000
+do so this example is actually about
+
+00:48:24.240 --> 00:48:29.000
+formality instead um the idea here is
+
+00:48:27.000 --> 00:48:32.079
+that we want a sequence output of the
+
+00:48:29.000 --> 00:48:34.079
+model that sort of satisfies this
+
+00:48:32.079 --> 00:48:36.079
+constraint of being formal and the way
+
+00:48:34.079 --> 00:48:39.960
+we're going to do this is at each step
+
+00:48:36.079 --> 00:48:41.640
+of prediction we get the outputs of what
+
+00:48:39.960 --> 00:48:44.160
+the model predicts is the next token
+
+00:48:41.640 --> 00:48:47.319
+right this sort of distribution here in
+
+00:48:44.160 --> 00:48:49.760
+blue and we also have some second
+
+00:48:47.319 --> 00:48:52.079
+distribution which says given sort of
+
+00:48:49.760 --> 00:48:54.480
+what we have so far how likely is this
+
+00:48:52.079 --> 00:48:56.920
+to be a formal sentence at the end right
+
+00:48:54.480 --> 00:48:58.880
+does a sentence that starts do you want
+
+00:48:56.920 --> 00:49:01.200
+have a high likelihood of being formal
+
+00:48:58.880 --> 00:49:04.559
+versus a sentence that starts do you
+
+00:49:01.200 --> 00:49:07.200
+prefer and so this sort of guess at what
+
+00:49:04.559 --> 00:49:09.520
+will be formal at the end of the um
+
+00:49:07.200 --> 00:49:10.960
+generation will put high likelihood on
+
+00:49:09.520 --> 00:49:13.599
+things that result in really formal
+
+00:49:10.960 --> 00:49:15.880
+sentences like do you prefer or do you
+
+00:49:13.599 --> 00:49:17.200
+thus whereas the original model might
+
+00:49:15.880 --> 00:49:19.440
+have higher likelihood on things that
+
+00:49:17.200 --> 00:49:22.559
+are maybe more commonly said like do you
+
+00:49:19.440 --> 00:49:24.319
+want um so we combine these two
+
+00:49:22.559 --> 00:49:26.280
+distributions you can just multiply them
+
+00:49:24.319 --> 00:49:29.079
+together and then we sample from this
+
+00:49:26.280 --> 00:49:30.520
+modified distribution which now has some
+
+00:49:29.079 --> 00:49:32.359
+sort of high weight on things that the
+
+00:49:30.520 --> 00:49:33.559
+model thinks are likely but also takes
+
+00:49:32.359 --> 00:49:35.960
+into account the likelihood of
+
+00:49:33.559 --> 00:49:38.240
+satisfying a constraint um
+
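+As a minimal sketch of that combination step, with lm_probs and p_formal as hypothetical stand-ins for the language model and the future discriminator (the actual FUDGE paper also restricts this to the top candidate tokens for efficiency, and learns the discriminator rather than using a toy one):
+
+import torch
+
+def fudge_step(prefix, lm_probs, p_formal):
+    # lm_probs(prefix): tensor of shape (vocab,), the model's
+    # next-token distribution; p_formal(seq): a scalar guess that
+    # seq will end up formal; both are assumed helpers, not a real API
+    probs = lm_probs(prefix)
+    future = torch.tensor([p_formal(prefix + [t]) for t in range(len(probs))])
+    combined = probs * future             # multiply the two distributions
+    combined = combined / combined.sum()  # renormalize
+    return torch.multinomial(combined, num_samples=1).item()
+
+# toy stand-ins just to make the sketch executable
+toy_lm = lambda prefix: torch.softmax(torch.randn(5), dim=-1)
+toy_disc = lambda seq: 1.0 / (1.0 + len(seq))
+token = fudge_step([2, 4], toy_lm, toy_disc)
+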
+00:49:35.960 --> 00:49:40.640
+this is another sort of method of
+
+00:49:38.240 --> 00:49:42.520
+modifying our sampling distribution with
+
+00:49:40.640 --> 00:49:44.520
+some external information here and so
+
+00:49:42.520 --> 00:49:47.440
+this results in sequences that wind up
+
+00:49:44.520 --> 00:49:48.799
+being sort of more likely to be formal
+
+00:49:47.440 --> 00:49:50.280
+without having to sample a whole bunch of
+
+00:49:48.799 --> 00:49:52.880
+sentences and reject the ones that we
+
+00:49:50.280 --> 00:49:54.720
+think don't satisfy this constraint so
+
+00:49:52.880 --> 00:49:57.119
+how do we get sort of a guess of what
+
+00:49:54.720 --> 00:49:58.839
+will be formal at the end of generation
+
+00:49:57.119 --> 00:50:01.319
+um this is where the name FUDGE comes
+
+00:49:58.839 --> 00:50:03.319
+from the FUD stands for future
+
+00:50:01.319 --> 00:50:06.640
+discriminator and so what they do is
+
+00:50:03.319 --> 00:50:08.920
+they train a model on prefixes to guess
+
+00:50:06.640 --> 00:50:10.400
+whether that sequence will be formal um
+
+00:50:08.920 --> 00:50:12.040
+you can do this if you have a bunch of
+
+00:50:10.400 --> 00:50:15.319
+data that's sort of sorted into formal
+
+00:50:12.040 --> 00:50:17.720
+and not formal right every sort of
+
+00:50:15.319 --> 00:50:20.119
+prefix of a sentence in the formal
+
+00:50:17.720 --> 00:50:21.480
+category is a training example right you
+
+00:50:20.119 --> 00:50:23.720
+know a sentence that starts do you
+
+00:50:21.480 --> 00:50:27.599
+prefer you can chop off each token to
+
+00:50:23.720 --> 00:50:29.920
+get sort of a set of prefixes
+
+00:50:27.599 --> 00:50:31.160
+of sequences that have the label formal
+
+00:50:29.920 --> 00:50:33.559
+and you can do the same thing to your
+
+00:50:31.160 --> 00:50:34.920
+informal set and train a discriminator
+
+00:50:33.559 --> 00:50:36.559
+to choose between them to say like
+
+00:50:34.920 --> 00:50:38.400
+what's the probability that the sentence
+
+00:50:36.559 --> 00:50:41.160
+will belong to the formal set when we
+
+00:50:38.400 --> 00:50:43.319
+finish and so this idea of sort of
+
+00:50:41.160 --> 00:50:44.359
+trying to guess at a given decoding step
+
+00:50:43.319 --> 00:50:49.480
+if we're going to wind up with our
+
+00:50:44.359 --> 00:50:50.799
+constraints satisfied at the end um is a
+
+00:50:49.480 --> 00:50:53.000
+sort of key way to do constrained
+
+00:50:50.799 --> 00:50:56.000
+decoding um and one that we'll return to
+
+00:50:53.000 --> 00:50:58.280
+in just a couple slides here
+
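+As a small sketch of how that prefix-labeled training data might be built; the two sentence lists and the whitespace tokenizer here are placeholders, not real data:
+
+def prefix_examples(sentences, label, tokenize):
+    # every prefix of every sentence becomes one training example
+    examples = []
+    for sentence in sentences:
+        tokens = tokenize(sentence)
+        for i in range(1, len(tokens) + 1):
+            examples.append((tokens[:i], label))
+    return examples
+
+formal = ["Do you prefer tea or coffee?"]   # placeholder data
+informal = ["you want tea or what"]
+data = (prefix_examples(formal, 1, str.split)
+        + prefix_examples(informal, 0, str.split))
+# data can now train a binary classifier estimating
+# P(sequence ends up formal | prefix)
+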
+00:50:56.000 --> 00:51:00.440
+I want to touch on something
+
+00:50:58.280 --> 00:51:03.079
+slightly different which is that maybe
+
+00:51:00.440 --> 00:51:04.599
+one of the constraints we care about is
+
+00:51:03.079 --> 00:51:07.319
+something a little more nebulous like we
+
+00:51:04.599 --> 00:51:09.160
+want to match human preference um the
+
+00:51:07.319 --> 00:51:12.079
+way that we usually accomplish this
+
+00:51:09.160 --> 00:51:14.920
+constraint is a little bit different
+
+00:51:12.079 --> 00:51:16.040
+right um this we'd usually do through
+
+00:51:14.920 --> 00:51:18.839
+like reinforcement learning from
+
+00:51:16.040 --> 00:51:21.559
+human feedback um and so we take sort of
+
+00:51:18.839 --> 00:51:24.960
+our original model distribution and we
+
+00:51:21.559 --> 00:51:27.960
+take a sort of really tight
+
+00:51:24.960 --> 00:51:30.200
+distribution of evidence that says like
+
+00:51:27.960 --> 00:51:31.680
+um this model says that this sequence is
+
+00:51:30.200 --> 00:51:33.960
+really high reward this sequence is
+
+00:51:31.680 --> 00:51:35.640
+really low reward and we try to sort of
+
+00:51:33.960 --> 00:51:38.200
+combine them somehow through training so
+
+00:51:35.640 --> 00:51:41.240
+we get a new model that is um quote
+
+00:51:38.200 --> 00:51:43.240
+unquote aligned in that it has like a
+
+00:51:41.240 --> 00:51:45.280
+higher likelihood of giving us things
+
+00:51:43.240 --> 00:51:48.640
+that have really high reward according
+
+00:51:45.280 --> 00:51:51.319
+to our reward distribution um you can
+
+00:51:48.640 --> 00:51:53.599
+view this though as a type of Bayesian
+
+00:51:51.319 --> 00:51:55.119
+inference and so what this means is the
+
+00:51:53.599 --> 00:51:57.440
+distribution that we really want to get
+
+00:51:55.119 --> 00:51:59.880
+at the end is a distribution that
+
+00:51:57.440 --> 00:52:03.160
+combines our original model's
+
+00:51:59.880 --> 00:52:05.680
+distribution and some idea of like how
+
+00:52:03.160 --> 00:52:08.480
+likely we are to satisfy the reward
+
+00:52:05.680 --> 00:52:10.720
+right um this we do through
+
+00:52:08.480 --> 00:52:12.359
+reinforcement learning but if we sort of
+
+00:52:10.720 --> 00:52:14.480
+know what these two distributions look
+
+00:52:12.359 --> 00:52:16.119
+like we've we've just been talking about
+
+00:52:14.480 --> 00:52:17.680
+a lot of methods that modify the
+
+00:52:16.119 --> 00:52:20.119
+original model's distribution with
+
+00:52:17.680 --> 00:52:21.880
+external information it seems like maybe
+
+00:52:20.119 --> 00:52:24.760
+we could just add that external
+
+00:52:21.880 --> 00:52:26.200
+information in at decoding time to get
+
+00:52:24.760 --> 00:52:29.040
+some of the same
+
+00:52:26.200 --> 00:52:31.040
+effects um and it turns out you can do
+
+00:52:29.040 --> 00:52:32.799
+exactly this so this is a paper from
+
+00:52:31.040 --> 00:52:36.680
+last year called reward augmented
+
+00:52:32.799 --> 00:52:39.079
+decoding and the idea here is sort of um
+
+00:52:36.680 --> 00:52:41.839
+in the same conceptual class as FUDGE
+
+00:52:39.079 --> 00:52:44.079
+but instead of um predicting whether
+
+00:52:41.839 --> 00:52:46.079
+we're likely to satisfy the constraint
+
+00:52:44.079 --> 00:52:47.599
+we're predicting how much reward we
+
+00:52:46.079 --> 00:52:49.880
+think that sequence will have at the end
+
+00:52:47.599 --> 00:52:52.599
+of generation so we take our original
+
+00:52:49.880 --> 00:52:54.839
+model without doing any RLHF and we get
+
+00:52:52.599 --> 00:52:58.160
+the output we get the predictions for
+
+00:52:54.839 --> 00:52:59.400
+the next token and then we use a model
+
+00:52:58.160 --> 00:53:02.359
+that's been trained to predict the
+
+00:52:59.400 --> 00:53:05.040
+likely reward given some prefix like a
+
+00:53:02.359 --> 00:53:06.720
+future discriminator and we calculate
+
+00:53:05.040 --> 00:53:08.200
+the likely reward if we pick each of
+
+00:53:06.720 --> 00:53:09.799
+those tokens and then we use the
+
+00:53:08.200 --> 00:53:12.319
+combination of those two distributions + +00:53:09.799 --> 00:53:13.720 +to choose what to decode next um and + +00:53:12.319 --> 00:53:16.000 +this sort of gives you some of the + +00:53:13.720 --> 00:53:18.440 +benefits of rlf without actually having + +00:53:16.000 --> 00:53:21.200 +to do reinforcement learning so it's a + +00:53:18.440 --> 00:53:23.160 +way of treating like aligning to human + +00:53:21.200 --> 00:53:26.839 +feedback as just another constraint that + +00:53:23.160 --> 00:53:30.400 +you can impose at decoding point + +00:53:26.839 --> 00:53:32.319 +so those were sort of a a subset of the + +00:53:30.400 --> 00:53:34.280 +um constrains decoding strategies that + +00:53:32.319 --> 00:53:35.799 +people use um before we get into the + +00:53:34.280 --> 00:53:38.400 +human and the loop stack are there any + +00:53:35.799 --> 00:53:38.400 +questions on + +00:53:39.040 --> 00:53:43.599 +this yes for + +00:53:44.960 --> 00:53:48.319 +the do you have + +00:53:52.799 --> 00:53:57.440 +to right so for the discrimin do you + +00:53:55.640 --> 00:54:00.000 +need to train one for every constraint + +00:53:57.440 --> 00:54:01.440 +and you do yeah so you need to have some + +00:54:00.000 --> 00:54:02.920 +set of data that satisfies your + +00:54:01.440 --> 00:54:05.319 +constraint and some set of data that + +00:54:02.920 --> 00:54:08.200 +doesn't before you can enforce a new + +00:54:05.319 --> 00:54:10.200 +constraint in an alternative might be + +00:54:08.200 --> 00:54:12.040 +like in the paper that's what they did + +00:54:10.200 --> 00:54:16.400 +but an alternative might be just to + +00:54:12.040 --> 00:54:18.359 +train a discriminator to determine + +00:54:16.400 --> 00:54:20.880 +whether any constraint was violated so + +00:54:18.359 --> 00:54:23.359 +if you have 100 constraints you could do + +00:54:20.880 --> 00:54:25.599 +a binary prier about whether any + +00:54:23.359 --> 00:54:26.880 +constraint is violated and then + +00:54:25.599 --> 00:54:29.040 +also + +00:54:26.880 --> 00:54:30.559 +sufficient but if you wanted to add a + +00:54:29.040 --> 00:54:34.079 +new constraint you'd still have to + +00:54:30.559 --> 00:54:34.079 +retrain or you have to retrain + +00:54:35.160 --> 00:54:41.319 +or the the reason that this is sort of + +00:54:38.119 --> 00:54:43.119 +relatively reasonable to do is that this + +00:54:41.319 --> 00:54:45.240 +determination of if a constraint is + +00:54:43.119 --> 00:54:46.960 +likely to be violated is sort of a a + +00:54:45.240 --> 00:54:48.520 +lighter weight or an easier task to + +00:54:46.960 --> 00:54:50.520 +learn you can use a relatively small + +00:54:48.520 --> 00:54:52.079 +model for this versus like your big + +00:54:50.520 --> 00:54:53.680 +model just that has to be able to + +00:54:52.079 --> 00:54:55.920 +predict the next token for any sequence + +00:54:53.680 --> 00:54:58.400 +anymore yeah another another like + +00:54:55.920 --> 00:55:00.760 +interesting thing is if you think about + +00:54:58.400 --> 00:55:01.520 +it normally you're predicting with your + +00:55:00.760 --> 00:55:04.119 +big + +00:55:01.520 --> 00:55:06.359 +softmax like this over all of your + +00:55:04.119 --> 00:55:09.680 +vocabulary you can even use the same + +00:55:06.359 --> 00:55:11.920 +representations here to predict with a + +00:55:09.680 --> 00:55:13.359 +binary classifier uh whether the + +00:55:11.920 --> 00:55:14.559 +constraint is violated let's say you + +00:55:13.359 --> 00:55:17.240 +have 100 + +00:55:14.559 --> 00:55:19.240 +constraints this 
is still a vector of + +00:55:17.240 --> 00:55:21.520 +size 100 compared to your vector of size + +00:55:19.240 --> 00:55:26.240 +32,000 that you're using for llama right + +00:55:21.520 --> 00:55:28.280 +so it's not like this adds the training + +00:55:26.240 --> 00:55:32.799 +would cost some time but it adds very + +00:55:28.280 --> 00:55:32.799 +little like inference time I guess + +00:55:33.440 --> 00:55:38.960 +basically the rock + +00:55:35.880 --> 00:55:41.400 +sound so when you do the constraint you + +00:55:38.960 --> 00:55:43.160 +use like a more General + +00:55:41.400 --> 00:55:44.680 +like do + +00:55:43.160 --> 00:55:48.160 +notest + +00:55:44.680 --> 00:55:50.799 +or I guess like in that constraint for + +00:55:48.160 --> 00:55:50.799 +you can add + +00:55:52.559 --> 00:55:57.000 +like, is there + +00:55:57.880 --> 00:56:00.720 +like is there a way to generalize your + +00:55:59.400 --> 00:56:04.760 +constraint would be like don't talk + +00:56:00.720 --> 00:56:07.039 +about this whole set of hobes um you + +00:56:04.760 --> 00:56:08.960 +could do that by training a + +00:56:07.039 --> 00:56:10.400 +discriminator um by training one + +00:56:08.960 --> 00:56:12.359 +discriminator that considers all of + +00:56:10.400 --> 00:56:15.119 +those or by training like a hundred + +00:56:12.359 --> 00:56:17.559 +different discriminators and then um + +00:56:15.119 --> 00:56:19.520 +sort of taking like the maximum score + +00:56:17.559 --> 00:56:21.240 +from any of them right like you want to + +00:56:19.520 --> 00:56:23.240 +you want to be able to exclude all of + +00:56:21.240 --> 00:56:27.799 +these things so you consider if any of + +00:56:23.240 --> 00:56:30.720 +them are violated yeah and for um reward + +00:56:27.799 --> 00:56:32.839 +augmented recoding how do we sort of + +00:56:30.720 --> 00:56:36.039 +like frame that reward model or is that + +00:56:32.839 --> 00:56:38.400 +just come from the previously done rhf + +00:56:36.039 --> 00:56:41.079 +data that the store from there and then + +00:56:38.400 --> 00:56:44.119 +you sort of like FR another + +00:56:41.079 --> 00:56:47.880 +discriminator but this one + +00:56:44.119 --> 00:56:50.799 +now I I fully understand yeah so how do + +00:56:47.880 --> 00:56:52.920 +we get the the reward model here this is + +00:56:50.799 --> 00:56:55.280 +we can use the same data that we' use + +00:56:52.920 --> 00:56:58.000 +for rhf but we need a slightly different + +00:56:55.280 --> 00:57:01.119 +model so for rhf we'll train a reward + +00:56:58.000 --> 00:57:02.599 +model over full sequences right and here + +00:57:01.119 --> 00:57:05.280 +we need to do the same trick where we + +00:57:02.599 --> 00:57:07.280 +sort of look at just prefixes and try to + +00:57:05.280 --> 00:57:09.640 +guess the reward Downstream but if we + +00:57:07.280 --> 00:57:12.440 +have already have preference data then + +00:57:09.640 --> 00:57:15.119 +we have some um like we have a data + +00:57:12.440 --> 00:57:16.720 +source to do this with I think if I'm + +00:57:15.119 --> 00:57:19.240 +remembering correctly they also had a + +00:57:16.720 --> 00:57:20.920 +couple more sort of tricks for data + +00:57:19.240 --> 00:57:22.640 +augmentation to get this to work this is + +00:57:20.920 --> 00:57:25.720 +sort of like a non-trivial thing to + +00:57:22.640 --> 00:57:28.039 +figure out um because like reward is + +00:57:25.720 --> 00:57:30.200 +generally a secret bual + +00:57:28.039 --> 00:57:32.280 +attribute and also if you don't know + +00:57:30.200 --> 00:57:34.160 
+very much about rhf we're going to cover + +00:57:32.280 --> 00:57:36.400 +that the future class so don't worry if + +00:57:34.160 --> 00:57:37.880 +this is a yeah sorry to Jump Ahead a + +00:57:36.400 --> 00:57:39.880 +little no no + +00:57:37.880 --> 00:57:43.640 +wores + +00:57:39.880 --> 00:57:47.240 +yeah application this like why would we + +00:57:43.640 --> 00:57:49.640 +doing this to ensure it could be like + +00:57:47.240 --> 00:57:52.839 +our llm would want to highlight certain + +00:57:49.640 --> 00:57:53.799 +qualities like we want our evence to be + +00:57:52.839 --> 00:57:55.960 +more + +00:57:53.799 --> 00:57:57.839 +empathetic is there + +00:57:55.960 --> 00:57:59.440 +something yeah like what are the real + +00:57:57.839 --> 00:58:01.280 +world applications like could we use + +00:57:59.440 --> 00:58:03.680 +this to make L more empathetic or + +00:58:01.280 --> 00:58:06.359 +something yeah any any real attribute + +00:58:03.680 --> 00:58:08.000 +that you can sort of collect like + +00:58:06.359 --> 00:58:09.839 +positive and negative data for you could + +00:58:08.000 --> 00:58:12.200 +do this kind of constraints for I think + +00:58:09.839 --> 00:58:15.119 +the the ones you see most commonly are + +00:58:12.200 --> 00:58:16.480 +the human preference and then like + +00:58:15.119 --> 00:58:18.839 +negative constraints like you don't want + +00:58:16.480 --> 00:58:20.000 +your model to generate offensive content + +00:58:18.839 --> 00:58:21.839 +and if you can build like a good + +00:58:20.000 --> 00:58:23.319 +discriminator for is a sentence going in + +00:58:21.839 --> 00:58:26.160 +a really offensive Direction you can + +00:58:23.319 --> 00:58:28.440 +kind of stop it from gener + +00:58:26.160 --> 00:58:30.480 +yeah would it be a good idea if you + +00:58:28.440 --> 00:58:33.760 +generate a bunch of cons and ask the + +00:58:30.480 --> 00:58:35.480 +model itself whether it violates the + +00:58:33.760 --> 00:58:37.319 +yeah you could do that for sure could + +00:58:35.480 --> 00:58:38.920 +you ask like could you generate a bunch + +00:58:37.319 --> 00:58:42.440 +of samples and ask the model if it + +00:58:38.920 --> 00:58:44.720 +violates the constraint um this is also + +00:58:42.440 --> 00:58:47.119 +a type of sort of sample and then rerank + +00:58:44.720 --> 00:58:52.319 +strategy um but yeah this would be sort + +00:58:47.119 --> 00:58:54.000 +of a more um clever like less + +00:58:52.319 --> 00:58:55.559 +heavyweight version of this checking if + +00:58:54.000 --> 00:58:57.319 +it's about climate means right you'd + +00:58:55.559 --> 00:58:58.520 +like ask the model if it violated the + +00:58:57.319 --> 00:59:00.160 +constraint and if it's a good enough + +00:58:58.520 --> 00:59:02.480 +model it could probably do that pretty + +00:59:00.160 --> 00:59:05.160 +well I suppose in that case you don't + +00:59:02.480 --> 00:59:08.160 +have to thing anything yeah yeah and + +00:59:05.160 --> 00:59:10.359 +this is sort of a general like the + +00:59:08.160 --> 00:59:12.240 +generating text that like satisfies a + +00:59:10.359 --> 00:59:14.079 +constraint is harder than checking if a + +00:59:12.240 --> 00:59:16.280 +text satisfies a constraint so even if + +00:59:14.079 --> 00:59:17.880 +the model isn't good about like not + +00:59:16.280 --> 00:59:19.440 +generating text about climbing when you + +00:59:17.880 --> 00:59:20.520 +tell it to it might be able to tell if + +00:59:19.440 --> 00:59:23.640 +text is + +00:59:20.520 --> 00:59:26.640 +about yeah yeah so how do + 
+00:59:23.640 --> 00:59:26.640 +you + +00:59:28.400 --> 00:59:32.359 +have different + +00:59:32.920 --> 00:59:36.319 +different you have + +00:59:36.599 --> 00:59:42.119 +to yeah like how do you collect the data + +00:59:38.839 --> 00:59:45.720 +to train this discriminator um generally + +00:59:42.119 --> 00:59:47.160 +you're going to see like you'll look to + +00:59:45.720 --> 00:59:48.720 +see if there are data sets that already + +00:59:47.160 --> 00:59:50.160 +captured this attribute or you could + +00:59:48.720 --> 00:59:51.599 +sort of write her istics to try to + +00:59:50.160 --> 00:59:53.839 +recover it if it's an attribute that not + +00:59:51.599 --> 00:59:55.480 +a lot of other people care about like + +00:59:53.839 --> 00:59:58.280 +you could write your puristic to check + +00:59:55.480 --> 01:00:00.160 +if text is about climbing for instance + +00:59:58.280 --> 01:00:02.359 +um and then try to recover what noisy + +01:00:00.160 --> 01:00:04.200 +samples of data that is or is not about + +01:00:02.359 --> 01:00:05.559 +climbing maybe you could scrape a + +01:00:04.200 --> 01:00:07.000 +climbing forum and then scrape like a + +01:00:05.559 --> 01:00:09.079 +hiking forum and use the difference + +01:00:07.000 --> 01:00:10.319 +between them um but for a lot of tests + +01:00:09.079 --> 01:00:11.760 +there's actually pretty good data sets + +01:00:10.319 --> 01:00:14.400 +already out there for this so there's + +01:00:11.760 --> 01:00:17.480 +like in there's a lot of style transfer + +01:00:14.400 --> 01:00:20.200 +tasks that are like go from informal to + +01:00:17.480 --> 01:00:22.240 +formal or go from this to that or like + +01:00:20.200 --> 01:00:24.039 +make this text in an iic contamin and + +01:00:22.240 --> 01:00:26.559 +you can find like data from those + +01:00:24.039 --> 01:00:26.559 +sources + +01:00:26.799 --> 01:00:31.599 +we never like talked about F yet but I'm + +01:00:29.520 --> 01:00:34.520 +really curious with like the word a + +01:00:31.599 --> 01:00:38.039 +beting whether it would perform better + +01:00:34.520 --> 01:00:39.079 +than like fineing on RF like certainly + +01:00:38.039 --> 01:00:42.720 +more + +01:00:39.079 --> 01:00:45.039 +efficient but I I was I think this is a + +01:00:42.720 --> 01:00:49.760 +comparison they make in their paper but + +01:00:45.039 --> 01:00:52.520 +I don't remember their pun on yeah um in + +01:00:49.760 --> 01:00:55.280 +general there's this sort of a like you + +01:00:52.520 --> 01:00:57.039 +can pay a onetime kind of heavy cost to + +01:00:55.280 --> 01:00:58.880 +fine-tune or you can pay costs at + +01:00:57.039 --> 01:01:01.160 +inference time every time to make sort + +01:00:58.880 --> 01:01:03.880 +of a to make your model better in any of + +01:01:01.160 --> 01:01:06.160 +these ways and depending on how much + +01:01:03.880 --> 01:01:09.119 +inference you're playing do like one or + +01:01:06.160 --> 01:01:09.119 +the other of these could be + +01:01:11.240 --> 01:01:16.400 +better + +01:01:12.839 --> 01:01:19.200 +great so now we're going to talk about + +01:01:16.400 --> 01:01:21.160 +sort of methods for introducing human + +01:01:19.200 --> 01:01:22.680 +interaction into the decoding process + +01:01:21.160 --> 01:01:25.240 +and everything we've looked at so far + +01:01:22.680 --> 01:01:26.920 +has been very sort of black booss kind + +01:01:25.240 --> 01:01:28.920 +of hands off right like you give the + +01:01:26.920 --> 01:01:30.640 +model M some input maybe we do some kind + +01:01:28.920 --> 01:01:33.640 
+of manipulation on the decoding side you + +01:01:30.640 --> 01:01:37.160 +get one output back right um but in a + +01:01:33.640 --> 01:01:38.920 +lot of situations where maybe you have + +01:01:37.160 --> 01:01:40.960 +some high-risk application and you need + +01:01:38.920 --> 01:01:42.640 +somebody to be consistently monitoring + +01:01:40.960 --> 01:01:43.799 +and maybe intervening or you're doing + +01:01:42.640 --> 01:01:46.359 +something where you want to do some kind + +01:01:43.799 --> 01:01:47.960 +of human AI collaboration um and you + +01:01:46.359 --> 01:01:49.160 +want to be able to go back and forth or + +01:01:47.960 --> 01:01:50.960 +you want to have a conversation with the + +01:01:49.160 --> 01:01:53.480 +model what you're actually doing is sort + +01:01:50.960 --> 01:01:54.960 +of a series of decodings with human + +01:01:53.480 --> 01:01:56.319 +intervention in between + +01:01:54.960 --> 01:01:58.640 +um and I'm going to talk about a couple + +01:01:56.319 --> 01:02:00.760 +of these strategies briefly I think if + +01:01:58.640 --> 01:02:02.200 +you've used sort of a modern llm you're + +01:02:00.760 --> 01:02:04.440 +probably familiar with at least a few of + +01:02:02.200 --> 01:02:06.720 +them already um we'll sort of put names + +01:02:04.440 --> 01:02:08.359 +to each of them and the set of examples + +01:02:06.720 --> 01:02:10.880 +that we're running with here are from a + +01:02:08.359 --> 01:02:13.880 +paper called wordcraft which is about um + +01:02:10.880 --> 01:02:15.480 +story generation with llm assistants but + +01:02:13.880 --> 01:02:17.559 +these can also be applied sort of more + +01:02:15.480 --> 01:02:20.319 +generally to any kind of task where + +01:02:17.559 --> 01:02:23.799 +you'd want to go back and forth with a + +01:02:20.319 --> 01:02:25.319 +model um the sort of easiest or maybe + +01:02:23.799 --> 01:02:27.599 +simplest place to start here is just + +01:02:25.319 --> 01:02:29.760 +with interleaving text right you can + +01:02:27.599 --> 01:02:31.400 +choose when the model starts and stops + +01:02:29.760 --> 01:02:33.720 +decoding and you can choose when a human + +01:02:31.400 --> 01:02:34.920 +is writing text in between and you can + +01:02:33.720 --> 01:02:36.680 +condition your model in sort of a + +01:02:34.920 --> 01:02:39.240 +mixture of human and model generated + +01:02:36.680 --> 01:02:41.279 +text to choose what to continue next um + +01:02:39.240 --> 01:02:43.680 +you can also do something like have the + +01:02:41.279 --> 01:02:45.319 +model generate a set of text edit that + +01:02:43.680 --> 01:02:47.119 +text in some way maybe the human is + +01:02:45.319 --> 01:02:48.640 +imposing some really subtle constraint + +01:02:47.119 --> 01:02:50.559 +like I want it to sound like my writing + +01:02:48.640 --> 01:02:52.200 +style we don't have a discriminator for + +01:02:50.559 --> 01:02:54.119 +this but the human can sort of modify + +01:02:52.200 --> 01:02:55.680 +the text and then continue generating + +01:02:54.119 --> 01:02:57.160 +from that point and that will influence + +01:02:55.680 --> 01:03:01.160 +the style of the text that continues + +01:02:57.160 --> 01:03:03.240 +being generative um a this case here is + +01:03:01.160 --> 01:03:04.720 +sort of a you're writing a story + +01:03:03.240 --> 01:03:06.520 +together and so you're going back and + +01:03:04.720 --> 01:03:07.799 +forth and editing the text like that but + +01:03:06.520 --> 01:03:10.319 +you can also think of any kind of + +01:03:07.799 --> 01:03:11.920 
+conversation with a model as the same
+
+01:03:10.319 --> 01:03:15.319
+kind of interleaving of text right the
+
+01:03:11.920 --> 01:03:17.000
+model gives some um text you provide
+
+01:03:15.319 --> 01:03:18.599
+some text you go back and forth on like
+
+01:03:17.000 --> 01:03:20.480
+who's providing the text that conditions
+
+01:03:18.599 --> 01:03:23.039
+the
+
+01:03:20.480 --> 01:03:24.880
+model you also might want to do things
+
+01:03:23.039 --> 01:03:26.760
+like more fine-grained replacement
+
+01:03:24.880 --> 01:03:28.559
+so here the person has highlighted some
+
+01:03:26.760 --> 01:03:31.640
+text and said like make this more
+
+01:03:28.559 --> 01:03:33.960
+descriptive or shorten this to two words
+
+01:03:31.640 --> 01:03:36.079
+or maybe you want some additional
+
+01:03:33.960 --> 01:03:38.520
+constraint like can this be happier can
+
+01:03:36.079 --> 01:03:40.960
+this be sad like change the ending or
+
+01:03:38.520 --> 01:03:43.760
+something um you can accomplish this in
+
+01:03:40.960 --> 01:03:45.799
+a variety of ways um here this is done
+
+01:03:43.760 --> 01:03:47.680
+through input manipulation so you prompt
+
+01:03:45.799 --> 01:03:50.359
+your model differently with different
+
+01:03:47.680 --> 01:03:52.200
+constraints you can also do this with an
+
+01:03:50.359 --> 01:03:54.440
+actual modeling change like if you want
+
+01:03:52.200 --> 01:03:56.119
+some kind of infilling model um
+
+01:03:54.440 --> 01:03:57.720
+particularly for things like code this
+
+01:03:56.119 --> 01:04:01.119
+can be helpful so you want context from
+
+01:03:57.720 --> 01:04:02.440
+left and right sides um or you can do
+
+01:04:01.119 --> 01:04:03.799
+this with the decoding changes that we
+
+01:04:02.440 --> 01:04:05.960
+talked about in the previous section
+
+01:04:03.799 --> 01:04:07.799
+right you could add a discriminator for
+
+01:04:05.960 --> 01:04:09.680
+descriptiveness of text or you could do
+
+01:04:07.799 --> 01:04:11.680
+some kind of sample-and-rerank method to
+
+01:04:09.680 --> 01:04:13.880
+recover a more descriptive
+
+01:04:11.680 --> 01:04:16.640
+output another thing that's very common
+
+01:04:13.880 --> 01:04:17.960
+in this space is sampling and reranking
+
+01:04:16.640 --> 01:04:20.839
+methods where the human is the one
+
+01:04:17.960 --> 01:04:23.640
+choosing what to return right so in
+
+01:04:20.839 --> 01:04:25.960
+Wordcraft you see a set of choices and
+
+01:04:23.640 --> 01:04:28.200
+you can choose text to insert but more
+
+01:04:25.960 --> 01:04:30.720
+commonly in something like um ChatGPT
+
+01:04:28.200 --> 01:04:33.160
+or Bard you see this little option to
+
+01:04:30.720 --> 01:04:34.880
+regenerate text right you as the human
+
+01:04:33.160 --> 01:04:36.160
+can reject the text and say like no I
+
+01:04:34.880 --> 01:04:38.680
+don't like this give me a different
+
+01:04:36.160 --> 01:04:41.359
+output and this is also sort of a way of
+
+01:04:38.680 --> 01:04:44.079
+controlling decoding um just by doing it
+
+01:04:41.359 --> 01:04:46.319
+at a human rather than an algorithmic
+
+01:04:44.079 --> 01:04:49.279
+level of course you don't necessarily
+
+01:04:46.319 --> 01:04:51.200
+need a human in here and so um some
+
+01:04:49.279 --> 01:04:52.960
+recent work has looked at functionally
+
+01:04:51.200 --> 01:04:55.799
+using models to make these decisions
+
+01:04:52.960 --> 01:04:57.480
+instead um this is a prompting paper
+
+01:04:55.799 --> 01:05:00.359
+called tree of thought which was sort of
+
+01:04:57.480 --> 01:05:02.279
+very popular on Twitter last summer um
+
+01:05:00.359 --> 01:05:06.119
+and the idea here is that you're going
+
+01:05:02.279 --> 01:05:08.480
+to generate um several smaller sequences
+
+01:05:06.119 --> 01:05:11.200
+um like a couple of sentences a
+
+01:05:08.480 --> 01:05:13.160
+reasoning step or a thought in the paper
+
+01:05:11.200 --> 01:05:14.839
+and you're going to use a model to
+
+01:05:13.160 --> 01:05:16.839
+choose which ones to continue and you
+
+01:05:14.839 --> 01:05:19.000
+can do different sorts of constraints
+
+01:05:16.839 --> 01:05:21.960
+here like I want to sort of rank this
+
+01:05:19.000 --> 01:05:25.079
+set of three or maybe I want to predict
+
+01:05:21.960 --> 01:05:26.839
+if any in this set is wrong like is this
+
+01:05:25.079 --> 01:05:29.400
+a good reasoning step and if the model
+
+01:05:26.839 --> 01:05:32.240
+says no you no longer continue that but
+
+01:05:29.400 --> 01:05:33.559
+the idea here is through prompting
+
+01:05:32.240 --> 01:05:35.640
+really achieving something that's sort
+
+01:05:33.559 --> 01:05:38.960
+of if you squint at it looks a lot like
+
+01:05:35.640 --> 01:05:41.279
+beam search right instead of doing a um
+
+01:05:38.960 --> 01:05:43.160
+like token level thing and making a
+
+01:05:41.279 --> 01:05:45.079
+decision based on likelihood you're
+
+01:05:43.160 --> 01:05:47.880
+generating sort of several sentences at
+
+01:05:45.079 --> 01:05:50.599
+a time and making a decision based on
+
+01:05:47.880 --> 01:05:52.359
+this model's feedback right this signal
+
+01:05:50.599 --> 01:05:53.799
+from an external source which here is a
+
+01:05:52.359 --> 01:05:55.279
+model but could also be a human if
+
+01:05:53.799 --> 01:05:57.920
+you're willing to sort of wait
+
+01:05:55.279 --> 01:06:01.559
+around for them to make the decision and
+
+01:05:57.920 --> 01:06:03.839
+so this is a way of sort of giving
+
+01:06:01.559 --> 01:06:06.640
+feedback on a broader level than single
+
+01:06:03.839 --> 01:06:09.079
+tokens um to guide a decoding process to
+
+01:06:06.640 --> 01:06:09.079
+a final output
+
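+As a rough sketch of that generate-then-judge loop; the two helper functions are stubs standing in for real calls to a language model:
+
+def generate_continuations(prefix, n):
+    # stub: in practice, sample n short "thoughts" from your model
+    return [" candidate thought %d" % i for i in range(n)]
+
+def rate_candidate(prefix, candidate):
+    # stub: in practice, prompt a model to judge the reasoning step
+    return len(candidate)
+
+def tree_of_thought_step(prefix, k=3, keep=1):
+    candidates = generate_continuations(prefix, n=k)
+    ranked = sorted(candidates,
+                    key=lambda c: rate_candidate(prefix, c),
+                    reverse=True)
+    # like beam search, but pruning on a model's judgment of whole
+    # thoughts rather than on token-level likelihood
+    return [prefix + c for c in ranked[:keep]]
+
+beams = tree_of_thought_step("Let's reason step by step.")
+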
+01:06:09.839 --> 01:06:15.079
+so the last couple of things we'll
+
+01:06:12.760 --> 01:06:17.520
+talk about here are sort of practical
+
+01:06:15.079 --> 01:06:19.839
+considerations speed choosing decoding
+
+01:06:17.520 --> 01:06:22.599
+methods um but I can take any questions
+
+01:06:19.839 --> 01:06:22.599
+before that
+
+01:06:23.000 --> 01:06:26.000
+to
+
+01:06:26.760 --> 01:06:32.920
+great so how do you make this fast and
+
+01:06:30.359 --> 01:06:34.920
+in particular if you've ever tried to
+
+01:06:32.920 --> 01:06:36.920
+sort of benchmark performance of a model
+
+01:06:34.920 --> 01:06:38.720
+what you realize pretty quickly is that
+
+01:06:36.920 --> 01:06:40.720
+the vast majority of time is actually
+
+01:06:38.720 --> 01:06:43.440
+spent in decoding you have to generate
+
+01:06:40.720 --> 01:06:45.319
+one token at a time you have to sort of
+
+01:06:43.440 --> 01:06:46.920
+pass that back through the model to get
+
+01:06:45.319 --> 01:06:51.279
+conditioning to generate the next token
+
+01:06:46.920 --> 01:06:53.599
+and so this is um generally fairly slow
+
+01:06:51.279 --> 01:06:54.839
+um this is sort of a a major impediment
+
+01:06:53.599 --> 01:06:56.359
+if you're trying to do something like a
+
+01:06:54.839 --> 01:06:57.839
+streaming application or
+
+01:06:56.359 --> 01:06:59.559
+a chat application where you don't want
+
+01:06:57.839 --> 01:07:03.599
+the person to be waiting around for an
+
+01:06:59.559 --> 01:07:06.799
+answer um one way to do this is a method
+
+01:07:03.599 --> 01:07:09.160
+called speculative decoding and this is a
+
+01:07:06.799 --> 01:07:12.599
+method where you're using a smaller
+
+01:07:09.160 --> 01:07:14.039
+model um not like in contrastive
+
+01:07:12.599 --> 01:07:16.240
+decoding right there we're using a smaller
+
+01:07:14.039 --> 01:07:17.559
+model to decide what not to generate but
+
+01:07:16.240 --> 01:07:20.119
+here we're using a smaller model to
+
+01:07:17.559 --> 01:07:21.880
+decide what to generate um and the
+
+01:07:20.119 --> 01:07:24.960
+idea here is that most of these tokens
+
+01:07:21.880 --> 01:07:26.480
+are maybe not super hard to decide it's
+
+01:07:24.960 --> 01:07:27.400
+just that occasionally the bigger model
+
+01:07:26.480 --> 01:07:30.240
+might want to go in a different
+
+01:07:27.400 --> 01:07:32.920
+direction so these green tokens here are
+
+01:07:30.240 --> 01:07:35.160
+generated by a smaller model our amateur
+
+01:07:32.920 --> 01:07:37.079
+model here and the larger model acts
+
+01:07:35.160 --> 01:07:39.960
+largely as a verifier and what it does
+
+01:07:37.079 --> 01:07:43.000
+is it checks if the output so far is
+
+01:07:39.960 --> 01:07:44.920
+going in a direction that's sort of
+
+01:07:43.000 --> 01:07:46.400
+in distribution for the big model like
+
+01:07:44.920 --> 01:07:49.240
+something that's within the realm of
+
+01:07:46.400 --> 01:07:50.720
+what it might sample and there's sort of
+
+01:07:49.240 --> 01:07:52.400
+an involved discussion in this paper of
+
+01:07:50.720 --> 01:07:55.200
+how you determine if something is in
+
+01:07:52.400 --> 01:07:58.000
+distribution um so here the smaller
+
+01:07:55.200 --> 01:08:00.240
+model generates like five or six tokens
+
+01:07:58.000 --> 01:08:02.559
+that the larger model says okay this
+
+01:08:00.240 --> 01:08:03.680
+looks great until it hits a token that
+
+01:08:02.559 --> 01:08:06.079
+the larger model would not have
+
+01:08:03.680 --> 01:08:07.920
+generated in that circumstance and then
+
+01:08:06.079 --> 01:08:10.279
+the larger model rejects that token and
+
+01:08:07.920 --> 01:08:13.000
+generates a different token instead so
+
+01:08:10.279 --> 01:08:15.440
+you can see here each of these red and
+
+01:08:13.000 --> 01:08:17.600
+then blue sections is where the larger
+
+01:08:15.440 --> 01:08:19.400
+model has rejected something and has to
+
+01:08:17.600 --> 01:08:21.920
+actually autoregressively decode a
+
+01:08:19.400 --> 01:08:24.199
+single token by contrast if you were
+
+01:08:21.920 --> 01:08:27.359
+doing regular decoding at each
+
+01:08:24.199 --> 01:08:28.799
+individual token in this sequence the um
+
+01:08:27.359 --> 01:08:31.640
+larger model would have had to make the
+
+01:08:28.799 --> 01:08:35.359
+full forward pass to decode a token so
+
+01:08:31.640 --> 01:08:37.359
+here rather than doing maybe
+
+01:08:35.359 --> 01:08:39.239
+probably like 20-ish decoding steps to
+
+01:08:37.359 --> 01:08:41.560
+get this full sequence the larger model
+
+01:08:39.239 --> 01:08:43.040
+has done about eight decoding steps and
+
+01:08:41.560 --> 01:08:47.560
+everything else is able to sort of
+
+01:08:43.040 --> 01:08:49.759
+verify a block of tokens at once um
+
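+As a much-simplified sketch of that loop, with stub draft and verifier helpers; the actual algorithm uses a probabilistic accept/reject rule that provably preserves the large model's distribution, so this greedy version only shows the overall shape:
+
+def draft_next(tokens):
+    # stub: the small "amateur" model proposes the next token
+    return "the"
+
+def large_model_choices(context, block):
+    # stub: one big-model forward pass scores every position of the
+    # drafted block and returns its preferred token at each position
+    return ["the"] * len(block)
+
+def speculative_step(context, n_draft=5):
+    drafted = []
+    for _ in range(n_draft):          # cheap small-model proposals
+        drafted.append(draft_next(context + drafted))
+    preferred = large_model_choices(context, drafted)
+    accepted = []
+    for d, p in zip(drafted, preferred):
+        if d == p:
+            accepted.append(d)        # big model agrees, keep it
+        else:
+            accepted.append(p)        # big model overrides and we
+            break                     # stop to redraft from here
+    return accepted
+
+block = speculative_step(["the", "dog"])
+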
+01:08:47.560 --> 01:08:51.400
+this sort of idea of like using a smaller
+
+01:08:49.759 --> 01:08:54.120
+model as an approximation is pretty
+
+01:08:51.400 --> 01:08:55.839
+powerful um and there's some great um
+
+01:08:54.120 --> 01:08:58.159
+follow-up work on speculative decoding and
+
+01:08:55.839 --> 01:08:59.000
+sort of ways to do this faster or with
+
+01:08:58.159 --> 01:09:01.520
+stronger
+
+01:08:59.000 --> 01:09:04.839
+guarantees um but this general concept
+
+01:09:01.520 --> 01:09:06.920
+is I would bet probably how models like
+
+01:09:04.839 --> 01:09:09.080
+um part of how models like ChatGPT or
+
+01:09:06.920 --> 01:09:11.159
+Bard are sort of generating text so
+
+01:09:09.080 --> 01:09:13.120
+quickly um there's another element here
+
+01:09:11.159 --> 01:09:16.159
+which is like the model architecture
+
+01:09:13.120 --> 01:09:17.679
+being sparse but I think that um if you
+
+01:09:16.159 --> 01:09:19.920
+folks talk about mixture of experts we
+
+01:09:17.679 --> 01:09:22.880
+might get into that
+
+01:09:19.920 --> 01:09:26.080
+later um how do you do this kind of fast
+
+01:09:22.880 --> 01:09:27.679
+inference um libraries like vLLM will
+
+01:09:26.080 --> 01:09:29.440
+implement things I think implement
+
+01:09:27.679 --> 01:09:32.199
+speculative decoding and implement sort
+
+01:09:29.440 --> 01:09:34.400
+of hardware-level tricks like choosing
+
+01:09:32.199 --> 01:09:37.799
+which attention um weights to cache where
+
+01:09:34.400 --> 01:09:39.199
+to do faster inference um there's also
+
+01:09:37.799 --> 01:09:40.799
+great libraries for doing things like
+
+01:09:39.199 --> 01:09:42.679
+constrained decoding so things like
+
+01:09:40.799 --> 01:09:45.520
+Outlines will let you set constraints
+
+01:09:42.679 --> 01:09:46.960
+like I want my outputs to all be JSON
+
+01:09:45.520 --> 01:09:48.640
+and it will impose additional
+
+01:09:46.960 --> 01:09:50.839
+constraints during decoding to ensure
+
+01:09:48.640 --> 01:09:52.279
+that that happens and then pretty much
+
+01:09:50.839 --> 01:09:53.960
+anything in these first couple of
+
+01:09:52.279 --> 01:09:56.560
+sections we talked about um like
+
+01:09:53.960 --> 01:09:58.440
+sampling mode-seeking search and
+
+01:09:56.560 --> 01:10:00.400
+sometimes MBR will also be implemented
+
+01:09:58.440 --> 01:10:05.080
+in pretty much any library you use for
+
+01:10:00.400 --> 01:10:07.679
+models like Hugging Face fairseq or
+
+01:10:05.080 --> 01:10:10.000
+JAX so to kind of take a step back
+
+01:10:07.679 --> 01:10:12.520
+here as we get to the end of class
+
+01:10:10.000 --> 01:10:15.640
+um there's really two broad categories
+
+01:10:12.520 --> 01:10:17.679
+of methods that we talked about today um
+
+01:10:15.640 --> 01:10:20.360
+given our initial distribution from the
+
+01:10:17.679 --> 01:10:22.600
+model for a next token given our
+
+01:10:20.360 --> 01:10:24.920
+input we can do two kind of different
+
+01:10:22.600 --> 01:10:26.400
+things we can at each individual decoding
+
+01:10:24.920 --> 01:10:28.360
+step choose some kind of function to
+
+01:10:26.400 --> 01:10:30.280
+manipulate this distribution and this
+
+01:10:28.360 --> 01:10:32.280
+could be something like
+
+01:10:30.280 --> 01:10:33.960
+cutting off the long tail like modifying
+
+01:10:32.280 --> 01:10:36.239
+the temperature or adding external
+
+01:10:33.960 --> 01:10:38.400
+information from another model or from a
+
+01:10:36.239 --> 01:10:41.480
+discriminator model
+
+01:10:38.400 --> 01:10:43.159
+right or we can over a larger part of
+
+01:10:41.480 --> 01:10:45.120
+the decoding process choose some
+
+01:10:43.159 --> 
01:10:47.120 +function to choose between sequences and + +01:10:45.120 --> 01:10:49.199 +this could be like choosing between next + +01:10:47.120 --> 01:10:51.679 +tokens in beam search when we pruning + +01:10:49.199 --> 01:10:53.120 +beams this could be choosing from Full + +01:10:51.679 --> 01:10:56.760 +sequences when we're doing something + +01:10:53.120 --> 01:10:58.040 +like MB r or sample and rerank methods + +01:10:56.760 --> 01:11:00.239 +um and you can do these two things in + +01:10:58.040 --> 01:11:01.440 +parallel right you can choose like a + +01:11:00.239 --> 01:11:03.159 +different function to manipulate the + +01:11:01.440 --> 01:11:04.760 +next token distribution and then some + +01:11:03.159 --> 01:11:06.199 +sort of like broader thing to choose + +01:11:04.760 --> 01:11:08.280 +what you do with the full sequences you + +01:11:06.199 --> 01:11:09.920 +get out of that distribution um but + +01:11:08.280 --> 01:11:12.040 +there are sort of these two broad + +01:11:09.920 --> 01:11:14.880 +categories of + +01:11:12.040 --> 01:11:17.440 +decoding so what should you take away + +01:11:14.880 --> 01:11:19.400 +from this um I think a couple of things + +01:11:17.440 --> 01:11:21.000 +you decoding methods can be really + +01:11:19.400 --> 01:11:23.040 +powerful to control features of your + +01:11:21.000 --> 01:11:25.040 +output if you want to impose particular + +01:11:23.040 --> 01:11:26.679 +constraints if you want to factor in + +01:11:25.040 --> 01:11:27.960 +reward function or factor in a data + +01:11:26.679 --> 01:11:31.800 +source that you maybe didn't have at + +01:11:27.960 --> 01:11:34.239 +training time um and to some extent you + +01:11:31.800 --> 01:11:36.120 +can do a more expensive decoding method + +01:11:34.239 --> 01:11:37.520 +to compensate for a worse model or to + +01:11:36.120 --> 01:11:39.080 +compensate for a model that hasn't been + +01:11:37.520 --> 01:11:42.480 +trained to do exactly the thing you want + +01:11:39.080 --> 01:11:44.800 +it to do um of course you can't you know + +01:11:42.480 --> 01:11:47.679 +use this to make gpt2 small as good as + +01:11:44.800 --> 01:11:49.840 +gp4 but you can sort of for some points + +01:11:47.679 --> 01:11:51.679 +in the middle spend more um computed + +01:11:49.840 --> 01:11:53.159 +inference time to pay for not spending + +01:11:51.679 --> 01:11:55.639 +as much computed training time and + +01:11:53.159 --> 01:11:57.440 +particularly if you don't have access to + +01:11:55.639 --> 01:11:59.400 +the kind of giant gpus you might need to + +01:11:57.440 --> 01:12:01.840 +continue fine-tuning your model this can + +01:11:59.400 --> 01:12:05.679 +be a really a really powerful + +01:12:01.840 --> 01:12:07.800 +alternative um yeah so say like you're + +01:12:05.679 --> 01:12:12.560 +building like something in production + +01:12:07.800 --> 01:12:15.920 +right people usually do um sort of like + +01:12:12.560 --> 01:12:18.760 +that you know inance before cling to see + +01:12:15.920 --> 01:12:21.840 +if it's G to work at do + +01:12:18.760 --> 01:12:25.080 +that like try to see like if you have a + +01:12:21.840 --> 01:12:26.800 +model that you can do some kind of + +01:12:25.080 --> 01:12:29.199 +expensive decoding method for to get + +01:12:26.800 --> 01:12:31.120 +good outputs is it then worth try + +01:12:29.199 --> 01:12:34.000 +training that model right um there's + +01:12:31.120 --> 01:12:36.560 +some great recent work on like training + +01:12:34.000 --> 01:12:39.400 +models to produce the same kind of + 
+01:12:36.560 --> 01:12:40.760 +outputs you get out of MVR without um + +01:12:39.400 --> 01:12:43.239 +actually doing a really expensive + +01:12:40.760 --> 01:12:45.600 +inference Stu so at some level like yeah + +01:12:43.239 --> 01:12:48.120 +you can decide like this model is good + +01:12:45.600 --> 01:12:49.920 +enough with its expensive method we can + +01:12:48.120 --> 01:12:50.920 +try to make it cheaper by spending more + +01:12:49.920 --> 01:12:53.960 +money on + +01:12:50.920 --> 01:12:55.520 +funing um but that's not it's not like + +01:12:53.960 --> 01:12:57.320 +necessarily guaranteed that that's will + +01:12:55.520 --> 01:13:00.679 +be the case + +01:12:57.320 --> 01:13:03.040 +Okay um the methods that we looked at + +01:13:00.679 --> 01:13:06.199 +have these sort of trade-offs in quality + +01:13:03.040 --> 01:13:07.960 +in diversity and in inference speed so + +01:13:06.199 --> 01:13:10.320 +sampling from your model directly is + +01:13:07.960 --> 01:13:13.120 +pretty fast to do you get really diverse + +01:13:10.320 --> 01:13:14.960 +outputs but it tends to be lower quality + +01:13:13.120 --> 01:13:16.320 +um whereas more restricted sampling + +01:13:14.960 --> 01:13:18.520 +these sort of mode seeking search + +01:13:16.320 --> 01:13:20.639 +methods tend to be higher quality but + +01:13:18.520 --> 01:13:21.880 +you get less less diverse outputs and + +01:13:20.639 --> 01:13:23.560 +that's why we have these methods like + +01:13:21.880 --> 01:13:26.719 +diverse and stochastic resarch to + +01:13:23.560 --> 01:13:28.760 +counter this a bit um and then methods + +01:13:26.719 --> 01:13:30.400 +like MBR or other sample and rerank + +01:13:28.760 --> 01:13:32.679 +methods tend to be very high quality + +01:13:30.400 --> 01:13:34.280 +outputs but you pay for this with much + +01:13:32.679 --> 01:13:36.520 +slower inference + +01:13:34.280 --> 01:13:38.679 +time um but if I can kind of convince + +01:13:36.520 --> 01:13:41.560 +you of anything today I think it would + +01:13:38.679 --> 01:13:43.600 +be this which is that these the decoding + +01:13:41.560 --> 01:13:45.600 +method you choose for your model has a + +01:13:43.600 --> 01:13:47.960 +really strong impact on performance + +01:13:45.600 --> 01:13:49.520 +Downstream um you can get radically + +01:13:47.960 --> 01:13:51.239 +different results out of the same model + +01:13:49.520 --> 01:13:52.639 +without doing any additional training + +01:13:51.239 --> 01:13:55.120 +just by choosing the different decoding + +01:13:52.639 --> 01:13:57.880 +method that you might want to try and so + +01:13:55.120 --> 01:13:59.679 +when you sort of let your libraries pick + +01:13:57.880 --> 01:14:01.159 +a quote unquote like sensible default + +01:13:59.679 --> 01:14:03.760 +you can leave a lot of performance on + +01:14:01.159 --> 01:14:06.480 +the train on the table so I encourage + +01:14:03.760 --> 01:14:08.199 +you folks that if if you're um deploying + +01:14:06.480 --> 01:14:09.760 +models in production or if you're doing + +01:14:08.199 --> 01:14:10.840 +research or you know maybe look at your + +01:14:09.760 --> 01:14:13.280 +outputs and your model has some + +01:14:10.840 --> 01:14:15.320 +undesirable behaviors to consider if the + +01:14:13.280 --> 01:14:17.800 +decoding method you're using is imposing + +01:14:15.320 --> 01:14:20.000 +some kind of Intuition or some kind of + +01:14:17.800 --> 01:14:21.840 +inductive bias and if you can alter that + +01:14:20.000 --> 01:14:24.239 +to get some of these behaviors without + 
+01:14:21.840 --> 01:14:26.320 +resorting to additional training + +01:14:24.239 --> 01:14:28.719 +um and that's sort of the end I can take + +01:14:26.320 --> 01:14:28.719 +any other + +01:14:34.320 --> 01:14:38.719 +questions okay um yeah I guess we don't + +01:14:37.199 --> 01:14:41.360 +have any questions we can take questions + +01:14:38.719 --> 01:14:45.560 +up here um one one thing I'd like to + +01:14:41.360 --> 01:14:47.679 +point out also is that um I I love the + +01:14:45.560 --> 01:14:50.760 +final thing that Amanda said here + +01:14:47.679 --> 01:14:54.199 +another thing is that my impression from + +01:14:50.760 --> 01:14:56.400 +dealing with things is that it's a lot + +01:14:54.199 --> 01:14:58.159 +easier to predict the effect of + +01:14:56.400 --> 01:14:59.920 +inference time decoding time + +01:14:58.159 --> 01:15:01.120 +manipulations than it is to predict the + +01:14:59.920 --> 01:15:04.239 +effect of + +01:15:01.120 --> 01:15:07.480 +like um fine-tuning or something like + +01:15:04.239 --> 01:15:11.040 +this like just to give a an + +01:15:07.480 --> 01:15:12.480 +example beam search with the maximum + +01:15:11.040 --> 01:15:15.199 +likelihood trained model tends to + +01:15:12.480 --> 01:15:16.719 +generate things that are shorter um + +01:15:15.199 --> 01:15:18.040 +whereas greedy decoding tends to + +01:15:16.719 --> 01:15:19.639 +generate things that are longer and + +01:15:18.040 --> 01:15:22.000 +repeat more often and stuff like that + +01:15:19.639 --> 01:15:25.920 +and if you try a few methods like this + +01:15:22.000 --> 01:15:28.920 +you'll quickly find these kind of qus of + +01:15:25.920 --> 01:15:31.320 +each of the methods and so by forming a + +01:15:28.920 --> 01:15:32.719 +good intuition of this you will also + +01:15:31.320 --> 01:15:34.000 +know how to fix these problems when you + +01:15:32.719 --> 01:15:35.600 +see them it's like oh my model's + +01:15:34.000 --> 01:15:37.320 +repeating itself a lot maybe I shouldn't + +01:15:35.600 --> 01:15:38.679 +be using grey search I should be + +01:15:37.320 --> 01:15:41.199 +switching over to something else or + +01:15:38.679 --> 01:15:43.320 +something like that so um this is a good + +01:15:41.199 --> 01:15:45.880 +thing to know and play around with yeah + +01:15:43.320 --> 01:15:47.239 +and I think pretty underutilized too um + +01:15:45.880 --> 01:15:48.880 +a lot of folks will not think about a + +01:15:47.239 --> 01:15:50.920 +decoding method to fix their problem + +01:15:48.880 --> 01:15:52.280 +even if like your model might actually + +01:15:50.920 --> 01:15:53.760 +be perfectly fine under a different + +01:15:52.280 --> 01:15:56.000 +decoding strategy + +01:15:53.760 --> 01:15:58.320 +great okay thanks a lot everyone you can + +01:15:56.000 --> 01:15:58.320 +uh + +01:16:02.280 --> 01:16:05.280 +finish diff --git a/CMU Advanced NLP 2024 (7) Prompting/CMU Advanced NLP 2024 (7) Prompting.mp4 b/CMU Advanced NLP 2024 (7) Prompting/CMU Advanced NLP 2024 (7) Prompting.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cf4f3d97bd18f968a433eb12a26ebabf19b8700f --- /dev/null +++ b/CMU Advanced NLP 2024 (7) Prompting/CMU Advanced NLP 2024 (7) Prompting.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:792f063b2ab84937c54e15659894d477fcb6c4beace3a8e6fd432d838dc636a3 +size 67999737 diff --git a/CMU Advanced NLP 2024 (7) Prompting/metadata.json b/CMU Advanced NLP 2024 (7) Prompting/metadata.json new file mode 100644 index 
0000000000000000000000000000000000000000..f571a4792b486a2815a5372227d223e6ee9ea6fd --- /dev/null +++ b/CMU Advanced NLP 2024 (7) Prompting/metadata.json @@ -0,0 +1,4 @@ +{ + "url": "https://www.youtube.com/watch?v=T1YrTbTkUb4", + "title": "CMU Advanced NLP 2024 (7) Prompting" +} \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (7) Prompting/transcript.srt b/CMU Advanced NLP 2024 (7) Prompting/transcript.srt new file mode 100644 index 0000000000000000000000000000000000000000..093fb396ca9f87cf03cbcd79525c4de08498bdce --- /dev/null +++ b/CMU Advanced NLP 2024 (7) Prompting/transcript.srt @@ -0,0 +1,5875 @@ +1 +00:00:01,319 --> 00:00:07,560 +um today I want to talk about prompting + +2 +00:00:03,919 --> 00:00:09,639 +and uh prompting is kind of a new uh + +3 +00:00:07,560 --> 00:00:11,320 +Paradigm as of a few years ago with + +4 +00:00:09,639 --> 00:00:15,120 +interacting with models it's now kind of + +5 +00:00:11,320 --> 00:00:16,880 +the standard uh in doing so and + +6 +00:00:15,120 --> 00:00:19,880 +basically what we do is we encourage a + +7 +00:00:16,880 --> 00:00:21,840 +pre-trained model to make predictions by + +8 +00:00:19,880 --> 00:00:24,039 +providing a textual prompt specifying + +9 +00:00:21,840 --> 00:00:25,960 +the task to be done this is how you + +10 +00:00:24,039 --> 00:00:28,960 +always interact with chat GPT or + +11 +00:00:25,960 --> 00:00:33,200 +anything else like this + +12 +00:00:28,960 --> 00:00:36,200 +um so prompting fundamentals uh the way + +13 +00:00:33,200 --> 00:00:38,360 +that basic prompting works is you append + +14 +00:00:36,200 --> 00:00:42,079 +a textual string to the beginning of the + +15 +00:00:38,360 --> 00:00:44,079 +output and you complete it and exactly + +16 +00:00:42,079 --> 00:00:45,800 +how you complete it can be based on any + +17 +00:00:44,079 --> 00:00:48,800 +of the generation methods that we talked + +18 +00:00:45,800 --> 00:00:51,559 +about in the previous class uh you know + +19 +00:00:48,800 --> 00:00:55,160 +beam search it can be uh sampling it can + +20 +00:00:51,559 --> 00:00:58,480 +be MBR or self-consistency or whatever + +21 +00:00:55,160 --> 00:01:00,960 +else um so I I put in when a dog sees a + +22 +00:00:58,480 --> 00:01:03,680 +squirrel it will usually + +23 +00:01:00,960 --> 00:01:06,280 +um into gpt2 small which is a very small + +24 +00:01:03,680 --> 00:01:08,960 +language model says Be Afraid of + +25 +00:01:06,280 --> 00:01:10,560 +Anything unusual as an exception that's + +26 +00:01:08,960 --> 00:01:13,720 +when a squirrel is usually afraid to + +27 +00:01:10,560 --> 00:01:16,280 +bitee um so as you can see if the model + +28 +00:01:13,720 --> 00:01:19,560 +is not super great you get a kind of not + +29 +00:01:16,280 --> 00:01:24,119 +very great response also um but then I + +30 +00:01:19,560 --> 00:01:25,960 +CED it into gp2 XL and uh what it says + +31 +00:01:24,119 --> 00:01:28,159 +when a dog sees a squirrel it will + +32 +00:01:25,960 --> 00:01:30,640 +usually lick the squirrel it will also + +33 +00:01:28,159 --> 00:01:34,000 +touch its nose to the squirrel the tail + +34 +00:01:30,640 --> 00:01:37,880 +and nose if it can um which might be + +35 +00:01:34,000 --> 00:01:40,280 +true um one thing I I should note is + +36 +00:01:37,880 --> 00:01:43,040 +when I generated these I used uh like + +37 +00:01:40,280 --> 00:01:45,200 +actual regular ancestral sampling so I + +38 +00:01:43,040 --> 00:01:47,159 +set the temperature to one I didn't do + +39 +00:01:45,200 --> 00:01:49,600 +top feed didn't do 
top K or anything + +40 +00:01:47,159 --> 00:01:51,040 +like this so this is a raw view of what + +41 +00:01:49,600 --> 00:01:53,799 +the language model thinks is like + +42 +00:01:51,040 --> 00:01:58,479 +actually a reasonable answer um if I + +43 +00:01:53,799 --> 00:02:00,159 +modified the code to do something else + +44 +00:01:58,479 --> 00:02:02,560 +actually maybe I can I can do that that + +45 +00:02:00,159 --> 00:02:04,960 +right now but if I modified the code to + +46 +00:02:02,560 --> 00:02:08,879 +use a + +47 +00:02:04,960 --> 00:02:12,119 +different output we can actually see uh + +48 +00:02:08,879 --> 00:02:12,119 +the different result that we + +49 +00:02:13,599 --> 00:02:17,959 +get since I I have it here + +50 +00:02:18,360 --> 00:02:23,879 +anyway actually sorry I'll need to + +51 +00:02:20,360 --> 00:02:27,239 +modify the code on my my screen here + +52 +00:02:23,879 --> 00:02:32,120 +um so I will + +53 +00:02:27,239 --> 00:02:35,040 +set uh top K to 50 top P to + +54 +00:02:32,120 --> 00:02:38,360 +0.95 so you see I I changed the + +55 +00:02:35,040 --> 00:02:38,360 +generation parameters + +56 +00:02:38,760 --> 00:02:46,400 +here and I'll uh run all of + +57 +00:02:43,159 --> 00:02:50,319 +them you can see the uh the result that + +58 +00:02:46,400 --> 00:02:51,840 +we get in a little bit but basically um + +59 +00:02:50,319 --> 00:02:54,800 +so this is the standard method for + +60 +00:02:51,840 --> 00:02:57,319 +prompting I intentionally use gpt2 small + +61 +00:02:54,800 --> 00:02:58,800 +and gpt2 XL here because these are raw + +62 +00:02:57,319 --> 00:03:01,879 +based language models they were just + +63 +00:02:58,800 --> 00:03:05,440 +pre-trained as language models and so + +64 +00:03:01,879 --> 00:03:06,920 +when we prompt them we're getting a + +65 +00:03:05,440 --> 00:03:09,200 +language model that was just trained on + +66 +00:03:06,920 --> 00:03:12,280 +lots of texts view of what is likely + +67 +00:03:09,200 --> 00:03:13,760 +next text um there are other ways to + +68 +00:03:12,280 --> 00:03:15,599 +train language models like instruction + +69 +00:03:13,760 --> 00:03:18,040 +tuning and rlf which I'm going to be + +70 +00:03:15,599 --> 00:03:19,480 +talking in future classes and if that's + +71 +00:03:18,040 --> 00:03:21,760 +the case you might get a different + +72 +00:03:19,480 --> 00:03:23,159 +response here so when a dog sees a + +73 +00:03:21,760 --> 00:03:25,720 +squirrel it will usually get angry + +74 +00:03:23,159 --> 00:03:27,319 +scratched the squirrel and run off uh + +75 +00:03:25,720 --> 00:03:29,080 +some dogs may also attempt to capture + +76 +00:03:27,319 --> 00:03:30,799 +the squirrel or attempt to eat it dogs + +77 +00:03:29,080 --> 00:03:32,599 +will often to pick up the squirrel and + +78 +00:03:30,799 --> 00:03:36,400 +eat it + +79 +00:03:32,599 --> 00:03:40,680 +for it was more uh more violent than I + +80 +00:03:36,400 --> 00:03:44,280 +expected any um + +81 +00:03:40,680 --> 00:03:45,720 +so but anyway I think that like actually + +82 +00:03:44,280 --> 00:03:47,080 +you can see that when I used the + +83 +00:03:45,720 --> 00:03:48,920 +different generation parameters it + +84 +00:03:47,080 --> 00:03:51,480 +actually gave me something that was + +85 +00:03:48,920 --> 00:03:54,319 +maybe more typical than lick so lick is + +86 +00:03:51,480 --> 00:03:56,840 +maybe a kind of unusual uh answer here + +87 +00:03:54,319 --> 00:03:58,680 +but anyway + +88 +00:03:56,840 --> 00:04:03,040 +cool + +89 +00:03:58,680 --> 00:04:05,680 +so that's 
+89 +00:03:58,680 --> 00:04:05,680 +so that's the basic idea of prompting we + +90 +00:04:03,040 --> 00:04:08,480 +tend to use prompting to try to solve + +91 +00:04:05,680 --> 00:04:10,680 +problems also so it's not just to + +92 +00:04:08,480 --> 00:04:14,200 +complete text although completing text + +93 +00:04:10,680 --> 00:04:17,320 +is useful and important like I complete + +94 +00:04:14,200 --> 00:04:19,199 +text in my Gmail all the time uh you + +95 +00:04:17,320 --> 00:04:20,600 +know it it's constantly giving me + +96 +00:04:19,199 --> 00:04:23,440 +suggestions about what I should write + +97 +00:04:20,600 --> 00:04:24,800 +next and I do tab autocomplete um you + +98 +00:04:23,440 --> 00:04:28,040 +know on your phone you're doing that + +99 +00:04:24,800 --> 00:04:29,919 +that's also using a language model um + +100 +00:04:28,040 --> 00:04:32,320 +but very often we'll use prompting to do + +101 +00:04:29,919 --> 00:04:34,440 +things other than just completing texts + +102 +00:04:32,320 --> 00:04:36,000 +and when we do this uh this is kind of + +103 +00:04:34,440 --> 00:04:38,199 +the standard workflow for how we solve + +104 +00:04:36,000 --> 00:04:41,280 +NLP tasks with prompting the way we do + +105 +00:04:38,199 --> 00:04:43,360 +this is we fill in a prompt template + +106 +00:04:41,280 --> 00:04:46,080 +predict the answer and post-process the + +107 +00:04:43,360 --> 00:04:46,080 +answer in some + +108 +00:04:46,320 --> 00:04:51,880 +way so prompt templates are templates + +109 +00:04:49,280 --> 00:04:55,280 +that you will + +110 +00:04:51,880 --> 00:04:57,479 +fill in with an actual input and so if + +111 +00:04:55,280 --> 00:05:00,479 +we have an input X which is something + +112 +00:04:57,479 --> 00:05:04,880 +like I love this movie our template will + +113 +00:05:00,479 --> 00:05:08,360 +be something like X overall it was Z or + +114 +00:05:04,880 --> 00:05:10,680 +overall it was and so if we do that when + +115 +00:05:08,360 --> 00:05:13,320 +we actually want to make a prediction we + +116 +00:05:10,680 --> 00:05:14,840 +will uh convert this into the actual + +117 +00:05:13,320 --> 00:05:16,880 +prompt we feed into the language model + +118 +00:05:14,840 --> 00:05:20,639 +by filling in the template um I love + +119 +00:05:16,880 --> 00:05:24,919 +this movie overall it was blank and then + +120 +00:05:20,639 --> 00:05:24,919 +fill this uh continuation in
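A minimal sketch of this fill-in step in Python; the template matches the slide, while the placeholder convention is just for illustration:

template = "[X] Overall, it was"   # "Z" is the continuation the model will fill in
x = "I love this movie."           # the actual input

# Substitute the input into the template to get the prompt fed to the model.
prompt = template.replace("[X]", x)
print(prompt)  # -> "I love this movie. Overall, it was"
# The language model then continues this string, e.g. with "fantastic".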
+121 +00:05:25,840 --> 00:05:31,919 +a particular variety uh + +122 +00:05:30,000 --> 00:05:34,039 +that we use very broadly nowadays + +123 +00:05:31,919 --> 00:05:36,240 +because a lot of models are trained as + +124 +00:05:34,039 --> 00:05:38,240 +chatbots um but actually even if they're + +125 +00:05:36,240 --> 00:05:41,199 +not trained as chatbots this still works + +126 +00:05:38,240 --> 00:05:46,199 +to some extent um is a chat + +127 +00:05:41,199 --> 00:05:49,919 +prompt and so usually the way we we do + +128 +00:05:46,199 --> 00:05:53,240 +this is we specify inputs in a format + +129 +00:05:49,919 --> 00:05:55,800 +called the OpenAI messages format and + +130 +00:05:53,240 --> 00:05:58,199 +uh this is this is what it looks like + +131 +00:05:55,800 --> 00:06:03,759 +we have a + +132 +00:05:58,199 --> 00:06:07,680 +list of messages each message is given a + +133 +00:06:03,759 --> 00:06:10,280 +role and content and here so we have the + +134 +00:06:07,680 --> 00:06:12,479 +role of system and the content is please + +135 +00:06:10,280 --> 00:06:15,319 +classify movie reviews as positive or + +136 +00:06:12,479 --> 00:06:17,400 +negative uh then we have the role user + +137 +00:06:15,319 --> 00:06:21,039 +uh this movie is a + +138 +00:06:17,400 --> 00:06:24,919 +banger um and then we have the uh + +139 +00:06:21,039 --> 00:06:27,240 +assistant message uh so as for the roles we + +140 +00:06:24,919 --> 00:06:29,639 +have the system and the system is a + +141 +00:06:27,240 --> 00:06:31,560 +message provided to the system to + +142 +00:06:29,639 --> 00:06:33,560 +influence its behavior it's to explain + +143 +00:06:31,560 --> 00:06:39,240 +to it + +144 +00:06:33,560 --> 00:06:40,840 +like how it should be working um and so + +145 +00:06:39,240 --> 00:06:43,199 +you can see that this is explaining to + +146 +00:06:40,840 --> 00:06:46,400 +the system how it should be working user + +147 +00:06:43,199 --> 00:06:48,680 +is the message input by the user um and + +148 +00:06:46,400 --> 00:06:51,160 +so this could be just a single message + +149 +00:06:48,680 --> 00:06:53,520 +or if you have a multi-turn dialogue it + +150 +00:06:51,160 --> 00:06:55,080 +can be like user and then assistant and + +151 +00:06:53,520 --> 00:06:56,680 +then user and then assistant and then + +152 +00:06:55,080 --> 00:06:59,400 +user and then assistant and that makes + +153 +00:06:56,680 --> 00:07:00,680 +it clear that it's a multi-turn dialogue + +154 +00:06:59,400 --> 00:07:02,800 +so if you have a multi-turn dialogue in + +155 +00:07:00,680 --> 00:07:06,319 +ChatGPT that's how they're feeding it + +156 +00:07:02,800 --> 00:07:06,319 +in um into the system
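A sketch of the OpenAI messages format as it is usually written in Python; the roles and contents are the ones from the slide:

messages = [
    {"role": "system",
     "content": "Please classify movie reviews as positive or negative."},
    {"role": "user",
     "content": "This movie is a banger."},
    # For a multi-turn dialogue, keep alternating roles:
    # {"role": "assistant", "content": "positive"},
    # {"role": "user", "content": "..."},
]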
+157 +00:07:06,479 --> 00:07:12,440 +so what's happening behind the + +158 +00:07:08,840 --> 00:07:14,160 +scenes with these chat prompts basically + +159 +00:07:12,440 --> 00:07:17,720 +they're being converted into token + +160 +00:07:14,160 --> 00:07:19,680 +strings and then fed into the model so + +161 +00:07:17,720 --> 00:07:21,800 +despite the fact that this is fed in in + +162 +00:07:19,680 --> 00:07:23,560 +this format and it makes you think that + +163 +00:07:21,800 --> 00:07:25,120 +maybe something special is going on + +164 +00:07:23,560 --> 00:07:28,360 +actually in most cases these are just + +165 +00:07:25,120 --> 00:07:30,199 +being fed into the model uh as a prompt + +166 +00:07:28,360 --> 00:07:34,560 +so these are just kind of a special + +167 +00:07:30,199 --> 00:07:36,879 +version of a uh of a template so here we + +168 +00:07:34,560 --> 00:07:40,560 +have um this is what the Llama template + +169 +00:07:36,879 --> 00:07:43,319 +looks like so basically you have um + +170 +00:07:40,560 --> 00:07:46,560 +square bracket INST and then for the + +171 +00:07:43,319 --> 00:07:49,280 +system message it's like um like angle + +172 +00:07:46,560 --> 00:07:51,240 +bracket uh angle bracket SYS uh close + +173 +00:07:49,280 --> 00:07:53,720 +angle bracket close angle bracket and + +174 +00:07:51,240 --> 00:07:55,759 +then the actual system message and then + +175 +00:07:53,720 --> 00:07:58,479 +you have uh this closing out the system + +176 +00:07:55,759 --> 00:08:01,240 +message this closing out the instruction + +177 +00:07:58,479 --> 00:08:04,120 +then the user is surrounded by INST and + +178 +00:08:01,240 --> 00:08:06,599 +then the assistant is just like a + +179 +00:08:04,120 --> 00:08:08,400 +regular string so this is what the + +180 +00:08:06,599 --> 00:08:12,319 +actual textual string that's fed into + +181 +00:08:08,400 --> 00:08:14,199 +Llama chat models is we can contrast + +182 +00:08:12,319 --> 00:08:19,440 +that to some other models so Alpaca + +183 +00:08:14,199 --> 00:08:22,400 +looks like this um uh so we have like + +184 +00:08:19,440 --> 00:08:24,879 +hash instruction colon and then the + +185 +00:08:22,400 --> 00:08:26,639 +instruction for the user there there's + +186 +00:08:24,879 --> 00:08:28,879 +no distinction between system and user + +187 +00:08:26,639 --> 00:08:31,960 +so it's like hash instruction and then + +188 +00:08:28,879 --> 00:08:35,240 +the user message and then hash response + +189 +00:08:31,960 --> 00:08:37,760 +and then the assistant so it's not super + +190 +00:08:35,240 --> 00:08:39,640 +important which one we use here um the + +191 +00:08:37,760 --> 00:08:41,919 +important thing is that this matches + +192 +00:08:39,640 --> 00:08:44,039 +with what uh the model was trained on and + +193 +00:08:41,919 --> 00:08:46,640 +I'll show you some examples uh you know + +194 +00:08:44,039 --> 00:08:50,680 +I'll talk about that in more detail + +195 +00:08:46,640 --> 00:08:52,880 +later and we have a reference uh that I + +196 +00:08:50,680 --> 00:08:56,600 +got this uh + +197 +00:08:52,880 --> 00:08:58,519 +from and there's this toolkit that I um + +198 +00:08:56,600 --> 00:09:02,680 +I rather like recently it's called + +199 +00:08:58,519 --> 00:09:05,079 +LiteLLM it makes it very easy to uh query + +200 +00:09:02,680 --> 00:09:07,240 +different LLMs uh and kind of like + +201 +00:09:05,079 --> 00:09:09,320 +unifies things so basically you can + +202 +00:09:07,240 --> 00:09:11,800 +query many different types of LLMs like + +203 +00:09:09,320 --> 00:09:14,440 +OpenAI or open-source models or other + +204 +00:09:11,800 --> 00:09:17,079 +things like that and what happens behind + +205 +00:09:14,440 --> 00:09:19,120 +the scenes is it basically takes um the + +206 +00:09:17,079 --> 00:09:20,839 +OpenAI messages format and converts it + +207 +00:09:19,120 --> 00:09:22,880 +into the appropriate prompt format for + +208 +00:09:20,839 --> 00:09:24,680 +whatever model you're using or the + +209 +00:09:22,880 --> 00:09:27,120 +appropriate API calls for whatever thing + +210 +00:09:24,680 --> 00:09:29,800 +you're using but + +211 +00:09:27,120 --> 00:09:31,399 +um this here basically + +212 +00:09:29,800 --> 00:09:33,800 +um if you click through this link it shows + +213 +00:09:31,399 --> 00:09:35,959 +you okay this is what it looks like for + +214 +00:09:33,800 --> 00:09:37,880 +Alpaca um so you have the instruction + +215 +00:09:35,959 --> 00:09:40,920 +instruction response this is what it + +216 +00:09:37,880 --> 00:09:44,880 +looks like for Llama 2 chat this is what + +217 +00:09:40,920 --> 00:09:48,480 +it looks like for um for Ollama + +218 +00:09:44,880 --> 00:09:49,920 +this is what it looks like for Mistral + +219 +00:09:48,480 --> 00:09:52,160 +and other things like that so you see + +220 +00:09:49,920 --> 00:09:53,440 +all of these are very similar but + +221 +00:09:52,160 --> 00:09:55,000 +they're like slightly different and + +222 +00:09:53,440 --> 00:09:58,120 +getting these right is actually kind of + +223 +00:09:55,000 --> 00:10:01,120 +important for the model doing a good + +224 +00:09:58,120 --> 00:10:01,120 +job
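A minimal sketch of what querying through LiteLLM looks like, assuming the library's unified completion entry point; the model strings are examples, and swapping them is all it takes to switch backends:

from litellm import completion

messages = [
    {"role": "system", "content": "Please classify movie reviews as positive or negative."},
    {"role": "user", "content": "This movie is a banger."},
]

# Same messages, different backends; behind the scenes LiteLLM converts the
# OpenAI messages format into each model's prompt template or API calls.
openai_resp = completion(model="gpt-3.5-turbo", messages=messages)
llama_resp = completion(model="ollama/llama2", messages=messages)

print(openai_resp.choices[0].message.content)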
+225 +00:10:03,640 --> 00:10:10,399 +um any questions about + +226 +00:10:05,880 --> 00:10:15,360 +this yeah like say you start the prompt with + +227 +00:10:10,399 --> 00:10:18,160 +um input and then you started similar + +228 +00:10:15,360 --> 00:10:21,320 +without + +229 +00:10:18,160 --> 00:10:24,640 +model could you give an example yeah so + +230 +00:10:21,320 --> 00:10:28,040 +say um my comment is a great movie or + +231 +00:10:24,640 --> 00:10:31,040 +this movie is great in front of I put + +232 +00:10:28,040 --> 00:10:31,040 +um + +233 +00:10:34,279 --> 00:10:39,519 +model + +234 +00:10:36,399 --> 00:10:42,440 +so it depends a lot on the + +235 +00:10:39,519 --> 00:10:45,959 +model the reason why this system + +236 +00:10:42,440 --> 00:10:48,720 +message was input here in the first + +237 +00:10:45,959 --> 00:10:52,440 +place was this wasn't originally a + +238 +00:10:48,720 --> 00:10:54,240 +feature of OpenAI models uh OpenAI was + +239 +00:10:52,440 --> 00:10:56,440 +the first place to introduce this which + +240 +00:10:54,240 --> 00:10:58,519 +is why I I'm calling it OpenAI messages + +241 +00:10:56,440 --> 00:10:59,800 +format they didn't originally have + +242 +00:10:58,519 --> 00:11:02,360 +something like this but they were having + +243 +00:10:59,800 --> 00:11:04,360 +lots of trouble with um people trying to + +244 +00:11:02,360 --> 00:11:07,600 +reveal the prompts that were given to + +245 +00:11:04,360 --> 00:11:09,680 +systems uh like called like prompt + +246 +00:11:07,600 --> 00:11:12,040 +injection attacks or like jailbreaking + +247 +00:11:09,680 --> 00:11:15,399 +attacks or stuff like that and so the + +248 +00:11:12,040 --> 00:11:17,079 +models would basically reveal this + +249 +00:11:15,399 --> 00:11:19,600 +prompt that was being used behind the + +250 +00:11:17,079 --> 00:11:22,760 +scenes by whatever customer of OpenAI + +251 +00:11:19,600 --> 00:11:26,120 +was like deploying a system and so in + +252 +00:11:22,760 --> 00:11:29,120 +order to fix this basically what OpenAI + +253 +00:11:26,120 --> 00:11:30,480 +did I believe I believe like they + +254 +00:11:29,120 --> 00:11:32,279 +don't actually tell you exactly what + +255 +00:11:30,480 --> 00:11:36,040 +they did ever but I'm assuming what they + +256 +00:11:32,279 --> 00:11:37,680 +did is they trained uh their models so + +257 +00:11:36,040 --> 00:11:39,240 +that the models would not output + +258 +00:11:37,680 --> 00:11:41,639 +anything that's included in the system + +259 +00:11:39,240 --> 00:11:43,839 +message so the system message is used to + +260 +00:11:41,639 --> 00:11:46,120 +influence behavior but like they're + +261 +00:11:43,839 --> 00:11:48,200 +explicitly trained to not output things + +262 +00:11:46,120 --> 00:11:49,880 +that are included in there and so if you + +263 +00:11:48,200 --> 00:11:53,360 +put the + +264 +00:11:49,880 --> 00:11:56,200 +actual if you put the actual thing that + +265 +00:11:53,360 --> 00:11:59,639 +you wanted to evaluate within the system + +266 +00:11:56,200 --> 00:12:01,839 +message it might still predict + +267 +00:11:59,639 --> 00:12:04,839 +the sentiment correctly but it won't + +268 +00:12:01,839 --> 00:12:06,920 +repeat the the stuff that was in the system + +269 +00:12:04,839 --> 00:12:09,920 +message + +270 +00:12:06,920 --> 00:12:09,920 +B + +271 +00:12:14,160 --> 00:12:20,480 +yeah after we give it the yeah yeah so + +272 +00:12:18,320 --> 00:12:23,040 +that's a great question so typically + +273 +00:12:20,480 --> 00:12:26,480 +this is hand-created so you you create + +274 +00:12:23,040 --> 00:12:29,680 +something like this um I I have a a + +275 +00:12:26,480 --> 00:12:32,120 +bracket X here but another way people + +276 +00:12:29,680 --> 00:12:33,800 +typically specify this is you just have + +277 +00:12:32,120 --> 00:12:36,880 +a + +278 +00:12:33,800 --> 00:12:41,199 +big um you just have a big Python string + +279 +00:12:36,880 --> 00:12:41,199 +which is + +280 +00:12:42,040 --> 00:12:46,480 +like um you know + +281 +00:12:49,279 --> 00:12:55,440 +please um please + +282 +00:12:52,440 --> 00:12:55,440 +specify and then you + +283 +00:12:56,160 --> 00:13:02,240 +have + +284 +00:12:59,880 --> 00:13:04,440 +um and then you substitute in uh like + +285 +00:13:02,240 --> 00:13:07,760 +the input into this place here so you + +286 +00:13:04,440 --> 00:13:07,760 +usually hand-write it I'm going to + +287 +00:13:07,800 --> 00:13:14,120 +talk excuse + +288 +00:13:10,320 --> 00:13:16,120 +me at the end about some methods to + +289 +00:13:14,120 --> 00:13:18,320 +learn these also but um I'd say like 90 + +290 +00:13:16,120 --> 00:13:18,320 +95% of the time people are just writing + +291 +00:13:19,959 --> 00:13:24,560 +them + +292 +00:13:25,920 --> 00:13:31,639 +yep I would + +293 +00:13:27,760 --> 00:13:31,639 +write + +294 +00:13:33,240 --> 00:13:38,040 +and real input that + +295 +00:13:36,360 --> 00:13:39,800 +I yeah so typically the template is + +296 +00:13:38,040 --> 00:13:41,839 +written when you decide what system you + +297 +00:13:39,800 --> 00:13:44,519 +want to create so you decide you want to + +298 +00:13:41,839 --> 00:13:46,760 +create a sentiment analysis system so + +299 +00:13:44,519 --> 00:13:48,079 +you create a template that either says + +300 +00:13:46,760 --> 00:13:50,959 +like please classify the topic in the + +301 +00:13:48,079 --> 00:13:52,240 +case of a model that was trained to + +302 +00:13:50,959 --> 00:13:54,240 +follow instructions or if you have a + +303 +00:13:52,240 --> 00:13:58,079 +base model that was not trained to + +304 +00:13:54,240 --> 00:14:00,279 +follow instructions which is rare rare + +305 +00:13:58,079 --> 00:14:02,320 +nowadays but GPT-2 or Llama 2 without + +306 +00:14:00,279 --> 00:14:05,600 +chat tuning is an example of that + +307 +00:14:02,320 --> 00:14:10,040 +then you would need to create a template + +308 +00:14:05,600 --> 00:14:11,360 +that looks like this um where + +309 +00:14:10,040 --> 00:14:13,839 +you put the model in a situation where + +310 +00:14:11,360 --> 00:14:15,240 +the + +311 +00:14:13,839 --> 00:14:17,120 +next word that follows should be + +312 +00:14:15,240 --> 00:14:20,120 +indicative of the answer to your + +313 +00:14:17,120 --> 00:14:21,800 +question so like positive or negative or + +314 +00:14:20,120 --> 00:14:24,639 +something like that so um but either way + +315 +00:14:21,800 --> 00:14:27,199 +like usually you hand-write this when + +316 +00:14:24,639 --> 00:14:29,000 +you decide what task it is you want to do + +317 +00:14:27,199 --> 00:14:32,920 +then this input X this comes at test + +318 +00:14:29,000 --> 00:14:34,240 +time this comes when you actually deploy + +319 +00:14:32,920 --> 00:14:37,040 +your system um so this would be like an + +320 +00:14:34,240 --> 00:14:37,040 +Amazon review that you wanted to classify using an LM
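To illustrate the two cases just described — a sketch in which the template strings are illustrative, not copied from the slides:

x = "I love this movie."

# For an instruction-tuned model: just state the task directly.
instruct_template = "Please classify the following movie review as positive or negative:\n{x}"

# For a base model with no chat tuning (e.g. GPT-2 or Llama 2 base): set things up
# so the next word the model predicts is indicative of the answer.
base_template = "{x} Overall, it was"

print(instruct_template.format(x=x))
print(base_template.format(x=x))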
+321 +00:14:37,720 --> 00:14:42,720 +cool any other + +322 +00:14:40,519 --> 00:14:46,480 +questions okay let's + +323 +00:14:42,720 --> 00:14:48,160 +move on um so basically this is what is + +324 +00:14:46,480 --> 00:14:49,920 +happening behind the scenes I don't know + +325 +00:14:48,160 --> 00:14:53,040 +what OpenAI's format is because they + +326 +00:14:49,920 --> 00:14:54,639 +won't tell us of course um but you know + +327 +00:14:53,040 --> 00:14:56,000 +I'm assuming that that's similar to + +328 +00:14:54,639 --> 00:14:59,399 +what's happening in + +329 +00:14:56,000 --> 00:15:01,959 +OpenAI okay um so the next thing that we do + +330 +00:14:59,399 --> 00:15:05,360 +is answer prediction so given uh the + +331 +00:15:01,959 --> 00:15:08,320 +prompt we predict the answer um and so + +332 +00:15:05,360 --> 00:15:11,880 +using whatever algorithm we want to use + +333 +00:15:08,320 --> 00:15:14,880 +uh we predict you know fantastic + +334 +00:15:11,880 --> 00:15:14,880 +here + +335 +00:15:15,120 --> 00:15:21,639 +um and actually it might not predict + +336 +00:15:19,959 --> 00:15:26,399 +fantastic it might predict something + +337 +00:15:21,639 --> 00:15:28,120 +else like overall it was um a really + +338 +00:15:26,399 --> 00:15:30,000 +fantastic movie that I liked a lot or + +339 +00:15:28,120 --> 00:15:33,839 +something like that so it might also do + +340 +00:15:30,000 --> 00:15:36,880 +something like that so based on that we + +341 +00:15:33,839 --> 00:15:39,600 +want to select the actual output out of + +342 +00:15:36,880 --> 00:15:41,160 +the generated uh outputs and I'm calling + +343 +00:15:39,600 --> 00:15:43,639 +this uh + +344 +00:15:41,160 --> 00:15:45,959 +postprocessing so for instance we might + +345 +00:15:43,639 --> 00:15:48,240 +take the output as is so for something + +346 +00:15:45,959 --> 00:15:50,880 +like just you interacting with + +347 +00:15:48,240 --> 00:15:53,360 +ChatGPT um or interacting with a chat model + +348 +00:15:50,880 --> 00:15:55,639 +you might be looking at the text as is + +349 +00:15:53,360 --> 00:15:58,319 +or it might be formatting the output for + +350 +00:15:55,639 --> 00:16:00,079 +easy visualization selecting only + +351 +00:15:58,319 --> 00:16:02,440 +parts of the output that you want to use + +352 +00:16:00,079 --> 00:16:04,560 +or mapping the output to other + +353 +00:16:02,440 --> 00:16:07,600 +actions so to give an example of + +354 +00:16:04,560 --> 00:16:10,079 +formatting this is a feature of uh + +355 +00:16:07,600 --> 00:16:13,440 +ChatGPT or Bard or any of these that you interact + +356 +00:16:10,079 --> 00:16:14,920 +with but um I wrote please write a table + +357 +00:16:13,440 --> 00:16:18,759 +with the last five presidents and their + +358 +00:16:14,920 --> 00:16:20,319 +birth dates and ChatGPT is happy to do + +359 +00:16:18,759 --> 00:16:22,000 +this for me it says here is a table with + +360 +00:16:20,319 --> 00:16:24,920 +the last five US presidents and their + +361 +00:16:22,000 --> 00:16:27,639 +birth dates um Joe Biden Donald Trump + +362 +00:16:24,920 --> 00:16:31,720 +Barack Obama George W. Bush Bill Clinton + +363 +00:16:27,639 --> 00:16:33,600 +um but this is written in markdown um or + +364 +00:16:31,720 --> 00:16:35,079 +I assume it's written in markdown so it + +365 +00:16:33,600 --> 00:16:37,880 +basically makes this table and then + +366 +00:16:35,079 --> 00:16:39,319 +renders it in an easy-to-view way so + +367 +00:16:37,880 --> 00:16:41,000 +this is really important if you're + +368 +00:16:39,319 --> 00:16:42,440 +building a user-facing system because + +369 +00:16:41,000 --> 00:16:44,279 +you want to be able to render these + +370 +00:16:42,440 --> 00:16:46,279 +things but the only thing a large + +371 +00:16:44,279 --> 00:16:48,880 +language model can output is text right + +372 +00:16:46,279 --> 00:16:50,279 +it can output a string of tokens so uh + +373 +00:16:48,880 --> 00:16:54,000 +this is a really good way to interact + +374 +00:16:50,279 --> 00:16:55,759 +with it um I I followed by saying output + +375 +00:16:54,000 --> 00:16:58,720 +that in JSON format so it says here's + +376 +00:16:55,759 --> 00:17:00,360 +the information in JSON format and + +377 +00:16:58,720 --> 00:17:02,000 +instead of just giving me a big JSON + +378 +00:17:00,360 --> 00:17:04,199 +string it gives me syntax highlighting + +379 +00:17:02,000 --> 00:17:06,880 +and all the other stuff like this um + +380 +00:17:04,199 --> 00:17:09,760 +presumably what it's doing here is it's + +381 +00:17:06,880 --> 00:17:12,839 +outputting um like a triple hash or + +382 +00:17:09,760 --> 00:17:15,160 +something like this um the reason why I + +383 +00:17:12,839 --> 00:17:17,600 +know that is because + +384 +00:17:15,160 --> 00:17:21,079 +it seems to be making a mistake down + +385 +00:17:17,600 --> 00:17:23,280 +here for some reason um like uh + +386 +00:17:21,079 --> 00:17:25,079 +outputting a weirdly formatted thing after + +387 +00:17:23,280 --> 00:17:26,160 +that and so even ChatGPT makes mistakes + +388 +00:17:25,079 --> 00:17:30,320 +some of the + +389 +00:17:26,160 --> 00:17:32,400 +time um + +390 +00:17:30,320 --> 00:17:33,960 +cool um another thing that you might + +391 +00:17:32,400 --> 00:17:35,520 +want to do is especially if you're not + +392 +00:17:33,960 --> 00:17:37,360 +using it in like a a directly + +393 +00:17:35,520 --> 00:17:40,200 +user-facing application but you want to + +394 +00:17:37,360 --> 00:17:41,840 +use it to extract some information or + +395 +00:17:40,200 --> 00:17:45,440 +make some classification decision or + +396 +00:17:41,840 --> 00:17:47,280 +something like that um you often select + +397 +00:17:45,440 --> 00:17:49,880 +information that's indicative of the + +398 +00:17:47,280 --> 00:17:52,360 +answer and so I love this movie overall + +399 +00:17:49,880 --> 00:17:53,960 +it was a movie that was simply fantastic + +400 +00:17:52,360 --> 00:17:56,600 +um you can do things like extract + +401 +00:17:53,960 --> 00:17:59,440 +keywords like fantastic and use that to + +402 +00:17:56,600 --> 00:18:01,360 +indicate positive sentiment + +403 +00:17:59,440 --> 00:18:04,080 +there's various methods for doing this + +404 +00:18:01,360 --> 00:18:05,919 +and these are also used in the + +405 +00:18:04,080 --> 00:18:08,679 +benchmarks that are used to evaluate + +406 +00:18:05,919 --> 00:18:09,799 +language models so it's you know like + +407 +00:18:08,679 --> 00:18:11,039 +even if you're not building an + +408 +00:18:09,799 --> 00:18:12,679 +application directly but you're just + +409 +00:18:11,039 --> 00:18:14,120 +trying to do well in this class and get + +410 +00:18:12,679 --> 00:18:15,679 +like a high score on a leaderboard or + +411 +00:18:14,120 --> 00:18:20,320 +something it's still useful to know + +412 +00:18:15,679 --> 00:18:22,159 +about these things so um for things like + +413 +00:18:20,320 --> 00:18:24,039 +classification um you can identify + +414 +00:18:22,159 --> 00:18:27,159 +keywords like fantastic that might be + +415 +00:18:24,039 --> 00:18:29,120 +indicative of the class another thing + +416 +00:18:27,159 --> 00:18:31,559 +that's uh pretty common is for + +417 +00:18:29,120 --> 00:18:34,480 +regression or numerical problems you + +418 +00:18:31,559 --> 00:18:37,440 +identify numbers and pull out the + +419 +00:18:34,480 --> 00:18:40,400 +numbers and use those numbers as the + +420 +00:18:37,440 --> 00:18:42,360 +answer um for code uh you can pull out + +421 +00:18:40,400 --> 00:18:45,080 +code snippets in triple backticks and + +422 +00:18:42,360 --> 00:18:46,960 +then execute the code for example so all + +423 +00:18:45,080 --> 00:18:48,600 +of these things are basically heuristic + +424 +00:18:46,960 --> 00:18:50,159 +methods but they can be used to pull out + +425 +00:18:48,600 --> 00:18:53,440 +the actual answer that you want from the + +426 +00:18:50,159 --> 00:18:53,440 +text that's generated you + +427 +00:18:54,480 --> 00:19:00,320 +know cool uh any questions about that
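A sketch of these heuristic extraction steps; the regular expressions are illustrative, not a fixed standard:

import re

output = "Overall, it was a movie that was simply fantastic. I'd rate it 4.5 out of 5."

# Keyword extraction for classification.
sentiment = "positive" if "fantastic" in output.lower() else None

# Number extraction for regression or numerical problems.
numbers = re.findall(r"-?\d+(?:\.\d+)?", output)   # -> ['4.5', '5']

# Code extraction: pull out snippets between triple backticks, then optionally execute them.
code_blocks = re.findall(r"```(?:\w*\n)?(.*?)```", output, re.DOTALL)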
+428 +00:19:02,280 --> 00:19:07,880 +the final thing is output mapping um + +429 +00:19:04,640 --> 00:19:11,120 +given an answer uh map it into a class + +430 +00:19:07,880 --> 00:19:13,360 +label or a continuous value and so this + +431 +00:19:11,120 --> 00:19:16,000 +is doing something like taking fantastic + +432 +00:19:13,360 --> 00:19:18,480 +and mapping it into the class + +433 +00:19:16,000 --> 00:19:21,000 +positive uh and so you know if we want + +434 +00:19:18,480 --> 00:19:23,000 +to extract uh one-to-five-star ratings + +435 +00:19:21,000 --> 00:19:25,559 +from reviews this is something you would + +436 +00:19:23,000 --> 00:19:29,360 +need to do and very often it's like a + +437 +00:19:25,559 --> 00:19:33,880 +one to um one class to + +438 +00:19:29,360 --> 00:19:35,720 +many um many words mapping and uh by + +439 +00:19:33,880 --> 00:19:37,400 +doing this you can basically get a more + +440 +00:19:35,720 --> 00:19:38,720 +robust mapping onto the number that you + +441 +00:19:37,400 --> 00:19:42,400 +actually want
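A minimal sketch of such a one-class-to-many-words mapping; the word lists are illustrative:

LABEL_MAP = {
    "positive": ["fantastic", "great", "good", "wonderful"],
    "negative": ["terrible", "bad", "boring", "awful"],
}

def map_output(generated):
    text = generated.lower()
    for label, words in LABEL_MAP.items():
        if any(w in text for w in words):
            return label
    return None  # abstain if no indicative word is found

print(map_output("Overall, it was simply fantastic."))  # -> "positive"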
+442 +00:19:38,720 --> 00:19:42,400 +I actually + +443 +00:19:42,720 --> 00:19:48,919 +coincidentally on uh on Twitter saw a + +444 +00:19:45,280 --> 00:19:48,919 +really good example of this like a week + +445 +00:19:55,880 --> 00:20:00,520 +ago and yeah I don't know if I'm going + +446 +00:19:59,120 --> 00:20:05,440 +to be able to find it in a reasonable + +447 +00:20:00,520 --> 00:20:08,520 +time frame but basically um there was + +448 +00:20:05,440 --> 00:20:11,080 +a person who was using GPT-4 to create a + +449 +00:20:08,520 --> 00:20:14,120 +model uh to like reward open-source + +450 +00:20:11,080 --> 00:20:15,880 +models for good and bad you know + +451 +00:20:14,120 --> 00:20:18,320 +responses + +452 +00:20:15,880 --> 00:20:20,799 +and they started out with giving it a + +453 +00:20:18,320 --> 00:20:24,480 +one-to-five-star rating and then they + +454 +00:20:20,799 --> 00:20:28,360 +switched it into very good good okay bad + +455 +00:20:24,480 --> 00:20:31,280 +very bad and then um then asked to + +456 +00:20:28,360 --> 00:20:34,520 +generate you know those like very good + +457 +00:20:31,280 --> 00:20:37,039 +good okay bad very bad instead of + +458 +00:20:34,520 --> 00:20:40,360 +one to five and that worked a lot better + +459 +00:20:37,039 --> 00:20:43,480 +like the GPT model was a lot more uh + +460 +00:20:40,360 --> 00:20:46,039 +like likely to get the answer correct um + +461 +00:20:43,480 --> 00:20:48,880 +than it was if you gave a one-to-five- + +462 +00:20:46,039 --> 00:20:50,799 +star rating so this is something you + +463 +00:20:48,880 --> 00:20:54,280 +should think about pretty seriously and + +464 +00:20:50,799 --> 00:20:57,440 +the way you can think about it is how + +465 +00:20:54,280 --> 00:20:59,679 +likely was this data to appear in a + +466 +00:20:57,440 --> 00:21:02,520 +large corpus of data on the + +467 +00:20:59,679 --> 00:21:04,760 +internet and it might be like a lot less + +468 +00:21:02,520 --> 00:21:08,679 +likely that it's like how good is this + +469 +00:21:04,760 --> 00:21:11,400 +movie five than how good is this movie + +470 +00:21:08,679 --> 00:21:13,960 +really good like just think of like the + +471 +00:21:11,400 --> 00:21:16,200 +occurrence + +472 +00:21:13,960 --> 00:21:18,600 +probability and you can even + +473 +00:21:16,200 --> 00:21:21,320 +um like mine this data from the the web + +474 +00:21:18,600 --> 00:21:24,520 +if you want to to try to find out the + +475 +00:21:21,320 --> 00:21:30,039 +best you know + +476 +00:21:24,520 --> 00:21:30,039 +like the best things + +477 +00:21:35,360 --> 00:21:39,480 +there cool um any questions about this + +478 +00:21:37,720 --> 00:21:43,039 +yeah how is + +479 +00:21:39,480 --> 00:21:45,919 +it + +480 +00:21:43,039 --> 00:21:47,600 +learning so the model the model is + +481 +00:21:45,919 --> 00:21:50,200 +predicting text and like accurately it's + +482 +00:21:47,600 --> 00:21:54,480 +not even predicting the word fantastic + +483 +00:21:50,200 --> 00:21:57,600 +it's predicting the token ID like + +484 +00:21:54,480 --> 00:21:58,679 +73521 or something like that um but you + +485 +00:21:57,600 --> 00:22:00,840 +know if it has seen that token ID more + +486 +00:21:58,679 --> 00:22:04,240 +frequently + +487 +00:22:00,840 --> 00:22:06,000 +after reviews than it has seen the token + +488 +00:22:04,240 --> 00:22:07,520 +ID for the number one or the number five + +489 +00:22:06,000 --> 00:22:10,279 +then it's more likely to predict that + +490 +00:22:07,520 --> 00:22:11,880 +accurately right it's more likely to + +491 +00:22:10,279 --> 00:22:14,679 +predict fantastic than it is to predict + +492 +00:22:11,880 --> 00:22:16,720 +five stars or something like that just + +493 +00:22:14,679 --> 00:22:18,880 +because fantastic is more frequent and + +494 +00:22:16,720 --> 00:22:22,120 +so because of that if you think about + +495 +00:22:18,880 --> 00:22:24,240 +like what has it seen in all of the data + +496 +00:22:22,120 --> 00:22:26,960 +on the internet and like model your um + +497 +00:22:24,240 --> 00:22:28,520 +model your answers here appropriately + +498 +00:22:26,960 --> 00:22:30,320 +then that can give you + +499 +00:22:28,520 --> 00:22:32,120 +better answers + +500 +00:22:30,320 --> 00:22:33,400 +this is a very important rule of thumb + +501 +00:22:32,120 --> 00:22:35,039 +like don't try to make a language model + +502 +00:22:33,400 --> 00:22:38,200 +do something it's never seen in the + +503 +00:22:35,039 --> 00:22:40,240 +pre-training data and it will make your + +504 +00:22:38,200 --> 00:22:41,880 +life a lot easier so um you can think + +505 +00:22:40,240 --> 00:22:44,679 +about that going forward too + +506 +00:22:41,880 --> 00:22:48,559 +cool so next I want to move into few-shot + +507 +00:22:44,679 --> 00:22:49,679 +prompting or in-context learning um so + +508 +00:22:48,559 --> 00:22:52,159 +few-shot + +509 +00:22:49,679 --> 00:22:54,440 +prompting basically what we do is we + +510 +00:22:52,159 --> 00:22:55,799 +provide a few examples of the task + +511 +00:22:54,440 --> 00:22:58,440 +together with the + +512 +00:22:55,799 --> 00:23:00,080 +instruction and the way this works + +513 +00:22:58,440 --> 00:23:02,360 +is you write an instruction like please + +514 +00:23:00,080 --> 00:23:05,919 +classify movie reviews as positive or + +515 +00:23:02,360 --> 00:23:08,120 +negative and add like input uh I really + +516 +00:23:05,919 --> 00:23:10,320 +don't like this movie output negative uh + +517 +00:23:08,120 --> 00:23:12,480 +input this movie is great output + +518 +00:23:10,320 --> 00:23:16,640 +positive + +519 +00:23:12,480 --> 00:23:18,880 +and this is um pretty effective the + +520 +00:23:16,640 --> 00:23:21,799 +things it's most effective for are + +521 +00:23:18,880 --> 00:23:24,400 +twofold it's most effective for making + +522 +00:23:21,799 --> 00:23:26,360 +sure that you get the formatting right + +523 +00:23:24,400 --> 00:23:27,640 +uh because if you have a few examples + +524 +00:23:26,360 --> 00:23:28,679 +the model will tend to follow those + +525 +00:23:27,640 --> 00:23:30,840 +examples + +526 +00:23:28,679 --> 00:23:34,440 +with respect to formatting especially if + +527 +00:23:30,840 --> 00:23:37,320 +we're talking about like GPT-4 models um + +528 +00:23:34,440 --> 00:23:40,400 +or strong GPT models it's also effective + +529 +00:23:37,320 --> 00:23:42,400 +if you're using weaker models so like + +530 +00:23:40,400 --> 00:23:44,720 +stronger models like GPT-4 tend to be + +531 +00:23:42,400 --> 00:23:46,720 +pretty good at following instructions so + +532 +00:23:44,720 --> 00:23:49,520 +if you say + +533 +00:23:46,720 --> 00:23:51,640 +um please classify movie reviews as + +534 +00:23:49,520 --> 00:23:54,000 +positive or negative it will be more + +535 +00:23:51,640 --> 00:23:56,279 +likely to just output positive or + +536 +00:23:54,000 --> 00:23:58,760 +negative um but if you have weaker + +537 +00:23:56,279 --> 00:24:01,720 +models it might say I really don't like + +538 +00:23:58,760 --> 00:24:03,559 +this movie output uh I think I think + +539 +00:24:01,720 --> 00:24:05,640 +this is probably negative or something + +540 +00:24:03,559 --> 00:24:07,240 +like that it will you know it might not + +541 +00:24:05,640 --> 00:24:10,080 +follow the instructions as well and it's + +542 +00:24:07,240 --> 00:24:14,240 +more effective to provide these as in-context + +543 +00:24:10,080 --> 00:24:17,600 +examples um so so this is one uh + +544 +00:24:14,240 --> 00:24:19,480 +thing to remember one thing I + +545 +00:24:17,600 --> 00:24:22,120 +should mention also is when I say few-shot + +546 +00:24:19,480 --> 00:24:25,720 +prompting and in-context learning these + +547 +00:24:22,120 --> 00:24:27,880 +are basically the same thing uh they + +548 +00:24:25,720 --> 00:24:29,720 +basically refer to the same concept but + +549 +00:24:27,880 --> 00:24:31,919 +just from slightly different + +550 +00:24:29,720 --> 00:24:34,799 +examples uh from sorry slightly + +551 +00:24:31,919 --> 00:24:36,919 +different angles few-shot is in contrast + +552 +00:24:34,799 --> 00:24:39,320 +to zero-shot so zero-shot means you're + +553 +00:24:36,919 --> 00:24:43,039 +providing no examples so zero-shot + +554 +00:24:39,320 --> 00:24:45,720 +prompting you would have none uh few- + +555 +00:24:43,039 --> 00:24:47,240 +shot you have several examples in- + +556 +00:24:45,720 --> 00:24:49,679 +context learning means that you're + +557 +00:24:47,240 --> 00:24:51,640 +learning how to do a task but instead of + +558 +00:24:49,679 --> 00:24:54,320 +providing the model with fine-tuning + +559 +00:24:51,640 --> 00:24:56,679 +data you're providing the examples in + +560 +00:24:54,320 --> 00:24:58,080 +the language model's context so they both + +561 +00:24:56,679 --> 00:25:00,919 +basically mean the same thing but + +562 +00:24:58,080 --> 00:25:03,159 +they're they're just contrasting with like + +563 +00:25:00,919 --> 00:25:06,559 +either zero-shot or fine-tuning which + +564 +00:25:03,159 --> 00:25:06,559 +is why the terminology is + +565 +00:25:06,880 --> 00:25:13,520 +different so for the user interface + +566 +00:25:11,320 --> 00:25:16,080 +and for the + +567 +00:25:13,520 --> 00:25:17,760 +rendering uh yes you can definitely do few- + +568 +00:25:16,080 --> 00:25:20,039 +shot prompting I'm actually going to + +569 +00:25:17,760 --> 00:25:23,440 +talk about exactly how you do + +570 +00:25:20,039 --> 00:25:26,320 +this in like an OpenAI model um here + +571 +00:25:23,440 --> 00:25:28,240 +which is for OpenAI models there's a + +572 +00:25:26,320 --> 00:25:31,320 +couple ways that you could do this one + +573 +00:25:28,240 --> 00:25:33,640 +way you could do this is you could um + +574 +00:25:31,320 --> 00:25:36,279 +you could have the role be user and the + +575 +00:25:33,640 --> 00:25:39,279 +role be assistant and just add like + +576 +00:25:36,279 --> 00:25:41,159 +additional conversational history into + +577 +00:25:39,279 --> 00:25:43,159 +the the messages that you're sending to + +578 +00:25:41,159 --> 00:25:46,240 +the language model but actually the + +579 +00:25:43,159 --> 00:25:49,120 +recommended way of doing this um which + +580 +00:25:46,240 --> 00:25:51,880 +is in the OpenAI cookbook uh which is in + +581 +00:25:49,120 --> 00:25:53,919 +the references is that you send this as a + +582 +00:25:51,880 --> 00:25:58,200 +system message but you provide this like + +583 +00:25:53,919 --> 00:26:00,840 +additional name variable here um with + +584 +00:25:58,200 --> 00:26:02,840 +example user and example assistant the + +585 +00:26:00,840 --> 00:26:06,200 +main reason why you do this is just + +586 +00:26:02,840 --> 00:26:08,080 +because if you don't um if you send it + +587 +00:26:06,200 --> 00:26:10,600 +in as the like user and assistant the + +588 +00:26:08,080 --> 00:26:12,799 +model might refer back to the few-shot + +589 +00:26:10,600 --> 00:26:14,320 +examples as something that happened + +590 +00:26:12,799 --> 00:26:15,760 +previously in the conversation whereas + +591 +00:26:14,320 --> 00:26:18,200 +if you send it in the system message + +592 +00:26:15,760 --> 00:26:19,799 +it's guaranteed to not do that so I + +593 +00:26:18,200 --> 00:26:23,600 +think it's like less of an accuracy + +594 +00:26:19,799 --> 00:26:26,360 +thing it's more of a like it's more of a + +595 +00:26:23,600 --> 00:26:29,120 +prompt-privacy thing uh than + +596 +00:26:26,360 --> 00:26:30,880 +anything else so this is a recommended + +597 +00:26:29,120 --> 00:26:33,159 +way of doing this on the other hand if + +598 +00:26:30,880 --> 00:26:34,600 +you're using like an open-source model + +599 +00:26:33,159 --> 00:26:36,600 +uh you need to be careful because this + +600 +00:26:34,600 --> 00:26:38,279 +name might not even be included in the + +601 +00:26:36,600 --> 00:26:40,080 +prompt template like for example in the + +602 +00:26:38,279 --> 00:26:41,840 +LiteLLM prompt templates that I was + +603 +00:26:40,080 --> 00:26:44,080 +sending in this is not even included at + +604 +00:26:41,840 --> 00:26:46,480 +all so you might just get a weird system + +605 +00:26:44,080 --> 00:26:49,720 +message that uh is poorly formatted so you + +606 +00:26:46,480 --> 00:26:53,600 +need to be a little bit conscious of + +607 +00:26:49,720 --> 00:26:55,799 +this um cool any questions here does + +608 +00:26:53,600 --> 00:26:58,880 +that answer the + +609 +00:26:55,799 --> 00:27:02,279 +question okay
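A sketch of the OpenAI-cookbook style just described, with few-shot examples sent as named system messages; the example strings are the lecture's running task, and the name values follow the cookbook's convention:

messages = [
    {"role": "system",
     "content": "Please classify movie reviews as positive or negative."},
    # Few-shot examples, tagged with "name" so the model does not treat them
    # as earlier turns of the actual conversation.
    {"role": "system", "name": "example_user",
     "content": "I really don't like this movie."},
    {"role": "system", "name": "example_assistant",
     "content": "negative"},
    {"role": "system", "name": "example_user",
     "content": "This movie is great."},
    {"role": "system", "name": "example_assistant",
     "content": "positive"},
    # The actual input to classify.
    {"role": "user",
     "content": "This movie is a banger."},
]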
+610 +00:26:58,880 --> 00:27:05,000 +um so one one thing to be aware of is + +611 +00:27:02,279 --> 00:27:07,039 +LLMs are sensitive to small changes in + +612 +00:27:05,000 --> 00:27:12,080 +the in-context examples that you provide to + +613 +00:27:07,039 --> 00:27:14,600 +them so uh previous work has examined + +614 +00:27:12,080 --> 00:27:19,399 +this from a number of angles there's a + +615 +00:27:14,600 --> 00:27:22,679 +paper by Lu et al. and they examine the + +616 +00:27:19,399 --> 00:27:25,000 +sensitivity to example ordering so like + +617 +00:27:22,679 --> 00:27:28,399 +if you take the same examples and you + +618 +00:27:25,000 --> 00:27:30,840 +just order them in different orders um + +619 +00:27:28,399 --> 00:27:32,679 +you can actually get very wildly + +620 +00:27:30,840 --> 00:27:35,600 +different + +621 +00:27:32,679 --> 00:27:37,520 +results um and this is especially true + +622 +00:27:35,600 --> 00:27:40,320 +for smaller models so the smaller models + +623 +00:27:37,520 --> 00:27:42,720 +here are like the GPT-2 models the larger + +624 +00:27:40,320 --> 00:27:47,440 +models here are like the GPT the larger + +625 +00:27:42,720 --> 00:27:47,440 +model here is GPT-3.5 uh I + +626 +00:27:48,399 --> 00:27:54,120 +believe other things that people have + +627 +00:27:50,559 --> 00:27:56,760 +looked at are label balance so um how + +628 +00:27:54,120 --> 00:27:58,559 +important is it for the labels to be + +629 +00:27:56,760 --> 00:28:01,440 +balanced + +630 +00:27:58,559 --> 00:28:02,799 +um and if you're doing sentiment + +631 +00:28:01,440 --> 00:28:05,240 +classification for example you might + +632 +00:28:02,799 --> 00:28:07,519 +have only positive examples or only + +633 +00:28:05,240 --> 00:28:10,000 +negative examples and if you have only + +634 +00:28:07,519 --> 00:28:13,279 +positive or negative examples this can + +635 +00:28:10,000 --> 00:28:15,559 +uh help or hurt your accuracy uh for + +636 +00:28:13,279 --> 00:28:17,200 +example on this Amazon review data set + +637 +00:28:15,559 --> 00:28:18,679 +most of the reviews are positive so you + +638 +00:28:17,200 --> 00:28:20,840 +actually do better by having lots of + +639 +00:28:18,679 --> 00:28:23,640 +positive examples in your in-context + +640 +00:28:20,840 --> 00:28:26,600 +examples on the other hand for SST-2 this + +641 +00:28:23,640 --> 00:28:29,159 +is label-balanced so having only + +642 +00:28:26,600 --> 00:28:31,799 +positive or negative is worse on average + +643 +00:28:29,159 --> 00:28:34,279 +than having three positive and one + +644 +00:28:31,799 --> 00:28:36,679 +negative another thing is label coverage + +645 +00:28:34,279 --> 00:28:38,679 +so if we're talking about multi-class + +646 +00:28:36,679 --> 00:28:41,120 +classification um + +647 +00:28:38,679 --> 00:28:42,919 +having good coverage of all of the + +648 +00:28:41,120 --> 00:28:45,919 +classes that you want to include in your + +649 +00:28:42,919 --> 00:28:49,120 +multi-class classification is important + +650 +00:28:45,919 --> 00:28:51,720 +um to some extent but if you have uh + +651 +00:28:49,120 --> 00:28:53,440 +more uh you can also confuse some models + +652 +00:28:51,720 --> 00:28:55,840 +especially if they're minority labels so + +653 +00:28:53,440 --> 00:28:57,799 +if you have a whole bunch of like random + +654 +00:28:55,840 --> 00:28:59,080 +minority labels and that can cause problems so + +655 +00:28:57,799 --> 00:29:01,399 +this is something important to think + +656 +00:28:59,080 --> 00:29:04,640 +about if you're planning on solving kind + +657 +00:29:01,399 --> 00:29:08,640 +of like classification tasks um I I've + +658 +00:29:04,640 --> 00:29:11,000 +also had my own experience with uh using + +659 +00:29:08,640 --> 00:29:13,159 +GPT for evaluation for machine + +660 +00:29:11,000 --> 00:29:14,760 +translation and when we used GPT for + +661 +00:29:13,159 --> 00:29:18,559 +evaluation for machine translation it + +662 +00:29:14,760 --> 00:29:20,799 +was very important to add um like + +663 +00:29:18,559 --> 00:29:22,760 +uh high-scoring outputs low-scoring + +664 +00:29:20,799 --> 00:29:26,320 +outputs and some + +665 +00:29:22,760 --> 00:29:27,840 +in the middle um and so it's also the + +666 +00:29:26,320 --> 00:29:30,760 +case for regression + +667 +00:29:27,840 --> 00:29:30,760 +uh problems as + +668 +00:29:32,600 --> 00:29:37,320 +well cool um any questions + +669 +00:29:38,159 --> 00:29:45,000 +here um however this is not super + +670 +00:29:42,240 --> 00:29:46,600 +predictable um so there's not like any + +671 +00:29:45,000 --> 00:29:48,399 +rule of thumb that tells you like this + +672 +00:29:46,600 --> 00:29:49,720 +is or as far as I know there's not any + +673 +00:29:48,399 --> 00:29:51,640 +rule of thumb that tells you this is the + +674 +00:29:49,720 --> 00:29:54,000 +way you should construct in-context + +675 +00:29:51,640 --> 00:29:55,880 +examples uh there are lots of papers + +676 +00:29:54,000 --> 00:29:57,799 +that say they have methods that work + +677 +00:29:55,880 --> 00:30:01,000 +better but I don't know if there's any + +678 +00:29:57,799 --> 00:30:02,559 +like gold-standard industry practice + +679 +00:30:01,000 --> 00:30:05,799 +for doing something like this at the + +680 +00:30:02,559 --> 00:30:07,799 +moment so just to give an example uh + +681 +00:30:05,799 --> 00:30:10,399 +this paper it's a really nice paper + +682 +00:30:07,799 --> 00:30:13,440 +examining why uh in-context learning + +683 +00:30:10,399 --> 00:30:17,279 +works one thing one interesting finding + +684 +00:30:13,440 --> 00:30:19,760 +that they have is they take + +685 +00:30:17,279 --> 00:30:22,720 +in-context examples but they randomize + +686 +00:30:19,760 --> 00:30:27,320 +the labels they make the labels wrong + +687 +00:30:22,720 --> 00:30:29,519 +some of the time so even with completely + +688 +00:30:27,320 --> 00:30:32,120 +wrong labels even with labels that are + +689 +00:30:29,519 --> 00:30:34,399 +correct 0% of the time you still get + +690 +00:30:32,120 --> 00:30:37,360 +much much better accuracy than if you + +691 +00:30:34,399 --> 00:30:39,440 +use no in-context examples and why is + +692 +00:30:37,360 --> 00:30:41,640 +this probably you know it's getting the + +693 +00:30:39,440 --> 00:30:44,600 +model's formatting correct it's getting + +694 +00:30:41,640 --> 00:30:47,679 +like the names of the labels correct + +695 +00:30:44,600 --> 00:30:49,039 +even if it's not uh accurate so it seems + +696 +00:30:47,679 --> 00:30:50,519 +like it's not really using these for + +697 +00:30:49,039 --> 00:30:52,640 +training data it's using them more just + +698 +00:30:50,519 --> 00:30:56,240 +to know the formatting + +699 +00:30:52,640 --> 00:30:59,399 +appropriately like + +700 +00:30:56,240 --> 00:31:01,399 +you so you already + +701 +00:30:59,399 --> 00:31:03,760 +have + +702 +00:31:01,399 --> 00:31:08,840 +right how is it + +703 +00:31:03,760 --> 00:31:11,240 +um like is it just uh I guess I'm just + +704 +00:31:08,840 --> 00:31:15,000 +asking how you would interpret + +705 +00:31:11,240 --> 00:31:16,480 +that so this is you're not training the + +706 +00:31:15,000 --> 00:31:17,880 +model at the moment we're going to talk + +707 +00:31:16,480 --> 00:31:19,360 +about that next class but right now + +708 +00:31:17,880 --> 00:31:21,279 +you're taking a model that has already + +709 +00:31:19,360 --> 00:31:22,840 +been trained and you're providing it + +710 +00:31:21,279 --> 00:31:25,519 +with a few examples and then you're + +711 +00:31:22,840 --> 00:31:28,679 +asking it to fill in um the following + +712 +00:31:25,519 --> 00:31:30,880 +examples just examples + +713 +00:31:28,679 --> 00:31:32,960 +yes + +714 +00:31:30,880 --> 00:31:34,679 +exactly and it's pretty amazing that + +715 +00:31:32,960 --> 00:31:36,440 +that works in the first place especially + +716 +00:31:34,679 --> 00:31:39,840 +with a model that hasn't been explicitly + +717 +00:31:36,440 --> 00:31:41,200 +trained that way but um there's a a fair + +718 +00:31:39,840 --> 00:31:42,320 +amount of research that I think we're + +719 +00:31:41,200 --> 00:31:43,960 +probably going to be talking about in + +720 +00:31:42,320 --> 00:31:47,000 +the interpretability class about why + +721 +00:31:43,960 --> 00:31:49,600 +this happens but um + +722 +00:31:47,000 --> 00:31:51,279 +basically my my interpretation for why + +723 +00:31:49,600 --> 00:31:53,679 +this happens is because there's so much + +724 +00:31:51,279 --> 00:31:56,000 +repetitive stuff on the internet right + +725 +00:31:53,679 --> 00:31:58,240 +there's a bunch of examples of math + +726 +00:31:56,000 --> 00:32:00,399 +problems which is like + +727 +00:31:58,240 --> 00:32:02,279 +question one and then the math problem + +728 +00:32:00,399 --> 00:32:04,320 +and then the answer question two math + +729 +00:32:02,279 --> 00:32:06,440 +problem and then the answer so in order + +730 +00:32:04,320 --> 00:32:08,320 +to model the text on the internet it + +731 +00:32:06,440 --> 00:32:12,120 +needs to learn how to be able to do + +732 +00:32:08,320 --> 00:32:15,399 +these things so um + +733 +00:32:12,120 --> 00:32:17,760 +cool the second thing is uh more + +734 +00:32:15,399 --> 00:32:20,000 +demonstrations can sometimes hurt + +735 +00:32:17,760 --> 00:32:22,120 +accuracy so this is like binary + +736 +00:32:20,000 --> 00:32:25,080 +classification versus multiple-choice + +737 +00:32:22,120 --> 00:32:27,440 +question answering um and actually with + +738 +00:32:25,080 --> 00:32:30,919 +binary classification the model ends up + +739 +00:32:27,440 --> 00:32:33,159 +getting worse um with uh more examples + +740 +00:32:30,919 --> 00:32:36,799 +probably just because the longer context + +741 +00:32:33,159 --> 00:32:39,320 +uh you know confuses the model or moves + +742 +00:32:36,799 --> 00:32:41,320 +the instructions that are provided to + +743 +00:32:39,320 --> 00:32:44,279 +the model farther away in the context so + +744 +00:32:41,320 --> 00:32:48,120 +it starts forgetting them so + +745 +00:32:44,279 --> 00:32:50,240 +um basically what I want to say is uh + +746 +00:32:48,120 --> 00:32:51,760 +you know this is more of an art than a + +747 +00:32:50,240 --> 00:32:53,279 +science you might not get entirely + +748 +00:32:51,760 --> 00:32:55,840 +predictable results but don't worry it's + +749 +00:32:53,279 --> 00:32:59,320 +not just + +750 +00:32:55,840 --> 00:32:59,320 +you cool cool + +751 +00:33:09,200 --> 00:33:15,320 +yeah it can so the question is if the + +752 +00:33:12,639 --> 00:33:17,039 +in-context examples reflect the data + +753 +00:33:15,320 --> 00:33:18,919 +distribution well would that boost the + +754 +00:33:17,039 --> 00:33:24,240 +accuracy I think the answer is probably + +755 +00:33:18,919 --> 00:33:26,039 +yes yeah um I don't know if it's + +756 +00:33:24,240 --> 00:33:27,679 +that clear because like what I would + +757 +00:33:26,039 --> 00:33:29,919 +expect + +758 +00:33:27,679 --> 00:33:33,559 +is better + +759 +00:33:29,919 --> 00:33:37,240 +coverage is probably more + +760 +00:33:33,559 --> 00:33:39,760 +important than better representativeness + +761 +00:33:37,240 --> 00:33:41,960 +so like even if you have some minority + +762 +00:33:39,760 --> 00:33:43,639 +labels um it's probably better for the + +763 +00:33:41,960 --> 00:33:44,880 +model to know what those minority labels + +764 +00:33:43,639 --> 00:33:47,279 +look like and that's going to be + +765 +00:33:44,880 --> 00:33:49,120 +especially true for like stronger models + +766 +00:33:47,279 --> 00:33:50,679 +um I think + +767 +00:33:49,120 --> 00:33:54,320 +so + +768 +00:33:50,679 --> 00:33:56,440 +cool okay so uh next I want to talk + +769 +00:33:54,320 --> 00:33:59,000 +about chain-of-thought prompting um so + +770 +00:33:56,440 --> 00:34:01,320 +chain-of-thought prompting is a very + +771 +00:33:59,000 --> 00:34:04,080 +popular way of prompting + +772 +00:34:01,320 --> 00:34:06,080 +models and the way it works is you get + +773 +00:34:04,080 --> 00:34:07,839 +the model to explain its reasoning + +774 +00:34:06,080 --> 00:34:12,679 +before making an + +775 +00:34:07,839 --> 00:34:14,520 +answer um and so sorry this example is a + +776 +00:34:12,679 --> 00:34:18,879 +little bit small but like the standard + +777 +00:34:14,520 --> 00:34:20,480 +prompting method is uh like Roger has + +778 +00:34:18,879 --> 00:34:22,000 +five tennis balls he buys two more cans + +779 +00:34:20,480 --> 00:34:23,480 +of tennis balls each can has three + +780 +00:34:22,000 --> 00:34:28,200 +tennis balls how many tennis balls does + +781 +00:34:23,480 --> 00:34:29,359 +he have now um the answer is 11 and so + +782 +00:34:28,200 --> 00:34:32,119 +um this + +783 +00:34:29,359 --> 00:34:34,320 +is an in-context example and then you + +784 +00:34:32,119 --> 00:34:37,240 +have your input which has a different + +785 +00:34:34,320 --> 00:34:39,000 +problem uh the cafeteria has 23 apples + +786 +00:34:37,240 --> 00:34:40,639 +if they used 20 to make lunch and bought + +787 +00:34:39,000 --> 00:34:41,800 +six more how many apples do they have + +788 +00:34:40,639 --> 00:34:46,720 +the answer is + +789 +00:34:41,800 --> 00:34:49,000 +27 um and so this is wrong so what chain- + +790 +00:34:46,720 --> 00:34:52,000 +of-thought prompting does is instead of + +791 +00:34:49,000 --> 00:34:54,960 +just giving the answer it gives you an + +792 +00:34:52,000 --> 00:34:57,079 +additional reasoning chain uh that says + +793 +00:34:54,960 --> 00:34:59,680 +Roger started with five balls two cans of + +794 +00:34:57,079 --> 00:35:01,800 +three tennis balls each is six tennis + +795 +00:34:59,680 --> 00:35:04,520 +balls 5 plus 6 equals 11 the answer is + +796 +00:35:01,800 --> 00:35:06,280 +11 and so then when you feed this in + +797 +00:35:04,520 --> 00:35:08,000 +basically the model will generate a + +798 +00:35:06,280 --> 00:35:10,240 +similar reasoning chain and then it's + +799 +00:35:08,000 --> 00:35:13,400 +more likely to get the answer correct + +800 +00:35:10,240 --> 00:35:15,720 +and this very robustly works + +801 +00:35:13,400 --> 00:35:19,440 +for many + +802 +00:35:15,720 --> 00:35:21,440 +different problems where a reasoning + +803 +00:35:19,440 --> 00:35:23,520 +chain is necessary
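A sketch of a few-shot chain-of-thought prompt; the problems and reasoning chain are the ones quoted from the slide (originally from Wei et al.):

cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls.
5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more,
how many apples do they have?
A:"""
# The model imitates the reasoning chain: 23 - 20 = 3, then 3 + 6 = 9. The answer is 9.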
+804 +00:35:21,440 --> 00:35:27,839 +and if you think about the + +805 +00:35:23,520 --> 00:35:30,359 +reason why this uh why this works I + +806 +00:35:27,839 --> 00:35:33,040 +think there's basically two reasons why + +807 +00:35:30,359 --> 00:35:34,440 +um the first reason is I I only wrote + +808 +00:35:33,040 --> 00:35:36,560 +one on the thing here but the first + +809 +00:35:34,440 --> 00:35:38,760 +reason is it allows the model to + +810 +00:35:36,560 --> 00:35:41,359 +decompose harder problems into simpler + +811 +00:35:38,760 --> 00:35:45,119 +problems and simpler problems are easier + +812 +00:35:41,359 --> 00:35:47,560 +right so um instead + +813 +00:35:45,119 --> 00:35:51,319 +of immediately trying to solve the whole + +814 +00:35:47,560 --> 00:35:53,800 +problem in a single go it will first + +815 +00:35:51,319 --> 00:35:56,520 +solve the problem of like how many + +816 +00:35:53,800 --> 00:35:58,920 +are left after you use them and so it + +817 +00:35:56,520 --> 00:36:00,240 +gets three and so now it has this three + +818 +00:35:58,920 --> 00:36:02,480 +here so now it can solve the next + +819 +00:36:00,240 --> 00:36:05,160 +problem of adding six that's equal to 9 + +820 +00:36:02,480 --> 00:36:07,880 +so it's solving simpler subproblems + +821 +00:36:05,160 --> 00:36:11,440 +uh compared to harder + +822 +00:36:07,880 --> 00:36:13,920 +ones another reason why is it allows for + +823 +00:36:11,440 --> 00:36:17,319 +adaptive computation time so if you + +824 +00:36:13,920 --> 00:36:17,319 +think about like a Transformer + +825 +00:36:19,000 --> 00:36:23,119 +model um if you think about a + +826 +00:36:21,280 --> 00:36:25,560 +Transformer model a Transformer model + +827 +00:36:23,119 --> 00:36:27,200 +has fixed computation time for + +828 +00:36:25,560 --> 00:36:29,920 +predicting each token right a fixed + +829 +00:36:27,200 --> 00:36:31,560 +number of layers um and based on that + +830 +00:36:29,920 --> 00:36:33,839 +fixed number of layers it passes all the + +831 +00:36:31,560 --> 00:36:36,520 +information through and makes a + +832 +00:36:33,839 --> 00:36:38,200 +prediction and some problems are harder + +833 +00:36:36,520 --> 00:36:39,599 +than others right so it would be very + +834 +00:36:38,200 --> 00:36:42,480 +wasteful to have a really big + +835 +00:36:39,599 --> 00:36:45,640 +Transformer that could solve you know + +836 +00:36:42,480 --> 00:36:49,119 +really complex math problems in the same + +837 +00:36:45,640 --> 00:36:53,359 +amount of time it takes to predict that + +838 +00:36:49,119 --> 00:36:55,280 +the next word is like uh dog after the + +839 +00:36:53,359 --> 00:36:57,280 +words the big or something like that + +840 +00:36:55,280 --> 00:36:58,560 +right so there are some things that are + +841 +00:36:57,280 --> 00:37:00,000 +easy we can do in a second there are + +842 +00:36:58,560 --> 00:37:01,839 +some things that take us more time and + +843 +00:37:00,000 --> 00:37:05,880 +essentially this chain-of-thought + +844 +00:37:01,839 --> 00:37:09,280 +reasoning is um is doing that it's + +845 +00:37:05,880 --> 00:37:12,280 +giving it more time to solve the harder + +846 +00:37:09,280 --> 00:37:12,280 +problems + +847 +00:37:17,200 --> 00:37:22,440 +yes + +848 +00:37:18,839 --> 00:37:23,960 +okay yeah good good question so so + +849 +00:37:22,440 --> 00:37:26,200 +that's what um that's what this next + +850 +00:37:23,960 --> 00:37:27,920 +paper does so uh the the question was + +851 +00:37:26,200 --> 00:37:31,160 +what what happens if we just ask it to + +852 +00:37:27,920 --> 00:37:34,800 +reason and the answer is it still works + +853 +00:37:31,160 --> 00:37:37,000 +um and this paper was really like I I I + +854 +00:37:34,800 --> 00:37:39,760 +love this paper for its simplicity and + +855 +00:37:37,000 --> 00:37:43,160 +cleverness and basically uh they + +856 +00:37:39,760 --> 00:37:45,000 +contrast few-shot learning few-shot + +857 +00:37:43,160 --> 00:37:49,200 +chain-of-thought where you provide chain- + +858 +00:37:45,000 --> 00:37:52,160 +of-thought examples zero-shot prompting + +859 +00:37:49,200 --> 00:37:54,560 +basically and zero-shot chain-of-thought + +860 +00:37:52,160 --> 00:37:58,720 +so what they do is they just + +861 +00:37:54,560 --> 00:38:00,280 +add uh let's think step by step + +862 +00:37:58,720 --> 00:38:04,200 +they add that phrase to the end of the + +863 +00:38:00,280 --> 00:38:06,079 +prompt and then that elicits the model + +864 +00:38:04,200 --> 00:38:08,000 +to basically do chain-of-thought + +865 +00:38:06,079 --> 00:38:09,240 +reasoning without any further examples + +866 +00:38:08,000 --> 00:38:12,599 +of how that chain-of-thought reasoning + +867 +00:38:09,240 --> 00:38:14,440 +works why does this work again because + +868 +00:38:12,599 --> 00:38:16,760 +like on the internet there's a bunch of + +869 +00:38:14,440 --> 00:38:20,240 +examples of math problem-solving data + +870 +00:38:16,760 --> 00:38:22,800 +sets or QA corpora where it says let's + +871 +00:38:20,240 --> 00:38:24,480 +think step by step and after that you + +872 +00:38:22,800 --> 00:38:28,040 +you know consistently have this sort of + +873 +00:38:24,480 --> 00:38:29,800 +reasoning chain added there so um good + +874 +00:38:28,040 --> 00:38:31,200 +good intuition uh that this paper + +875 +00:38:29,800 --> 00:38:32,480 +answers the question and this + +876 +00:38:31,200 --> 00:38:36,119 +actually does work
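The whole zero-shot chain-of-thought trick, sketched; the question reuses the cafeteria problem from the slide, and the only change versus plain zero-shot prompting is the appended trigger phrase:

question = ("The cafeteria had 23 apples. If they used 20 to make lunch "
            "and bought 6 more, how many apples do they have?")

zero_shot_prompt = question                                      # plain zero-shot
zero_shot_cot_prompt = question + "\nLet's think step by step."  # zero-shot chain-of-thought
# The appended phrase elicits a reasoning chain before the final answer.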
+877 +00:38:32,480 --> 00:38:39,119 +one interesting thing is + +878 +00:38:36,119 --> 00:38:39,119 +um + +879 +00:38:39,720 --> 00:38:45,200 +now if I go to + +880 +00:38:47,319 --> 00:38:52,240 +ChatGPT and I say + +881 +00:38:50,480 --> 00:38:58,520 +um + +882 +00:38:52,240 --> 00:38:58,520 +I am teaching a class with 98 + +883 +00:38:58,720 --> 00:39:06,000 +students + +884 +00:39:01,480 --> 00:39:08,400 +70% turn in the + +885 +00:39:06,000 --> 00:39:10,720 +assignment on + +886 +00:39:08,400 --> 00:39:17,720 +time uh + +887 +00:39:10,720 --> 00:39:21,880 +10% hand it in late how many did not turn it + +888 +00:39:17,720 --> 00:39:21,880 +in let's see let's see if this + +889 +00:39:25,079 --> 00:39:29,200 +works okay it's writing code for + +890 +00:39:29,440 --> 00:39:34,599 +me which is that's a feature I + +891 +00:39:32,119 --> 00:39:37,319 +kind of didn't uh I kind of didn't + +892 +00:39:34,599 --> 00:39:37,319 +want it to do + +893 +00:39:40,040 --> 00:39:44,160 +that okay + +894 +00:39:45,920 --> 00:39:50,359 +um I do not + +895 +00:39:54,280 --> 00:39:59,720 +like okay so um + +896 +00:39:57,040 --> 00:39:59,720 +it's a little bit + +897 +00:40:15,720 --> 00:40:21,839 +slow okay so there there that worked but + +898 +00:40:19,680 --> 00:40:24,640 +note that I did not say let's think step + +899 +00:40:21,839 --> 00:40:27,760 +by step I didn't I didn't ask it to do + +900 +00:40:24,640 --> 00:40:29,240 +this um and the reason why is um we're + +901 +00:40:27,760 --> 00:40:31,119 +going to talk about instruction tuning + +902 +00:40:29,240 --> 00:40:34,359 +next time but basically GPT has been + +903 +00:40:31,119 --> 00:40:36,560 +tuned to do this reasoning even if you + +904 +00:40:34,359 --> 00:40:38,480 +don't ask it to do that uh it wouldn't + +905 +00:40:36,560 --> 00:40:40,839 +do that naturally but it's because lots + +906 +00:40:38,480 --> 00:40:43,880 +of supervised data has been added into + +907 +00:40:40,839 --> 00:40:46,920 +this model so like another thing is like + +908 +00:40:43,880 --> 00:40:48,960 +if you are planning on doing anything + +909 +00:40:46,920 --> 00:40:51,240 +about like chain-of-thought reasoning or + +910 +00:40:48,960 --> 00:40:53,000 +or stuff like that as a class project + +911 +00:40:51,240 --> 00:40:54,960 +you need to keep in mind that the like + +912 +00:40:53,000 --> 00:40:58,280 +GPT models have already been trained to + +913 +00:40:54,960 --> 00:41:00,040 +do this and so so if you want to like + +914 +00:40:58,280 --> 00:41:01,599 +try to find out a better way to elicit + +915 +00:41:00,040 --> 00:41:03,960 +this from a raw model you'll need to use + +916 +00:41:01,599 --> 00:41:07,119 +a raw model like Llama 2 with no chat + +917 +00:41:03,960 --> 00:41:10,200 +tuning or stuff like that um in order to + +918 +00:41:07,119 --> 00:41:12,520 +uh do that in a neutral in a neutral + +919 +00:41:10,200 --> 00:41:14,960 +setting that hasn't been contaminated by + +920 +00:41:12,520 --> 00:41:14,960 +like supervised + +921 +00:41:15,960 --> 00:41:20,520 +learning cool um any + +922 +00:41:21,079 --> 00:41:27,720 +questions okay um so next I want to talk + +923 +00:41:24,720 --> 00:41:31,280 +about prompting in programs that uh Chat + +924 +00:41:27,720 --> 00:41:35,560 +GPT gave me a good example of uh why why + +925 +00:41:31,280 --> 00:41:37,160 +this is useful or important um so + +926 +00:41:35,560 --> 00:41:40,640 +there's two results actually both of + +927 +00:41:37,160 --> 00:41:43,440 +these are are from my uh collaborators + +928 +00:41:40,640 --> 00:41:45,839 +but the first one is um it demonstrates + +929 +00:41:43,440 --> 00:41:48,720 +that structuring outputs as programs can + +930 +00:41:45,839 --> 00:41:51,599 +help you get better results even if the + +931 +00:41:48,720 --> 00:41:55,119 +task isn't a programmatic task so this + +932 +00:41:51,599 --> 00:41:57,000 +is kind of interesting um so we were + +933 +00:41:55,119 --> 00:41:59,319 +looking at predicting structured + +934 +00:41:57,000 --> 00:42:01,640 +outputs and these structured outputs + +935 +00:41:59,319 --> 00:42:03,839 +specifically are procedural knowledge + +936 +00:42:01,640 --> 00:42:06,920 +like this so like how do we cook a pie + +937 +00:42:03,839 --> 00:42:09,040 +or how do we serve pot pies on a plate + +938 +00:42:06,920 --> 00:42:10,800 +and we had this procedural knowledge + +939 +00:42:09,040 --> 00:42:14,040 +like take the pies out to cool open the + +940 +00:42:10,800 --> 00:42:16,079 +cabinet drawer take out several plates + +941 +00:42:14,040 --> 00:42:17,720 +and we wanted to know the dependencies + +942 +00:42:16,079 --> 00:42:19,520 +between these so we could create a + +943 +00:42:17,720 --> 00:42:22,559 +structured like procedural knowledge + +944 +00:42:19,520 --> 00:42:24,639 +base so this is not an inherently code- + +945 +00:42:22,559 --> 00:42:27,200 +based task it's not a you know you could + +946 +00:42:24,639 --> 00:42:28,880 +just ask + +947 +00:42:27,200 --> 00:42:29,800 +the model in natural language and that + +948 +00:42:28,880 --> 00:42:31,560 +would work as well so we structured + +949 +00:42:29,800 --> 00:42:34,279 +things in a couple varieties so we had a + +950 +00:42:31,560 --> 00:42:37,720 +textual format we had uh something in + +951 +00:42:34,279 --> 00:42:39,120 +the DOT format which is a way to draw + +952 +00:42:37,720 --> 00:42:41,560 +graphs and then we had we also tried + +953 +00:42:39,120 --> 00:42:42,880 +structuring the output in Python so + +954
+00:42:45,480 --> 00:42:48,960 +these are just different ways to format + +955 +00:42:47,240 --> 00:42:50,720 +the output they all say the same thing + +956 +00:42:48,960 --> 00:42:54,599 +and we can extract the answer from all + +957 +00:42:50,720 --> 00:42:56,920 +of them um but we found that structuring + +958 +00:42:54,599 --> 00:42:58,480 +it in in Python basically is the more + +959 +00:42:56,920 --> 00:43:02,920 +effective way of doing + +960 +00:42:58,480 --> 00:43:04,680 +this so why why is it this the case the + +961 +00:43:02,920 --> 00:43:06,280 +answer is essentially the same thing + +962 +00:43:04,680 --> 00:43:08,680 +that I was talking about before with you + +963 +00:43:06,280 --> 00:43:11,480 +know predicting excellent instead of + +964 +00:43:08,680 --> 00:43:13,319 +five right you know it's seen a ton of + +965 +00:43:11,480 --> 00:43:15,960 +python in its training data so it's + +966 +00:43:13,319 --> 00:43:17,760 +very good at predicting python uh it's + +967 +00:43:15,960 --> 00:43:20,359 +less good at predicting dot format + +968 +00:43:17,760 --> 00:43:24,240 +because it's seen less dot format and it + +969 +00:43:20,359 --> 00:43:26,640 +hasn't seen very much text here + +970 +00:43:24,240 --> 00:43:29,960 +um another + +971 +00:43:26,640 --> 00:43:32,359 +comment is code is very highly + +972 +00:43:29,960 --> 00:43:33,559 +structured compared to natural language + +973 +00:43:32,359 --> 00:43:35,599 +and because code is very highly + +974 +00:43:33,559 --> 00:43:37,520 +structured we have things like + +975 +00:43:35,599 --> 00:43:39,079 +dependencies where we refer back to + +976 +00:43:37,520 --> 00:43:41,079 +variables that we defined before and + +977 +00:43:39,079 --> 00:43:44,119 +other things like this so I think when + +978 +00:43:41,079 --> 00:43:46,760 +it starts outputting code the models get + +979 +00:43:44,119 --> 00:43:48,359 +into this mode which says yes please + +980 +00:43:46,760 --> 00:43:51,280 +refer back to the things you've seen + +981 +00:43:48,359 --> 00:43:53,440 +previously more often like attend to + +982 +00:43:51,280 --> 00:43:57,040 +previous stuff more often and don't just + +983 +00:43:53,440 --> 00:43:59,760 +like generate things you know uh + +984 +00:43:57,040 --> 00:44:02,440 +arbitrarily and hallucinate you know new + +985 +00:43:59,760 --> 00:44:04,119 +content and because of this for + +986 +00:44:02,440 --> 00:44:05,559 +generating structured outputs even if + +987 +00:44:04,119 --> 00:44:08,920 +the structured outputs don't need to be + +988 +00:44:05,559 --> 00:44:11,520 +code you can benefit by doing + +989 +00:44:08,920 --> 00:44:13,200 +this another thing that's a really handy + +990 +00:44:11,520 --> 00:44:16,319 +trick is anytime you want to get a + +991 +00:44:13,200 --> 00:44:19,079 +structured output out of a model + +992 +00:44:16,319 --> 00:44:22,760 +um you can ask it to generate something + +993 +00:44:19,079 --> 00:44:24,839 +in Json instead of generating it in uh + +994 +00:44:22,760 --> 00:44:26,640 +in text and the reason why Json is + +995 +00:44:24,839 --> 00:44:28,079 +useful is you can parse the Json you can + +996 +00:44:26,640 --> 00:44:32,319 +pull out the strings and other stuff + +997 +00:44:28,079 --> 00:44:34,839 +like that um so this can be very + +998 +00:44:32,319 --> 00:44:37,960 +effective because if you just add an + +999 +00:44:34,839 --> 00:44:40,200 +instruction that says please um please + +1000 +00:44:37,960 --> 00:44:42,440 +format things in this particular + +1001 +00:44:40,200 --> 
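To make the format comparison concrete, here is roughly how one dependency from the pot-pie example could be rendered in the three output formats mentioned (plain text, DOT, and Python); these encodings are illustrative, not the exact schemas from the paper.

```python
# One procedural dependency ("take the pies out" before "serve them on
# plates") in three target formats. Illustrative encodings only.

as_text = "take the pies out -> must happen before -> serve them on plates"

as_dot = 'digraph steps { "take the pies out" -> "serve them on plates"; }'

# In the Python rendering, steps become variables and dependencies become
# references back to variables defined earlier, which is exactly the
# refer-back-to-previous-content behavior discussed above.
as_python = (
    'take_pies_out = Step("take the pies out")\n'
    'serve_on_plates = Step("serve them on plates", depends_on=[take_pies_out])'
)
```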
00:44:43,680 +format often the model won't listen to + +1002 +00:44:42,440 --> 00:44:44,800 +you and it will output something in a + +1003 +00:44:43,680 --> 00:44:46,280 +different format you need to write a + +1004 +00:44:44,800 --> 00:44:48,599 +really annoying parser to pull out the + +1005 +00:44:46,280 --> 00:44:50,280 +information that you actually want but + +1006 +00:44:48,599 --> 00:44:51,960 +it gets Json right almost all of the + +1007 +00:44:50,280 --> 00:44:54,040 +time just because it's seen so much Json + +1008 +00:44:51,960 --> 00:44:57,880 +so that's a nice trick if you want to do + +1009 +00:44:54,040 --> 00:45:01,520 +something like that + +1010 +00:44:57,880 --> 00:45:03,559 +another uh thing is a paper uh called + +1011 +00:45:01,520 --> 00:45:08,079 +program aided language models that we did + +1012 +00:45:03,559 --> 00:45:10,200 +about a year ago and the method that we + +1013 +00:45:08,079 --> 00:45:13,760 +proposed here is using a program to + +1014 +00:45:10,200 --> 00:45:16,480 +generate outputs uh using a program to + +1015 +00:45:13,760 --> 00:45:19,440 +generate outputs and this can be more + +1016 +00:45:16,480 --> 00:45:22,319 +precise than asking an LM to do so and + +1017 +00:45:19,440 --> 00:45:26,720 +so instead of doing Chain of Thought + +1018 +00:45:22,319 --> 00:45:30,640 +prompting we created a few few-shot + +1019 +00:45:26,720 --> 00:45:34,319 +examples where we wrote like the text + +1020 +00:45:30,640 --> 00:45:37,160 +here and then the text in English and + +1021 +00:45:34,319 --> 00:45:40,160 +then we had code corresponding code the + +1022 +00:45:37,160 --> 00:45:42,280 +text in English corresponding code and + +1023 +00:45:40,160 --> 00:45:44,960 +then the answer is and then the final + +1024 +00:45:42,280 --> 00:45:48,160 +code and then we basically generate this + +1025 +00:45:44,960 --> 00:45:49,640 +code and execute it to get the answer so + +1026 +00:45:48,160 --> 00:45:52,319 +like as you saw this is implemented in + +1027 +00:45:49,640 --> 00:45:54,280 +chat GPT now it's uh you write something + +1028 +00:45:52,319 --> 00:45:56,319 +out it will decide whether it wants to + +1029 +00:45:54,280 --> 00:45:58,599 +generate code or generate text depending + +1030 +00:45:56,319 --> 00:46:00,760 +on the type of problem and it's just + +1031 +00:45:58,599 --> 00:46:03,559 +more precise it can solve like actually + +1032 +00:46:00,760 --> 00:46:05,200 +rather complex problems like uh you know + +1033 +00:46:03,559 --> 00:46:07,880 +calculating how much tax you need to be + +1034 +00:46:05,200 --> 00:46:10,880 +paying or something like that + +1035 +00:46:07,880 --> 00:46:12,480 +um it's especially useful for numeric + +1036 +00:46:10,880 --> 00:46:14,599 +questions and it's implemented in things + +1037 +00:46:12,480 --> 00:46:17,040 +like the chat GPT uh code interpreter + +1038 +00:46:14,599 --> 00:46:18,640 +Bard tool execution other things like that + +1039 +00:46:17,040 --> 00:46:22,079 +it's pretty cool it can actually do + +1040 +00:46:18,640 --> 00:46:24,440 +visualizations for you for papers also + +1041 +00:46:22,079 --> 00:46:28,000 +so if you ask it to visualize data for + +1042 +00:46:24,440 --> 00:46:30,200 +you um chat GPT now does a pretty good + +1043 +00:46:28,000 --> 00:46:32,640 +job of doing this like to give an + +1044 +00:46:30,200 --> 00:46:34,319 +example I asked it I gave it a big + +1045 +00:46:32,640 --> 00:46:35,760 +python list and asked it to generate a + +1046 +00:46:34,319 --> 00:46:37,839 +histogram and it did a 
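A sketch of the JSON trick from above: instruct the model to reply in JSON, then parse it with the standard library. The schema, model name, and fallback heuristic are all assumptions.

```python
# Ask for JSON, parse with json.loads, and fall back to grabbing the
# first {...} span if the model wraps the JSON in prose. The schema and
# model name are illustrative.
import json
from openai import OpenAI

client = OpenAI()

prompt = (
    'Classify this review and reply with JSON only, like '
    '{"label": "positive"}.\n'
    "Review: I love this movie."
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{"role": "user", "content": prompt}],
)
raw = resp.choices[0].message.content
try:
    answer = json.loads(raw)
except json.JSONDecodeError:
    # Models occasionally wrap JSON in prose or code fences.
    answer = json.loads(raw[raw.find("{"): raw.rfind("}") + 1])
print(answer["label"])
```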
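And a bare-bones sketch of the program-aided approach just described: the model writes Python for the word problem, and the interpreter, not the model, does the arithmetic. The prompt and the `answer` variable convention are assumptions, and any real system must sandbox the execution.

```python
# Program-aided generation in miniature: generate code, execute it, read
# the answer off a variable. The `answer` convention is an assumption;
# never exec() untrusted model output outside a sandbox.
from openai import OpenAI

client = OpenAI()

PROBLEM = (
    "A class has 98 students. 70% turn in the assignment on time and "
    "10% turn it in late. How many did not turn it in?"
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{
        "role": "user",
        "content": "Write Python that solves this and stores the result "
                   f"in a variable named answer. Code only.\n{PROBLEM}",
    }],
)
scope: dict = {}
exec(resp.choices[0].message.content, scope)  # sandbox this in real use
print(scope["answer"])  # the interpreter, not the LM, did the arithmetic
```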
really good job + +1047 +00:46:35,760 --> 00:46:40,240 +of it for me it also gives you the code + +1048 +00:46:37,839 --> 00:46:42,839 +so you can go in and modify it later so + +1049 +00:46:40,240 --> 00:46:44,720 +um I would definitely recommend you know + +1050 +00:46:42,839 --> 00:46:46,200 +thinking about using this uh either in + +1051 +00:46:44,720 --> 00:46:49,200 +your research or just to write your + +1052 +00:46:46,200 --> 00:46:51,480 +reports uh for this class so um this + +1053 +00:46:49,200 --> 00:46:55,839 +class is uh generative AI friendly + +1054 +00:46:51,480 --> 00:46:57,760 +mostly so like I do I do want you to + +1055 +00:46:55,839 --> 00:46:59,880 +learn the things we expect you to learn + +1056 +00:46:57,760 --> 00:47:02,480 +which is why I suggest that you don't + +1057 +00:46:59,880 --> 00:47:04,400 +like just write every uh everything for + +1058 +00:47:02,480 --> 00:47:06,280 +assignment number one with chat GP key + +1059 +00:47:04,400 --> 00:47:07,720 +but I think even if you tried to do that + +1060 +00:47:06,280 --> 00:47:09,640 +it'd probably get it wrong in subtle + +1061 +00:47:07,720 --> 00:47:10,920 +ways so you're probably better off + +1062 +00:47:09,640 --> 00:47:13,880 +understanding the content + +1063 +00:47:10,920 --> 00:47:16,400 +anyway um + +1064 +00:47:13,880 --> 00:47:18,160 +cool this can also be expanded a whole + +1065 +00:47:16,400 --> 00:47:21,559 +lot into like agents and tools and I'm + +1066 +00:47:18,160 --> 00:47:21,559 +going to talk about that separately + +1067 +00:47:22,800 --> 00:47:27,720 +later cool uh any any things about this + +1068 +00:47:29,040 --> 00:47:34,200 +okay I'm uh I'm going to go next so + +1069 +00:47:31,800 --> 00:47:36,079 +prompt engineering um when you're + +1070 +00:47:34,200 --> 00:47:37,280 +designing prompts uh there's a number of + +1071 +00:47:36,079 --> 00:47:38,240 +different ways you can do this you can + +1072 +00:47:37,280 --> 00:47:41,559 +do this + +1073 +00:47:38,240 --> 00:47:42,960 +manually uh you to do this you configure + +1074 +00:47:41,559 --> 00:47:44,520 +a manual template based on the + +1075 +00:47:42,960 --> 00:47:46,160 +characteristics of the task using all of + +1076 +00:47:44,520 --> 00:47:48,880 +the knowledge that I told you + +1077 +00:47:46,160 --> 00:47:50,119 +before you can also do automated search + +1078 +00:47:48,880 --> 00:47:52,079 +and there's a number of different ways + +1079 +00:47:50,119 --> 00:47:55,119 +to do automated search for + +1080 +00:47:52,079 --> 00:47:58,319 +prompts uh the first one is doing some + +1081 +00:47:55,119 --> 00:48:00,599 +sort of search discret space uh so you + +1082 +00:47:58,319 --> 00:48:02,720 +find a prompt that is + +1083 +00:48:00,599 --> 00:48:04,680 +essentially + +1084 +00:48:02,720 --> 00:48:06,640 +text the other one is search in + +1085 +00:48:04,680 --> 00:48:08,559 +continuous space so you find a prompt + +1086 +00:48:06,640 --> 00:48:10,680 +that isn't actually comprehensible text + +1087 +00:48:08,559 --> 00:48:14,760 +but nonetheless is a good + +1088 +00:48:10,680 --> 00:48:16,960 +prompt so looking at manual engineering + +1089 +00:48:14,760 --> 00:48:19,000 +um making sure that the format matches + +1090 +00:48:16,960 --> 00:48:21,680 +that of a trained model uh such as the + +1091 +00:48:19,000 --> 00:48:24,359 +chat format is actually really really + +1092 +00:48:21,680 --> 00:48:26,119 +important um and this can have a a large + +1093 +00:48:24,359 --> 00:48:28,119 +effect on models there's a really paper + 
+1094 +00:48:26,119 --> 00:48:30,000 +that demonstrated this convincingly + +1095 +00:48:28,119 --> 00:48:33,200 +before and also releases some software + +1096 +00:48:30,000 --> 00:48:35,880 +that allows you to do this um kind of in + +1097 +00:48:33,200 --> 00:48:38,079 +an efficient manner and what this is + +1098 +00:48:35,880 --> 00:48:41,200 +showing is + +1099 +00:48:38,079 --> 00:48:45,079 +um this is the original formatting of a + +1100 +00:48:41,200 --> 00:48:48,400 +prompt that was given I I Believe by uh + +1101 +00:48:45,079 --> 00:48:50,119 +some sort of like uh machine reading or + +1102 +00:48:48,400 --> 00:48:52,799 +document based question answering data + +1103 +00:48:50,119 --> 00:48:55,480 +set which was like passage + +1104 +00:48:52,799 --> 00:48:58,440 +answer if you modify the spacing between + +1105 +00:48:55,480 --> 00:49:01,680 +the the fields that increases your score + +1106 +00:48:58,440 --> 00:49:04,280 +by several percentage points um if you + +1107 +00:49:01,680 --> 00:49:06,880 +remove the colons that increases your + +1108 +00:49:04,280 --> 00:49:08,720 +score by a few more percentage points + +1109 +00:49:06,880 --> 00:49:10,119 +it's kind of silly but like little + +1110 +00:49:08,720 --> 00:49:11,040 +things like this actually can make a + +1111 +00:49:10,119 --> 00:49:14,240 +really big + +1112 +00:49:11,040 --> 00:49:17,599 +difference um if you modify the casing + +1113 +00:49:14,240 --> 00:49:19,960 +this decreases by a lot if you modify + +1114 +00:49:17,599 --> 00:49:22,440 +the casing and remove colons so the + +1115 +00:49:19,960 --> 00:49:25,200 +thing that was useful like adding colons + +1116 +00:49:22,440 --> 00:49:26,720 +here remove colons uh that further + +1117 +00:49:25,200 --> 00:49:29,280 +decrease + +1118 +00:49:26,720 --> 00:49:31,400 +if you forget to add a space between the + +1119 +00:49:29,280 --> 00:49:32,559 +passage and the text that really hurts + +1120 +00:49:31,400 --> 00:49:35,599 +your + +1121 +00:49:32,559 --> 00:49:38,000 +accuracy so this is pretty painful right + +1122 +00:49:35,599 --> 00:49:40,599 +like you don't want to be getting uh + +1123 +00:49:38,000 --> 00:49:44,160 +0.036% accuracy when adding a space + +1124 +00:49:40,599 --> 00:49:48,680 +would give you like 75% accuracy + +1125 +00:49:44,160 --> 00:49:50,799 +right um and one interesting thing is um + +1126 +00:49:48,680 --> 00:49:53,160 +this is looking + +1127 +00:49:50,799 --> 00:49:56,559 +at different + +1128 +00:49:53,160 --> 00:49:58,520 +models and um + +1129 +00:49:56,559 --> 00:50:00,640 +with different models it's pretty + +1130 +00:49:58,520 --> 00:50:03,599 +consistent that many different plausible + +1131 +00:50:00,640 --> 00:50:05,400 +formats that you try the average gives + +1132 +00:50:03,599 --> 00:50:07,240 +you a really low accuracy but there's a + +1133 +00:50:05,400 --> 00:50:08,760 +few outliers that give you really good + +1134 +00:50:07,240 --> 00:50:11,119 +accuracy and these probably correspond + +1135 +00:50:08,760 --> 00:50:13,400 +to the things that it was trained on um + +1136 +00:50:11,119 --> 00:50:15,880 +instruction tuned on or or other things + +1137 +00:50:13,400 --> 00:50:17,480 +like this so number one make sure you're + +1138 +00:50:15,880 --> 00:50:19,799 +using like the canonical prompt + +1139 +00:50:17,480 --> 00:50:21,240 +formatting for the model for sure number + +1140 +00:50:19,799 --> 00:50:22,640 +two you might want to do a little bit of + +1141 +00:50:21,240 --> 00:50:24,720 +additional search to see 
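Given how much those small formatting choices matter, one pragmatic habit is to score a few plausible formats on a labeled dev set before committing to one; below is a minimal sketch with a toy dev set and a placeholder standing in for the real model call.

```python
# Score several plausible prompt formats and keep the best. `predict` is
# a placeholder so the sketch runs end to end; swap in a real model call.
dev_set = [("I love this movie.", "positive"), ("Dull and slow.", "negative")]

formats = [
    "Passage: {x}\nAnswer:",  # colons, newline
    "Passage: {x} Answer:",   # colons, single space
    "PASSAGE {x} ANSWER",     # casing changed, colons removed
]

def predict(prompt: str) -> str:
    return "positive"  # placeholder for a real model call

def accuracy(fmt: str) -> float:
    hits = sum(gold in predict(fmt.format(x=x)).lower() for x, gold in dev_set)
    return hits / len(dev_set)

best_format = max(formats, key=accuracy)
print(best_format)
```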
if you can do + +1142 +00:50:22,640 --> 00:50:26,960 +even better than that so um this is + +1143 +00:50:24,720 --> 00:50:29,480 +something to be very aware + +1144 +00:50:26,960 --> 00:50:32,480 +of + +1145 +00:50:29,480 --> 00:50:32,480 +um + +1146 +00:50:34,200 --> 00:50:37,680 +okay do you have a + +1147 +00:50:39,599 --> 00:50:43,720 +question this is dependent on what it + +1148 +00:50:41,720 --> 00:50:47,680 +sees at training time another thing + +1149 +00:50:43,720 --> 00:50:51,920 +actually is um this will definitely be + +1150 +00:50:47,680 --> 00:50:53,200 +tighter for uh like a chat GPT or GPT-4 + +1151 +00:50:51,920 --> 00:50:56,599 +um because it's been trained on many + +1152 +00:50:53,200 --> 00:50:59,319 +different formats at training time um + +1153 +00:50:56,599 --> 00:51:00,880 +and so the better the model has been + +1154 +00:50:59,319 --> 00:51:03,520 +trained on a lot of different formats + +1155 +00:51:00,880 --> 00:51:05,559 +the less this is going to have an + +1156 +00:51:03,520 --> 00:51:06,920 +effect but you know you're probably not + +1157 +00:51:05,559 --> 00:51:09,440 +going to be retraining a model that + +1158 +00:51:06,920 --> 00:51:10,799 +somebody gives you uh so like this is + +1159 +00:51:09,440 --> 00:51:12,880 +something to be very aware of if you're + +1160 +00:51:10,799 --> 00:51:14,839 +just a downstream user of a model especially + +1161 +00:51:12,880 --> 00:51:17,599 +an open source + +1162 +00:51:14,839 --> 00:51:19,359 +model um another thing is how do you + +1163 +00:51:17,599 --> 00:51:22,280 +give instructions to + +1164 +00:51:19,359 --> 00:51:25,000 +models um instructions should be clear + +1165 +00:51:22,280 --> 00:51:29,280 +concise and easy to understand one very + +1166 +00:51:25,000 --> 00:51:31,559 +funny thing is um I think now like + +1167 +00:51:29,280 --> 00:51:33,280 +actually prompting language models is + +1168 +00:51:31,559 --> 00:51:34,960 +very similar to prompting humans + +1169 +00:51:33,280 --> 00:51:37,119 +especially if we're talking about like + +1170 +00:51:34,960 --> 00:51:38,760 +GPT-4 so if you're not very good at + +1171 +00:51:37,119 --> 00:51:41,599 +explaining things to humans that might + +1172 +00:51:38,760 --> 00:51:45,440 +actually be bad um and you might want to + +1173 +00:51:41,599 --> 00:51:47,359 +practice that and explaining things to + +1174 +00:51:45,440 --> 00:51:50,319 +models might be a good way to practice + +1175 +00:51:47,359 --> 00:51:51,799 +that right so you know um it actually + +1176 +00:51:50,319 --> 00:51:54,040 +can give you feedback without annoying + +1177 +00:51:51,799 --> 00:51:55,359 +your friends by having you explain uh + +1178 +00:51:54,040 --> 00:51:58,160 +things to them in several different ways + +1179 +00:51:55,359 --> 00:52:00,040 +and seeing how they react so um but + +1180 +00:51:58,160 --> 00:52:03,680 +anyway clear concise easy to understand + +1181 +00:52:00,040 --> 00:52:05,319 +is good um there's this prompting guide + +1182 +00:52:03,680 --> 00:52:08,599 +uh which I I can + +1183 +00:52:05,319 --> 00:52:13,240 +open um this has a prompt engineering + +1184 +00:52:08,599 --> 00:52:14,520 +guide I I I like this site but it it + +1185 +00:52:13,240 --> 00:52:17,400 +does have a bit + +1186 +00:52:14,520 --> 00:52:18,880 +of like variance in the importance of + +1187 +00:52:17,400 --> 00:52:21,760 +the information that it tells you but like + +1188 +00:52:18,880 --> 00:52:23,960 +this particular page is nice I feel so + +1189 +00:52:21,760 --> 00:52:26,160 +start 
simple start with simple + +1190 +00:52:23,960 --> 00:52:29,520 +instructions um + +1191 +00:52:26,160 --> 00:52:32,119 +you should tell the model what it should + +1192 +00:52:29,520 --> 00:52:36,839 +be doing so make sure you say write + +1193 +00:52:32,119 --> 00:52:39,799 +classify summarize translate order um + +1194 +00:52:36,839 --> 00:52:41,960 +and things like this uh it also gives + +1195 +00:52:39,799 --> 00:52:45,440 +some good examples of the level of + +1196 +00:52:41,960 --> 00:52:47,559 +specificity that you should be giving so + +1197 +00:52:45,440 --> 00:52:49,680 +something that's less precise is explain + +1198 +00:52:47,559 --> 00:52:51,559 +the concept of prompt engineering keep + +1199 +00:52:49,680 --> 00:52:53,920 +the explanation short only a few + +1200 +00:52:51,559 --> 00:52:57,119 +sentences and don't be too + +1201 +00:52:53,920 --> 00:52:58,799 +descriptive um it use two to three + +1202 +00:52:57,119 --> 00:53:00,240 +sentences to explain the concept of + +1203 +00:52:58,799 --> 00:53:02,599 +prompt engineering to a high school + +1204 +00:53:00,240 --> 00:53:04,839 +student so what this does is this tells + +1205 +00:53:02,599 --> 00:53:07,839 +you the level of read like the reading + +1206 +00:53:04,839 --> 00:53:07,839 +level + +1207 +00:53:07,960 --> 00:53:12,520 +um so this doesn't even tell you the + +1208 +00:53:10,200 --> 00:53:14,319 +reading level I guess um and then two to + +1209 +00:53:12,520 --> 00:53:16,240 +three sentences is more precise than + +1210 +00:53:14,319 --> 00:53:19,200 +keep it a few sentences don't be too + +1211 +00:53:16,240 --> 00:53:22,440 +descriptive so um the more precise you + +1212 +00:53:19,200 --> 00:53:25,760 +can be the the better it is um one + +1213 +00:53:22,440 --> 00:53:27,040 +interesting thing is like if you ask + +1214 +00:53:25,760 --> 00:53:28,359 +your friend to do something and they + +1215 +00:53:27,040 --> 00:53:32,400 +don't know how to do it they'll complain + +1216 +00:53:28,359 --> 00:53:34,240 +to you but right now uh LMS don't + +1217 +00:53:32,400 --> 00:53:35,720 +complain to you they may in the future + +1218 +00:53:34,240 --> 00:53:38,680 +uh that might be like actually an + +1219 +00:53:35,720 --> 00:53:40,799 +interesting thing to find uh the you + +1220 +00:53:38,680 --> 00:53:42,319 +know interesting methodological thing to + +1221 +00:53:40,799 --> 00:53:45,240 +look at for a project or something like + +1222 +00:53:42,319 --> 00:53:47,960 +that but um right now you need to be + +1223 +00:53:45,240 --> 00:53:49,040 +precise and like there's it doesn't give + +1224 +00:53:47,960 --> 00:53:51,799 +you feedback when you're not being + +1225 +00:53:49,040 --> 00:53:51,799 +precise so you need + +1226 +00:53:52,000 --> 00:53:56,359 +to um separately from this there are + +1227 +00:53:54,200 --> 00:53:59,160 +methods for automatic prompt engineering + +1228 +00:53:56,359 --> 00:54:00,960 +so uh prompt paraphrasing gradient based + +1229 +00:53:59,160 --> 00:54:02,240 +discreet prompt search prompt tuning + +1230 +00:54:00,960 --> 00:54:06,160 +prefix + +1231 +00:54:02,240 --> 00:54:09,880 +tuning so prompt paraphrasing um this is + +1232 +00:54:06,160 --> 00:54:12,559 +a method that uh we proposed a while ago + +1233 +00:54:09,880 --> 00:54:15,760 +um to basically paraphrase an existing + +1234 +00:54:12,559 --> 00:54:17,280 +prompt to get other candidates um it's + +1235 +00:54:15,760 --> 00:54:19,240 +rather simple basically you take a + +1236 +00:54:17,280 --> 00:54:21,960 +prompt you put 
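A sketch of the paraphrase-and-select idea being introduced here (the details continue just below): paraphrase a seed prompt, score every candidate on held-out data, keep the winner, and optionally repeat. The paraphraser is approximated with an LLM call, and the scorer is a placeholder.

```python
# Prompt paraphrasing: generate candidates by paraphrasing a seed prompt,
# pick the one with the best dev-set score, optionally iterate. The
# paraphrase call and constant scorer are stand-ins.
from openai import OpenAI

client = OpenAI()

def paraphrase(prompt: str, n: int = 10) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder paraphraser
        n=n,
        temperature=1.0,
        messages=[{"role": "user",
                   "content": f"Paraphrase this instruction:\n{prompt}"}],
    )
    return [c.message.content for c in resp.choices]

def dev_accuracy(candidate: str) -> float:
    return 0.0  # placeholder: evaluate the candidate on labeled dev data

best = "Classify the review as positive or negative."
for _ in range(3):  # iterative variant: re-paraphrase the current winner
    best = max([best] + paraphrase(best), key=dev_accuracy)
print(best)
```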
it through a paraphrasing + +1237 +00:54:19,240 --> 00:54:24,280 +model and it will give you new prompts + +1238 +00:54:21,960 --> 00:54:25,440 +and this is good because it will tend to + +1239 +00:54:24,280 --> 00:54:28,319 +give you things that are natural + +1240 +00:54:25,440 --> 00:54:29,839 +language um you can paraphrase 50 times + +1241 +00:54:28,319 --> 00:54:32,480 +try all of them see which one gives you + +1242 +00:54:29,839 --> 00:54:37,079 +the highest accuracy and then use that + +1243 +00:54:32,480 --> 00:54:39,280 +one um there's also an interesting paper + +1244 +00:54:37,079 --> 00:54:43,079 +uh that demonstrates that you can do + +1245 +00:54:39,280 --> 00:54:45,240 +this iteratively so you paraphrase once + +1246 +00:54:43,079 --> 00:54:46,599 +um you filter down all the candidates + +1247 +00:54:45,240 --> 00:54:48,119 +that do well and then you go in and + +1248 +00:54:46,599 --> 00:54:49,960 +paraphrase them again and you just do + +1249 +00:54:48,119 --> 00:54:51,960 +this over and over again and that can + +1250 +00:54:49,960 --> 00:54:54,079 +give you better results than kind of one + +1251 +00:54:51,960 --> 00:54:57,079 +one off + +1252 +00:54:54,079 --> 00:54:57,079 +paraphrasing + +1253 +00:54:59,240 --> 00:55:02,079 +so that's very simple you can even use a + +1254 +00:55:01,079 --> 00:55:04,160 +large language model to do the + +1255 +00:55:02,079 --> 00:55:06,599 +paraphrasing for you um another thing + +1256 +00:55:04,160 --> 00:55:08,920 +that you can do is gradient based search + +1257 +00:55:06,599 --> 00:55:11,119 +so the way this works is you need to + +1258 +00:55:08,920 --> 00:55:16,319 +have a a model that you can calculate + +1259 +00:55:11,119 --> 00:55:19,920 +gradients for and what you do is you + +1260 +00:55:16,319 --> 00:55:22,240 +calculate you create a seed prompt and + +1261 +00:55:19,920 --> 00:55:26,000 +then you calculate gradients into that + +1262 +00:55:22,240 --> 00:55:29,760 +seed prompt so you treat the + +1263 +00:55:26,000 --> 00:55:33,160 +um you treat each of the tokens here + +1264 +00:55:29,760 --> 00:55:36,680 +like T1 T2 T3 T4 + +1265 +00:55:33,160 --> 00:55:38,240 +T5 as their own embeddings you do back + +1266 +00:55:36,680 --> 00:55:39,920 +propop into those embeddings and you + +1267 +00:55:38,240 --> 00:55:42,799 +optimize them to get high accuracy on + +1268 +00:55:39,920 --> 00:55:44,720 +your data set then after you're done + +1269 +00:55:42,799 --> 00:55:47,319 +optimizing them to get high accuracy on + +1270 +00:55:44,720 --> 00:55:49,079 +your data set you clamp them onto the + +1271 +00:55:47,319 --> 00:55:52,160 +nearest neighbor embedding that you + +1272 +00:55:49,079 --> 00:55:53,520 +already have so you basically say okay + +1273 +00:55:52,160 --> 00:55:56,720 +the nearest neighbor to the embedding + +1274 +00:55:53,520 --> 00:55:58,920 +that I learned you um is atmosphere then + +1275 +00:55:56,720 --> 00:56:02,240 +a lot dialogue clone + +1276 +00:55:58,920 --> 00:56:03,799 +totally and so this is this will + +1277 +00:56:02,240 --> 00:56:05,599 +actually give you better results than + +1278 +00:56:03,799 --> 00:56:07,839 +paraphrasing in many cases because the + +1279 +00:56:05,599 --> 00:56:11,520 +search space is less constrained you can + +1280 +00:56:07,839 --> 00:56:12,960 +get these very unnatural prompts uh that + +1281 +00:56:11,520 --> 00:56:16,280 +don't seem to make sense but actually + +1282 +00:56:12,960 --> 00:56:20,280 +work well this has particularly been + +1283 +00:56:16,280 --> 
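A heavily compressed sketch of the gradient-based search just described: optimize free vectors at the prompt positions, then clamp each one to its nearest real token embedding. The single-token loss below is a toy stand-in for task loss over a dataset, and gpt2 is a stand-in model.

```python
# Gradient-based discrete prompt search, compressed: backprop into k
# prompt embeddings, then snap each to the nearest row of the embedding
# matrix. The loss here (make " positive" likely) is a toy stand-in for
# real task loss over a labeled dataset.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval().requires_grad_(False)
emb_matrix = model.get_input_embeddings().weight          # (vocab, dim)

k = 5
soft_prompt = torch.nn.Parameter(emb_matrix[:k].clone())  # seed vectors
opt = torch.optim.Adam([soft_prompt], lr=1e-3)
target_id = tok(" positive")["input_ids"][0]

for _ in range(100):
    logits = model(inputs_embeds=soft_prompt.unsqueeze(0)).logits
    loss = -logits[0, -1].log_softmax(-1)[target_id]
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():  # the "clamp" step: nearest-neighbor real tokens
    token_ids = torch.cdist(soft_prompt, emb_matrix).argmin(-1)
print(tok.decode(token_ids.tolist()))  # often unnatural but effective text
```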
00:56:22,960 +widely used in um adversarial attacks on + +1284 +00:56:20,280 --> 00:56:25,599 +language models so how can you come up + +1285 +00:56:22,960 --> 00:56:27,720 +with um + +1286 +00:56:25,599 --> 00:56:31,559 +with prompts that + +1287 +00:56:27,720 --> 00:56:33,319 +cause language models to uh do things + +1288 +00:56:31,559 --> 00:56:36,039 +that you don't want them to be + +1289 +00:56:33,319 --> 00:56:38,920 +doing and um there's actually this nice + +1290 +00:56:36,039 --> 00:56:41,440 +paper uh also by people at CMU called + +1291 +00:56:38,920 --> 00:56:42,960 +Universal and transferable adversarial + +1292 +00:56:41,440 --> 00:56:45,400 +attacks on aligned language + +1293 +00:56:42,960 --> 00:56:50,559 +models and basically what they do is + +1294 +00:56:45,400 --> 00:56:53,880 +they try to optimize the uh they try to + +1295 +00:56:50,559 --> 00:56:56,839 +optimize the prompt to create a prompt + +1296 +00:56:53,880 --> 00:56:58,599 +that causes the model to do bad things + +1297 +00:56:56,839 --> 00:57:00,039 +basically and they try to do it even on + +1298 +00:56:58,599 --> 00:57:03,440 +models that have been trying to not do + +1299 +00:57:00,039 --> 00:57:05,039 +bad things and they demonstrate that + +1300 +00:57:03,440 --> 00:57:07,359 +number one you can cause things like + +1301 +00:57:05,039 --> 00:57:09,599 +models like llama to do bad you know bad + +1302 +00:57:07,359 --> 00:57:12,559 +things like output toxic things tell you + +1303 +00:57:09,599 --> 00:57:15,599 +how to build bombs stuff like that but + +1304 +00:57:12,559 --> 00:57:18,480 +also the same prompts also work on like + +1305 +00:57:15,599 --> 00:57:22,319 +GPT models uh which is kind of like + +1306 +00:57:18,480 --> 00:57:23,839 +interesting and and very uh you know + +1307 +00:57:22,319 --> 00:57:26,520 +confusing in a way because you thought + +1308 +00:57:23,839 --> 00:57:28,160 +this might be exploiting idiosyncrasies of a + +1309 +00:57:26,520 --> 00:57:32,440 +particular language model but actually + +1310 +00:57:28,160 --> 00:57:32,440 +it's not so I I find this kind of + +1311 +00:57:33,880 --> 00:57:39,520 +fascinating + +1312 +00:57:36,039 --> 00:57:42,240 +so if you take that a step further one + +1313 +00:57:39,520 --> 00:57:44,079 +thing that you can do is you can say oh + +1314 +00:57:42,240 --> 00:57:46,280 +actually there's no reason why we need + +1315 +00:57:44,079 --> 00:57:48,520 +to clamp these embeddings back to an + +1316 +00:57:46,280 --> 00:57:52,240 +existing embedding right so we could + +1317 +00:57:48,520 --> 00:57:56,079 +just optimize the prompts the embeddings + +1318 +00:57:52,240 --> 00:57:57,720 +of the prompts that go for a task and + +1319 +00:57:56,079 --> 00:58:02,000 +not clamp them back to embeddings and + +1320 +00:57:57,720 --> 00:58:03,599 +just keep them as is so um what I mean + +1321 +00:58:02,000 --> 00:58:07,079 +by that is like right here it's + +1322 +00:58:03,599 --> 00:58:09,160 +optimizing T1 T2 T3 T4 T5 and then + +1323 +00:58:07,079 --> 00:58:11,359 +clamping that back to Atmosphere a lot + +1324 +00:58:09,160 --> 00:58:13,960 +dialog clone totally but just keep them + +1325 +00:58:11,359 --> 00:58:16,160 +as is and don't worry about them like + +1326 +00:58:13,960 --> 00:58:18,039 +actually being a token in the model + +1327 +00:58:16,160 --> 00:58:19,400 +because if you have control over your + +1328 +00:58:18,039 --> 00:58:21,200 +model you can just add them as new + +1329 +00:58:19,400 --> 00:58:25,960 +elements in the vocabulary and 
you're + +1330 +00:58:21,200 --> 00:58:28,440 +fine right so what they demonstrate in + +1331 +00:58:25,960 --> 00:58:31,520 +this paper is that instead of taking + +1332 +00:58:28,440 --> 00:58:33,440 +your 11 billion parameter model and + +1333 +00:58:31,520 --> 00:58:35,920 +training the whole 11 billion parameter + +1334 +00:58:33,440 --> 00:58:38,359 +model for many different tasks on many + +1335 +00:58:35,920 --> 00:58:40,079 +different data sets they just train + +1336 +00:58:38,359 --> 00:58:42,039 +these prompts which are like 20K + +1337 +00:58:40,079 --> 00:58:44,039 +parameters each I I forget how long it + +1338 +00:58:42,039 --> 00:58:46,280 +is it's like 10 tokens or 20 tokens or + +1339 +00:58:44,039 --> 00:58:48,079 +something like that um and train it on + +1340 +00:58:46,280 --> 00:58:49,640 +all of the the data sets here and you + +1341 +00:58:48,079 --> 00:58:50,680 +don't actually need to do multitask + +1342 +00:58:49,640 --> 00:58:52,200 +learning you don't need to train on + +1343 +00:58:50,680 --> 00:58:53,720 +multiple tasks at the same time you can + +1344 +00:58:52,200 --> 00:58:56,119 +just train on a single + +1345 +00:58:53,720 --> 00:58:58,599 +task + +1346 +00:58:56,119 --> 00:59:01,000 +so now let's take that even a step + +1347 +00:58:58,599 --> 00:59:03,640 +further so this is only training the + +1348 +00:59:01,000 --> 00:59:06,359 +embeddings that you input into the model + +1349 +00:59:03,640 --> 00:59:08,160 +there's a method called prefix tuning + +1350 +00:59:06,359 --> 00:59:10,319 +and the way prefix tuning works is + +1351 +00:59:08,160 --> 00:59:12,280 +instead of training only the embeddings + +1352 +00:59:10,319 --> 00:59:14,799 +that go into the model they actually + +1353 +00:59:12,280 --> 00:59:18,920 +train a prefix that you then append to + +1354 +00:59:14,799 --> 00:59:20,839 +every layer of the model so prompt + +1355 +00:59:18,920 --> 00:59:23,319 +tuning basically does this for the first + +1356 +00:59:20,839 --> 00:59:24,839 +layer of the model prefix tuning does + +1357 +00:59:23,319 --> 00:59:28,400 +this for every layer of the model you + +1358 +00:59:24,839 --> 00:59:30,319 +append a prefix uh for every day so it's + +1359 +00:59:28,400 --> 00:59:32,200 +just a more expressive version of + +1360 +00:59:30,319 --> 00:59:36,119 +prompting + +1361 +00:59:32,200 --> 00:59:40,200 +essentially so these are all kinds of + +1362 +00:59:36,119 --> 00:59:43,680 +gradual steps from a human created + +1363 +00:59:40,200 --> 00:59:47,880 +prompt into something that is basically + +1364 +00:59:43,680 --> 00:59:50,839 +training a a prompt or a prefix to the + +1365 +00:59:47,880 --> 00:59:52,960 +model so I I would take questions but + +1366 +00:59:50,839 --> 00:59:55,200 +let me get to the end of this section uh + +1367 +00:59:52,960 --> 00:59:58,839 +also because uh I think there's + +1368 +00:59:55,200 --> 01:00:00,720 +interesting analogies here so in the + +1369 +00:59:58,839 --> 01:00:02,880 +next class I'm going to talk about + +1370 +01:00:00,720 --> 01:00:04,440 +parameter efficient fine-tuning methods + +1371 +01:00:02,880 --> 01:00:06,960 +which is kind of a more + +1372 +01:00:04,440 --> 01:00:10,000 +General it's a more + +1373 +01:00:06,960 --> 01:00:11,480 +General version of prompt tuning or + +1374 +01:00:10,000 --> 01:00:13,280 +prefix tuning there are methods that + +1375 +01:00:11,480 --> 01:00:15,960 +tune a small number of parameters to get + +1376 +01:00:13,280 --> 01:00:17,400 +the model to do something and there's a 
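Prompt tuning in a few lines, to make the parameter counts concrete: the LM is frozen and only a k-by-dim block of prepended embeddings is trained. gpt2 stands in for the 11-billion-parameter model; prefix tuning would additionally insert trained vectors at every layer, which libraries such as Hugging Face PEFT implement.

```python
# Prompt-tuning sketch: only `soft_prompt` (k x dim parameters) trains;
# the whole LM stays frozen. gpt2 is a small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").requires_grad_(False)

k, dim = 20, model.config.n_embd
soft_prompt = torch.nn.Parameter(torch.randn(k, dim) * 0.02)
opt = torch.optim.Adam([soft_prompt], lr=1e-4)

def step(text: str) -> None:
    ids = tok(text, return_tensors="pt").input_ids
    embs = torch.cat([soft_prompt.unsqueeze(0),
                      model.get_input_embeddings()(ids)], dim=1)
    labels = torch.cat([torch.full((1, k), -100), ids], dim=1)  # no loss on prompt
    loss = model(inputs_embeds=embs, labels=labels).loss
    opt.zero_grad(); loss.backward(); opt.step()

for _ in range(10):  # toy loop over one labeled example
    step("this movie is a banger -> positive")
```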
+ +1377 +01:00:15,960 --> 01:00:18,880 +bunch of different parameter efficient + +1378 +01:00:17,400 --> 01:00:21,520 +tuning methods many people may have + +1379 +01:00:18,880 --> 01:00:23,880 +heard of something like Laura uh or + +1380 +01:00:21,520 --> 01:00:25,440 +adapters um I just talked about prefix + +1381 +01:00:23,880 --> 01:00:28,119 +tuning + +1382 +01:00:25,440 --> 01:00:30,960 +so essentially prompt tuning and prefix + +1383 +01:00:28,119 --> 01:00:33,359 +tuning are part of this more General + +1384 +01:00:30,960 --> 01:00:36,680 +class of parameter efficient find tuning + +1385 +01:00:33,359 --> 01:00:39,240 +methods and so what we can say is + +1386 +01:00:36,680 --> 01:00:41,119 +actually prompting is fine-tuning + +1387 +01:00:39,240 --> 01:00:42,920 +prompting is a way of fine-tuning the + +1388 +01:00:41,119 --> 01:00:46,799 +model or getting the model to perform a + +1389 +01:00:42,920 --> 01:00:49,839 +particular task um and we have this + +1390 +01:00:46,799 --> 01:00:53,720 +taxonomy of we have prompts in natural + +1391 +01:00:49,839 --> 01:00:55,160 +language that are created uh by humans + +1392 +01:00:53,720 --> 01:00:57,240 +actually maybe I should say manual + +1393 +01:00:55,160 --> 01:00:59,559 +prompt engineering here this was first + +1394 +01:00:57,240 --> 01:01:01,480 +done in the gpd2 paper where they + +1395 +01:00:59,559 --> 01:01:04,359 +demonstrate that models uh models could + +1396 +01:01:01,480 --> 01:01:06,200 +solve tasks by doing it this way prompt + +1397 +01:01:04,359 --> 01:01:07,760 +paraphrasing is a step up from this + +1398 +01:01:06,200 --> 01:01:09,799 +because it's no longer relying on human + +1399 +01:01:07,760 --> 01:01:12,680 +engineering and you can you know expand + +1400 +01:01:09,799 --> 01:01:15,280 +to a broader set of prompts um it can + +1401 +01:01:12,680 --> 01:01:17,359 +always start with human created prompts + +1402 +01:01:15,280 --> 01:01:20,240 +so it's kind of like broader uh than + +1403 +01:01:17,359 --> 01:01:21,799 +that discrete prompt search doesn't + +1404 +01:01:20,240 --> 01:01:23,599 +necessarily need to rely on a + +1405 +01:01:21,799 --> 01:01:25,559 +paraphrasing model it could rely on like + +1406 +01:01:23,599 --> 01:01:26,760 +gradient-based models or something else + +1407 +01:01:25,559 --> 01:01:29,240 +like that to give you something that's + +1408 +01:01:26,760 --> 01:01:32,559 +not actually natural language uh kind of + +1409 +01:01:29,240 --> 01:01:35,920 +just random tokens continuous prompts or + +1410 +01:01:32,559 --> 01:01:38,119 +prompt tuning is a step above that + +1411 +01:01:35,920 --> 01:01:41,039 +multi-layer continuous prompts or prefix + +1412 +01:01:38,119 --> 01:01:42,520 +tuning is a layer above that parameter + +1413 +01:01:41,039 --> 01:01:43,520 +efficient tuning is more General than + +1414 +01:01:42,520 --> 01:01:45,359 +that and then you have all training + +1415 +01:01:43,520 --> 01:01:49,160 +methods so including fine tuning your + +1416 +01:01:45,359 --> 01:01:52,680 +model and so what are the implications + +1417 +01:01:49,160 --> 01:01:55,760 +of this um I think so a lot of people + +1418 +01:01:52,680 --> 01:01:58,720 +when prompting came out they were like + +1419 +01:01:55,760 --> 01:02:00,640 +prompting methods are very hacky I don't + +1420 +01:01:58,720 --> 01:02:03,839 +like how we have to do manual prompt + +1421 +01:02:00,640 --> 01:02:08,160 +engineering um it seems like a dark art + +1422 +01:02:03,839 --> 01:02:11,000 +as opposed to like you know 
actually you + +1423 +01:02:08,160 --> 01:02:14,160 +know some sort of well understood + +1424 +01:02:11,000 --> 01:02:16,839 +fine-tuning method that we could use um + +1425 +01:02:14,160 --> 01:02:20,520 +but I I actually like them I like + +1426 +01:02:16,839 --> 01:02:23,920 +prompting a lot because um if anybody is + +1427 +01:02:20,520 --> 01:02:25,960 +familiar with like Bayesian Bayesian + +1428 +01:02:23,920 --> 01:02:27,920 +statistics or machine learning we have + +1429 +01:02:25,960 --> 01:02:28,799 +the concept of like a prior probability + +1430 +01:02:27,920 --> 01:02:31,200 +over + +1431 +01:02:28,799 --> 01:02:32,359 +parameters and then a probability that + +1432 +01:02:31,200 --> 01:02:34,680 +we get + +1433 +01:02:32,359 --> 01:02:37,880 +after after fine tuning the model or + +1434 +01:02:34,680 --> 01:02:40,440 +after training the model and prompts in + +1435 +01:02:37,880 --> 01:02:42,640 +a way are our first like good prior over + +1436 +01:02:40,440 --> 01:02:43,880 +neural network models they give us the + +1437 +01:02:42,640 --> 01:02:46,319 +ability to + +1438 +01:02:43,880 --> 01:02:48,559 +specify what task the model should be + +1439 +01:02:46,319 --> 01:02:51,880 +doing or like a general idea of what + +1440 +01:02:48,559 --> 01:02:54,200 +task the model should be doing before we + +1441 +01:02:51,880 --> 01:02:56,359 +ask the model to actually do the task + +1442 +01:02:54,200 --> 01:02:58,640 +and and so we can either use that prior + +1443 +01:02:56,359 --> 01:03:02,119 +as is we can use a prompted model as is + +1444 +01:02:58,640 --> 01:03:04,839 +without doing any additional tuning or + +1445 +01:03:02,119 --> 01:03:06,480 +we could take the prior that we have + +1446 +01:03:04,839 --> 01:03:07,920 +given to the model by using a natural + +1447 +01:03:06,480 --> 01:03:09,039 +language description of the task it + +1448 +01:03:07,920 --> 01:03:12,079 +should be + +1449 +01:03:09,039 --> 01:03:14,799 +doing and then combine it with fine-tuning + +1450 +01:03:12,079 --> 01:03:17,039 +so we can take the prompted + +1451 +01:03:14,799 --> 01:03:19,279 +model we can + +1452 +01:03:17,039 --> 01:03:21,640 +initialize we can initialize the + +1453 +01:03:19,279 --> 01:03:23,960 +distribution of this like using a prompt + +1454 +01:03:21,640 --> 01:03:25,720 +using the prompt using a human created + +1455 +01:03:23,960 --> 01:03:28,160 +prompt and then go on and fine-tune it + +1456 +01:03:25,720 --> 01:03:30,960 +on lots of training data as well and + +1457 +01:03:28,160 --> 01:03:33,799 +there's a method for doing that um by + +1458 +01:03:30,960 --> 01:03:35,880 +Schick and Schütze uh called uh pattern + +1459 +01:03:33,799 --> 01:03:37,559 +exploiting training where they do + +1460 +01:03:35,880 --> 01:03:39,799 +exactly that they basically initialize + +1461 +01:03:37,559 --> 01:03:41,720 +with a manually created prompt and then + +1462 +01:03:39,799 --> 01:03:44,559 +they fine-tune the model on training data + +1463 +01:03:41,720 --> 01:03:46,400 +after that so um that's a reason why I + +1464 +01:03:44,559 --> 01:03:47,920 +like prompting based methods they they + +1465 +01:03:46,400 --> 01:03:49,720 +give us this like really nice way to + +1466 +01:03:47,920 --> 01:03:53,039 +very quickly create a system but we can + +1467 +01:03:49,720 --> 01:03:56,079 +also have you know whatever level of + +1468 +01:03:53,039 --> 01:03:59,880 +additional training on top of that + +1469 +01:03:56,079 --> 01:03:59,880 +cool so that's a little bit early I'm \ No newline at end of file 
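A thumbnail of the pattern-exploiting idea just mentioned: the hand-written prompt supplies the prior, a verbalizer maps labels to words, and ordinary fine-tuning on labeled data refines it; this only loosely follows Schick and Schütze's actual recipe.

```python
# Pattern-exploiting training, in thumbnail form: wrap each labeled
# example in the manual pattern, then fine-tune an LM on the wrapped
# texts. Pattern and verbalizer are illustrative.
PATTERN = "{x} Overall, it was {y}."
VERBALIZER = {"positive": "fantastic", "negative": "terrible"}

train = [("I love this movie.", "positive"), ("What a bore.", "negative")]
lm_texts = [PATTERN.format(x=x, y=VERBALIZER[y]) for x, y in train]
# ...then run a standard fine-tuning loop on `lm_texts`, so the learned
# model starts from the prior the manual prompt expresses.
print(lm_texts[0])  # "I love this movie. Overall, it was fantastic."
```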
diff --git a/CMU Advanced NLP 2024 (7) Prompting/transcript.vtt b/CMU Advanced NLP 2024 (7) Prompting/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..e1fe5a9bd1af80cafe2f25a74e510a8fcc5c608f --- /dev/null +++ b/CMU Advanced NLP 2024 (7) Prompting/transcript.vtt @@ -0,0 +1,4408 @@ +WEBVTT + +00:00:01.319 --> 00:00:07.560 +um today I want to talk about prompting + +00:00:03.919 --> 00:00:09.639 +and uh prompting is kind of a new uh + +00:00:07.560 --> 00:00:11.320 +Paradigm as of a few years ago with + +00:00:09.639 --> 00:00:15.120 +interacting with models it's now kind of + +00:00:11.320 --> 00:00:16.880 +the standard uh in doing so and + +00:00:15.120 --> 00:00:19.880 +basically what we do is we encourage a + +00:00:16.880 --> 00:00:21.840 +pre-trained model to make predictions by + +00:00:19.880 --> 00:00:24.039 +providing a textual prompt specifying + +00:00:21.840 --> 00:00:25.960 +the task to be done this is how you + +00:00:24.039 --> 00:00:28.960 +always interact with chat GPT or + +00:00:25.960 --> 00:00:33.200 +anything else like this + +00:00:28.960 --> 00:00:36.200 +um so prompting fundamentals uh the way + +00:00:33.200 --> 00:00:38.360 +that basic prompting works is you append + +00:00:36.200 --> 00:00:42.079 +a textual string to the beginning of the + +00:00:38.360 --> 00:00:44.079 +output and you complete it and exactly + +00:00:42.079 --> 00:00:45.800 +how you complete it can be based on any + +00:00:44.079 --> 00:00:48.800 +of the generation methods that we talked + +00:00:45.800 --> 00:00:51.559 +about in the previous class uh you know + +00:00:48.800 --> 00:00:55.160 +beam search it can be uh sampling it can + +00:00:51.559 --> 00:00:58.480 +be MBR or self-consistency or whatever + +00:00:55.160 --> 00:01:00.960 +else um so I I put in when a dog sees a + +00:00:58.480 --> 00:01:03.680 +squirrel it will usually + +00:01:00.960 --> 00:01:06.280 +um into gpt2 small which is a very small + +00:01:03.680 --> 00:01:08.960 +language model says Be Afraid of + +00:01:06.280 --> 00:01:10.560 +Anything unusual as an exception that's + +00:01:08.960 --> 00:01:13.720 +when a squirrel is usually afraid to + +00:01:10.560 --> 00:01:16.280 +bitee um so as you can see if the model + +00:01:13.720 --> 00:01:19.560 +is not super great you get a kind of not + +00:01:16.280 --> 00:01:24.119 +very great response also um but then I + +00:01:19.560 --> 00:01:25.960 +CED it into gp2 XL and uh what it says + +00:01:24.119 --> 00:01:28.159 +when a dog sees a squirrel it will + +00:01:25.960 --> 00:01:30.640 +usually lick the squirrel it will also + +00:01:28.159 --> 00:01:34.000 +touch its nose to the squirrel the tail + +00:01:30.640 --> 00:01:37.880 +and nose if it can um which might be + +00:01:34.000 --> 00:01:40.280 +true um one thing I I should note is + +00:01:37.880 --> 00:01:43.040 +when I generated these I used uh like + +00:01:40.280 --> 00:01:45.200 +actual regular ancestral sampling so I + +00:01:43.040 --> 00:01:47.159 +set the temperature to one I didn't do + +00:01:45.200 --> 00:01:49.600 +top feed didn't do top K or anything + +00:01:47.159 --> 00:01:51.040 +like this so this is a raw view of what + +00:01:49.600 --> 00:01:53.799 +the language model thinks is like + +00:01:51.040 --> 00:01:58.479 +actually a reasonable answer um if I + +00:01:53.799 --> 00:02:00.159 +modified the code to do something else + +00:01:58.479 --> 00:02:02.560 +actually maybe I can I can do that that + +00:02:00.159 --> 00:02:04.960 +right now but if I modified the 
code to + +00:02:02.560 --> 00:02:08.879 +use a + +00:02:04.960 --> 00:02:12.119 +different output we can actually see uh + +00:02:08.879 --> 00:02:12.119 +the different result that we + +00:02:13.599 --> 00:02:17.959 +get since I I have it here + +00:02:18.360 --> 00:02:23.879 +anyway actually sorry I'll need to + +00:02:20.360 --> 00:02:27.239 +modify the code on my my screen here + +00:02:23.879 --> 00:02:32.120 +um so I will + +00:02:27.239 --> 00:02:35.040 +set uh top K to 50 top P to + +00:02:32.120 --> 00:02:38.360 +0.95 so you see I I changed the + +00:02:35.040 --> 00:02:38.360 +generation parameters + +00:02:38.760 --> 00:02:46.400 +here and I'll uh run all of + +00:02:43.159 --> 00:02:50.319 +them you can see the uh the result that + +00:02:46.400 --> 00:02:51.840 +we get in a little bit but basically um + +00:02:50.319 --> 00:02:54.800 +so this is the standard method for + +00:02:51.840 --> 00:02:57.319 +prompting I intentionally use gpt2 small + +00:02:54.800 --> 00:02:58.800 +and gpt2 XL here because these are raw + +00:02:57.319 --> 00:03:01.879 +based language models they were just + +00:02:58.800 --> 00:03:05.440 +pre-trained as language models and so + +00:03:01.879 --> 00:03:06.920 +when we prompt them we're getting a + +00:03:05.440 --> 00:03:09.200 +language model that was just trained on + +00:03:06.920 --> 00:03:12.280 +lots of texts view of what is likely + +00:03:09.200 --> 00:03:13.760 +next text um there are other ways to + +00:03:12.280 --> 00:03:15.599 +train language models like instruction + +00:03:13.760 --> 00:03:18.040 +tuning and rlf which I'm going to be + +00:03:15.599 --> 00:03:19.480 +talking in future classes and if that's + +00:03:18.040 --> 00:03:21.760 +the case you might get a different + +00:03:19.480 --> 00:03:23.159 +response here so when a dog sees a + +00:03:21.760 --> 00:03:25.720 +squirrel it will usually get angry + +00:03:23.159 --> 00:03:27.319 +scratched the squirrel and run off uh + +00:03:25.720 --> 00:03:29.080 +some dogs may also attempt to capture + +00:03:27.319 --> 00:03:30.799 +the squirrel or attempt to eat it dogs + +00:03:29.080 --> 00:03:32.599 +will often to pick up the squirrel and + +00:03:30.799 --> 00:03:36.400 +eat it + +00:03:32.599 --> 00:03:40.680 +for it was more uh more violent than I + +00:03:36.400 --> 00:03:44.280 +expected any um + +00:03:40.680 --> 00:03:45.720 +so but anyway I think that like actually + +00:03:44.280 --> 00:03:47.080 +you can see that when I used the + +00:03:45.720 --> 00:03:48.920 +different generation parameters it + +00:03:47.080 --> 00:03:51.480 +actually gave me something that was + +00:03:48.920 --> 00:03:54.319 +maybe more typical than lick so lick is + +00:03:51.480 --> 00:03:56.840 +maybe a kind of unusual uh answer here + +00:03:54.319 --> 00:03:58.680 +but anyway + +00:03:56.840 --> 00:04:03.040 +cool + +00:03:58.680 --> 00:04:05.680 +so that's the basic idea of prompting we + +00:04:03.040 --> 00:04:08.480 +tend to use prompting to try to solve + +00:04:05.680 --> 00:04:10.680 +problems also so it's not just to + +00:04:08.480 --> 00:04:14.200 +complete text although completing text + +00:04:10.680 --> 00:04:17.320 +is useful and important like I complete + +00:04:14.200 --> 00:04:19.199 +text in my Gmail all the time uh you + +00:04:17.320 --> 00:04:20.600 +know it it's constantly giving me + +00:04:19.199 --> 00:04:23.440 +suggestions about what I should write + +00:04:20.600 --> 00:04:24.800 +next and I do tab autoc complete um you + +00:04:23.440 --> 00:04:28.040 +know on 
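An approximate reconstruction of the notebook change made in the demo (the actual code is not shown in the transcript): the same prompt, first with plain ancestral sampling, then with top-k and top-p truncation.

```python
# Same prompt, two decoding settings: raw ancestral sampling versus
# top-k/top-p truncated sampling, as in the demo. Approximate code.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2-xl")
prompt = "When a dog sees a squirrel, it will usually"

raw = generator(prompt, do_sample=True, temperature=1.0, max_new_tokens=40)
truncated = generator(prompt, do_sample=True, top_k=50, top_p=0.95,
                      max_new_tokens=40)
print(raw[0]["generated_text"])
print(truncated[0]["generated_text"])
```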
your phone you're doing that + +00:04:24.800 --> 00:04:29.919 +that's also using a language model um + +00:04:28.040 --> 00:04:32.320 +but very often we'll use prompting to do + +00:04:29.919 --> 00:04:34.440 +things other than just completing Texs + +00:04:32.320 --> 00:04:36.000 +and when we do this uh this is kind of + +00:04:34.440 --> 00:04:38.199 +the standard workflow for how we solve + +00:04:36.000 --> 00:04:41.280 +NLP tasks with prompting the way we do + +00:04:38.199 --> 00:04:43.360 +this is we fill in a prompt template + +00:04:41.280 --> 00:04:46.080 +predict the answer and post-process the + +00:04:43.360 --> 00:04:46.080 +answer in some + +00:04:46.320 --> 00:04:51.880 +way so prompt templates are templates + +00:04:49.280 --> 00:04:55.280 +where you will actually uh that you will + +00:04:51.880 --> 00:04:57.479 +fill in with an actual input and so if + +00:04:55.280 --> 00:05:00.479 +we have an input X which is something + +00:04:57.479 --> 00:05:04.880 +like I love this movie our template will + +00:05:00.479 --> 00:05:08.360 +be something like X overall it was Z or + +00:05:04.880 --> 00:05:10.680 +overall it was and so if we do that when + +00:05:08.360 --> 00:05:13.320 +we actually want to make a prediction we + +00:05:10.680 --> 00:05:14.840 +will uh convert this into the actual + +00:05:13.320 --> 00:05:16.880 +prompt we feed into the language model + +00:05:14.840 --> 00:05:20.639 +by filling in the template um I love + +00:05:16.880 --> 00:05:24.919 +this movie overall it was blank and then + +00:05:20.639 --> 00:05:24.919 +fill this uh continuation + +00:05:25.840 --> 00:05:31.919 +in a particular variety uh + +00:05:30.000 --> 00:05:34.039 +that we use very broadly nowadays + +00:05:31.919 --> 00:05:36.240 +because a lot of models are trained as + +00:05:34.039 --> 00:05:38.240 +chatbots um but actually even if they're + +00:05:36.240 --> 00:05:41.199 +not trained as chatbots this still works + +00:05:38.240 --> 00:05:46.199 +to some extent um is a chat + +00:05:41.199 --> 00:05:49.919 +prompt and so usually the way we we do + +00:05:46.199 --> 00:05:53.240 +this is we specify inputs in a format + +00:05:49.919 --> 00:05:55.800 +called the open AI messages format and + +00:05:53.240 --> 00:05:58.199 +uh this is this is what it looks like + +00:05:55.800 --> 00:06:03.759 +each we have a + +00:05:58.199 --> 00:06:07.680 +list of outputs each list is given a + +00:06:03.759 --> 00:06:10.280 +role and content and here so we have the + +00:06:07.680 --> 00:06:12.479 +role of system and the content is please + +00:06:10.280 --> 00:06:15.319 +classify movie reviews as positive or + +00:06:12.479 --> 00:06:17.400 +negative uh then we have the role user + +00:06:15.319 --> 00:06:21.039 +uh this movie is a + +00:06:17.400 --> 00:06:24.919 +banger um and then we have roles uh + +00:06:21.039 --> 00:06:27.240 +system message uh so is the roles we + +00:06:24.919 --> 00:06:29.639 +have the system and the system is a + +00:06:27.240 --> 00:06:31.560 +message provided to the system to + +00:06:29.639 --> 00:06:33.560 +influence Its Behavior it's to explain + +00:06:31.560 --> 00:06:39.240 +to it + +00:06:33.560 --> 00:06:40.840 +like how it should be working um and so + +00:06:39.240 --> 00:06:43.199 +you can see that this is explaining to + +00:06:40.840 --> 00:06:46.400 +the system how it should be working user + +00:06:43.199 --> 00:06:48.680 +is the message input by the user um and + +00:06:46.400 --> 00:06:51.160 +so this could be just a single message + +00:06:48.680 
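The template-filling step written out as code; the template string mirrors the sentiment example on the slide, and the function name is mine.

```python
# Filling a prompt template: the template is fixed when you design the
# system; the input x arrives at test time.
TEMPLATE = "{x} Overall, it was"

def fill(template: str, x: str) -> str:
    return template.format(x=x)

prompt = fill(TEMPLATE, "I love this movie.")
# -> "I love this movie. Overall, it was"
# Feed `prompt` to the LM and read off the continuation ("fantastic", ...).
print(prompt)
```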
--> 00:06:53.520 +or if you have a multi-turn dialogue it + +00:06:51.160 --> 00:06:55.080 +can be like user and then assistant and + +00:06:53.520 --> 00:06:56.680 +then user and then assistant and then + +00:06:55.080 --> 00:06:59.400 +user and then assistant and that makes + +00:06:56.680 --> 00:07:00.680 +it clear that it's a multi-term dialogue + +00:06:59.400 --> 00:07:02.800 +so if you have a multi-term dialogue in + +00:07:00.680 --> 00:07:06.319 +chat GPT that's how they're feeding it + +00:07:02.800 --> 00:07:06.319 +in um into the + +00:07:06.479 --> 00:07:12.440 +system so what's happening behind the + +00:07:08.840 --> 00:07:14.160 +scenes with these chat prompts basically + +00:07:12.440 --> 00:07:17.720 +they're being converted into token + +00:07:14.160 --> 00:07:19.680 +strings and then fed into the model so + +00:07:17.720 --> 00:07:21.800 +despite the fact that this is fed in in + +00:07:19.680 --> 00:07:23.560 +this format and it makes you think that + +00:07:21.800 --> 00:07:25.120 +maybe something special is going on + +00:07:23.560 --> 00:07:28.360 +actually in most cases these are just + +00:07:25.120 --> 00:07:30.199 +being fed into the model uh as a prompt + +00:07:28.360 --> 00:07:34.560 +so these are just kind of special + +00:07:30.199 --> 00:07:36.879 +version of a uh of a template so here we + +00:07:34.560 --> 00:07:40.560 +have um this is what the Llama template + +00:07:36.879 --> 00:07:43.319 +looks like so basically you have um + +00:07:40.560 --> 00:07:46.560 +square bracket ins and then for the + +00:07:43.319 --> 00:07:49.280 +system message it's like um like angle + +00:07:46.560 --> 00:07:51.240 +bracket uh angle bracket sis uh close + +00:07:49.280 --> 00:07:53.720 +angle bracket close angle bracket and + +00:07:51.240 --> 00:07:55.759 +then the actual system message and then + +00:07:53.720 --> 00:07:58.479 +you have uh this closing out the system + +00:07:55.759 --> 00:08:01.240 +message this closing out the instruction + +00:07:58.479 --> 00:08:04.120 +then the user is surrounded by inst and + +00:08:01.240 --> 00:08:06.599 +then the assistant is just like a + +00:08:04.120 --> 00:08:08.400 +regular string so this is what the + +00:08:06.599 --> 00:08:12.319 +actual textual string that's fed into + +00:08:08.400 --> 00:08:14.199 +llama chat models is we can contrast + +00:08:12.319 --> 00:08:19.440 +that to some other models so alpaka + +00:08:14.199 --> 00:08:22.400 +looks like this um uh so we have like + +00:08:19.440 --> 00:08:24.879 +hash instruction colon and then the + +00:08:22.400 --> 00:08:26.639 +instruction for the user there there's + +00:08:24.879 --> 00:08:28.879 +no distinction between system and user + +00:08:26.639 --> 00:08:31.960 +so it's like hash instruction and then + +00:08:28.879 --> 00:08:35.240 +the user message and then hash response + +00:08:31.960 --> 00:08:37.760 +and then be assistant so it's not super + +00:08:35.240 --> 00:08:39.640 +important which one we use here um the + +00:08:37.760 --> 00:08:41.919 +important thing is that this matches + +00:08:39.640 --> 00:08:44.039 +with what uh the model is trained and + +00:08:41.919 --> 00:08:46.640 +I'll show you some example uh you know + +00:08:44.039 --> 00:08:50.680 +I'll talk about that in more detail + +00:08:46.640 --> 00:08:52.880 +later and we have a reference uh that I + +00:08:50.680 --> 00:08:56.600 +got this uh + +00:08:52.880 --> 00:08:58.519 +from and there's this toolkit that I um + +00:08:56.600 --> 00:09:02.680 +I rather like recently it's called 
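The messages format from the slide as literal Python, plus roughly the single string a llama-2-chat template flattens it into; the flattened string is hand-written here to mirror the template shown, and real code should use the model's own chat template (e.g. `tokenizer.apply_chat_template` in transformers).

```python
# OpenAI-style messages and, roughly, the llama-2-chat string they get
# flattened into. Hand-written to mirror the slide; prefer the model's
# own chat template in real code.
messages = [
    {"role": "system",
     "content": "Please classify movie reviews as positive or negative."},
    {"role": "user", "content": "This movie is a banger."},
]

llama2_style = (
    "[INST] <<SYS>>\n"
    f"{messages[0]['content']}\n"
    "<</SYS>>\n\n"
    f"{messages[1]['content']} [/INST]"
)
print(llama2_style)
```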
lite +llm it makes it very easy to uh query + +00:09:02.680 --> 00:09:07.240 +different llms uh and kind of like + +00:09:05.079 --> 00:09:09.320 +unified things so basically you can + +00:09:07.240 --> 00:09:11.800 +query many different types of LLMs like + +00:09:09.320 --> 00:09:14.440 +open AI or open source models or other + +00:09:11.800 --> 00:09:17.079 +things like that and what happens behind + +00:09:14.440 --> 00:09:19.120 +the scene is it basically takes um the + +00:09:17.079 --> 00:09:20.839 +open AI messages format and converts it + +00:09:19.120 --> 00:09:22.880 +into the appropriate prompt format for + +00:09:20.839 --> 00:09:24.680 +whatever model you're using or the + +00:09:22.880 --> 00:09:27.120 +appropriate API calls for whatever thing + +00:09:24.680 --> 00:09:29.800 +you're using but + +00:09:27.120 --> 00:09:31.399 +um this here basically + +00:09:29.800 --> 00:09:33.800 +um if you click through this link shows + +00:09:31.399 --> 00:09:35.959 +you okay this is what it looks like for + +00:09:33.800 --> 00:09:37.880 +alpaca um so you have the instruction + +00:09:35.959 --> 00:09:40.920 +instruction response this is what it + +00:09:37.880 --> 00:09:44.880 +looks like for llama 2 chat this is what + +00:09:40.920 --> 00:09:48.480 +it looks like for the Ollama um for Ollama + +00:09:44.880 --> 00:09:49.920 +this is what it looks like for Mistral + +00:09:48.480 --> 00:09:52.160 +and other things like that so you see + +00:09:49.920 --> 00:09:53.440 +all of these are very similar but + +00:09:52.160 --> 00:09:55.000 +they're like slightly different and + +00:09:53.440 --> 00:09:58.120 +getting these right is actually kind of + +00:09:55.000 --> 00:10:01.120 +important for the model doing a good + +00:09:58.120 --> 00:10:01.120 +job + +00:10:03.640 --> 00:10:10.399 +um any questions about + +00:10:05.880 --> 00:10:15.360 +this yeah like say you start PR with + +00:10:10.399 --> 00:10:18.160 +this um inut and then you started simar + +00:10:15.360 --> 00:10:21.320 +without + +00:10:18.160 --> 00:10:24.640 +model could you give an example yeah so + +00:10:21.320 --> 00:10:28.040 +say um my account is a great movie or + +00:10:24.640 --> 00:10:31.040 +this movie is great in front of I put + +00:10:28.040 --> 00:10:31.040 +UMR + +00:10:34.279 --> 00:10:39.519 +model + +00:10:36.399 --> 00:10:42.440 +so depend it depends a lot on the + +00:10:39.519 --> 00:10:45.959 +model the reason why this system + +00:10:42.440 --> 00:10:48.720 +message was input here in the first + +00:10:45.959 --> 00:10:52.440 +place was this wasn't originally a + +00:10:48.720 --> 00:10:54.240 +feature of open AI models uh open AI was + +00:10:52.440 --> 00:10:56.440 +the first place to introduce this which + +00:10:54.240 --> 00:10:58.519 +is why I I'm calling it open AI messages + +00:10:56.440 --> 00:10:59.800 +format they didn't originally have + +00:10:58.519 --> 00:11:02.360 +something like this but they were having + +00:10:59.800 --> 00:11:04.360 +lots of trouble with um people trying to + +00:11:02.360 --> 00:11:07.600 +reveal the prompts that were given to + +00:11:04.360 --> 00:11:09.680 +systems uh like called like prompt + +00:11:07.600 --> 00:11:12.040 +injection attacks or like jailbreaking + +00:11:09.680 --> 00:11:15.399 +attacks or stuff like that and so the + +00:11:12.040 --> 00:11:17.079 +models would basically reveal this + +00:11:15.399 --> 00:11:19.600 +prompt that was being used behind the + +00:11:17.079 --> 00:11:22.760 +scenes by whatever customer of open AI 
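A minimal LiteLLM call of the kind described: one messages-format request routed to different backends. The model identifiers are examples, and API keys or endpoints must be configured separately.

```python
# LiteLLM sketch: the same messages-format call fans out to different
# backends. Model identifiers are examples; credentials are assumed.
from litellm import completion

messages = [{"role": "user",
             "content": "This movie is a banger. Positive or negative?"}]

for model in ("gpt-4o-mini", "ollama/llama2"):
    resp = completion(model=model, messages=messages)
    print(model, "->", resp.choices[0].message.content)
```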
Any questions about this?

[Student question, partially inaudible] Say you start the prompt with this input... could you give an example? Say "this movie is great," and in front of it I put [inaudible]...

So it depends a lot on the model. The reason this system message was introduced in the first place is that it wasn't originally a feature of OpenAI models. OpenAI was the first place to introduce it, which is why I'm calling it the OpenAI messages format. They didn't originally have something like this, but they were having lots of trouble with people trying to reveal the prompts that were given to systems, through what are called prompt injection attacks or jailbreaking attacks and stuff like that. The models would basically reveal the prompt that was being used behind the scenes by whatever customer of OpenAI was deploying the system. So in order to fix this, what OpenAI did, I believe (they don't actually tell you exactly what they did, but I'm assuming), is that they trained their models so that the models would not output anything that's included in the system message. The system message is used to influence behavior, but the models are explicitly trained not to output things that are included in there. So if you put the actual thing that you wanted to evaluate within the system message, it might still predict the sentiment correctly, but it won't repeat back the stuff that was in the system message.

[Student question] After we give it the...?

Yeah, that's a great question. Typically this is hand-created, so you create something like this. I have a bracketed X here, but another way people typically specify this is that you just have a big Python string, something like "please classify...", and then you substitute the input into a placeholder. So you usually hand-write it. I'm going to talk at the end about some methods to learn these as well, but I'd say 90 to 95% of the time people are just writing the template by hand.
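As a minimal sketch of that hand-written-template idea (the exact instruction wording here is an assumption for illustration):

# A hand-written template as a big Python string, with a slot where
# the test-time input X gets substituted in.
TEMPLATE = (
    "Please classify the following movie review as positive or negative.\n"
    "Review: {x}\n"
    "Sentiment:"
)

def fill_template(x: str) -> str:
    """Substitute the test-time input into the template."""
    return TEMPLATE.format(x=x)

print(fill_template("I love this movie, it was simply fantastic."))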
[Student question, partially inaudible] ...would I write... the real input that...?

Yeah, so typically the template is written when you decide what system you want to create. You decide you want to create a sentiment analysis system, so you create a template that either says something like "please classify the topic," in the case of a model that was trained to follow instructions, or, if you have a base model that was not trained to follow instructions (which is rare nowadays, but GPT-2, or Llama 2 without chat tuning, are examples of that), then you would need to create a template that looks like this, where you put the model in a situation where the next word that follows should be indicative of the answer to your question, like "positive" or "negative" or something like that. Either way, you usually hand-write this when you decide what task you want to do. Then this input X comes at test time; it comes when you actually deploy your system. So this would be something like an Amazon review that you wanted to classify with the model.
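To illustrate that next-word trick for base models, here is a sketch with assumed wording: the prompt is phrased so that the most natural continuation is the label itself.

# Completion-style template for a base (non-chat) model such as GPT-2:
# the prompt is phrased so the next word is indicative of the answer.
def base_model_prompt(review: str) -> str:
    return f"Review: {review}\nOverall, this review is very"

# A base LM continuing this prompt should emit something like
# " positive" or " negative" as the next token.
print(base_model_prompt("I love this movie."))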
Cool, any other questions? Okay, let's move on. So basically this is what is happening behind the scenes. I don't know what OpenAI's format is, because they won't tell us, of course, but I'm assuming that something similar is happening at OpenAI.

Okay, so the next thing that we do is answer prediction: given the prompt, we predict the answer using whatever algorithm we want to use, and here we predict "fantastic." Actually, it might not predict just "fantastic"; it might predict something else, like "overall it was a really fantastic movie that I liked a lot." So based on that, we want to select the actual output out of the generated outputs, and I'm calling this post-processing. For instance, we might take the output as-is (for something like you interacting directly with ChatGPT or another chat model, you might just be looking at the text as-is), or we might format the output for easy visualization, select only the parts of the output that we want to use, or map the output to other actions.

To give an example of formatting: this is a feature of ChatGPT or Bard or anything you interact with, but I wrote "please write a table with the last five presidents and their birth dates," and ChatGPT is happy to do this for me. It says "here is a table with the last five US presidents and their birth dates": Joe Biden, Donald Trump, Barack Obama, George W. Bush, Bill Clinton. But this is written in Markdown, or I assume it's written in Markdown, so it basically makes this table and then renders it in an easy-to-view way. This is really important if you're building a user-facing system, because you want to be able to render these things, but the only thing a large language model can output is text, right? It can output a string of tokens. So this is a really good way to interact with it. I followed up by saying "output that in JSON format," and it says "here's the information in JSON format," and instead of just giving me a big JSON string it gives me syntax highlighting and all the other stuff like that. Presumably what it's doing here is outputting something like a triple hash; the reason I know that is that it seems to be making a mistake down here for some reason, outputting a weirdly formatted thing at the end. So even ChatGPT makes mistakes some of the time.

Cool. Another thing that you might want to do, especially if you're not using it in a directly user-facing application but want to use it to extract some information or make some classification decision, is to select information that's indicative of the answer. So from "I love this movie; overall it was a movie that was simply fantastic," you can do things like extract keywords like "fantastic" and use that to indicate positive sentiment. There are various methods for doing this, and these are also used in the benchmarks that are used to evaluate language models.
So even if you're not building an application directly, and you're just trying to do well in this class and get a high score on a leaderboard or something, it's still useful to know about these things. For things like classification, you can identify keywords like "fantastic" that might be indicative of the class. Another thing that's pretty common, for regression or numerical problems, is to identify numbers, pull out the numbers, and use those numbers as the answer. For code, you can pull out code snippets in triple backticks and then execute the code, for example. All of these are basically heuristic methods, but they can be used to pull out the actual answer that you want from the text that's generated.
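A hedged sketch of those three heuristics in Python; the keyword lists and regular expressions are assumptions for illustration.

import re

# Keyword spotting for classification.
def extract_sentiment(output: str) -> str:
    positive = {"fantastic", "great", "excellent"}
    negative = {"terrible", "awful", "boring"}
    words = set(re.findall(r"[a-z]+", output.lower()))
    if words & positive:
        return "positive"
    if words & negative:
        return "negative"
    return "unknown"

# Number extraction for regression or numerical problems; the last
# number in the generation is often the final answer.
def extract_number(output: str):
    numbers = re.findall(r"-?\d+(?:\.\d+)?", output)
    return float(numbers[-1]) if numbers else None

# Pulling a code snippet out of triple backticks, to execute later.
def extract_code(output: str):
    match = re.search(r"```(?:\w+)?\n(.*?)```", output, re.DOTALL)
    return match.group(1) if match else None

print(extract_sentiment("Overall it was a simply fantastic movie."))
print(extract_number("So the answer is 27."))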
Cool, any questions about that?

The final thing is output mapping: given an answer, map it into a class label or a continuous value. This is doing something like taking "fantastic" and mapping it into the class "positive." So if you want to extract one-to-five-star ratings from reviews, this is something you would need to do, and very often it's a one-class-to-many-words mapping; by doing this you can get a more robust mapping onto the number that you actually want.

I coincidentally saw a really good example of this on Twitter about a week ago. I don't know if I'll be able to find it in a reasonable time frame, but basically there was a person who was using GPT-4 to create a model to reward open-source models for good and bad responses. They started out by asking it for a one-to-five-star rating, and then they switched it to "very good," "good," "okay," "bad," "very bad," and asked it to generate those instead of one to five, and that worked a lot better: the GPT model was a lot more likely to get the answer correct than it was with a one-to-five-star rating. This is something you should think about pretty seriously, and the way you can think about it is: how likely was this data to appear in a large corpus of data on the internet? It might be a lot less likely to see "how good is this movie? five" than "how good is this movie? really good." Just think of the occurrence probability, and you can even mine this data from the web if you want, to try to find the best wordings.

Cool, any questions about this?

[Student question] How is it learning...?

So the model is predicting text, and strictly speaking it's not even predicting the word "fantastic"; it's predicting a token ID, like 73521 or something like that. But if it has seen that token ID more frequently after reviews than it has seen the token ID for the number one or the number five, then it's more likely to predict it accurately, right? It's more likely to predict "fantastic" than to predict "five stars," just because "fantastic" is more frequent. So if you think about what the model has seen in all of the data on the internet, and choose your answer words accordingly, that can give you better results. This is a very important rule of thumb: don't try to make a language model do something it's never seen in the pre-training data, and it will make your life a lot easier. So you can keep that in mind going forward.
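Putting the output-mapping idea into a sketch (the label words here are assumptions; the point is the many-words-to-one-class mapping, using natural words the model has seen often):

# One-class-to-many-words output mapping: several surface forms map
# onto each score, which is more robust than matching one keyword.
SCORE_WORDS = {
    5: {"very good", "fantastic", "excellent"},
    4: {"good"},
    3: {"okay", "average"},
    2: {"bad"},
    1: {"very bad", "terrible"},
}

def map_to_score(answer: str):
    answer = answer.lower()
    pairs = [(w, s) for s, ws in SCORE_WORDS.items() for w in ws]
    # Check longer label words first so "very good" wins over "good".
    for w, s in sorted(pairs, key=lambda p: len(p[0]), reverse=True):
        if w in answer:
            return s
    return None

print(map_to_score("Overall, the response was very good."))  # -> 5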
Cool. So next I want to move into few-shot prompting, or in-context learning. In few-shot prompting, basically what we do is provide a few examples of the task together with the instruction. The way this works is that you write an instruction like "please classify movie reviews as positive or negative" and then add examples: input "I really don't like this movie," output "negative"; input "this movie is great," output "positive."

This is pretty effective, and the things it's most effective for are twofold. First, it's most effective for making sure that you get the formatting right, because if you provide a few examples, the model will tend to follow those examples with respect to formatting, especially if we're talking about GPT-4 or other strong GPT models. Second, it's also effective if you're using weaker models. Stronger models like GPT-4 tend to be pretty good at following instructions, so if you say "please classify movie reviews as positive or negative," they will be more likely to just output "positive" or "negative." But a weaker model might say something like "I think this is probably negative"; it might not follow the instructions as well, and there it's more effective to provide in-context examples. So that's one thing to remember.

One thing I should also mention: when I say few-shot prompting and in-context learning, these are basically the same thing. They refer to the same concept, just from slightly different angles. Few-shot is in contrast to zero-shot: zero-shot means you're providing no examples, while few-shot means you provide several. In-context learning means that you're learning how to do a task, but instead of providing the model with fine-tuning data, you're providing the examples in the language model's context. So they both basically mean the same thing, but they contrast with either zero-shot prompting or fine-tuning, which is why the terminology is different.
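A minimal sketch of assembling such a few-shot prompt; the instruction wording and demonstrations are taken from the example above.

INSTRUCTION = "Please classify movie reviews as 'positive' or 'negative'."

DEMONSTRATIONS = [
    ("I really don't like this movie.", "negative"),
    ("This movie is great.", "positive"),
]

def few_shot_prompt(x: str) -> str:
    """Instruction, then demonstrations, then the test input."""
    lines = [INSTRUCTION, ""]
    for inp, out in DEMONSTRATIONS:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    lines += [f"Input: {x}", "Output:"]
    return "\n".join(lines)

print(few_shot_prompt("The movie was a waste of two hours."))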
[Student question] Can you use few-shot examples through the user interface, with the rendering?

Yes, you can definitely do few-shot prompting there, and I'm actually going to talk about exactly how you do this in an OpenAI model. For OpenAI models, there are a couple of ways you could do this. One way is to alternate the "user" and "assistant" roles, just adding the examples as extra conversational history in the messages you're sending to the language model. But the recommended way of doing this, which is in the OpenAI cookbook (that's in the references), is to send the examples as system messages, but with an additional "name" variable set to "example_user" and "example_assistant." The main reason you do this is that if you send the examples in as user and assistant turns, the model might refer back to the few-shot examples as something that happened previously in the conversation, whereas if you send them in system messages it's guaranteed not to do that. So I think it's less of an accuracy thing and more of a prompt-privacy thing than anything else, but this is the recommended way of doing it. On the other hand, if you're using an open-source model, you need to be careful, because this "name" field might not even be included in the prompt template; for example, in the LiteLLM prompt templates that I was showing, it's not included at all, so you might just get a weird, poorly formatted system message. You need to be a little bit conscious of this.

Cool, any questions here? Does that answer the question? Okay.
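Here's what that cookbook-style message list looks like as a sketch; the review texts are illustrative.

# Few-shot examples sent as system messages with the "name" field set
# to example_user / example_assistant, per the pattern described above.
messages = [
    {"role": "system",
     "content": "Classify movie reviews as positive or negative."},
    {"role": "system", "name": "example_user",
     "content": "I really don't like this movie."},
    {"role": "system", "name": "example_assistant",
     "content": "negative"},
    {"role": "system", "name": "example_user",
     "content": "This movie is great."},
    {"role": "system", "name": "example_assistant",
     "content": "positive"},
    {"role": "user",
     "content": "The acting was wooden and the plot was dull."},
]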
changes and + +00:27:05.000 --> 00:27:12.080 +in context examples that you provide to + +00:27:07.039 --> 00:27:14.600 +them so uh previous work has examined + +00:27:12.080 --> 00:27:19.399 +this from a number of angles there's a + +00:27:14.600 --> 00:27:22.679 +paper by Luol and they examine the + +00:27:19.399 --> 00:27:25.000 +sensitivity to example ordering so like + +00:27:22.679 --> 00:27:28.399 +if you take the same examples and you + +00:27:25.000 --> 00:27:30.840 +just order them in different orders um + +00:27:28.399 --> 00:27:32.679 +you can actually get very wildly + +00:27:30.840 --> 00:27:35.600 +different + +00:27:32.679 --> 00:27:37.520 +results um and this is especially true + +00:27:35.600 --> 00:27:40.320 +for smaller models so the smaller models + +00:27:37.520 --> 00:27:42.720 +here are like the gpt2 models the larger + +00:27:40.320 --> 00:27:47.440 +models here are like the GPT the larger + +00:27:42.720 --> 00:27:47.440 +model here is GPT 3.5 uh I + +00:27:48.399 --> 00:27:54.120 +believe other things that people have + +00:27:50.559 --> 00:27:56.760 +looked at are label balance so um how + +00:27:54.120 --> 00:27:58.559 +important is it for the labels to be + +00:27:56.760 --> 00:28:01.440 +balanced + +00:27:58.559 --> 00:28:02.799 +um and if you're doing sentiment + +00:28:01.440 --> 00:28:05.240 +classification for example you might + +00:28:02.799 --> 00:28:07.519 +have only positive examples or only + +00:28:05.240 --> 00:28:10.000 +negative examples and if you have only + +00:28:07.519 --> 00:28:13.279 +positive or negative examples this can + +00:28:10.000 --> 00:28:15.559 +uh help or hurt your accuracy uh for + +00:28:13.279 --> 00:28:17.200 +example on this Amazon review data set + +00:28:15.559 --> 00:28:18.679 +most of the reviews are positive so you + +00:28:17.200 --> 00:28:20.840 +actually do better by having lots of + +00:28:18.679 --> 00:28:23.640 +positive examples in your in context + +00:28:20.840 --> 00:28:26.600 +examples on the other hand for sst2 this + +00:28:23.640 --> 00:28:29.159 +is label balanced so having only + +00:28:26.600 --> 00:28:31.799 +positive or negative is worse on average + +00:28:29.159 --> 00:28:34.279 +than having three positive and one + +00:28:31.799 --> 00:28:36.679 +negative another thing is label coverage + +00:28:34.279 --> 00:28:38.679 +so if we're talking about multi class + +00:28:36.679 --> 00:28:41.120 +classification um + +00:28:38.679 --> 00:28:42.919 +having good coverage of all of the + +00:28:41.120 --> 00:28:45.919 +classes that you want to include in your + +00:28:42.919 --> 00:28:49.120 +multiclass classification is important + +00:28:45.919 --> 00:28:51.720 +um to some extent but if you have uh + +00:28:49.120 --> 00:28:53.440 +more uh you can also confuse some model + +00:28:51.720 --> 00:28:55.840 +especially if they're minority labels so + +00:28:53.440 --> 00:28:57.799 +if you have a whole bunch of like random + +00:28:55.840 --> 00:28:59.080 +minority labels and that can cause so + +00:28:57.799 --> 00:29:01.399 +this is something important to think + +00:28:59.080 --> 00:29:04.640 +about if you're planning on solving kind + +00:29:01.399 --> 00:29:08.640 +of like classification tests um I I've + +00:29:04.640 --> 00:29:11.000 +also had my own experience with uh using + +00:29:08.640 --> 00:29:13.159 +GPT for evaluation for machine + +00:29:11.000 --> 00:29:14.760 +translation and when we use GPT for + +00:29:13.159 --> 00:29:18.559 +evaluation for machine translation it + +00:29:14.760 --> 
I've also had my own experience with using GPT for evaluation of machine translation, and there it was very important to include high-scoring outputs, low-scoring outputs, and some in the middle among the examples. So the same holds for regression problems as well.

Cool, any questions here?

However, this is not super predictable. As far as I know, there's no rule of thumb that tells you "this is the way you should construct in-context examples." There are lots of papers that say they have methods that work better, but I don't know of any gold-standard industry practice for doing this at the moment. Just to give an example: there's a really nice paper examining why in-context learning works, and one interesting finding they have is that they take in-context examples but randomize the labels, making the labels wrong some of the time. Even with completely wrong labels, even with labels that are correct 0% of the time, you still get much, much better accuracy than if you use no in-context examples. Why is this? Probably because the examples are getting the formatting correct and getting the names of the labels correct, even if the labels themselves aren't accurate. So it seems like the model isn't really using these as training data; it's using them more just to learn the appropriate formatting.

[Student question, partially inaudible] ...you already have... how is it... I guess I'm just asking how you would interpret that.

So you're not training the model at the moment; we're going to talk about that next class. Right now you're taking a model that has already been trained, you're providing it with a few examples, and then you're asking it to fill in the following examples, just from the examples. [Student follow-up] Yes...
Exactly. And it's pretty amazing that this works in the first place, especially with a model that hasn't been explicitly trained that way. There's a fair amount of research, which we'll probably talk about in the interpretability class, about why this happens, but basically my interpretation is that it's because there's so much repetitive stuff on the internet. There are a bunch of examples of math problems formatted like "question one," then the math problem, then the answer, then "question two," then the math problem, then the answer. So in order to model the text on the internet, the model needs to learn how to do these things.

Cool. The second thing is that more demonstrations can sometimes hurt accuracy. This compares binary classification with multiple-choice question answering, and with binary classification the model actually gets worse with more examples, probably just because the longer context confuses the model, or moves the instructions farther away in the context, so it starts forgetting them. So basically what I want to say is that this is more of an art than a science. You might not get entirely predictable results, but don't worry, it's not just you.

[Student question] If the in-context examples reflect the data distribution well, would that boost accuracy?

So the question is: if the in-context examples reflect the data distribution well, would that boost the accuracy? I think the answer is probably yes, though I don't know if it's that clear-cut, because what I would expect is that better coverage is probably more important than better representativeness. Even if some labels are minority labels, it's probably better for the model to know what those minority labels look like, and that's going to be especially true for stronger models, I think.
Cool. Okay, so next I want to talk about chain-of-thought prompting, which is a very popular way of prompting models. The way it works is that you get the model to explain its reasoning before giving an answer. Sorry, this example is a little bit small, but the standard prompting method looks like: "Roger has five tennis balls. He buys two more cans of tennis balls. Each can has three tennis balls. How many tennis balls does he have now? The answer is 11." That's an in-context example, and then you have your input, which is a different problem: "The cafeteria had 23 apples. If they used 20 to make lunch and bought six more, how many apples do they have?" And the model answers 27, which is wrong. What chain-of-thought prompting does is, instead of the example just giving the answer, it gives an additional reasoning chain that says: "Roger started with five balls. Two cans of three tennis balls each is six tennis balls. 5 + 6 = 11. The answer is 11." When you feed this in, the model will generate a similar reasoning chain, and then it's more likely to get the answer correct. This works very robustly for many different problems where a reasoning chain is necessary.
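Here's that prompt written out as a sketch, so you can see exactly what string the model completes.

# A chain-of-thought demonstration followed by the test question; the
# model should produce a similar reasoning chain before its final
# answer (23 - 20 = 3, then 3 + 6 = 9, "The answer is 9.").
COT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6
tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and
bought 6 more, how many apples do they have?
A:"""

print(COT_PROMPT)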
If you think about the reason this works, I think there are basically two reasons. The first reason (I only wrote one on the slide here) is that it allows the model to decompose harder problems into simpler problems, and simpler problems are easier, right? Instead of immediately trying to solve the whole problem in a single go, it will first solve the sub-problem of how many apples are left after they use 20, and it gets three. Now it has this three, so it can solve the next problem of adding six, which equals nine. So it's solving simpler sub-problems rather than harder ones. Another reason is that it allows for adaptive computation time. If you think about a Transformer model, a Transformer has fixed computation time for predicting each token: a fixed number of layers, and based on that fixed number of layers it passes all the information through and makes a prediction. But some problems are harder than others, right? It would be very wasteful to have a really big Transformer that solves really complex math problems in the same amount of time it takes to predict that the next word after "the big" is "dog," or something like that. There are some things we can do in a second and some things that take us more time, and essentially this chain-of-thought reasoning is doing exactly that: it's giving the model more time to solve the harder problems.

[Student question] What if you just ask it to reason?

Okay, good question; that's what this next paper does. The question was: what happens if we just ask it to reason? And the answer is that it still works. I love this paper for its simplicity and cleverness. Basically, they contrast few-shot learning, few-shot chain of thought (where you provide chain-of-thought examples), zero-shot prompting, and zero-shot chain of thought. What they do is just add "let's think step by step": they add that phrase to the end of the prompt, and that elicits the model to do chain-of-thought reasoning without any further examples of how that chain-of-thought reasoning should look.
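As a sketch, zero-shot chain of thought is literally a one-line change to the prompt.

# Zero-shot chain of thought: append the trigger phrase so the model
# reasons step by step before answering, with no demonstrations.
def zero_shot_cot(question: str) -> str:
    return f"Q: {question}\nA: Let's think step by step."

print(zero_shot_cot(
    "The cafeteria had 23 apples. If they used 20 to make lunch "
    "and bought 6 more, how many apples do they have?"
))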
Why does this work? Again, because on the internet there are a bunch of examples of math-problem-solving datasets or QA corpora where it says "let's think step by step," and after that you consistently have this sort of reasoning chain. So good intuition; that's the question this paper answers, and it actually does work.

One interesting thing: if I go to ChatGPT now and I say, "I am teaching a class with 98 students. 70% turned in the assignment on time and 10% turned it in late. How many did not turn it in?" Let's see if this works. Okay, it's writing code for me, which is a feature I kind of didn't want it to use here. Okay... it's a little bit slow... okay, there, that worked. But note that I did not say "let's think step by step"; I didn't ask it to do this. The reason why (we're going to talk about instruction tuning next time) is that GPT has been tuned to do this reasoning even if you don't ask it to. It wouldn't do that naturally, but lots of supervised data has been added into this model. So another thing: if you're planning on doing anything about chain-of-thought reasoning or stuff like that as a class project, you need to keep in mind that the GPT models have already been trained to do this. If you want to find a better way to elicit this from a raw model, you'll need to use a raw model like Llama 2 with no chat tuning, so that you can study it in a neutral setting that hasn't been contaminated by supervised tuning.

Cool, any questions?
Okay, so next I want to talk about prompting in programs, and ChatGPT actually just gave me a good example of why this is useful. There are two results here; both are from my collaborators. The first one demonstrates that structuring outputs as programs can help you get better results even if the task isn't a programmatic task, which is kind of interesting. We were looking at predicting structured outputs, and these structured outputs specifically were procedural knowledge, like: how do we cook a pie, or how do we serve pot pies on a plate? We had procedural knowledge like "take the pies out to cool," "open the cabinet drawer," "take out several plates," and we wanted to know the dependencies between these steps so we could create a structured procedural knowledge base. This is not an inherently code-based task; you could just ask the model in natural language and that would work as well. So we structured things in a couple of varieties: we had a textual format, we had something in the DOT format (which is a way to describe graphs), and we also tried structuring the output in Python. These are just different ways to format the output; they all say the same thing, and we can extract the answer from all of them. But we found that structuring it in Python is basically the most effective way of doing this.

So why is this the case? The answer is essentially the same thing I was talking about before with predicting "excellent" instead of "five": the model has seen a ton of Python in its training data, so it's very good at predicting Python; it's less good at predicting the DOT format because it's seen less DOT format, and it hasn't seen very much of this particular textual format. Another comment is that code is very highly structured compared to natural language.
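To make that concrete, here is an illustrative sketch (my own wording, not the paper's exact format) of the same procedural dependencies written as Python, which is the kind of output the model predicted most reliably.

# The pie-serving procedure expressed as Python, with dependencies as
# references back to previously defined steps.
class Step:
    def __init__(self, description, depends_on=()):
        self.description = description
        self.depends_on = list(depends_on)

take_pies_out_to_cool = Step("Take the pies out to cool.")
open_cabinet_drawer = Step("Open the cabinet drawer.")
take_out_plates = Step("Take out several plates.",
                       depends_on=[open_cabinet_drawer])
serve_pies = Step("Serve the pot pies on the plates.",
                  depends_on=[take_pies_out_to_cool, take_out_plates])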
Because code is so highly structured, we have things like dependencies, where we refer back to variables that we defined before, and other things like that. So I think when a model starts outputting code, it gets into a mode that says: yes, refer back to the things you've seen previously more often, attend to previous content more often, and don't just generate things arbitrarily and hallucinate new content. Because of this, for generating structured outputs, even if the structured outputs don't need to be code, you can benefit from doing this.

Another really handy trick: any time you want to get a structured output out of a model, you can ask it to generate the output in JSON instead of in plain text. The reason JSON is useful is that you can parse it and pull out the strings and the other fields. This can be very effective, because if you just add an instruction that says "please format things in this particular format," the model often won't listen to you and will output something in a different format, and then you need to write a really annoying parser to pull out the information you actually want. But it gets JSON right almost all of the time, just because it's seen so much JSON. So that's a nice trick if you want to do something like that.
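A sketch of the JSON trick; model_output stands in for what a model returned after being asked to answer in JSON format.

import json

# Stand-in for a model generation after asking for JSON output.
model_output = """\
{"presidents": [
  {"name": "Joe Biden", "birth_date": "1942-11-20"},
  {"name": "Donald Trump", "birth_date": "1946-06-14"}
]}"""

# Parsing the JSON replaces a brittle ad-hoc text parser.
data = json.loads(model_output)
for president in data["presidents"]:
    print(president["name"], president["birth_date"])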
Another thing is a paper called Program-Aided Language Models that we did about a year ago. The method we proposed there uses a program to generate outputs, which can be more precise than asking an LM to do it directly. So instead of doing chain-of-thought prompting, we created few-shot examples where we wrote the text in English, then corresponding code, then more text in English, then more corresponding code, and then "the answer is" followed by the final code. Then we generate this code and execute it to get the answer. As you saw, this is implemented in ChatGPT now: you write something out, and it decides whether it wants to generate code or generate text depending on the type of problem, and it's just more precise. It can solve rather complex problems, like calculating how much tax you need to pay or something like that. It's especially useful for numeric questions, and it's implemented in things like the ChatGPT code interpreter, Bard's tool execution, and other things like that. It's pretty cool: it can actually do visualizations for you for papers, too. If you ask it to visualize data for you, ChatGPT now does a pretty good job. To give an example, I gave it a big Python list and asked it to generate a histogram, and it did a really good job for me. It also gives you the code, so you can go in and modify it later. So I would definitely recommend thinking about using this, either in your research or just to write your reports for this class. This class is mostly generative-AI friendly; I do want you to learn the things we expect you to learn, which is why I suggest that you don't just write everything for assignment number one with ChatGPT, but I think even if you tried to do that, it would probably get things wrong in subtle ways, so you're probably better off understanding the content anyway.

This can also be expanded a whole lot into agents and tools, and I'm going to talk about that separately later. Cool, any questions about this?
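A hedged sketch of the generate-then-execute idea: generated stands in for model output, and executing untrusted model code should of course be sandboxed in practice.

# Stand-in for code a model generated for the cafeteria word problem.
generated = """\
apples_initial = 23
apples_used = 20
apples_bought = 6
answer = apples_initial - apples_used + apples_bought
"""

namespace = {}
exec(generated, namespace)  # in practice, run this in a sandbox
print(namespace["answer"])  # -> 9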
Okay, I'm going to go on to the next topic: prompt engineering. When you're designing prompts, there are a number of different ways you can do it. You can do it manually: you configure a manual template based on the characteristics of the task, using all of the knowledge that I told you about before. You can also do automated search, and there are a number of different ways to do automated search for prompts. The first is some sort of search in discrete space, where you find a prompt that is essentially text. The other is search in continuous space, where you find a prompt that isn't actually comprehensible text but is nonetheless a good prompt.

Looking at manual engineering: making sure that the format matches that of the trained model, such as the chat format, is actually really, really important, and it can have a large effect on models. There's a really nice paper that demonstrated this convincingly and also released some software that lets you test this in an efficient manner. What it shows is this: here is the original formatting of a prompt that was given, I believe, by some sort of machine reading or document-based question answering dataset, which was like "Passage: ... Answer: ...". If you modify the spacing between the fields, that changes your score by several percentage points. If you remove the colons, that changes your score by a few more percentage points. It's kind of silly, but little things like this can actually make a really big difference. If you modify the casing, the score decreases by a lot; if you modify the casing and remove the colons (the colons were the useful part), that decreases it further. And if you forget to add a space between the passage and the text, that really hurts your accuracy. This is pretty painful, right? You don't want to be getting an accuracy of 0.036 when adding a space would give you something like 75% accuracy.
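If you want to check this yourself, a sketch like the following enumerates small format variants to evaluate; the variants and the hypothetical score() helper are assumptions for illustration.

from itertools import product

# Small, plausible formatting choices of the kind the paper varies.
separators = ["\n", "\n\n", " "]
field_styles = [
    "Passage: {p}{sep}Answer: {a}",
    "Passage {p}{sep}Answer {a}",    # colons removed
    "PASSAGE: {p}{sep}ANSWER: {a}",  # casing changed
]

formats = [style.replace("{sep}", sep)
           for style, sep in product(field_styles, separators)]

# Each candidate would then be scored on a dev set with a hypothetical
# score(fmt) helper; the spread across candidates can be large.
print(f"{len(formats)} candidate formats to evaluate")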
+00:49:48.680 --> 00:49:53.160 +this is looking + +00:49:50.799 --> 00:49:56.559 +at different + +00:49:53.160 --> 00:49:58.520 +models and um + +00:49:56.559 --> 00:50:00.640 +with different models it's pretty + +00:49:58.520 --> 00:50:03.599 +consistent that many different plausible + +00:50:00.640 --> 00:50:05.400 +formats that you try the average gives + +00:50:03.599 --> 00:50:07.240 +you a really low accuracy but there's a + +00:50:05.400 --> 00:50:08.760 +few outliers that give you really good + +00:50:07.240 --> 00:50:11.119 +accuracy and these probably correspond + +00:50:08.760 --> 00:50:13.400 +to the things that it was trained on um + +00:50:11.119 --> 00:50:15.880 +instruction tuned on or or other things + +00:50:13.400 --> 00:50:17.480 +like this so number one make sure you're + +00:50:15.880 --> 00:50:19.799 +using like the canonical prompt + +00:50:17.480 --> 00:50:21.240 +formatting for the model for sure number + +00:50:19.799 --> 00:50:22.640 +two you might want to do a little bit of + +00:50:21.240 --> 00:50:24.720 +additional search to see if you can do + +00:50:22.640 --> 00:50:26.960 +even better than that so um this is + +00:50:24.720 --> 00:50:29.480 +something to be very aware + +00:50:26.960 --> 00:50:32.480 +of + +00:50:29.480 --> 00:50:32.480 +um + +00:50:34.200 --> 00:50:37.680 +okay do you have a + +00:50:39.599 --> 00:50:43.720 +question this is dependent on what it + +00:50:41.720 --> 00:50:47.680 +sees in trading time another thing + +00:50:43.720 --> 00:50:51.920 +actually is um this will definitely be + +00:50:47.680 --> 00:50:53.200 +tighter for uh like a chat GPT or GPT 4 + +00:50:51.920 --> 00:50:56.599 +um because it's been trained on many + +00:50:53.200 --> 00:50:59.319 +different formats at training time um + +00:50:56.599 --> 00:51:00.880 +and so the better the model has been + +00:50:59.319 --> 00:51:03.520 +trained on a lot of different formats + +00:51:00.880 --> 00:51:05.559 +the less this is going to have an + +00:51:03.520 --> 00:51:06.920 +effect but you know you're probably not + +00:51:05.559 --> 00:51:09.440 +going to be retraining a model that + +00:51:06.920 --> 00:51:10.799 +somebody gives you uh so like this is + +00:51:09.440 --> 00:51:12.880 +something to be very aware of if you're + +00:51:10.799 --> 00:51:14.839 +just a downstream newer model especially + +00:51:12.880 --> 00:51:17.599 +an open source + +00:51:14.839 --> 00:51:19.359 +model um another thing is how do you + +00:51:17.599 --> 00:51:22.280 +give instructions to + +00:51:19.359 --> 00:51:25.000 +models um instructions should be clear + +00:51:22.280 --> 00:51:29.280 +concise and easy to understand one very + +00:51:25.000 --> 00:51:31.559 +funny thing is um I think now like + +00:51:29.280 --> 00:51:33.280 +actually prompting language models is + +00:51:31.559 --> 00:51:34.960 +very similar to prompting humans + +00:51:33.280 --> 00:51:37.119 +especially if we're talking about like + +00:51:34.960 --> 00:51:38.760 +gp4 so if you're not very good at + +00:51:37.119 --> 00:51:41.599 +explaining things to humans that might + +00:51:38.760 --> 00:51:45.440 +actually be bad um and you might want to + +00:51:41.599 --> 00:51:47.359 +practice that and explaining things to + +00:51:45.440 --> 00:51:50.319 +models might be a good way to practice + +00:51:47.359 --> 00:51:51.799 +that right so you know um it actually + +00:51:50.319 --> 00:51:54.040 +can give you feedback without annoying + +00:51:51.799 --> 00:51:55.359 +your friends by having you explain uh + +00:51:54.040 
--> 00:51:58.160 +things to them in several different ways + +00:51:55.359 --> 00:52:00.040 +way and seeing how they react so um but + +00:51:58.160 --> 00:52:03.680 +anyway clear concise easy to understand + +00:52:00.040 --> 00:52:05.319 +is good um there's this prompting guide + +00:52:03.680 --> 00:52:08.599 +uh which I I can + +00:52:05.319 --> 00:52:13.240 +open um this has a prompt engineering + +00:52:08.599 --> 00:52:14.520 +guide I I I like this site but it it + +00:52:13.240 --> 00:52:17.400 +does have a bit + +00:52:14.520 --> 00:52:18.880 +of like variance in the importance of + +00:52:17.400 --> 00:52:21.760 +the information that tells you but like + +00:52:18.880 --> 00:52:23.960 +this particular page is nice I feel so + +00:52:21.760 --> 00:52:26.160 +start simple start with simple + +00:52:23.960 --> 00:52:29.520 +instructions um + +00:52:26.160 --> 00:52:32.119 +you should tell the model what it should + +00:52:29.520 --> 00:52:36.839 +be doing so make sure you say write + +00:52:32.119 --> 00:52:39.799 +classify summarize translate order um + +00:52:36.839 --> 00:52:41.960 +and things like this uh it also gives + +00:52:39.799 --> 00:52:45.440 +some good examples of the level of + +00:52:41.960 --> 00:52:47.559 +specificity that you should be giving so + +00:52:45.440 --> 00:52:49.680 +something that's less precise is explain + +00:52:47.559 --> 00:52:51.559 +the concept of prompt engineering keep + +00:52:49.680 --> 00:52:53.920 +the explanation short only a few + +00:52:51.559 --> 00:52:57.119 +sentences and don't be too + +00:52:53.920 --> 00:52:58.799 +descriptive um it use two to three + +00:52:57.119 --> 00:53:00.240 +sentences to explain the concept of + +00:52:58.799 --> 00:53:02.599 +prompt engineering to a high school + +00:53:00.240 --> 00:53:04.839 +student so what this does is this tells + +00:53:02.599 --> 00:53:07.839 +you the level of read like the reading + +00:53:04.839 --> 00:53:07.839 +level + +00:53:07.960 --> 00:53:12.520 +um so this doesn't even tell you the + +00:53:10.200 --> 00:53:14.319 +reading level I guess um and then two to + +00:53:12.520 --> 00:53:16.240 +three sentences is more precise than + +00:53:14.319 --> 00:53:19.200 +keep it a few sentences don't be too + +00:53:16.240 --> 00:53:22.440 +descriptive so um the more precise you + +00:53:19.200 --> 00:53:25.760 +can be the the better it is um one + +00:53:22.440 --> 00:53:27.040 +interesting thing is like if you ask + +00:53:25.760 --> 00:53:28.359 +your friend to do something and they + +00:53:27.040 --> 00:53:32.400 +don't know how to do it they'll complain + +00:53:28.359 --> 00:53:34.240 +to you but right now uh LMS don't + +00:53:32.400 --> 00:53:35.720 +complain to you they may in the future + +00:53:34.240 --> 00:53:38.680 +uh that might be like actually an + +00:53:35.720 --> 00:53:40.799 +interesting thing to find uh the you + +00:53:38.680 --> 00:53:42.319 +know interesting methodological thing to + +00:53:40.799 --> 00:53:45.240 +look at for a project or something like + +00:53:42.319 --> 00:53:47.960 +that but um right now you need to be + +00:53:45.240 --> 00:53:49.040 +precise and like there's it doesn't give + +00:53:47.960 --> 00:53:51.799 +you feedback when you're not being + +00:53:49.040 --> 00:53:51.799 +precise so you need + +00:53:52.000 --> 00:53:56.359 +to um separately from this there are + +00:53:54.200 --> 00:53:59.160 +methods for automatic prompt engineering + +00:53:56.359 --> 00:54:00.960 +so uh prompt paraphrasing gradient based + +00:53:59.160 --> 
00:54:02.240 +discreet prompt search prompt tuning + +00:54:00.960 --> 00:54:06.160 +prefix + +00:54:02.240 --> 00:54:09.880 +tuning so prompt paraphrasing um this is + +00:54:06.160 --> 00:54:12.559 +a method that uh we proposed a while ago + +00:54:09.880 --> 00:54:15.760 +um to basically paraphrase an existing + +00:54:12.559 --> 00:54:17.280 +prompt to get other candidates um it's + +00:54:15.760 --> 00:54:19.240 +rather simple basically you take a + +00:54:17.280 --> 00:54:21.960 +prompt you put it through a paraphrasing + +00:54:19.240 --> 00:54:24.280 +model and it will give you new prompts + +00:54:21.960 --> 00:54:25.440 +and this is good because it will tend to + +00:54:24.280 --> 00:54:28.319 +give you things that are natural + +00:54:25.440 --> 00:54:29.839 +language um you can paraphrase 50 times + +00:54:28.319 --> 00:54:32.480 +try all of them see which one gives you + +00:54:29.839 --> 00:54:37.079 +the highest accuracy and then use that + +00:54:32.480 --> 00:54:39.280 +one um there's also an interesting paper + +00:54:37.079 --> 00:54:43.079 +uh that demonstrates that you can do + +00:54:39.280 --> 00:54:45.240 +this iteratively so you paraphrase once + +00:54:43.079 --> 00:54:46.599 +um you filter down all the candidates + +00:54:45.240 --> 00:54:48.119 +that do well and then you go in and + +00:54:46.599 --> 00:54:49.960 +paraphrase them again and you just do + +00:54:48.119 --> 00:54:51.960 +this over and over again and that can + +00:54:49.960 --> 00:54:54.079 +give you better results than kind of one + +00:54:51.960 --> 00:54:57.079 +one off + +00:54:54.079 --> 00:54:57.079 +paraphrasing + +00:54:59.240 --> 00:55:02.079 +so that's very simple you can even use a + +00:55:01.079 --> 00:55:04.160 +large language model to do the + +00:55:02.079 --> 00:55:06.599 +paraphrasing for you um another thing + +00:55:04.160 --> 00:55:08.920 +that you can do is gradient based search + +00:55:06.599 --> 00:55:11.119 +so the way this works is you need to + +00:55:08.920 --> 00:55:16.319 +have a a model that you can calculate + +00:55:11.119 --> 00:55:19.920 +gradients for and what you do is you + +00:55:16.319 --> 00:55:22.240 +calculate you create a seed prompt and + +00:55:19.920 --> 00:55:26.000 +then you calculate gradients into that + +00:55:22.240 --> 00:55:29.760 +seed prompt so you treat the + +00:55:26.000 --> 00:55:33.160 +um you treat each of the tokens here + +00:55:29.760 --> 00:55:36.680 +like T1 T2 T3 T4 + +00:55:33.160 --> 00:55:38.240 +T5 as their own embeddings you do back + +00:55:36.680 --> 00:55:39.920 +propop into those embeddings and you + +00:55:38.240 --> 00:55:42.799 +optimize them to get high accuracy on + +00:55:39.920 --> 00:55:44.720 +your data set then after you're done + +00:55:42.799 --> 00:55:47.319 +optimizing them to get high accuracy on + +00:55:44.720 --> 00:55:49.079 +your data set you clamp them onto the + +00:55:47.319 --> 00:55:52.160 +nearest neighbor embedding that you + +00:55:49.079 --> 00:55:53.520 +already have so you basically say okay + +00:55:52.160 --> 00:55:56.720 +the nearest neighbor to the embedding + +00:55:53.520 --> 00:55:58.920 +that I learned you um is atmosphere then + +00:55:56.720 --> 00:56:02.240 +a lot dialogue clone + +00:55:58.920 --> 00:56:03.799 +totally and so this is this will + +00:56:02.240 --> 00:56:05.599 +actually give you better results than + +00:56:03.799 --> 00:56:07.839 +paraphrasing in many cases because the + +00:56:05.599 --> 00:56:11.520 +search space is less constrained you can + +00:56:07.839 --> 
00:56:12.960
+get these very unnatural prompts uh that
+
+00:56:11.520 --> 00:56:16.280
+don't seem to make sense but actually
+
+00:56:12.960 --> 00:56:20.280
+work well this has particularly been
+
+00:56:16.280 --> 00:56:22.960
+widely used in um adversarial attacks on
+
+00:56:20.280 --> 00:56:25.599
+language models so how can you come up
+
+00:56:22.960 --> 00:56:27.720
+with um
+
+00:56:25.599 --> 00:56:31.559
+with prompts that
+
+00:56:27.720 --> 00:56:33.319
+cause language models to uh do things
+
+00:56:31.559 --> 00:56:36.039
+that you don't want them to be
+
+00:56:33.319 --> 00:56:38.920
+doing and um there's actually this nice
+
+00:56:36.039 --> 00:56:41.440
+paper uh also by people at CMU called
+
+00:56:38.920 --> 00:56:42.960
+Universal and Transferable Adversarial
+
+00:56:41.440 --> 00:56:45.400
+Attacks on Aligned Language
+
+00:56:42.960 --> 00:56:50.559
+Models and basically what they do is
+
+00:56:45.400 --> 00:56:53.880
+they try to optimize the uh they try to
+
+00:56:50.559 --> 00:56:56.839
+optimize the prompt to create a prompt
+
+00:56:53.880 --> 00:56:58.599
+that causes the model to do bad things
+
+00:56:56.839 --> 00:57:00.039
+basically and they try to do it even on
+
+00:56:58.599 --> 00:57:03.440
+models that have been trained to not do
+
+00:57:00.039 --> 00:57:05.039
+bad things and they demonstrate that
+
+00:57:03.440 --> 00:57:07.359
+number one you can cause things like
+
+00:57:05.039 --> 00:57:09.599
+models like Llama to do you know bad
+
+00:57:07.359 --> 00:57:12.559
+things like output toxic things tell you
+
+00:57:09.599 --> 00:57:15.599
+how to build bombs stuff like that but
+
+00:57:12.559 --> 00:57:18.480
+also the same prompts also work on like
+
+00:57:15.599 --> 00:57:22.319
+GPT models uh which is kind of like
+
+00:57:18.480 --> 00:57:23.839
+interesting and and very uh you know
+
+00:57:22.319 --> 00:57:26.520
+confusing in a way because you thought
+
+00:57:23.839 --> 00:57:28.160
+this might be exploiting idiosyncrasies of a
+
+00:57:26.520 --> 00:57:32.440
+particular language model but actually
+
+00:57:28.160 --> 00:57:32.440
+it's not so I I find this kind of
+
+00:57:33.880 --> 00:57:39.520
+fascinating
+
+00:57:36.039 --> 00:57:42.240
+so if you take that a step further one
+
+00:57:39.520 --> 00:57:44.079
+thing that you can do is you can say oh
+
+00:57:42.240 --> 00:57:46.280
+actually there's no reason why we need
+
+00:57:44.079 --> 00:57:48.520
+to clamp these embeddings back to an
+
+00:57:46.280 --> 00:57:52.240
+existing embedding right so we could
+
+00:57:48.520 --> 00:57:56.079
+just optimize the prompts the embeddings
+
+00:57:52.240 --> 00:57:57.720
+of the prompts that go for a task and
+
+00:57:56.079 --> 00:58:02.000
+not clamp them back to embeddings and
+
+00:57:57.720 --> 00:58:03.599
+just keep them as is so um what I mean
+
+00:58:02.000 --> 00:58:07.079
+by that is like right here it's
+
+00:58:03.599 --> 00:58:09.160
+optimizing T1 T2 T3 T4 T5 and then
+
+00:58:07.079 --> 00:58:11.359
+clamping that back to atmosphere a lot
+
+00:58:09.160 --> 00:58:13.960
+dialogue clone totally but just keep them
+
+00:58:11.359 --> 00:58:16.160
+as is and don't worry about them like
+
+00:58:13.960 --> 00:58:18.039
+actually being a token in the model
+
+00:58:16.160 --> 00:58:19.400
+because if you have control over your
+
+00:58:18.039 --> 00:58:21.200
+model you can just add them as new
+
+00:58:19.400 --> 00:58:25.960
+elements in the vocabulary and you're
+
+00:58:21.200 --> 00:58:28.440
+fine right so what
they demonstrate in + +00:58:25.960 --> 00:58:31.520 +this paper is that instead of taking + +00:58:28.440 --> 00:58:33.440 +your 11 billion parameter model and + +00:58:31.520 --> 00:58:35.920 +training the whole 11 billion parameter + +00:58:33.440 --> 00:58:38.359 +model for many different tasks on many + +00:58:35.920 --> 00:58:40.079 +different data sets they just train + +00:58:38.359 --> 00:58:42.039 +these prompts which are like 20K + +00:58:40.079 --> 00:58:44.039 +parameters each I I forget how long it + +00:58:42.039 --> 00:58:46.280 +is it's like 10 tokens or 20 tokens or + +00:58:44.039 --> 00:58:48.079 +something like that um and train it on + +00:58:46.280 --> 00:58:49.640 +all of the the data sets here and you + +00:58:48.079 --> 00:58:50.680 +don't actually need to do multitask + +00:58:49.640 --> 00:58:52.200 +learning you don't need to train on + +00:58:50.680 --> 00:58:53.720 +multiple tasks at the same time you can + +00:58:52.200 --> 00:58:56.119 +just train on a single + +00:58:53.720 --> 00:58:58.599 +task + +00:58:56.119 --> 00:59:01.000 +so now let's take that even a step + +00:58:58.599 --> 00:59:03.640 +further so this is only training the + +00:59:01.000 --> 00:59:06.359 +embeddings that you input into the model + +00:59:03.640 --> 00:59:08.160 +there's a method called prefix tuning + +00:59:06.359 --> 00:59:10.319 +and the way prefix tuning works is + +00:59:08.160 --> 00:59:12.280 +instead of training only the embeddings + +00:59:10.319 --> 00:59:14.799 +that go into the model they actually + +00:59:12.280 --> 00:59:18.920 +train a prefix that you then append to + +00:59:14.799 --> 00:59:20.839 +every layer of the model so prompt + +00:59:18.920 --> 00:59:23.319 +tuning basically does this for the first + +00:59:20.839 --> 00:59:24.839 +layer of the model prefix tuning does + +00:59:23.319 --> 00:59:28.400 +this for every layer of the model you + +00:59:24.839 --> 00:59:30.319 +append a prefix uh for every day so it's + +00:59:28.400 --> 00:59:32.200 +just a more expressive version of + +00:59:30.319 --> 00:59:36.119 +prompting + +00:59:32.200 --> 00:59:40.200 +essentially so these are all kinds of + +00:59:36.119 --> 00:59:43.680 +gradual steps from a human created + +00:59:40.200 --> 00:59:47.880 +prompt into something that is basically + +00:59:43.680 --> 00:59:50.839 +training a a prompt or a prefix to the + +00:59:47.880 --> 00:59:52.960 +model so I I would take questions but + +00:59:50.839 --> 00:59:55.200 +let me get to the end of this section uh + +00:59:52.960 --> 00:59:58.839 +also because uh I think there's + +00:59:55.200 --> 01:00:00.720 +interesting analogies here so in the + +00:59:58.839 --> 01:00:02.880 +next class I'm going to talk about + +01:00:00.720 --> 01:00:04.440 +parameter efficient fine-tuning methods + +01:00:02.880 --> 01:00:06.960 +which is kind of a more + +01:00:04.440 --> 01:00:10.000 +General it's a more + +01:00:06.960 --> 01:00:11.480 +General version of prompt tuning or + +01:00:10.000 --> 01:00:13.280 +prefix tuning there are methods that + +01:00:11.480 --> 01:00:15.960 +tune a small number of parameters to get + +01:00:13.280 --> 01:00:17.400 +the model to do something and there's a + +01:00:15.960 --> 01:00:18.880 +bunch of different parameter efficient + +01:00:17.400 --> 01:00:21.520 +tuning methods many people may have + +01:00:18.880 --> 01:00:23.880 +heard of something like Laura uh or + +01:00:21.520 --> 01:00:25.440 +adapters um I just talked about prefix + +01:00:23.880 --> 01:00:28.119 +tuning + +01:00:25.440 
--> 01:00:30.960 +so essentially prompt tuning and prefix + +01:00:28.119 --> 01:00:33.359 +tuning are part of this more General + +01:00:30.960 --> 01:00:36.680 +class of parameter efficient find tuning + +01:00:33.359 --> 01:00:39.240 +methods and so what we can say is + +01:00:36.680 --> 01:00:41.119 +actually prompting is fine-tuning + +01:00:39.240 --> 01:00:42.920 +prompting is a way of fine-tuning the + +01:00:41.119 --> 01:00:46.799 +model or getting the model to perform a + +01:00:42.920 --> 01:00:49.839 +particular task um and we have this + +01:00:46.799 --> 01:00:53.720 +taxonomy of we have prompts in natural + +01:00:49.839 --> 01:00:55.160 +language that are created uh by humans + +01:00:53.720 --> 01:00:57.240 +actually maybe I should say manual + +01:00:55.160 --> 01:00:59.559 +prompt engineering here this was first + +01:00:57.240 --> 01:01:01.480 +done in the gpd2 paper where they + +01:00:59.559 --> 01:01:04.359 +demonstrate that models uh models could + +01:01:01.480 --> 01:01:06.200 +solve tasks by doing it this way prompt + +01:01:04.359 --> 01:01:07.760 +paraphrasing is a step up from this + +01:01:06.200 --> 01:01:09.799 +because it's no longer relying on human + +01:01:07.760 --> 01:01:12.680 +engineering and you can you know expand + +01:01:09.799 --> 01:01:15.280 +to a broader set of prompts um it can + +01:01:12.680 --> 01:01:17.359 +always start with human created prompts + +01:01:15.280 --> 01:01:20.240 +so it's kind of like broader uh than + +01:01:17.359 --> 01:01:21.799 +that discrete prompt search doesn't + +01:01:20.240 --> 01:01:23.599 +necessarily need to rely on a + +01:01:21.799 --> 01:01:25.559 +paraphrasing model it could rely on like + +01:01:23.599 --> 01:01:26.760 +gradient-based models or something else + +01:01:25.559 --> 01:01:29.240 +like that to give you something that's + +01:01:26.760 --> 01:01:32.559 +not actually natural language uh kind of + +01:01:29.240 --> 01:01:35.920 +just random tokens continuous prompts or + +01:01:32.559 --> 01:01:38.119 +prompt tuning is a step above that + +01:01:35.920 --> 01:01:41.039 +multi-layer continuous prompts or prefix + +01:01:38.119 --> 01:01:42.520 +tuning is a layer above that parameter + +01:01:41.039 --> 01:01:43.520 +efficient tuning is more General than + +01:01:42.520 --> 01:01:45.359 +that and then you have all training + +01:01:43.520 --> 01:01:49.160 +methods so including fine tuning your + +01:01:45.359 --> 01:01:52.680 +model and so what are the implications + +01:01:49.160 --> 01:01:55.760 +of this um I think so a lot of people + +01:01:52.680 --> 01:01:58.720 +when prompting came out they were like + +01:01:55.760 --> 01:02:00.640 +prompting methods are very hacky I don't + +01:01:58.720 --> 01:02:03.839 +like how we have to do manual prompt + +01:02:00.640 --> 01:02:08.160 +engineering um it seems like a dark art + +01:02:03.839 --> 01:02:11.000 +as opposed to like you know actually you + +01:02:08.160 --> 01:02:14.160 +know some sort of well understood + +01:02:11.000 --> 01:02:16.839 +fine-tuning method that we could use um + +01:02:14.160 --> 01:02:20.520 +but I I actually like them I like + +01:02:16.839 --> 01:02:23.920 +prompting a lot because um if anybody is + +01:02:20.520 --> 01:02:25.960 +familiar with like basian basian + +01:02:23.920 --> 01:02:27.920 +statistics or machine learning we have + +01:02:25.960 --> 01:02:28.799 +the concept of like a prior probability + +01:02:27.920 --> 01:02:31.200 +over + +01:02:28.799 --> 01:02:32.359 +parameters and then a probability 
that + +01:02:31.200 --> 01:02:34.680 +we get + +01:02:32.359 --> 01:02:37.880 +after after fine tuning the model or + +01:02:34.680 --> 01:02:40.440 +after training the model and prompts in + +01:02:37.880 --> 01:02:42.640 +a way are our first like good prior over + +01:02:40.440 --> 01:02:43.880 +neural network models they give us the + +01:02:42.640 --> 01:02:46.319 +ability to + +01:02:43.880 --> 01:02:48.559 +specify what task the model should be + +01:02:46.319 --> 01:02:51.880 +doing or like a general idea of what + +01:02:48.559 --> 01:02:54.200 +task the model should be doing before we + +01:02:51.880 --> 01:02:56.359 +ask the model to actually do the task + +01:02:54.200 --> 01:02:58.640 +and and so we can either use that prior + +01:02:56.359 --> 01:03:02.119 +Asis we can use a prompted model Asis + +01:02:58.640 --> 01:03:04.839 +without doing any additional tuning or + +01:03:02.119 --> 01:03:06.480 +we could take the prior that we have + +01:03:04.839 --> 01:03:07.920 +given to the model by using a natural + +01:03:06.480 --> 01:03:09.039 +language description of the task it + +01:03:07.920 --> 01:03:12.079 +should be + +01:03:09.039 --> 01:03:14.799 +doing and then combine it with fineing + +01:03:12.079 --> 01:03:17.039 +so we can take the prompted + +01:03:14.799 --> 01:03:19.279 +model we can + +01:03:17.039 --> 01:03:21.640 +initialize we can initialize the + +01:03:19.279 --> 01:03:23.960 +distribution of this like Cas a prompt + +01:03:21.640 --> 01:03:25.720 +using the prompt using a human created + +01:03:23.960 --> 01:03:28.160 +prompt and then go on and fine-tune it + +01:03:25.720 --> 01:03:30.960 +on lots of training data as well and + +01:03:28.160 --> 01:03:33.799 +there's a method for doing that um by + +01:03:30.960 --> 01:03:35.880 +shik and schutza uh called uh pattern + +01:03:33.799 --> 01:03:37.559 +exploiting training where they do + +01:03:35.880 --> 01:03:39.799 +exactly that they basically initialize + +01:03:37.559 --> 01:03:41.720 +with a manually created prompt and then + +01:03:39.799 --> 01:03:44.559 +they find the model on finding inator + +01:03:41.720 --> 01:03:46.400 +after that so um that's a reason why I + +01:03:44.559 --> 01:03:47.920 +like prompting based methods they they + +01:03:46.400 --> 01:03:49.720 +give us this like really nice way to + +01:03:47.920 --> 01:03:53.039 +very quickly create a system but we can + +01:03:49.720 --> 01:03:56.079 +also have you know whatever level of + +01:03:53.039 --> 01:03:59.880 +additional training on top of that + +01:03:56.079 --> 01:03:59.880 +cool so that's a little bit early I'm diff --git a/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning.mp4 b/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..73e779d1a2ce68afba294257e5c5545d23475b7f --- /dev/null +++ b/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f4fdf3f6c34fe49a6b7dde762f86ff2b2bb40dc2592baf78c07a5cf8fcc21ba +size 79921831 diff --git a/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning/metadata.json b/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning/metadata.json new file mode 100644 index 
0000000000000000000000000000000000000000..f652f7f68c9d89eb152a917973bfbdb52e544a46 --- /dev/null +++ b/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning/metadata.json @@ -0,0 +1,4 @@ +{ + "url": "https://www.youtube.com/watch?v=KLJ3EEo8aPU", + "title": "CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning" +} \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning/transcript.srt b/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning/transcript.srt new file mode 100644 index 0000000000000000000000000000000000000000..ef890065af20718ba5c6b269bf9eddc59ae38ac7 --- /dev/null +++ b/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning/transcript.srt @@ -0,0 +1,6943 @@ +1 +00:00:02,720 --> 00:00:06,720 +yeah today I'll talk about fine tuning + +2 +00:00:04,400 --> 00:00:09,599 +and instruction tuning uh so this is + +3 +00:00:06,720 --> 00:00:12,679 +kind of the first step in the pipeline + +4 +00:00:09,599 --> 00:00:14,480 +of steps that people use to prepare + +5 +00:00:12,679 --> 00:00:16,320 +models to be ready to be used as + +6 +00:00:14,480 --> 00:00:20,760 +chatbots like you know what you see in + +7 +00:00:16,320 --> 00:00:22,880 +chat GPT or uh gemini or whatever else + +8 +00:00:20,760 --> 00:00:26,240 +you want to be + +9 +00:00:22,880 --> 00:00:28,240 +using and what + +10 +00:00:26,240 --> 00:00:29,679 +these what this basically takes + +11 +00:00:28,240 --> 00:00:32,160 +advantage of is that we have many many + +12 +00:00:29,679 --> 00:00:33,200 +different tasks that we can be solving + +13 +00:00:32,160 --> 00:00:35,960 +in + +14 +00:00:33,200 --> 00:00:37,160 +NLP and each requires different + +15 +00:00:35,960 --> 00:00:40,440 +varieties of + +16 +00:00:37,160 --> 00:00:42,680 +data so we up until this point we've + +17 +00:00:40,440 --> 00:00:46,239 +talked a lot about the varieties of + +18 +00:00:42,680 --> 00:00:47,520 +tasks that only require text uh such as + +19 +00:00:46,239 --> 00:00:51,600 +language + +20 +00:00:47,520 --> 00:00:54,280 +modeling and then we also have other + +21 +00:00:51,600 --> 00:00:56,160 +varieties of tasks that require only + +22 +00:00:54,280 --> 00:00:58,160 +naturally occurring data so like data + +23 +00:00:56,160 --> 00:01:01,600 +that we don't actually have to create by + +24 +00:00:58,160 --> 00:01:04,560 +hand and or do that we don't have to + +25 +00:01:01,600 --> 00:01:08,240 +create by hand for the purpose of + +26 +00:01:04,560 --> 00:01:10,840 +training like language models or M uh + +27 +00:01:08,240 --> 00:01:12,680 +NLP models and this includes stuff like + +28 +00:01:10,840 --> 00:01:14,280 +machine translation and the reason why + +29 +00:01:12,680 --> 00:01:16,240 +we have lots of machine translation data + +30 +00:01:14,280 --> 00:01:19,479 +is people do translation anyway even if + +31 +00:01:16,240 --> 00:01:20,799 +we didn't have like chat GPT or Google + +32 +00:01:19,479 --> 00:01:22,600 +translate or something people would be + +33 +00:01:20,799 --> 00:01:24,920 +doing translation a lot of this data can + +34 +00:01:22,600 --> 00:01:27,400 +be used to train + +35 +00:01:24,920 --> 00:01:29,640 +models then other things are hand + +36 +00:01:27,400 --> 00:01:33,040 +labeled data and so this is like for a + +37 +00:01:29,640 --> 00:01:35,159 +lot of things like question answering or + +38 +00:01:33,040 --> 00:01:37,280 +um other + +39 +00:01:35,159 --> 00:01:40,000 +tasks that you need to create data like + +40 +00:01:37,280 --> 00:01:42,399 
+named entity recognition or stuff like this
+
+41
+00:01:40,000 --> 00:01:44,079
+there that data really mostly isn't
+
+42
+00:01:42,399 --> 00:01:46,159
+naturally occurring so we need to go in
+
+43
+00:01:44,079 --> 00:01:47,960
+and actually create it by hand in order
+
+44
+00:01:46,159 --> 00:01:50,399
+to do
+
+45
+00:01:47,960 --> 00:01:53,280
+training so like one of the interesting
+
+46
+00:01:50,399 --> 00:01:54,840
+things about you know the whole paradigm
+
+47
+00:01:53,280 --> 00:01:57,960
+of training language models over the
+
+48
+00:01:54,840 --> 00:02:00,880
+past several years is that we have
+
+49
+00:01:57,960 --> 00:02:03,439
+been remarkably successful in getting
+
+50
+00:02:00,880 --> 00:02:07,640
+models to work at a very large number of
+
+51
+00:02:03,439 --> 00:02:09,319
+tasks by training only on text so you
+
+52
+00:02:07,640 --> 00:02:11,920
+know we train something like Llama we
+
+53
+00:02:09,319 --> 00:02:13,720
+train something like the early GPT models
+
+54
+00:02:11,920 --> 00:02:16,360
+that were trained only on text without
+
+55
+00:02:13,720 --> 00:02:19,560
+uh very much supervised training
+
+56
+00:02:16,360 --> 00:02:21,680
+data and the reason why is like what
+
+57
+00:02:19,560 --> 00:02:23,920
+I mentioned last class which is like
+
+58
+00:02:21,680 --> 00:02:27,239
+actually a lot of data on the internet
+
+59
+00:02:23,920 --> 00:02:28,760
+just occurs in this form anyway so we
+
+60
+00:02:27,239 --> 00:02:31,519
+have
+
+61
+00:02:28,760 --> 00:02:34,840
+uh things
+
+62
+00:02:31,519 --> 00:02:36,519
+like phrase books that appear online and
+
+63
+00:02:34,840 --> 00:02:38,959
+these phrase books weren't explicitly
+
+64
+00:02:36,519 --> 00:02:41,000
+created as machine translation data or
+
+65
+00:02:38,959 --> 00:02:43,519
+translation data even but they appear
+
+66
+00:02:41,000 --> 00:02:46,519
+online and there's actually a
+
+67
+00:02:43,519 --> 00:02:46,519
+paper
+
+68
+00:02:49,879 --> 00:02:53,959
+um that examines
+
+69
+00:02:58,519 --> 00:03:03,159
+this that I didn't cite in the slides but
+
+70
+00:03:01,440 --> 00:03:08,440
+it's it's a kind of interesting paper
+
+71
+00:03:03,159 --> 00:03:10,200
+from ACL this year where they find that
+
+72
+00:03:08,440 --> 00:03:12,680
+despite the fact that there's a
+
+73
+00:03:10,200 --> 00:03:14,920
+language model that was trained on just
+
+74
+00:03:12,680 --> 00:03:17,360
+you know random data from the web they
+
+75
+00:03:14,920 --> 00:03:20,799
+found over 30 million translation pairs
+
+76
+00:03:17,360 --> 00:03:22,959
+across at least 44 languages um in this
+
+77
+00:03:20,799 --> 00:03:25,920
+data that was just scraped from the web
+
+78
+00:03:22,959 --> 00:03:28,080
+not you know explicitly for translation
+
+79
+00:03:25,920 --> 00:03:32,000
+and so there's lots of other examples of
+
+80
+00:03:28,080 --> 00:03:35,239
+this uh you know question pairs from FAQ
+
+81
+00:03:32,000 --> 00:03:38,280
+pages on sites or other things like that
+
+82
+00:03:35,239 --> 00:03:41,319
+so but anyway yeah getting back to the
+
+83
+00:03:38,280 --> 00:03:43,959
+original uh the original thing here in
+
+84
+00:03:41,319 --> 00:03:47,120
+many cases uh your models will have
+
+85
+00:03:43,959 --> 00:03:48,640
+already been exposed to some data uh
+
+86
+00:03:47,120 --> 00:03:51,319
+there's some naturally occurring data
+
+87
+00:03:48,640 --> 00:03:53,239
+that you can harvest and curate in an
+
+88
+00:03:51,319 --> 00:03:54,720
+appropriate way and then sometimes if
+
+89
+00:03:53,239 --> 00:03:56,720
+you really want models to do something
+
+90
+00:03:54,720 --> 00:03:57,799
+well you can do hand labeling but that's very
+
+91
+00:03:56,720 --> 00:04:00,959
+expensive and you're not going to be
+
+92
+00:03:57,799 --> 00:04:04,720
+able to create very much data
+
+93
+00:04:00,959 --> 00:04:07,319
+so one very funny thing is uh I was
+
+94
+00:04:04,720 --> 00:04:10,079
+playing around with GPT for
+
+95
+00:04:07,319 --> 00:04:11,879
+translation and I asked it to translate
+
+96
+00:04:10,079 --> 00:04:15,239
+from English to
+
+97
+00:04:11,879 --> 00:04:17,079
+Japanese and it did really well most of
+
+98
+00:04:15,239 --> 00:04:19,639
+the time it did you know very good
+
+99
+00:04:17,079 --> 00:04:23,880
+translations on English to Japanese and
+
+100
+00:04:19,639 --> 00:04:25,160
+like 900 out of a thousand examples
+
+101
+00:04:23,880 --> 00:04:26,680
+sometimes it just got it wrong because
+
+102
+00:04:25,160 --> 00:04:28,199
+it's not a perfect translation system
+
+103
+00:04:26,680 --> 00:04:30,560
+but every once in a while it would
+
+104
+00:04:28,199 --> 00:04:32,320
+translate into Japanese uh which is in
+
+105
+00:04:30,560 --> 00:04:35,280
+Japanese characters and then it would
+
+106
+00:04:32,320 --> 00:04:36,280
+translate into romanized Japanese into
+
+107
+00:04:35,280 --> 00:04:41,160
+like the
+
+108
+00:04:36,280 --> 00:04:42,520
+pronunciation um so no Japanese
+
+109
+00:04:41,160 --> 00:04:44,080
+translator that you ask to translate
+
+110
+00:04:42,520 --> 00:04:45,759
+into Japanese would ever do that that
+
+111
+00:04:44,080 --> 00:04:47,039
+would be like extremely unprofessional
+
+112
+00:04:45,759 --> 00:04:48,639
+right you know you're saying
+
+113
+00:04:47,039 --> 00:04:51,720
+please translate this into Japanese for
+
+114
+00:04:48,639 --> 00:04:55,360
+Japanese speakers but why would GPT do
+
+115
+00:04:51,720 --> 00:04:55,360
+this anyone have any
+
+116
+00:04:56,639 --> 00:05:02,240
+ideas yeah someone on the internet
+
+117
+00:05:00,600 --> 00:05:05,240
+yeah someone on the internet did it that
+
+118
+00:05:02,240 --> 00:05:07,199
+way so a lot of the times when you have
+
+119
+00:05:05,240 --> 00:05:08,560
+incidental training data on the internet
+
+120
+00:05:07,199 --> 00:05:10,280
+it would be from like phrase books and
+
+121
+00:05:08,560 --> 00:05:12,800
+people who are trying to teach Japanese
+
+122
+00:05:10,280 --> 00:05:14,880
+for example so every once in a while
+
+123
+00:05:12,800 --> 00:05:16,840
+like GPT got the idea that it should be
+
+124
+00:05:14,880 --> 00:05:18,840
+translating like it did in a phrase book
+
+125
+00:05:16,840 --> 00:05:21,199
+for Japanese learners as opposed to you
+
+126
+00:05:18,840 --> 00:05:23,759
+know like actually English to Japanese
+
+127
+00:05:21,199 --> 00:05:26,400
+translations so the problem is if you're
+
+128
+00:05:23,759 --> 00:05:28,560
+learning only on this language modeling
+
+129
+00:05:26,400 --> 00:05:29,919
+based text you might get exactly what
+
+130
+00:05:28,560 --> 00:05:31,440
+you want but every once in a while
+
+131
+00:05:29,919 --> 00:05:34,160
+you'll get something completely crazy
+
+132
+00:05:31,440 --> 00:05:36,919
+that you never expected to happen so uh
+
+133
+00:05:34,160 --> 00:05:39,639
+that's the problem with just relying on
+
+134
+00:05:36,919 --> 00:05:39,639
+base language
+
+135
+00:05:41,280 --> 00:05:47,240
+models so all the methods that I'm
+
+136
+00:05:44,880 --> 00:05:50,560 +going to be talking about here uh fall + +137 +00:05:47,240 --> 00:05:52,600 +in under the class of multitask learning + +138 +00:05:50,560 --> 00:05:54,600 +and so multitask learning is training + +139 +00:05:52,600 --> 00:05:57,759 +models to do well on multiple tasks at + +140 +00:05:54,600 --> 00:05:59,160 +once um just to give an example uh you + +141 +00:05:57,759 --> 00:06:01,400 +could have this as an example and you + +142 +00:05:59,160 --> 00:06:02,919 +could be doing language modeling on it + +143 +00:06:01,400 --> 00:06:04,720 +you could also be training a model to do + +144 +00:06:02,919 --> 00:06:06,720 +tagging on it and other things like this + +145 +00:06:04,720 --> 00:06:10,319 +and exactly how you do this can be + +146 +00:06:06,720 --> 00:06:13,560 +different but the important thing is + +147 +00:06:10,319 --> 00:06:15,599 +that you have some shared parameters + +148 +00:06:13,560 --> 00:06:17,840 +between the models that are trained on + +149 +00:06:15,599 --> 00:06:19,280 +all tasks and if you're just training a + +150 +00:06:17,840 --> 00:06:21,360 +big language model then you'll probably + +151 +00:06:19,280 --> 00:06:25,440 +be sharing all of the parameters if + +152 +00:06:21,360 --> 00:06:27,199 +you're training a uh something like Bert + +153 +00:06:25,440 --> 00:06:29,080 +or like you're you're pre-training and + +154 +00:06:27,199 --> 00:06:31,000 +the then fine tuning you might train the + +155 +00:06:29,080 --> 00:06:32,800 +body the model on multiple tasks but + +156 +00:06:31,000 --> 00:06:35,479 +have a separate classification for + +157 +00:06:32,800 --> 00:06:37,520 +different tasks so there's different + +158 +00:06:35,479 --> 00:06:40,880 +ways you can do that but the basic idea + +159 +00:06:37,520 --> 00:06:40,880 +is that you need to have lots of shared + +160 +00:06:40,960 --> 00:06:46,280 +parameters um one easy way to do this uh + +161 +00:06:44,160 --> 00:06:49,479 +the very simplest way to do this is to + +162 +00:06:46,280 --> 00:06:51,800 +train the model and Sample One Mini + +163 +00:06:49,479 --> 00:06:53,520 +batch for one task another mini batch + +164 +00:06:51,800 --> 00:06:55,720 +for another task and just alternate + +165 +00:06:53,520 --> 00:06:58,400 +between them or alternate between them + +166 +00:06:55,720 --> 00:07:01,400 +and Sample four from one task and one + +167 +00:06:58,400 --> 00:07:03,879 +from another tasks so uh it's often as + +168 +00:07:01,400 --> 00:07:03,879 +simple as + +169 +00:07:04,199 --> 00:07:08,599 +that or you can just uh sorry or you can + +170 +00:07:06,840 --> 00:07:11,319 +just mix all of the data together so if + +171 +00:07:08,599 --> 00:07:12,639 +you're doing like text um everything is + +172 +00:07:11,319 --> 00:07:15,280 +text based then you don't even need to + +173 +00:07:12,639 --> 00:07:15,280 +worry about mini + +174 +00:07:15,560 --> 00:07:21,440 +batches cool so separately from this uh + +175 +00:07:18,759 --> 00:07:23,960 +pre-train and fine-tune so in pre-train + +176 +00:07:21,440 --> 00:07:26,360 +and fine-tune you first train on one + +177 +00:07:23,960 --> 00:07:28,240 +task and then on another and the way + +178 +00:07:26,360 --> 00:07:30,599 +this works is you first train for + +179 +00:07:28,240 --> 00:07:31,960 +example language modeling objective and + +180 +00:07:30,599 --> 00:07:35,199 +then after you're done training the + +181 +00:07:31,960 --> 00:07:37,440 +language modeling objective you uh you + +182 +00:07:35,199 --> 00:07:41,360 
+train on something else like + +183 +00:07:37,440 --> 00:07:43,520 +tagging and there's several reasons why + +184 +00:07:41,360 --> 00:07:45,199 +you might want to do this is does anyone + +185 +00:07:43,520 --> 00:07:48,479 +have an idea about why you might want to + +186 +00:07:45,199 --> 00:07:50,720 +do this as opposed to something like + +187 +00:07:48,479 --> 00:07:53,319 +standard multitask learning where you do + +188 +00:07:50,720 --> 00:07:57,000 +both of them at the same time this is a + +189 +00:07:53,319 --> 00:07:57,000 +straightforward question perhaps + +190 +00:07:57,479 --> 00:08:03,520 +but now when I say straightforward I + +191 +00:08:00,039 --> 00:08:03,520 +don't mean easy I mean not a tricked + +192 +00:08:03,599 --> 00:08:06,800 +question any + +193 +00:08:09,039 --> 00:08:15,120 +ideas um okay how many of you have + +194 +00:08:11,800 --> 00:08:17,960 +trained uh a 70 billion parameter + +195 +00:08:15,120 --> 00:08:17,960 +language model from + +196 +00:08:18,960 --> 00:08:23,080 +scratch I see somebody I see somebody + +197 +00:08:21,360 --> 00:08:27,680 +saying maybe so that's actually pretty + +198 +00:08:23,080 --> 00:08:27,680 +impressive but um why why + +199 +00:08:27,720 --> 00:08:31,240 +not yeah + +200 +00:08:31,800 --> 00:08:35,440 +yeah it's an unbel it's unbelievably + +201 +00:08:33,680 --> 00:08:37,320 +expensive and a waste of resources yeah + +202 +00:08:35,440 --> 00:08:39,440 +so like if everybody was doing it it + +203 +00:08:37,320 --> 00:08:41,240 +would be a waste of resources so we + +204 +00:08:39,440 --> 00:08:42,479 +actually benefit a lot by a very small + +205 +00:08:41,240 --> 00:08:45,600 +number of people doing this free + +206 +00:08:42,479 --> 00:08:48,240 +training and then the rest of us doing + +207 +00:08:45,600 --> 00:08:50,560 +you know fine tuning uh on a smaller + +208 +00:08:48,240 --> 00:08:53,320 +amount of data so if you were doing all + +209 +00:08:50,560 --> 00:08:55,040 +the multitasking uh from scratch then + +210 +00:08:53,320 --> 00:08:57,600 +that could be a + +211 +00:08:55,040 --> 00:09:01,200 +waste does anyone have an idea why you + +212 +00:08:57,600 --> 00:09:01,200 +might not want to do this + +213 +00:09:02,640 --> 00:09:06,800 +or actually there there's some other + +214 +00:09:04,079 --> 00:09:08,240 +reasons why you might want to do this um + +215 +00:09:06,800 --> 00:09:10,480 +another reason why you might want to do + +216 +00:09:08,240 --> 00:09:13,240 +this is for example if your pre-training + +217 +00:09:10,480 --> 00:09:15,040 +data is big and messy uh like for + +218 +00:09:13,240 --> 00:09:17,600 +example if your pre-training data is all + +219 +00:09:15,040 --> 00:09:20,600 +of the internet and the all the internet + +220 +00:09:17,600 --> 00:09:22,000 +contains like lots of toxic text and + +221 +00:09:20,600 --> 00:09:23,640 +text that's in a format that you don't + +222 +00:09:22,000 --> 00:09:25,959 +want you can still train on it and learn + +223 +00:09:23,640 --> 00:09:28,800 +from it but then fine-tuning can you + +224 +00:09:25,959 --> 00:09:32,000 +know make your model safer or uh remove + +225 +00:09:28,800 --> 00:09:33,360 +tox or other like that as well so does + +226 +00:09:32,000 --> 00:09:34,480 +anyone have an idea why you might not + +227 +00:09:33,360 --> 00:09:38,480 +want to do + +228 +00:09:34,480 --> 00:09:38,480 +this this is a trickier + +229 +00:09:40,200 --> 00:09:43,440 +question any + +230 +00:09:45,320 --> 00:09:49,720 +ideas or or to put it in a different way 
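A minimal sketch of the alternating mini-batch recipe mentioned a moment ago, assuming PyTorch-style model and optimizer objects and a model that accepts a task argument (all of these names are stand-ins):

import random

def train_multitask(model, optimizer, lm_batches, tag_batches,
                    steps, lm_ratio=0.99):
    # Shared parameters are updated by both tasks; only the sampling
    # ratio decides how often each task is visited.
    for _ in range(steps):
        if random.random() < lm_ratio:
            batch, task = next(lm_batches), "lm"
        else:
            batch, task = next(tag_batches), "tagging"
        optimizer.zero_grad()
        loss = model(batch, task=task)  # assumed: task routes to a per-task head/loss
        loss.backward()
        optimizer.step()

Sampling rather than strictly alternating keeps the mix easy to tune when one task has far more data than the other, for example 99 language modeling batches for every tagging batch.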
+ +231 +00:09:48,079 --> 00:09:52,880 +while you might want to do standard + +232 +00:09:49,720 --> 00:09:56,000 +multitasking instead of this yeah just + +233 +00:09:52,880 --> 00:09:59,279 +again so if you don't have much teching + +234 +00:09:56,000 --> 00:10:01,480 +data for example then you might consider + +235 +00:09:59,279 --> 00:10:01,480 +like + +236 +00:10:02,399 --> 00:10:10,200 +doing uh so if you have lots of tagging + +237 +00:10:06,480 --> 00:10:12,320 +data I I think you're you're yeah so I I + +238 +00:10:10,200 --> 00:10:13,560 +think you're basically this is a good + +239 +00:10:12,320 --> 00:10:15,880 +point so if you don't have lots of + +240 +00:10:13,560 --> 00:10:17,240 +tagging data um you might have much much + +241 +00:10:15,880 --> 00:10:18,800 +more language modeling data than you + +242 +00:10:17,240 --> 00:10:21,200 +have tagging data so it's a better idea + +243 +00:10:18,800 --> 00:10:24,959 +to train more on it that is true but you + +244 +00:10:21,200 --> 00:10:26,519 +could sample like 99 mini batches of uh + +245 +00:10:24,959 --> 00:10:29,480 +of language modeling data and One Mini + +246 +00:10:26,519 --> 00:10:31,399 +batch of train data or 999 of language + +247 +00:10:29,480 --> 00:10:34,480 +model in data + +248 +00:10:31,399 --> 00:10:37,040 +so it's a good you're in going in a good + +249 +00:10:34,480 --> 00:10:40,040 +direction anything + +250 +00:10:37,040 --> 00:10:40,040 +else + +251 +00:10:44,639 --> 00:10:50,800 +yeah uh so if your pre-training data has + +252 +00:10:48,959 --> 00:10:52,000 +certain biases you might inherit it do + +253 +00:10:50,800 --> 00:10:54,240 +you think that's a bigger problem with + +254 +00:10:52,000 --> 00:10:56,839 +pre-training or pre- tring and F traing + +255 +00:10:54,240 --> 00:10:56,839 +or standard + +256 +00:10:58,040 --> 00:11:01,040 +Ming + +257 +00:11:12,660 --> 00:11:15,750 +[Music] + +258 +00:11:18,600 --> 00:11:23,240 +yeah um so you might you might lose some + +259 +00:11:21,320 --> 00:11:25,560 +of the information that exists in the + +260 +00:11:23,240 --> 00:11:27,480 +multitask data set I think that's pretty + +261 +00:11:25,560 --> 00:11:29,560 +close to what I'm going to say so let me + +262 +00:11:27,480 --> 00:11:30,920 +um let me just go ahead and give the + +263 +00:11:29,560 --> 00:11:35,160 +hopefully everybody had time to think + +264 +00:11:30,920 --> 00:11:37,279 +about it but um this is a paper that we + +265 +00:11:35,160 --> 00:11:40,320 +wrote previously and basically one + +266 +00:11:37,279 --> 00:11:41,320 +interesting thing is that you actually + +267 +00:11:40,320 --> 00:11:44,560 +do + +268 +00:11:41,320 --> 00:11:47,320 +better um you do better if you train on + +269 +00:11:44,560 --> 00:11:50,160 +multiple tasks at the same time and our + +270 +00:11:47,320 --> 00:11:51,480 +hypothesis about why the reason uh you + +271 +00:11:50,160 --> 00:11:53,279 +do better on the end task that you + +272 +00:11:51,480 --> 00:11:55,200 +finally want to do well on compared to + +273 +00:11:53,279 --> 00:11:58,079 +pre-training and fine tuning and our + +274 +00:11:55,200 --> 00:12:01,079 +hypothesis about this um which I've also + +275 +00:11:58,079 --> 00:12:03,160 +seen a few other works is if you + +276 +00:12:01,079 --> 00:12:05,040 +pre-train on the task that you finally + +277 +00:12:03,160 --> 00:12:07,959 +want to solve while you're also solving + +278 +00:12:05,040 --> 00:12:12,120 +the language modeling task + +279 +00:12:07,959 --> 00:12:14,279 +the essentially the model 
is learning + +280 +00:12:12,120 --> 00:12:17,000 +representations that are useful for both + +281 +00:12:14,279 --> 00:12:18,839 +at the same time as opposed to if you're + +282 +00:12:17,000 --> 00:12:20,680 +training on the language modeling task + +283 +00:12:18,839 --> 00:12:22,079 +it will be learning representations that + +284 +00:12:20,680 --> 00:12:24,440 +are useful for the language modeling + +285 +00:12:22,079 --> 00:12:26,079 +task but not necessarily focusing on the + +286 +00:12:24,440 --> 00:12:28,639 +representations that would be useful for + +287 +00:12:26,079 --> 00:12:31,360 +the end so like for example if you're + +288 +00:12:28,639 --> 00:12:33,040 +joining training on sentiment analysis + +289 +00:12:31,360 --> 00:12:34,600 +and language modeling the + +290 +00:12:33,040 --> 00:12:36,600 +representations that are useful for + +291 +00:12:34,600 --> 00:12:39,600 +sentiment analysis will be more Salient + +292 +00:12:36,600 --> 00:12:43,199 +than the like in the model essentially + +293 +00:12:39,600 --> 00:12:45,160 +and so we um that will be particularly a + +294 +00:12:43,199 --> 00:12:46,639 +problem when there's multiple when you + +295 +00:12:45,160 --> 00:12:49,560 +have a + +296 +00:12:46,639 --> 00:12:51,519 +like a varied optimization landscape in + +297 +00:12:49,560 --> 00:12:53,199 +multiple local Optima and the language + +298 +00:12:51,519 --> 00:12:55,199 +modeling might not get you into the + +299 +00:12:53,199 --> 00:12:57,480 +global Optimum uh that you want for the + +300 +00:12:55,199 --> 00:12:59,279 +end task that you're solving there's + +301 +00:12:57,480 --> 00:13:02,519 +also another interesting paper from + +302 +00:12:59,279 --> 00:13:04,120 +anthropic more recently than ours that + +303 +00:13:02,519 --> 00:13:05,399 +shows something a little bit similar + +304 +00:13:04,120 --> 00:13:06,279 +specifically from the point of view of + +305 +00:13:05,399 --> 00:13:08,760 +safety + +306 +00:13:06,279 --> 00:13:12,000 +training and they demonstrate that if + +307 +00:13:08,760 --> 00:13:14,040 +you start out by having a concept of + +308 +00:13:12,000 --> 00:13:17,279 +safety early in your training you're + +309 +00:13:14,040 --> 00:13:19,600 +able to reach better um better final + +310 +00:13:17,279 --> 00:13:21,000 +results than if you start safety + +311 +00:13:19,600 --> 00:13:23,760 +training after you trained your model + +312 +00:13:21,000 --> 00:13:26,480 +for a while so um and this is + +313 +00:13:23,760 --> 00:13:28,880 +particularly for things like toxicity to + +314 +00:13:26,480 --> 00:13:30,920 +so there are downsides to pre-training + +315 +00:13:28,880 --> 00:13:32,720 +find tuning but the upsides of you know + +316 +00:13:30,920 --> 00:13:34,360 +spending lots of compute once and then F + +317 +00:13:32,720 --> 00:13:36,440 +tuning for lots of different you know + +318 +00:13:34,360 --> 00:13:40,839 +Downstream tests is like large enough + +319 +00:13:36,440 --> 00:13:40,839 +that that's still the standard not + +320 +00:13:41,160 --> 00:13:49,720 +this um any questions about + +321 +00:13:44,920 --> 00:13:49,720 +that okay cool let's uh let's move + +322 +00:13:49,959 --> 00:13:55,040 +on um so we talked about prompting + +323 +00:13:53,199 --> 00:13:57,399 +before I'm just going to go over that + +324 +00:13:55,040 --> 00:13:59,920 +very quick you know just say it for + +325 +00:13:57,399 --> 00:14:03,079 +completeness but when we're prompting uh + +326 +00:13:59,920 --> 00:14:04,839 +we have an encoder uh we train it on + 
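A minimal sketch of the freeze-and-prefix setup described here, using the Hugging Face transformers library; gpt2 and the few-shot translation prefix are just illustrative stand-ins for any causal language model and task:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # frozen: no gradient updates, the prefix alone selects the task

prefix = ("Translate English to French:\n"
          "sea otter => loutre de mer\n"
          "cheese =>")
inputs = tok(prefix, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=5)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:]))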
+327 +00:14:03,079 --> 00:14:07,000 +language modeling or whatever else but + +328 +00:14:04,839 --> 00:14:10,399 +then we freeze it and then we specify + +329 +00:14:07,000 --> 00:14:13,240 +the task by a prefix like + +330 +00:14:10,399 --> 00:14:15,000 +this and and what instruction tuning + +331 +00:14:13,240 --> 00:14:17,240 +does is instruction tuning is like a + +332 +00:14:15,000 --> 00:14:20,839 +combination of fine-tuning and prompting + +333 +00:14:17,240 --> 00:14:23,160 +and so what we do is we pre-train and + +334 +00:14:20,839 --> 00:14:27,360 +then we + +335 +00:14:23,160 --> 00:14:29,040 +oh sorry I uh guess I I failed to update + +336 +00:14:27,360 --> 00:14:31,440 +the the figure here so this is just a + +337 +00:14:29,040 --> 00:14:37,199 +figure for f tuning so normally what you + +338 +00:14:31,440 --> 00:14:39,519 +do is you um you have a prompt for one + +339 +00:14:37,199 --> 00:14:42,480 +task a prompt for another task a prompt + +340 +00:14:39,519 --> 00:14:45,440 +for another task and then you uh train + +341 +00:14:42,480 --> 00:14:47,040 +your model specifically so that it does + +342 +00:14:45,440 --> 00:14:49,079 +good completions of those prps and it'll + +343 +00:14:47,040 --> 00:14:51,680 +give some actual examples of that right + +344 +00:14:49,079 --> 00:14:54,199 +so yeah sorry the the figure uh I I will + +345 +00:14:51,680 --> 00:14:54,199 +need to fix + +346 +00:14:57,680 --> 00:15:03,079 +it + +347 +00:15:00,399 --> 00:15:03,079 +just taking a + +348 +00:15:03,800 --> 00:15:11,399 +note + +349 +00:15:06,560 --> 00:15:14,560 +um okay so we haven't really covered + +350 +00:15:11,399 --> 00:15:16,279 +um fine tuning yet in general so I want + +351 +00:15:14,560 --> 00:15:18,240 +to talk a little bit about what we do uh + +352 +00:15:16,279 --> 00:15:20,160 +for fine tuning and particularly what we + +353 +00:15:18,240 --> 00:15:22,079 +do for fine tuning very large models + +354 +00:15:20,160 --> 00:15:23,639 +because I think that's what a lot of + +355 +00:15:22,079 --> 00:15:27,680 +people want to do + +356 +00:15:23,639 --> 00:15:30,360 +nowadays so for full fine tuning um full + +357 +00:15:27,680 --> 00:15:31,120 +fine tuning is relative L easy uh what + +358 +00:15:30,360 --> 00:15:35,120 +we + +359 +00:15:31,120 --> 00:15:36,920 +do um easy conceptually hard in practice + +360 +00:15:35,120 --> 00:15:40,360 +so what we do is we simply continue + +361 +00:15:36,920 --> 00:15:43,480 +training the language model on uh + +362 +00:15:40,360 --> 00:15:45,839 +whatever data we want to be fitting to + +363 +00:15:43,480 --> 00:15:47,240 +so this could be like translation pairs + +364 +00:15:45,839 --> 00:15:49,199 +it could be question answering pairs it + +365 +00:15:47,240 --> 00:15:52,000 +could be anything else like + +366 +00:15:49,199 --> 00:15:53,839 +that um but the issue is depending on + +367 +00:15:52,000 --> 00:15:56,720 +the method that you're using to optimize + +368 +00:15:53,839 --> 00:15:59,120 +your model uh the method can take lots + +369 +00:15:56,720 --> 00:16:00,959 +of memory and also in some some cases it + +370 +00:15:59,120 --> 00:16:02,319 +can be relatively unstable compared to + +371 +00:16:00,959 --> 00:16:04,240 +some other Alternatives that I'm going + +372 +00:16:02,319 --> 00:16:07,079 +to talk about in a + +373 +00:16:04,240 --> 00:16:10,440 +bit and just to give an example uh + +374 +00:16:07,079 --> 00:16:13,560 +training a 65 billion parameter model uh + +375 +00:16:10,440 --> 00:16:16,319 +which is the largest 
version of llama 1 + +376 +00:16:13,560 --> 00:16:18,880 +uh with 16 bit mixed Precision actually + +377 +00:16:16,319 --> 00:16:21,759 +takes uh much more memory than you would + +378 +00:16:18,880 --> 00:16:26,440 +expect uh if you haven't done this + +379 +00:16:21,759 --> 00:16:29,240 +before so if you look at the amount of + +380 +00:16:26,440 --> 00:16:32,160 +memory required for + +381 +00:16:29,240 --> 00:16:34,120 +holding the model in the first place if + +382 +00:16:32,160 --> 00:16:38,040 +we have 65 billion + +383 +00:16:34,120 --> 00:16:40,120 +parameters uh times two that would be + +384 +00:16:38,040 --> 00:16:43,160 +130 gigabytes of memory already so + +385 +00:16:40,120 --> 00:16:47,079 +that's already a lot of memory right but + +386 +00:16:43,160 --> 00:16:49,639 +if we want to do um if we want to hold + +387 +00:16:47,079 --> 00:16:52,399 +both the parameters and the gradients of + +388 +00:16:49,639 --> 00:16:55,839 +the model um obviously we need to double + +389 +00:16:52,399 --> 00:16:58,240 +the number of uh points here so we + +390 +00:16:55,839 --> 00:16:59,880 +double we also have 130 gbt for the + +391 +00:16:58,240 --> 00:17:01,880 +param + +392 +00:16:59,880 --> 00:17:04,160 +uh sorry for the + +393 +00:17:01,880 --> 00:17:06,240 +gradients then we have the optimizer and + +394 +00:17:04,160 --> 00:17:09,039 +this could be an Optimizer like atam if + +395 +00:17:06,240 --> 00:17:10,959 +people remember atom has first moments + +396 +00:17:09,039 --> 00:17:12,360 +and second moments so it has the mean + +397 +00:17:10,959 --> 00:17:13,280 +and the and something that looks like + +398 +00:17:12,360 --> 00:17:15,160 +the + +399 +00:17:13,280 --> 00:17:17,839 +variance + +400 +00:17:15,160 --> 00:17:20,079 +and + +401 +00:17:17,839 --> 00:17:21,240 +these at least according to this paper + +402 +00:17:20,079 --> 00:17:25,280 +from + +403 +00:17:21,240 --> 00:17:28,760 +2019 uh needed to be stored in 32 + +404 +00:17:25,280 --> 00:17:31,520 +bits so um these needed to be stored in + +405 +00:17:28,760 --> 00:17:33,960 +uh 32 bits of memory because if you + +406 +00:17:31,520 --> 00:17:35,480 +stored them in smaller amounts of memory + +407 +00:17:33,960 --> 00:17:39,000 +they would have underflow issues + +408 +00:17:35,480 --> 00:17:40,640 +overflow issues and uh basically uh the + +409 +00:17:39,000 --> 00:17:43,960 +numerical Precision would destabilize + +410 +00:17:40,640 --> 00:17:47,000 +your training and then in addition the + +411 +00:17:43,960 --> 00:17:49,440 +parameters also needed to be measured in + +412 +00:17:47,000 --> 00:17:51,760 +uh in 32-bits so you needed a 32-bit + +413 +00:17:49,440 --> 00:17:54,280 +copy of the + +414 +00:17:51,760 --> 00:17:55,919 +parameters this is just the parameters + +415 +00:17:54,280 --> 00:17:57,320 +of the model and then separately from + +416 +00:17:55,919 --> 00:17:59,320 +that you also need to do the forward and + +417 +00:17:57,320 --> 00:18:01,039 +backward passes and so if you do the + +418 +00:17:59,320 --> 00:18:04,640 +forward and backward passes depending on + +419 +00:18:01,039 --> 00:18:07,520 +how big your batch size is how many uh + +420 +00:18:04,640 --> 00:18:09,120 +tokens you have in each instance this + +421 +00:18:07,520 --> 00:18:11,559 +could take you know significant amounts + +422 +00:18:09,120 --> 00:18:14,679 +of memory too like 100 to 200 + +423 +00:18:11,559 --> 00:18:17,679 +gigabytes so overall this would take + +424 +00:18:14,679 --> 00:18:21,240 +around a, to 1,400 gigabytes of 
GPU + +425 +00:18:17,679 --> 00:18:24,520 +memory in the very naive scenario and + +426 +00:18:21,240 --> 00:18:27,360 +this is uh not that + +427 +00:18:24,520 --> 00:18:30,440 +great now uh this paper was written in + +428 +00:18:27,360 --> 00:18:33,880 +2019 and there's have been some ADV uh + +429 +00:18:30,440 --> 00:18:36,440 +advances since then in optimizing models + +430 +00:18:33,880 --> 00:18:37,720 +so to give some examples of things that + +431 +00:18:36,440 --> 00:18:39,400 +can be + +432 +00:18:37,720 --> 00:18:43,000 +fixed + +433 +00:18:39,400 --> 00:18:47,520 +previously when we were using + +434 +00:18:43,000 --> 00:18:49,280 +fp16 uh so like the regular uh ansy + +435 +00:18:47,520 --> 00:18:53,280 +floating Point numbers like we use on + +436 +00:18:49,280 --> 00:18:55,400 +our CPU this was it you needed 32bit + +437 +00:18:53,280 --> 00:18:57,840 +integer uh 32-bit floats to make this + +438 +00:18:55,400 --> 00:19:01,080 +stable now it's pretty standard to use + +439 +00:18:57,840 --> 00:19:04,799 +BF 16 uh brain float 16 like I talked + +440 +00:19:01,080 --> 00:19:06,559 +about earlier in the uh in the class and + +441 +00:19:04,799 --> 00:19:08,799 +because of that this can be made more + +442 +00:19:06,559 --> 00:19:11,880 +stable so you can reduce this to things + +443 +00:19:08,799 --> 00:19:15,919 +like two bytes instead of four bytes uh + +444 +00:19:11,880 --> 00:19:17,159 +we can also uh if we make do that we + +445 +00:19:15,919 --> 00:19:18,760 +don't need this extra copy of the + +446 +00:19:17,159 --> 00:19:21,760 +parameters so we can get away with about + +447 +00:19:18,760 --> 00:19:24,039 +eight bytes per uh parameter we want to + +448 +00:19:21,760 --> 00:19:26,480 +optimize but that's still you know a lot + +449 +00:19:24,039 --> 00:19:29,000 +of memory that's 130 gabt of memory uh + +450 +00:19:26,480 --> 00:19:30,360 +for a 65 gigabyte m + +451 +00:19:29,000 --> 00:19:32,960 +and the forward and backward path is + +452 +00:19:30,360 --> 00:19:35,120 +still play SP as well so basically what + +453 +00:19:32,960 --> 00:19:38,159 +I want to say is full fine tuning is uh + +454 +00:19:35,120 --> 00:19:42,400 +pretty memory intensive + +455 +00:19:38,159 --> 00:19:47,480 +and if we look at how big a standard GPU + +456 +00:19:42,400 --> 00:19:49,679 +is I took some specs here the memory is + +457 +00:19:47,480 --> 00:19:53,039 +uh the memory is just the memory on the + +458 +00:19:49,679 --> 00:19:55,840 +GPU the cost I did a very unscientific + +459 +00:19:53,039 --> 00:19:58,280 +thing of uh Google the price on Amazon + +460 +00:19:55,840 --> 00:20:01,240 +and and take a look at the price of the + +461 +00:19:58,280 --> 00:20:04,000 +GPU here and then on the right side this + +462 +00:20:01,240 --> 00:20:08,000 +is uh the types of cloud machines that + +463 +00:20:04,000 --> 00:20:09,880 +support these gpus and in this class uh + +464 +00:20:08,000 --> 00:20:13,559 +a lot of people are using Google collab + +465 +00:20:09,880 --> 00:20:15,640 +I think for your uh for your current + +466 +00:20:13,559 --> 00:20:17,640 +assignment and soon we'll have AWS + +467 +00:20:15,640 --> 00:20:20,080 +credits for everybody so you can use AWS + +468 +00:20:17,640 --> 00:20:22,039 +machines so if you look at the gpus that + +469 +00:20:20,080 --> 00:20:26,880 +are available we have things everywhere + +470 +00:20:22,039 --> 00:20:29,799 +from 24 gigabytes uh 32 gigabytes 40 40 + +471 +00:20:26,880 --> 00:20:33,520 +to 80 gigabytes uh 48 + +472 +00:20:29,799 --> 
+472
+00:20:29,799 --> 00:20:35,760
+um or on your Mac the GPU and CPU
+
+473
+00:20:33,520 --> 00:20:40,000
+memory is shared
+
+474
+00:20:35,760 --> 00:20:42,720
+and basically what we can see is that
+
+475
+00:20:40,000 --> 00:20:44,760
+there's no GPU with 130 gigabytes of
+
+476
+00:20:42,720 --> 00:20:47,039
+memory right so none of them can do this
+
+477
+00:20:44,760 --> 00:20:49,400
+with a single
+
+478
+00:20:47,039 --> 00:20:52,000
+GPU uh there's also a bunch of other
+
+479
+00:20:49,400 --> 00:20:54,960
+hardware options like AMD GPUs Google
+
+480
+00:20:52,000 --> 00:20:58,640
+TPUs special purpose uh training things
+
+481
+00:20:54,960 --> 00:20:59,760
+like Cerebras AWS um etc but I think for
+
+482
+00:20:58,640 --> 00:21:01,120
+the purpose of this class you're
+
+483
+00:20:59,760 --> 00:21:04,520
+probably going to use standard hardware
+
+484
+00:21:01,120 --> 00:21:07,679
+like this so anyway like that model will
+
+485
+00:21:04,520 --> 00:21:10,720
+not fit on any or that fine-tuning will
+
+486
+00:21:07,679 --> 00:21:15,000
+not fit on any GPU that you have access
+
+487
+00:21:10,720 --> 00:21:15,000
+to um any questions about
+
+488
+00:21:16,200 --> 00:21:19,200
+this
+
+489
+00:21:21,360 --> 00:21:28,880
+yeah so a lot of these are created
+
+490
+00:21:25,080 --> 00:21:30,360
+specifically for training neural networks
+
+491
+00:21:28,880 --> 00:21:32,799
+so they're like really really good at
+
+492
+00:21:30,360 --> 00:21:37,360
+the things you need to be training neural
+
+493
+00:21:32,799 --> 00:21:39,600
+networks for um I haven't actually used
+
+494
+00:21:37,360 --> 00:21:43,000
+any of these so I I can't like endorse
+
+495
+00:21:39,600 --> 00:21:44,120
+or disendorse any of them but they're made to
+
+496
+00:21:43,000 --> 00:21:46,640
+be like really good at training
+
+497
+00:21:44,120 --> 00:21:48,320
+Transformer language models or like the
+
+498
+00:21:46,640 --> 00:21:50,960
+specific thing that everybody wants to
+
+499
+00:21:48,320 --> 00:21:52,320
+train uh the disadvantage is if you
+
+500
+00:21:50,960 --> 00:21:54,720
+start wanting to be like a little bit
+
+501
+00:21:52,320 --> 00:21:57,840
+more creative than you know what they
+
+502
+00:21:54,720 --> 00:22:00,159
+imagined it might not support that so um
+
+503
+00:21:57,840 --> 00:22:02,200
+then that's also a problem with TPUs
+
+504
+00:22:00,159 --> 00:22:03,919
+TPUs are very good at certain things
+
+505
+00:22:02,200 --> 00:22:05,600
+like they're very good at like batched
+
+506
+00:22:03,919 --> 00:22:08,480
+large operations but they're less good
+
+507
+00:22:05,600 --> 00:22:10,679
+at nimbly executing dynamic computation
+
+508
+00:22:08,480 --> 00:22:12,720
+graphs and stuff so from that point of
+
+509
+00:22:10,679 --> 00:22:15,360
+view I think most people in research
+
+510
+00:22:12,720 --> 00:22:15,360
+stick
+
+511
+00:22:15,679 --> 00:22:22,000
+to GPUs um one thing I should mention is
+
+512
+00:22:18,799 --> 00:22:25,000
+the AMD GPUs uh a lot of people have
+
+513
+00:22:22,000 --> 00:22:27,080
+started using them in like 2023 2024
+
+514
+00:22:25,000 --> 00:22:28,480
+like I think previously uh it was kind
+
+515
+00:22:27,080 --> 00:22:30,120
+of an Nvidia
+
+516
+00:22:28,480 --> 00:22:32,880
+one-horse race but I've heard more and
+
+517
+00:22:30,120 --> 00:22:36,720
+more people using AMDs so and they're
+
+518
+00:22:32,880 --> 00:22:39,919
+they're not priced up quite as much so
+
+519
+00:22:36,720 --> 00:22:39,919
+um any other
+
+520
+00:22:47,919 --> 00:22:53,279
+questions
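+
+A quick way to check what you are actually working with before choosing between full fine-tuning and the parameter-efficient methods discussed below (assumes a CUDA build of PyTorch):
+
+```python
+import torch
+
+# Print the name and total memory of the first visible GPU.
+if torch.cuda.is_available():
+    props = torch.cuda.get_device_properties(0)
+    print(f"{props.name}: {props.total_memory / 1e9:.1f} GB")
+```
+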
+521
+00:22:51,799 --> 00:22:57,240
+um so for training models like if
+they're training pre-training models
+
+522
+00:22:53,279 --> 00:23:01,960
+they're using like a thousand 2,000
+
+523
+00:22:57,240 --> 00:23:05,279
+4,000 or something um like uh Meta
+
+524
+00:23:01,960 --> 00:23:07,360
+just announced that they
+
+525
+00:23:05,279 --> 00:23:12,480
+got
+
+526
+00:23:07,360 --> 00:23:14,760
+350,000 H100s or something like this and
+
+527
+00:23:12,480 --> 00:23:18,360
+in case you're you are too lazy to
+
+528
+00:23:14,760 --> 00:23:20,559
+calculate that's about um 10 to 20
+
+529
+00:23:18,360 --> 00:23:24,360
+billion
+
+530
+00:23:20,559 --> 00:23:25,840
+dollars it's a lot of money um and I'm sure
+
+531
+00:23:24,360 --> 00:23:28,159
+not all of them are being used to train
+
+532
+00:23:25,840 --> 00:23:29,640
+a model uh you know a lot of them are
+
+533
+00:23:28,159 --> 00:23:32,520
+used for model serving and stuff like
+
+534
+00:23:29,640 --> 00:23:34,240
+that so um but there's a reason why
+
+535
+00:23:32,520 --> 00:23:36,360
+we're not all pre-training models right
+
+536
+00:23:34,240 --> 00:23:38,159
+you know um it's a big it's a big effort
+
+537
+00:23:36,360 --> 00:23:43,640
+it's very expensive
+
+538
+00:23:38,159 --> 00:23:43,640
+so cool any other uh any other
+
+539
+00:23:44,320 --> 00:23:50,400
+questions cool okay so how can we
+
+540
+00:23:48,240 --> 00:23:52,039
+overcome this uh the first way we can
+
+541
+00:23:50,400 --> 00:23:53,919
+overcome this is using things like
+
+542
+00:23:52,039 --> 00:23:56,919
+multi-GPU
+
+543
+00:23:53,919 --> 00:23:59,279
+training and uh one solution is just to
+
+544
+00:23:56,919 --> 00:24:02,600
+throw more hardware at the models and
+
+545
+00:23:59,279 --> 00:24:06,159
+distribute the models over multiple
+
+546
+00:24:02,600 --> 00:24:08,760
+places and the canonical or the most
+
+547
+00:24:06,159 --> 00:24:10,400
+well-known version of this that still
+
+548
+00:24:08,760 --> 00:24:12,159
+many many people use when they're
+
+549
+00:24:10,400 --> 00:24:14,799
+pre-training or fine-tuning language models
+
+550
+00:24:12,159 --> 00:24:16,679
+is something called DeepSpeed ZeRO and
+
+551
+00:24:14,799 --> 00:24:18,760
+the way DeepSpeed ZeRO works is it
+
+552
+00:24:16,679 --> 00:24:19,720
+works by partitioning optimization over
+
+553
+00:24:18,760 --> 00:24:22,559
+different
+
+554
+00:24:19,720 --> 00:24:25,399
+devices and
+
+555
+00:24:22,559 --> 00:24:28,640
+so there's different stages of DeepSpeed
+
+556
+00:24:25,399 --> 00:24:31,799
+ZeRO uh the first one is
+
+557
+00:24:28,640 --> 00:24:35,880
+this one right here and this says 2 + 2
+
+558
+00:24:31,799 --> 00:24:39,399
++ K where K is the size of the optimizer
+
+559
+00:24:35,880 --> 00:24:41,919
+state that I had here so two uh two
+
+560
+00:24:39,399 --> 00:24:44,600
+bytes two bytes plus all of the bytes
+
+561
+00:24:41,919 --> 00:24:44,600
+required for
+
+562
+00:24:44,880 --> 00:24:49,360
+this and the blue is the first two the
+
+563
+00:24:47,840 --> 00:24:50,720
+orange is the second two and the green
+
+564
+00:24:49,360 --> 00:24:54,279
+is the third
+
+565
+00:24:50,720 --> 00:24:56,559
+one and so basically the baseline is you
+
+566
+00:24:54,279 --> 00:24:59,399
+you hold all of these on each
+
+567
+00:24:56,559 --> 00:25:01,320
+GPU the the second thing is you
+
+568
+00:24:59,399 --> 00:25:03,279
+partition the optimizer state across
+
+569
+00:25:01,320 --> 00:25:06,039
+different GPUs
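+
+A minimal sketch of picking a ZeRO stage (the config keys follow DeepSpeed's documented JSON format; the model, learning rate, batch size, and stage here are placeholders rather than a recommended recipe):
+
+```python
+import torch.nn as nn
+import deepspeed
+
+# Placeholder model; in practice this would be your Transformer.
+model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
+
+ds_config = {
+    "train_micro_batch_size_per_gpu": 1,
+    "bf16": {"enabled": True},  # 2-byte weights and gradients, as above
+    "optimizer": {"type": "AdamW", "params": {"lr": 1e-5}},
+    "zero_optimization": {
+        "stage": 2,  # 1: partition optimizer state;
+                     # 2: also gradients; 3: also parameters
+    },
+}
+
+engine, optimizer, _, _ = deepspeed.initialize(
+    model=model,
+    model_parameters=model.parameters(),
+    config=ds_config,
+)
+```
+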
+570
+00:25:03,279 --> 00:25:08,200
+and because optimizer state is generally larger or at least as
+
+571
+00:25:06,039 --> 00:25:10,440
+large as all of the others this can
+
+572
+00:25:08,200 --> 00:25:13,919
+reduce memory requirements significantly
+
+573
+00:25:10,440 --> 00:25:16,000
+so this um went from 120 gigabytes for
+
+574
+00:25:13,919 --> 00:25:19,240
+whatever model they were doing there to
+
+575
+00:25:16,000 --> 00:25:22,799
+31 gigabytes based on
+
+576
+00:25:19,240 --> 00:25:26,600
+um so this was a 7.5 billion parameter
+
+577
+00:25:22,799 --> 00:25:29,600
+model so the 7.5 and they had let's
+
+578
+00:25:26,600 --> 00:25:34,120
+see yeah and they had multiple devices so they
+
+579
+00:25:29,600 --> 00:25:36,640
+went down from 120 to 31 um this is with
+
+580
+00:25:34,120 --> 00:25:38,799
+12 bytes for their optimizer state like
+
+581
+00:25:36,640 --> 00:25:40,480
+I said here um but actually we can get
+
+582
+00:25:38,799 --> 00:25:43,399
+away with four bytes for the optimizer
+
+583
+00:25:40,480 --> 00:25:46,120
+state so actually you can train a seven
+
+584
+00:25:43,399 --> 00:25:49,200
+uh billion model reasonably easily on
+
+585
+00:25:46,120 --> 00:25:52,360
+you know one or two devices uh one or
+
+586
+00:25:49,200 --> 00:25:55,200
+several devices now with
+
+587
+00:25:52,360 --> 00:25:57,159
+this so this is called stage one this is
+
+588
+00:25:55,200 --> 00:26:00,320
+partitioning the optimizer
+
+589
+00:25:57,159 --> 00:26:02,440
+state stage two this is
+
+590
+00:26:00,320 --> 00:26:04,640
+partitioning the optimizer state and the
+
+591
+00:26:02,440 --> 00:26:06,880
+gradients partitioning the
+
+592
+00:26:04,640 --> 00:26:09,600
+optimizer state is actually
+
+593
+00:26:06,880 --> 00:26:13,679
+relatively like harmless it
+
+594
+00:26:09,600 --> 00:26:15,600
+doesn't slow down too much um partitioning
+
+595
+00:26:13,679 --> 00:26:17,799
+the gradients gets a little bit more
+
+596
+00:26:15,600 --> 00:26:20,520
+tricky because you start having to uh
+
+597
+00:26:17,799 --> 00:26:22,320
+move things between devices a lot and
+
+598
+00:26:20,520 --> 00:26:25,880
+then uh if you do this for the
+
+599
+00:26:22,320 --> 00:26:28,159
+parameters you can uh you can do even
+
+600
+00:26:25,880 --> 00:26:30,760
+more so you can get it to like
+
+601
+00:26:28,159 --> 00:26:32,399
+ridiculously small uh values here but
+
+602
+00:26:30,760 --> 00:26:35,279
+this is going to be very expensive in
+
+603
+00:26:32,399 --> 00:26:37,919
+terms of uh you know moving things
+
+604
+00:26:35,279 --> 00:26:41,360
+around so that you can calculate your
+
+605
+00:26:37,919 --> 00:26:43,159
+gradients so I I'd say that by default
+
+606
+00:26:41,360 --> 00:26:45,720
+if you can go to DeepSpeed with like
+
+607
+00:26:43,159 --> 00:26:48,799
+stage one or stage two you can spread
+
+608
+00:26:45,720 --> 00:26:52,940
+this out across different uh devices in
+
+609
+00:26:48,799 --> 00:26:56,019
+training yeah is this a
+
+610
+00:26:52,940 --> 00:26:56,019
+[inaudible]
+
+611
+00:26:56,600 --> 00:26:59,600
+question
+
+612
+00:27:02,520 --> 00:27:09,200
+does your central device um your
+
+613
+00:27:05,720 --> 00:27:10,640
+central device can basically be your CPU
+
+614
+00:27:09,200 --> 00:27:13,520
+when you say multi-device sorry do you
+
+615
+00:27:10,640 --> 00:27:16,520
+mean multi-GPU or do you mean
+
+616
+00:27:13,520 --> 00:27:16,520
+multi
+
+617
+00:27:20,640 --> 00:27:26,240
+okay would it be able
+
+618
+00:27:23,640 --> 00:27:27,919
+to I don't I don't think so I mean it
+
+619
+00:27:26,240 --> 00:27:30,320
+depends on the implementation but not
+
+620
+00:27:27,919 --> 00:27:32,360
+theoretically anyway and DeepSpeed does
+
+621
+00:27:30,320 --> 00:27:34,640
+that for
+
+622
+00:27:32,360 --> 00:27:36,880
+you yeah other otherwise you'd have lots
+
+623
+00:27:34,640 --> 00:27:40,159
+of trouble like getting a machine that
+
+624
+00:27:36,880 --> 00:27:43,600
+had you know a thousand gigabytes
+
+625
+00:27:40,159 --> 00:27:46,039
+in um so yeah I would suggest definitely
+
+626
+00:27:43,600 --> 00:27:48,720
+using something like uh DeepSpeed but
+
+627
+00:27:46,039 --> 00:27:51,080
+actually a lot of uh a lot of libraries
+
+628
+00:27:48,720 --> 00:27:53,960
+use DeepSpeed under the hood also so
+
+629
+00:27:51,080 --> 00:27:56,720
+things like um uh Hugging Face
+
+630
+00:27:53,960 --> 00:27:59,720
+Accelerate GPT-NeoX other things like
+
+631
+00:27:56,720 --> 00:28:01,039
+this they they all uh interface many of
+
+632
+00:27:59,720 --> 00:28:03,760
+them interface to DeepSpeed or
+
+633
+00:28:01,039 --> 00:28:06,039
+something similar to it so uh whatever
+
+634
+00:28:03,760 --> 00:28:09,080
+library you're using for uh training
+
+635
+00:28:06,039 --> 00:28:12,000
+like this you can do I don't have a a
+
+636
+00:28:09,080 --> 00:28:14,640
+list but there's a whole bunch of them
+
+637
+00:28:12,000 --> 00:28:16,960
+you can either use use DeepSpeed uh
+
+638
+00:28:14,640 --> 00:28:20,480
+things like Hugging Face Accelerate TRL
+
+639
+00:28:16,960 --> 00:28:23,640
+I think we might have a TRL um uh
+
+640
+00:28:20,480 --> 00:28:26,000
+recitation later uh also I haven't used
+
+641
+00:28:23,640 --> 00:28:28,960
+it myself or worked with people who used
+
+642
+00:28:26,000 --> 00:28:31,120
+it but Axolotl a lot of people are using
+
+643
+00:28:28,960 --> 00:28:33,799
+um so uh we maybe we could come up with
+
+644
+00:28:31,120 --> 00:28:33,799
+a list of those
+
+645
+00:28:37,480 --> 00:28:43,039
+later
+
+646
+00:28:39,760 --> 00:28:44,799
+so the other option that you can use is
+
+647
+00:28:43,039 --> 00:28:48,399
+don't tune all of the parameters of the
+
+648
+00:28:44,799 --> 00:28:51,399
+model but just some of them and this is
+
+649
+00:28:48,399 --> 00:28:54,039
+really popular nowadays because this
+
+650
+00:28:51,399 --> 00:28:57,799
+further improves your ability to train
+
+651
+00:28:54,039 --> 00:29:01,240
+on many different uh you know uh data
+
+652
+00:28:57,799 --> 00:29:03,120
+sets without huge uh GPUs or without
+
+653
+00:29:01,240 --> 00:29:06,919
+many many GPU
+
+654
+00:29:03,120 --> 00:29:08,519
+devices and so the first one is
+
+655
+00:29:06,919 --> 00:29:10,399
+something like prefix tuning so I
+
+656
+00:29:08,519 --> 00:29:12,240
+already talked about this last time
+
+657
+00:29:10,399 --> 00:29:13,679
+prefix tuning is like a bridge between
+
+658
+00:29:12,240 --> 00:29:17,480
+parameter-efficient fine-tuning and
+
+659
+00:29:13,679 --> 00:29:21,640
+prompting right so it tunes
+
+660
+00:29:17,480 --> 00:29:21,640
+one prefix for each of the
+
+661
+00:29:22,799 --> 00:29:28,480
+layers so the next one that I'd like to
+
+662
+00:29:25,320 --> 00:29:32,840
+talk about is adapters and adapters
+
+663
+00:29:28,480 --> 00:29:37,559
+basically look like this so what you do
+
+664
+00:29:32,840 --> 00:29:40,000
+is you have your standard Transformer
+
+665
+00:29:37,559 --> 00:29:47,480
+architecture uh which
um has you know
+like multi-headed
+
+666
+00:29:41,440 --> 00:29:47,480
+attention um and other things like this
+
+667
+00:29:47,760 --> 00:29:51,360
+and yeah this is written in a slightly
+
+668
+00:29:50,159 --> 00:29:53,200
+different way than I wrote the
+
+669
+00:29:51,360 --> 00:29:56,200
+Transformer diagram but it's saying the
+
+670
+00:29:53,200 --> 00:29:59,960
+same things so multi-headed attention
+
+671
+00:29:56,200 --> 00:30:03,399
+this is uh kind of your W your Q K
+
+672
+00:29:59,960 --> 00:30:06,679
+and V matrices and then this is your O
+
+673
+00:30:03,399 --> 00:30:09,240
+matrix in um in the Transformer
+
+674
+00:30:06,679 --> 00:30:10,600
+architecture so this is what we were
+
+675
+00:30:09,240 --> 00:30:13,279
+calling multi-head attention in the
+
+676
+00:30:10,600 --> 00:30:15,440
+previous diagram this says 2x feed
+
+677
+00:30:13,279 --> 00:30:17,960
+forward layer it's basically 2x linear
+
+678
+00:30:15,440 --> 00:30:21,039
+layer with a sandwiched nonlinearity so
+
+679
+00:30:17,960 --> 00:30:25,000
+it's basically a a feed forward block so
+
+680
+00:30:21,039 --> 00:30:27,679
+this is just the standard um the
+
+681
+00:30:25,000 --> 00:30:30,039
+standard like Transformer so what
+
+682
+00:30:27,679 --> 00:30:33,600
+adapters do is they add yet another
+
+683
+00:30:30,039 --> 00:30:35,000
+layer right here and you freeze the
+
+684
+00:30:33,600 --> 00:30:37,000
+things that are in gray here like the
+
+685
+00:30:35,000 --> 00:30:40,000
+feed forward layer feed forward layer
+
+686
+00:30:37,000 --> 00:30:41,200
+multi-headed attention but only train
+
+687
+00:30:40,000 --> 00:30:44,399
+this
+
+688
+00:30:41,200 --> 00:30:46,760
+adapter and the way the adapter works is
+
+689
+00:30:44,399 --> 00:30:49,880
+you have a standard
+
+690
+00:30:46,760 --> 00:30:52,760
+large representation vector here and you
+
+691
+00:30:49,880 --> 00:30:55,000
+have a feed forward down projection that
+
+692
+00:30:52,760 --> 00:30:58,000
+down projects to a very small number of
+
+693
+00:30:55,000 --> 00:30:59,679
+nodes here and then you have a nonlinearity
+
+694
+00:30:58,000 --> 00:31:02,000
+and then you have a feed forward up
+
+695
+00:30:59,679 --> 00:31:04,679
+projection that projects it back to the
+
+696
+00:31:02,000 --> 00:31:08,720
+standard space and this
+
+697
+00:31:04,679 --> 00:31:13,840
+is uh included within the uh residual
+
+698
+00:31:08,720 --> 00:31:13,840
+layer here and so
+
+699
+00:31:14,440 --> 00:31:21,320
+ideally this will project down from like
+
+700
+00:31:17,519 --> 00:31:21,320
+512 to something like
+
+701
+00:31:23,559 --> 00:31:31,519
+16 and then back up to 512
+
+702
+00:31:27,919 --> 00:31:35,679
+so if it was just a 512 by 512 matrix
+
+703
+00:31:31,519 --> 00:31:36,720
+that would be 2 to the 9 by 2 to the 9 right so you get
+
+704
+00:31:35,679 --> 00:31:41,399
+two to
+
+705
+00:31:36,720 --> 00:31:41,399
+the you get 2 to the 18 parameters
+
+706
+00:31:49,200 --> 00:31:59,159
+yeah for 16 um this is only 2 to the
+
+707
+00:31:56,159 --> 00:31:59,159
+4
+
+708
+00:31:59,440 --> 00:32:08,360
+so if you have uh this that would be 2 to the 9
+
+709
+00:32:03,200 --> 00:32:13,720
++ 4 + 1 which is 2 to the
+
+710
+00:32:08,360 --> 00:32:15,720
+14 um so you would have 16 times fewer
+
+711
+00:32:13,720 --> 00:32:17,799
+parameters for the adapters than you
+
+712
+00:32:15,720 --> 00:32:21,200
+would have for the uh for the full
+
+713
+00:32:17,799 --> 00:32:24,080
+matrix
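+
+Here is a minimal bottleneck-adapter sketch in PyTorch (dimensions and names are illustrative, in the spirit of the adapter idea described here rather than code from any particular paper):
+
+```python
+import torch
+import torch.nn as nn
+
+class Adapter(nn.Module):
+    """Down-project, nonlinearity, up-project, with a residual connection."""
+    def __init__(self, d_model: int = 512, bottleneck: int = 16):
+        super().__init__()
+        self.down = nn.Linear(d_model, bottleneck)  # 512 -> 16
+        self.act = nn.ReLU()
+        self.up = nn.Linear(bottleneck, d_model)    # 16 -> 512
+
+    def forward(self, h: torch.Tensor) -> torch.Tensor:
+        # Residual: the adapter only learns a small correction to h.
+        return h + self.up(self.act(self.down(h)))
+
+# Parameter count (ignoring biases): 2 * 512 * 16 = 2**(9+4+1) = 2**14,
+# versus 512 * 512 = 2**18 for a full matrix, i.e. 16x fewer parameters.
+```
+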
+714
+00:32:21,200 --> 00:32:25,760
+so if instead of using 16 we just did two or one or
+
+716
+00:32:24,080 --> 00:32:30,000
+something like that it would be you know
+
+717
+00:32:25,760 --> 00:32:33,519
+much much less so basically uh by making
+
+718
+00:32:30,000 --> 00:32:34,600
+this making these matrices or these
+
+719
+00:32:33,519 --> 00:32:38,840
+vectors
+
+720
+00:32:34,600 --> 00:32:44,360
+very um very skinny this allows us to
+
+721
+00:32:38,840 --> 00:32:44,360
+minimize our um minimize the additional
+
+722
+00:32:47,080 --> 00:32:52,159
+parameters so are there any uh any
+
+723
+00:32:49,519 --> 00:32:52,159
+questions about
+
+724
+00:32:52,519 --> 00:32:55,519
+this
+
+725
+00:32:56,039 --> 00:32:59,039
+yeah
+
+726
+00:33:02,200 --> 00:33:05,440
+yeah so why do they make it smaller and
+
+727
+00:33:03,919 --> 00:33:06,880
+then larger the main reason why they
+
+728
+00:33:05,440 --> 00:33:08,159
+make it smaller and then larger is
+
+729
+00:33:06,880 --> 00:33:09,799
+because that's a way to reduce the
+
+730
+00:33:08,159 --> 00:33:12,480
+parameter count so if they kept it the
+
+731
+00:33:09,799 --> 00:33:14,320
+same size um if they kept it the same
+
+732
+00:33:12,480 --> 00:33:17,159
+size it would be 2 to the 18 but you would
+
+733
+00:33:14,320 --> 00:33:18,799
+actually have two of them uh you would
+
+734
+00:33:17,159 --> 00:33:21,639
+have two of them so you'd have even more
+
+735
+00:33:18,799 --> 00:33:21,639
+parameters
+
+736
+00:33:24,399 --> 00:33:30,399
+but so it would hurt the performance uh
+
+737
+00:33:28,720 --> 00:33:31,919
+so making them smaller would hurt the
+
+738
+00:33:30,399 --> 00:33:34,440
+performance if you had lots and lots of
+
+739
+00:33:31,919 --> 00:33:36,320
+training data so if you have lots and
+
+740
+00:33:34,440 --> 00:33:39,000
+lots of training data you would benefit
+
+741
+00:33:36,320 --> 00:33:41,440
+by making the adapter dimension larger
+
+742
+00:33:39,000 --> 00:33:43,279
+and uh just you know fitting fitting
+
+743
+00:33:41,440 --> 00:33:45,080
+very well but if you have lots and lots
+
+744
+00:33:43,279 --> 00:33:47,919
+of training data and you have the memory
+
+745
+00:33:45,080 --> 00:33:49,080
+that allows you to train a larger model
+
+746
+00:33:47,919 --> 00:33:50,440
+then you might as well just train the
+
+747
+00:33:49,080 --> 00:33:51,679
+whole model itself you might as well do
+
+748
+00:33:50,440 --> 00:33:53,960
+full fine
+
+749
+00:33:51,679 --> 00:33:56,200
+tuning there's two advantages to
+
+750
+00:33:53,960 --> 00:33:58,120
+parameter-efficient uh fine-tuning
+
+751
+00:33:56,200 --> 00:34:00,279
+methods uh the first one is that they
+
+752
+00:33:58,120 --> 00:34:01,960
+reduce memory like I mentioned here
+
+753
+00:34:00,279 --> 00:34:03,799
+reduce the memory for the parameters
+
+754
+00:34:01,960 --> 00:34:05,679
+you're training also because there's
+
+755
+00:34:03,799 --> 00:34:07,320
+fewer parameters it's harder to like
+
+756
+00:34:05,679 --> 00:34:08,960
+overfit so if you have very small
+
+757
+00:34:07,320 --> 00:34:12,320
+training data full fine-tuning can
+
+758
+00:34:08,960 --> 00:34:14,399
+overfit and become unstable but because
+
+759
+00:34:12,320 --> 00:34:18,000
+this has fewer parameters it
+
+760
+00:34:14,399 --> 00:34:21,040
+essentially is less easy to overfit and
+
+761
+00:34:18,000 --> 00:34:21,040
+will generalize better
+
+762
+00:34:24,599 --> 00:34:29,359
+often so when you fine-tune you only
+fine-tune the
parameters of the adapters + +764 +00:34:29,359 --> 00:34:32,440 +and so we assume that we have a + +765 +00:34:30,679 --> 00:34:34,200 +pre-trained model like the gray parts + +766 +00:34:32,440 --> 00:34:36,480 +are pre-trained and then we fine tune + +767 +00:34:34,200 --> 00:34:36,480 +just + +768 +00:34:37,960 --> 00:34:40,960 +that + +769 +00:34:43,720 --> 00:34:51,760 +butay so very good + +770 +00:34:48,040 --> 00:34:51,760 +question you need + +771 +00:34:53,760 --> 00:34:59,280 +to so the question was even + +772 +00:34:57,880 --> 00:35:00,760 +though we are only fine tuning the + +773 +00:34:59,280 --> 00:35:02,320 +adapter layers we still need to store + +774 +00:35:00,760 --> 00:35:04,760 +the gradients of the other layers right + +775 +00:35:02,320 --> 00:35:09,480 +so we still need to store this part + +776 +00:35:04,760 --> 00:35:13,320 +that's actually not the case um + +777 +00:35:09,480 --> 00:35:15,000 +so when you are doing back propop you + +778 +00:35:13,320 --> 00:35:18,680 +only need to do back propop into the + +779 +00:35:15,000 --> 00:35:20,839 +parts of the model that are on the path + +780 +00:35:18,680 --> 00:35:23,240 +to the gradients that you want to be + +781 +00:35:20,839 --> 00:35:25,800 +updated so like for + +782 +00:35:23,240 --> 00:35:28,760 +example if I + +783 +00:35:25,800 --> 00:35:32,160 +write + +784 +00:35:28,760 --> 00:35:32,160 +if I write the computation + +785 +00:35:55,800 --> 00:35:58,800 +graph + +786 +00:36:22,599 --> 00:36:28,240 +so this is like the computation graph of + +787 +00:36:25,720 --> 00:36:32,240 +a + +788 +00:36:28,240 --> 00:36:34,319 +um in a tension block so we get our loss + +789 +00:36:32,240 --> 00:36:36,400 +like the gradient from the loss is + +790 +00:36:34,319 --> 00:36:41,160 +flowing in + +791 +00:36:36,400 --> 00:36:44,000 +here and so it goes + +792 +00:36:41,160 --> 00:36:47,640 +back to the Fe forward Network to the + +793 +00:36:44,000 --> 00:36:49,200 +adapter to the attention and then here + +794 +00:36:47,640 --> 00:36:51,119 +so we definitely need to pass it back + +795 +00:36:49,200 --> 00:36:53,880 +through the layers so we get to you know + +796 +00:36:51,119 --> 00:36:56,160 +like the earlier layers and stuff we + +797 +00:36:53,880 --> 00:36:57,720 +don't actually need to pass it into this + +798 +00:36:56,160 --> 00:36:59,400 +into the weights of the attention + +799 +00:36:57,720 --> 00:37:01,280 +because we're not we're not updating + +800 +00:36:59,400 --> 00:37:02,520 +them so we don't really need to even + +801 +00:37:01,280 --> 00:37:04,640 +calculate the gradients of the weights + +802 +00:37:02,520 --> 00:37:07,800 +of the attention we also don't need to + +803 +00:37:04,640 --> 00:37:09,160 +calculate the gradient of this um but we + +804 +00:37:07,800 --> 00:37:11,280 +do need to calculate the gradient of + +805 +00:37:09,160 --> 00:37:14,240 +this because we're updating it so + +806 +00:37:11,280 --> 00:37:15,839 +basically um you don't even need to do + +807 +00:37:14,240 --> 00:37:19,800 +backrop in the parts that you can just + +808 +00:37:15,839 --> 00:37:21,560 +cut off without updating yeah so forward + +809 +00:37:19,800 --> 00:37:23,200 +you do need to you know use them + +810 +00:37:21,560 --> 00:37:25,440 +obviously to calculate the forward path + +811 +00:37:23,200 --> 00:37:27,560 +so by like being smart about that you + +812 +00:37:25,440 --> 00:37:31,119 +can fix that there's also something + +813 +00:37:27,560 --> 00:37:33,319 +called um uh checkpointing like + +814 
+00:37:31,119 --> 00:37:34,920 +computation graph checkpointing or + +815 +00:37:33,319 --> 00:37:36,720 +forward pass or backward pass + +816 +00:37:34,920 --> 00:37:38,640 +checkpointing where basically what you + +817 +00:37:36,720 --> 00:37:40,040 +do is you calculate part part of the way + +818 +00:37:38,640 --> 00:37:41,359 +through the graph and then throw out the + +819 +00:37:40,040 --> 00:37:45,280 +intermediate + +820 +00:37:41,359 --> 00:37:47,760 +calculation um and so for example you + +821 +00:37:45,280 --> 00:37:50,000 +might you might do the forward pass all + +822 +00:37:47,760 --> 00:37:52,240 +the way up to here and then throw out + +823 +00:37:50,000 --> 00:37:53,720 +all the intermediate States and then + +824 +00:37:52,240 --> 00:37:55,400 +recalculate them when you're doing the + +825 +00:37:53,720 --> 00:37:57,240 +backward pass and so there's like lots + +826 +00:37:55,400 --> 00:37:58,920 +of tricky things that we can do + +827 +00:37:57,240 --> 00:38:01,920 +like squeeze your memory + +828 +00:37:58,920 --> 00:38:01,920 +you + +829 +00:38:02,839 --> 00:38:10,200 +yeah how uh great question + +830 +00:38:06,079 --> 00:38:12,079 +um do I have that on the slide maybe + +831 +00:38:10,200 --> 00:38:17,119 +not + +832 +00:38:12,079 --> 00:38:19,599 +um so one way that you can do it this is + +833 +00:38:17,119 --> 00:38:22,960 +from Laura but the B the same idea is + +834 +00:38:19,599 --> 00:38:25,960 +basically there so in Laura you do the + +835 +00:38:22,960 --> 00:38:28,920 +upscaling with a zero Matrix you + +836 +00:38:25,960 --> 00:38:30,599 +initiate it to a zero Matrix and the + +837 +00:38:28,920 --> 00:38:34,680 +downscaling you can initialize it to + +838 +00:38:30,599 --> 00:38:38,000 +zero or like some random uh random no + +839 +00:38:34,680 --> 00:38:40,000 +actually this needs to be random uh and + +840 +00:38:38,000 --> 00:38:42,480 +so the reason why this is zero is + +841 +00:38:40,000 --> 00:38:46,839 +because then if you don't do anything it + +842 +00:38:42,480 --> 00:38:51,119 +will just stay the same right so uh so + +843 +00:38:46,839 --> 00:38:51,119 +that is uh the standard one standard + +844 +00:38:52,839 --> 00:38:57,440 +way + +845 +00:38:54,520 --> 00:38:58,960 +cool okay so um another thing that I + +846 +00:38:57,440 --> 00:39:00,839 +want to mention this is a kind of + +847 +00:38:58,960 --> 00:39:03,800 +interesting technique it's not super + +848 +00:39:00,839 --> 00:39:06,359 +standard but I I like it so I'm going to + +849 +00:39:03,800 --> 00:39:08,880 +uh going to talk about it anyway this is + +850 +00:39:06,359 --> 00:39:10,760 +something called adapter Fusion and the + +851 +00:39:08,880 --> 00:39:13,240 +basic idea is to learn an adapter for + +852 +00:39:10,760 --> 00:39:16,040 +various tasks and combine them + +853 +00:39:13,240 --> 00:39:17,880 +together and so instead of having just + +854 +00:39:16,040 --> 00:39:19,400 +your adapter layer you have multiple + +855 +00:39:17,880 --> 00:39:20,880 +adapters and then you have adapter + +856 +00:39:19,400 --> 00:39:22,400 +Fusion up + +857 +00:39:20,880 --> 00:39:26,680 +here + +858 +00:39:22,400 --> 00:39:28,000 +and the basic idea is uh an adapter is + +859 +00:39:26,680 --> 00:39:30,560 +just you know what I wrote on the + +860 +00:39:28,000 --> 00:39:33,599 +previous slide but adapter Fusion is + +861 +00:39:30,560 --> 00:39:36,000 +attention over adapters so you can + +862 +00:39:33,599 --> 00:39:39,720 +decide which adapter to use in which + +863 +00:39:36,000 --> 
00:39:42,160 +case and each of the adapters is trained + +864 +00:39:39,720 --> 00:39:44,800 +separately on like task specific data so + +865 +00:39:42,160 --> 00:39:47,200 +you have uh data from lots of question + +866 +00:39:44,800 --> 00:39:49,119 +answering data sets and you train a + +867 +00:39:47,200 --> 00:39:50,640 +question answering adapter you have data + +868 +00:39:49,119 --> 00:39:53,160 +from + +869 +00:39:50,640 --> 00:39:54,880 +uh I don't know translation data sets + +870 +00:39:53,160 --> 00:39:57,560 +and you train a translation adapter you + +871 +00:39:54,880 --> 00:40:00,440 +have uh other things like that + +872 +00:39:57,560 --> 00:40:03,920 +and so then when you actually use them + +873 +00:40:00,440 --> 00:40:06,400 +you do attension over which adapter to + +874 +00:40:03,920 --> 00:40:08,880 +use and then uh take the value from that + +875 +00:40:06,400 --> 00:40:10,520 +adapter and I I kind of like this idea + +876 +00:40:08,880 --> 00:40:12,560 +because it allows you to you know train + +877 +00:40:10,520 --> 00:40:15,200 +modules that are useful for a particular + +878 +00:40:12,560 --> 00:40:17,680 +task and then decide which one to use at + +879 +00:40:15,200 --> 00:40:19,319 +any particular point so uh I think + +880 +00:40:17,680 --> 00:40:22,040 +there's lots of creative things that we + +881 +00:40:19,319 --> 00:40:24,599 +could do with this there's also um + +882 +00:40:22,040 --> 00:40:26,560 +multilingual versions so you train + +883 +00:40:24,599 --> 00:40:28,520 +adapters for individual languages and + +884 +00:40:26,560 --> 00:40:30,119 +you train adapter for individual tasks + +885 +00:40:28,520 --> 00:40:32,200 +and then you combine them together too + +886 +00:40:30,119 --> 00:40:34,079 +so if that's interesting you can take a + +887 +00:40:32,200 --> 00:40:36,319 +look at that + +888 +00:40:34,079 --> 00:40:37,960 +Pap in a way this is kind of like a + +889 +00:40:36,319 --> 00:40:39,200 +mixture of experts model if you've heard + +890 +00:40:37,960 --> 00:40:40,599 +of that we're going to talk about that + +891 +00:40:39,200 --> 00:40:42,760 +in a future class so I won't go into + +892 +00:40:40,599 --> 00:40:45,760 +lots of detail but um I wanted to talk + +893 +00:40:42,760 --> 00:40:48,079 +about it here and we we talk about about + +894 +00:40:45,760 --> 00:40:52,160 +this + +895 +00:40:48,079 --> 00:40:54,480 +cool okay so now I want to go into + +896 +00:40:52,160 --> 00:40:56,440 +talking about Laura and Laura is very + +897 +00:40:54,480 --> 00:40:57,560 +popular you it's very likely that you've + +898 +00:40:56,440 --> 00:41:02,000 +heard of it + +899 +00:40:57,560 --> 00:41:03,960 +nowadays um the way Laura works is very + +900 +00:41:02,000 --> 00:41:05,800 +similar conceptually to adapters but it + +901 +00:41:03,960 --> 00:41:09,000 +has an important implementation + +902 +00:41:05,800 --> 00:41:14,680 +difference and the difference is + +903 +00:41:09,000 --> 00:41:17,560 +that in contrast to adapters which had + +904 +00:41:14,680 --> 00:41:20,720 +a um in contrast to adapters which had a + +905 +00:41:17,560 --> 00:41:23,599 +nonlinear layer here Laura has no + +906 +00:41:20,720 --> 00:41:27,000 +nonlinear layer so basically what it is + +907 +00:41:23,599 --> 00:41:29,560 +doing is it is uh taking + +908 +00:41:27,000 --> 00:41:32,880 +downscaled Matrix in upscale uh + +909 +00:41:29,560 --> 00:41:36,440 +downscale Matrix in upscale Matrix and + +910 +00:41:32,880 --> 00:41:38,319 +just doing a linear transform with them + 
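A from-scratch sketch of a LoRA-style layer (illustrative names and initializations, not the peft library's implementation; the zero-initialized up-projection and the merge-back into the original weight matrix match what is described around here):
+
+```python
+import torch
+import torch.nn as nn
+
+class LoRALinear(nn.Module):
+    """Frozen weight W plus a learned low-rank update (alpha/r) * B @ A."""
+    def __init__(self, d_in: int, d_out: int, r: int = 16, alpha: float = 16.0):
+        super().__init__()
+        self.weight = nn.Parameter(torch.randn(d_out, d_in),
+                                   requires_grad=False)      # frozen W
+        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # random init
+        self.B = nn.Parameter(torch.zeros(d_out, r))         # zero init
+        self.scale = alpha / r
+
+    def forward(self, x: torch.Tensor) -> torch.Tensor:
+        # No nonlinearity between the down- and up-projections.
+        return x @ (self.weight + self.scale * self.B @ self.A).T
+
+    @torch.no_grad()
+    def merge(self) -> None:
+        # Fold B @ A into W after training: same shape, no extra code path.
+        self.weight += self.scale * self.B @ self.A
+```
+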
+911
+00:41:36,440 --> 00:41:42,560
+and
+
+912
+00:41:38,319 --> 00:41:44,560
+so in this graph or in this figure here
+
+913
+00:41:42,560 --> 00:41:46,520
+which I took from the LoRA paper it's
+
+914
+00:41:44,560 --> 00:41:48,480
+actually showing them as like separate
+
+915
+00:41:46,520 --> 00:41:50,040
+computation paths it's showing like you
+
+916
+00:41:48,480 --> 00:41:54,119
+use a normal matrix and then you use the
+
+917
+00:41:50,040 --> 00:41:56,079
+LoRA matrix separately but actually um
+
+918
+00:41:54,119 --> 00:41:59,240
+you can just add them together and you
+
+919
+00:41:56,079 --> 00:42:01,200
+get the equivalent result so you add
+
+920
+00:41:59,240 --> 00:42:04,319
+this matrix times this matrix into the
+
+921
+00:42:01,200 --> 00:42:05,960
+pre-trained weights and that gives you the
+
+922
+00:42:04,319 --> 00:42:07,960
+same result as if you calculated them
+
+923
+00:42:05,960 --> 00:42:12,599
+separately and then added them
+
+924
+00:42:07,960 --> 00:42:14,319
+afterwards so why is LoRA so popular uh
+
+925
+00:42:12,599 --> 00:42:16,599
+I would say LoRA is so popular because
+
+926
+00:42:14,319 --> 00:42:18,760
+it's super convenient after you finished
+
+927
+00:42:16,599 --> 00:42:19,920
+training with LoRA because after you
+
+928
+00:42:18,760 --> 00:42:22,680
+finished training with LoRA you can
+
+929
+00:42:19,920 --> 00:42:25,040
+just add the learned matrices back
+
+930
+00:42:22,680 --> 00:42:26,440
+into the original weight matrix and you
+
+931
+00:42:25,040 --> 00:42:27,800
+have a model that's exactly the same
+
+932
+00:42:26,440 --> 00:42:29,280
+shape it doesn't have any other
+
+933
+00:42:27,800 --> 00:42:31,839
+components you don't need any different
+
+934
+00:42:29,280 --> 00:42:34,760
+code path you just have updated
+
+935
+00:42:31,839 --> 00:42:36,640
+parameters and that contrasts to
+
+936
+00:42:34,760 --> 00:42:38,359
+adapters because in adapters you
+
+937
+00:42:36,640 --> 00:42:39,760
+actually need to add extra model
+
+938
+00:42:38,359 --> 00:42:43,599
+components you have to have different
+
+939
+00:42:39,760 --> 00:42:46,160
+PyTorch code to implement this so um I
+
+940
+00:42:43,599 --> 00:42:48,359
+think that's the big reason why LoRA is
+
+941
+00:42:46,160 --> 00:42:48,359
+so
+
+942
+00:42:48,880 --> 00:42:53,920
+popular it's not actually that complicated
+
+943
+00:42:51,359 --> 00:42:55,160
+it's pretty simple but um it's important
+
+944
+00:42:53,920 --> 00:42:56,960
+to
+
+945
+00:42:55,160 --> 00:42:58,800
+know
+
+946
+00:42:56,960 --> 00:43:02,160
+cool
+
+947
+00:42:58,800 --> 00:43:05,839
+um so another popular thing uh that you
+
+948
+00:43:02,160 --> 00:43:07,359
+might have heard of is QLoRA and QLoRA
+
+949
+00:43:05,839 --> 00:43:10,440
+combines together
+
+950
+00:43:07,359 --> 00:43:11,760
+quantization um with parameter-efficient
+
+951
+00:43:10,440 --> 00:43:13,480
+tuning and we're going to talk a lot
+
+952
+00:43:11,760 --> 00:43:17,040
+more about quantization in a future
+
+953
+00:43:13,480 --> 00:43:18,760
+class in maybe a week or so but
+
+954
+00:43:17,040 --> 00:43:21,720
+basically there are ways to compress the
+
+955
+00:43:18,760 --> 00:43:25,640
+model down to not be in like 16 bits but
+
+956
+00:43:21,720 --> 00:43:27,319
+be in like four bits and um so if each
+
+957
+00:43:25,640 --> 00:43:31,720
+parameter is in four bits that makes the
+
+958
+00:43:27,319 --> 00:43:31,720
+model very very compact
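+
+A sketch of what QLoRA-style training typically looks like with the Hugging Face transformers, peft, and bitsandbytes stack (the model id and hyperparameters are placeholders, and these APIs evolve, so treat this as a sketch rather than a recipe):
+
+```python
+import torch
+from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+from peft import LoraConfig, get_peft_model
+
+# Load the frozen base model in 4-bit, then train only LoRA matrices on top.
+bnb = BitsAndBytesConfig(
+    load_in_4bit=True,
+    bnb_4bit_quant_type="nf4",              # 4-bit NormalFloat quantization
+    bnb_4bit_compute_dtype=torch.bfloat16,
+)
+base = AutoModelForCausalLM.from_pretrained(
+    "huggyllama/llama-7b",                  # placeholder model id
+    quantization_config=bnb,
+    device_map="auto",
+)
+lora = LoraConfig(r=16, lora_alpha=32,
+                  target_modules=["q_proj", "v_proj"],
+                  task_type="CAUSAL_LM")
+model = get_peft_model(base, lora)
+model.print_trainable_parameters()          # only a tiny fraction trains
+```
+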
+959
+00:43:32,240 --> 00:43:40,240
+and so if we go back to
+our calculation in
+
+960
+00:43:35,599 --> 00:43:44,640
+this previous slide uh if we
+
+961
+00:43:40,240 --> 00:43:48,000
+had if we had a 16-bit model to fit
+
+962
+00:43:44,640 --> 00:43:49,839
+LLaMA uh in your memory you needed 130
+
+963
+00:43:48,000 --> 00:43:54,160
+gigabytes but like let's say we have a
+
+964
+00:43:49,839 --> 00:43:56,880
+4-bit model suddenly it's not 130 it's uh
+
+965
+00:43:54,160 --> 00:44:00,559
+something closer to 32 and a half I
+
+966
+00:43:56,880 --> 00:44:03,880
+guess and 32 and a half
+
+967
+00:44:00,559 --> 00:44:07,960
+is actually fits on a lot of hardware it
+
+968
+00:44:03,880 --> 00:44:12,119
+fits on A100s or H100s easily it also
+
+969
+00:44:07,960 --> 00:44:16,119
+fits on these like less expensive GPUs I
+
+970
+00:44:12,119 --> 00:44:17,599
+mean less expensive might be you know
+
+971
+00:44:16,119 --> 00:44:19,559
+relative it's still very expensive but
+
+972
+00:44:17,599 --> 00:44:21,559
+it'll also fit on your Mac probably if you
+
+973
+00:44:19,559 --> 00:44:22,960
+have a Mac with a fair amount of memory
+
+974
+00:44:21,559 --> 00:44:27,480
+so you could just run it on a local
+
+975
+00:44:22,960 --> 00:44:32,559
+machine in your CPU memory also so
+
+976
+00:44:27,480 --> 00:44:34,559
+um so basically the idea is we can compress
+
+977
+00:44:32,559 --> 00:44:36,720
+down the model to be much smaller so the
+
+978
+00:44:34,559 --> 00:44:41,559
+forward and backward um so the
+
+979
+00:44:36,720 --> 00:44:45,000
+parameters are small and then we have a
+
+980
+00:44:41,559 --> 00:44:47,000
+very very compact LoRA layer
+
+981
+00:44:45,000 --> 00:44:48,280
+uh which doesn't take very
+
+982
+00:44:47,000 --> 00:44:51,079
+much memory
+
+983
+00:44:48,280 --> 00:44:53,480
+itself and that allows us to basically
+
+984
+00:44:51,079 --> 00:44:58,280
+train a model on you know commodity
+
+985
+00:44:53,480 --> 00:45:00,119
+hardware like a 48-gigabyte uh GPU or
+
+986
+00:44:58,280 --> 00:45:02,599
+uh something like your your MacBook or
+
+987
+00:45:00,119 --> 00:45:05,880
+something like that and it it also has
+
+988
+00:45:02,599 --> 00:45:07,400
+uh like paging to page things from CPU
+
+989
+00:45:05,880 --> 00:45:10,760
+to GPU memory to make it even more
+
+990
+00:45:07,400 --> 00:45:12,359
+efficient but uh basically that's the
+
+991
+00:45:10,760 --> 00:45:15,880
+general
+
+992
+00:45:12,359 --> 00:45:18,000
+idea so um I definitely if you want to
+
+993
+00:45:15,880 --> 00:45:19,520
+train a large model on limited hardware
+
+994
+00:45:18,000 --> 00:45:21,480
+I'd recommend this if you're not
+
+995
+00:45:19,520 --> 00:45:23,880
+training a super large model like 65
+
+996
+00:45:21,480 --> 00:45:25,960
+gigabytes I think just LoRA should be fine
+
+997
+00:45:23,880 --> 00:45:28,319
+like you can probably train a 7B model
+
+998
+00:45:25,960 --> 00:45:31,400
+or a 13B model with just LoRA and that
+
+999
+00:45:28,319 --> 00:45:36,000
+should be you know fine on a single GPU
+
+1000
+00:45:31,400 --> 00:45:36,000
+cool uh any questions about
+
+1001
+00:45:41,079 --> 00:45:48,000
+this does low precision not cause any
+
+1002
+00:45:43,680 --> 00:45:49,559
+problems it definitely is something you need to be
+
+1003
+00:45:48,000 --> 00:45:51,680
+a little bit concerned about but
+
+1004
+00:45:49,559 --> 00:45:53,440
+you're not doing optimization in low
+
+1005
+00:45:51,680 --> 00:45:55,359
+precision you're just keeping the
+
+1006
+00:45:53,440 --> 00:45:59,040
+original model in low precision so
from + +1007 +00:45:55,359 --> 00:46:01,119 +that point of view it's you know it's + +1008 +00:45:59,040 --> 00:46:03,599 +manageable I guess + +1009 +00:46:01,119 --> 00:46:06,880 +so and you can also look at the hura + +1010 +00:46:03,599 --> 00:46:08,400 +paper they have very extensive + +1011 +00:46:06,880 --> 00:46:10,680 +experiments + +1012 +00:46:08,400 --> 00:46:14,040 +cool um a final one that I'd like to + +1013 +00:46:10,680 --> 00:46:15,880 +talk about is bitfit um this is very + +1014 +00:46:14,040 --> 00:46:17,680 +very simple you basically just train the + +1015 +00:46:15,880 --> 00:46:22,440 +biases of the model for any model that + +1016 +00:46:17,680 --> 00:46:24,520 +has biases uh this also can fit uh + +1017 +00:46:22,440 --> 00:46:26,119 +models it's very simple because you + +1018 +00:46:24,520 --> 00:46:28,359 +don't even need to change you don't need + +1019 +00:46:26,119 --> 00:46:30,000 +to add any extra code uh you just need + +1020 +00:46:28,359 --> 00:46:33,520 +to freeze all the parameters except the + +1021 +00:46:30,000 --> 00:46:36,160 +biases so from that point of view it's + +1022 +00:46:33,520 --> 00:46:38,559 +very + +1023 +00:46:36,160 --> 00:46:40,520 +easy so I talked about this a little bit + +1024 +00:46:38,559 --> 00:46:41,319 +last time but I think everybody didn't + +1025 +00:46:40,520 --> 00:46:43,400 +have + +1026 +00:46:41,319 --> 00:46:44,960 +full understanding of all the parameter + +1027 +00:46:43,400 --> 00:46:48,760 +efficient tuning methods to understand + +1028 +00:46:44,960 --> 00:46:50,839 +this well um but we had a paper where we + +1029 +00:46:48,760 --> 00:46:52,559 +basically looked at all of these tuning + +1030 +00:46:50,839 --> 00:46:56,280 +methods and we kind of decomposed them + +1031 +00:46:52,559 --> 00:46:59,240 +into several different design components + +1032 +00:46:56,280 --> 00:47:01,839 +and actually um maybe I'll + +1033 +00:46:59,240 --> 00:47:04,319 +also pull up + +1034 +00:47:01,839 --> 00:47:07,440 +the table that we have of this that + +1035 +00:47:04,319 --> 00:47:07,440 +might be even easier to + +1036 +00:47:14,079 --> 00:47:17,079 +follow + +1037 +00:47:20,839 --> 00:47:25,800 +so basically there there's different + +1038 +00:47:23,599 --> 00:47:27,960 +things that you can look at with respect + +1039 +00:47:25,800 --> 00:47:30,160 +to parameter efficient tuning methods + +1040 +00:47:27,960 --> 00:47:33,000 +there's the functional form of the + +1041 +00:47:30,160 --> 00:47:36,680 +nonlinearity that you're using there's + +1042 +00:47:33,000 --> 00:47:38,280 +the place where you insert the model + +1043 +00:47:36,680 --> 00:47:39,760 +there's how you modify the + +1044 +00:47:38,280 --> 00:47:41,200 +representation and then there's a + +1045 +00:47:39,760 --> 00:47:42,880 +composition function for how you take + +1046 +00:47:41,200 --> 00:47:44,559 +the modified representation and add it + +1047 +00:47:42,880 --> 00:47:48,040 +into the original + +1048 +00:47:44,559 --> 00:47:49,800 +representation so if you if you want to + +1049 +00:47:48,040 --> 00:47:52,559 +take a look at the table you can take a + +1050 +00:47:49,800 --> 00:47:55,319 +look at this it's also in the references + +1051 +00:47:52,559 --> 00:47:56,359 +but basically what we can find is that + +1052 +00:47:55,319 --> 00:47:59,680 +things like + +1053 +00:47:56,359 --> 00:48:01,800 +adapters uh Laura and prefix tuning are + +1054 +00:47:59,680 --> 00:48:04,280 +actually very uh very similar to each + +1055 +00:48:01,800 --> 00:48:07,119 
+other but the difference being where do + +1056 +00:48:04,280 --> 00:48:09,079 +you get the original representation that + +1057 +00:48:07,119 --> 00:48:11,839 +you're feeding in so adapters generally + +1058 +00:48:09,079 --> 00:48:15,040 +get it from after the the module that + +1059 +00:48:11,839 --> 00:48:17,160 +you're uh adapting prefix tuning gets it + +1060 +00:48:15,040 --> 00:48:19,800 +from before Laura also gets it from + +1061 +00:48:17,160 --> 00:48:23,559 +before also what's nonlinearity it's a + +1062 +00:48:19,800 --> 00:48:25,440 +relu A softmax or nothing um Laura + +1063 +00:48:23,559 --> 00:48:27,599 +actually this isn't really mentioned in + +1064 +00:48:25,440 --> 00:48:29,200 +the paper but it is uh like actually + +1065 +00:48:27,599 --> 00:48:31,920 +implemented in the code there's also a + +1066 +00:48:29,200 --> 00:48:33,680 +scalar scaling Factor here uh which is a + +1067 +00:48:31,920 --> 00:48:36,280 +hyper parameter so that's something to + +1068 +00:48:33,680 --> 00:48:37,640 +be aware of um and so basically by + +1069 +00:48:36,280 --> 00:48:40,079 +breaking these down you can number one + +1070 +00:48:37,640 --> 00:48:42,359 +better understand each of the uh modules + +1071 +00:48:40,079 --> 00:48:44,280 +and how they or each of the methods and + +1072 +00:48:42,359 --> 00:48:47,200 +how they interact with each + +1073 +00:48:44,280 --> 00:48:48,760 +other and also uh what we show in this + +1074 +00:48:47,200 --> 00:48:51,680 +paper is that this understanding can + +1075 +00:48:48,760 --> 00:48:53,119 +lead you to you know new variants that + +1076 +00:48:51,680 --> 00:48:56,400 +can be more effective than any of the + +1077 +00:48:53,119 --> 00:48:59,160 +existing variants and so we proposed two + +1078 +00:48:56,400 --> 00:49:00,880 +things called The Parallel adapter and + +1079 +00:48:59,160 --> 00:49:04,400 +uh the scaled parallel adapter and we + +1080 +00:49:00,880 --> 00:49:06,559 +demonstrate that they get better + +1081 +00:49:04,400 --> 00:49:09,760 +results so then the question is which + +1082 +00:49:06,559 --> 00:49:11,200 +one to choose um for convenience Laura + +1083 +00:49:09,760 --> 00:49:13,799 +and bitfit don't change the model + +1084 +00:49:11,200 --> 00:49:15,920 +architecture so if you don't really care + +1085 +00:49:13,799 --> 00:49:17,319 +about like the absolute best accuracy + +1086 +00:49:15,920 --> 00:49:20,079 +out of these tuning methods I would + +1087 +00:49:17,319 --> 00:49:22,119 +definitely recommend um you use + +1088 +00:49:20,079 --> 00:49:24,960 +something like this it's definitely the + +1089 +00:49:22,119 --> 00:49:27,640 +easiest thing after you're done training + +1090 +00:49:24,960 --> 00:49:29,960 +for AC accy uh one thing that we found + +1091 +00:49:27,640 --> 00:49:31,920 +in our paper for simpler tasks it really + +1092 +00:49:29,960 --> 00:49:33,559 +actually doesn't matter very much so if + +1093 +00:49:31,920 --> 00:49:35,480 +you're just doing classification tasks + +1094 +00:49:33,559 --> 00:49:37,440 +even something super simple like bitfit + +1095 +00:49:35,480 --> 00:49:38,280 +is rather competitive with all of the + +1096 +00:49:37,440 --> 00:49:41,319 +other + +1097 +00:49:38,280 --> 00:49:43,880 +methods for more complex tasks and a + +1098 +00:49:41,319 --> 00:49:46,680 +small parameter budget uh we found + +1099 +00:49:43,880 --> 00:49:49,960 +prefix tuning to do a pretty good job uh + +1100 +00:49:46,680 --> 00:49:52,359 +this is not a like Universal finding but + +1101 +00:49:49,960 
--> 00:49:54,319
+it's what we found in our paper and then
+
+1102
+00:49:52,359 --> 00:49:57,319
+for more complex tasks plus larger
+
+1103
+00:49:54,319 --> 00:50:00,079
+parameter budgets um adapters or some
+
+1104
+00:49:57,319 --> 00:50:03,400
+sort of mixture of multiple methods can
+
+1105
+00:50:00,079 --> 00:50:04,720
+be can give you better results so again
+
+1106
+00:50:03,400 --> 00:50:07,160
+all of this is in the paper if you want to
+
+1107
+00:50:04,720 --> 00:50:07,160
+look at more
+
+1108
+00:50:07,960 --> 00:50:14,359
+details
+
+1109
+00:50:10,200 --> 00:50:16,000
+cool okay so any any questions about
+
+1110
+00:50:14,359 --> 00:50:18,880
+that
+
+1111
+00:50:16,000 --> 00:50:20,920
+or okay uh next I'm going to go through
+
+1112
+00:50:18,880 --> 00:50:22,440
+some NLP tasks and the reason why I'm
+
+1113
+00:50:20,920 --> 00:50:23,640
+going to go through some NLP tasks is
+
+1114
+00:50:22,440 --> 00:50:25,240
+because when we're fine-tuning we need
+
+1115
+00:50:23,640 --> 00:50:26,680
+to be fine-tuning towards individual
+
+1116
+00:50:25,240 --> 00:50:29,400
+tasks we want to
+
+1117
+00:50:26,680 --> 00:50:30,760
+solve um and so basic fine-tuning we
+
+1118
+00:50:29,400 --> 00:50:32,400
+build a model that's good at performing
+
+1119
+00:50:30,760 --> 00:50:34,160
+a single task instruction tuning we
+
+1120
+00:50:32,400 --> 00:50:35,640
+build a general uh generalist model that
+
+1121
+00:50:34,160 --> 00:50:37,240
+is good at many
+
+1122
+00:50:35,640 --> 00:50:40,040
+tasks
+
+1123
+00:50:37,240 --> 00:50:41,799
+um and what I want to go through now is
+
+1124
+00:50:40,040 --> 00:50:46,119
+I want to go through some tasks that
+
+1125
+00:50:41,799 --> 00:50:48,520
+I've seen people use number one being
+
+1126
+00:50:46,119 --> 00:50:50,720
+really important in like actual
+
+1127
+00:50:48,520 --> 00:50:52,559
+applications of NLP models in industry
+
+1128
+00:50:50,720 --> 00:50:54,760
+but number two what what is the set of
+
+1129
+00:50:52,559 --> 00:50:56,680
+tasks that people use to evaluate general
+
+1130
+00:50:54,760 --> 00:51:00,000
+models so like if you look at the GPT
+
+1131
+00:50:56,680 --> 00:51:01,400
+papers or you look at the Gemini paper
+
+1132
+00:51:00,000 --> 00:51:02,960
+what is the set of tasks that they're
+
+1133
+00:51:01,400 --> 00:51:06,400
+using to demonstrate that their models
+
+1134
+00:51:02,960 --> 00:51:07,599
+work well so the first one is context
+
+1135
+00:51:06,400 --> 00:51:11,000
+free question
+
+1136
+00:51:07,599 --> 00:51:13,880
+answering also called closed-book QA
+
+1137
+00:51:11,000 --> 00:51:15,640
+basically this requires answering a
+
+1138
+00:51:13,880 --> 00:51:17,720
+question without any specific grounding
+
+1139
+00:51:15,640 --> 00:51:19,480
+into documents it's also what happens
+
+1140
+00:51:17,720 --> 00:51:21,119
+when ChatGPT answers your questions
+
+1141
+00:51:19,480 --> 00:51:22,799
+without looking something up on the web
+
+1142
+00:51:21,119 --> 00:51:25,160
+for
+
+1143
+00:51:22,799 --> 00:51:26,920
+example an example data set that lots of
+
+1144
+00:51:25,160 --> 00:51:30,920
+people use is something called
+
+1145
+00:51:26,920 --> 00:51:33,119
+MMLU um this is uh a massive multitask
+
+1146
+00:51:30,920 --> 00:51:35,920
+language understanding data set and it
+
+1147
+00:51:33,119 --> 00:51:38,559
+has questions in a number of relatively
+
+1148
+00:51:35,920 --> 00:51:42,599
+difficult areas like professional law
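+
+A sketch of how an MMLU-style multiple-choice item is typically rendered as a prompt for a language model (the exact template varies between papers; this one is purely illustrative):
+
+```python
+def format_question(question: str, choices: list[str]) -> str:
+    # Render "Question / A. ... / B. ... / Answer:" for an LM to complete.
+    lines = [question]
+    lines += [f"{letter}. {text}" for letter, text in zip("ABCD", choices)]
+    lines.append("Answer:")
+    return "\n".join(lines)
+```
+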
+1149
+00:51:38,559 --> 00:51:45,079
+so this is asking what happens when a
+
+1150
+00:51:42,599 --> 00:51:47,920
+salesman ignores a trespassers will
+
+1151
+00:51:45,079 --> 00:51:52,000
+be prosecuted sign and enters a
+
+1152
+00:51:47,920 --> 00:51:54,839
+hermit's house he drives up the driveway
+
+1153
+00:51:52,000 --> 00:51:56,319
+and an explosive charge explodes the
+
+1154
+00:51:54,839 --> 00:51:58,319
+salesman was injured can the salesman
+
+1155
+00:51:56,319 --> 00:52:01,960
+recover damages from the
+
+1156
+00:51:58,319 --> 00:52:03,880
+hermit so I I would not be able to
+
+1157
+00:52:01,960 --> 00:52:06,480
+answer this with you know certainty
+
+1158
+00:52:03,880 --> 00:52:08,799
+because I'm not a lawyer um the answer
+
+1159
+00:52:06,480 --> 00:52:10,720
+is yes if the hermit was responsible for
+
+1160
+00:52:08,799 --> 00:52:13,240
+the explosive charge under the driveway
+
+1161
+00:52:10,720 --> 00:52:15,200
+so now you know uh you can collect
+
+1162
+00:52:13,240 --> 00:52:17,559
+damages if somebody tries to blow you up
+
+1163
+00:52:15,200 --> 00:52:20,559
+when you trespass on their
+
+1164
+00:52:17,559 --> 00:52:22,880
+property but uh yeah and this has lots
+
+1165
+00:52:20,559 --> 00:52:25,079
+and lots of categories like
+
+1166
+00:52:22,880 --> 00:52:27,000
+this the next thing is contextual
+
+1167
+00:52:25,079 --> 00:52:29,720
+question uh question answering and this is
+
+1168
+00:52:27,000 --> 00:52:30,839
+uh question answering uh grounded in
+
+1169
+00:52:29,720 --> 00:52:34,440
+actual
+
+1170
+00:52:30,839 --> 00:52:35,640
+context um one example data set that a
+
+1171
+00:52:34,440 --> 00:52:38,839
+lot of people use is something called
+
+1172
+00:52:35,640 --> 00:52:40,680
+Natural Questions and this is uh
+
+1173
+00:52:38,839 --> 00:52:43,200
+questions grounded in a Wikipedia
+
+1174
+00:52:40,680 --> 00:52:46,440
+document or the Wikipedia document
+
+1175
+00:52:43,200 --> 00:52:48,079
+collection so grounded in a Wikipedia
+
+1176
+00:52:46,440 --> 00:52:49,440
+document means they give you the actual
+
+1177
+00:52:48,079 --> 00:52:50,559
+document you should be answering the
+
+1178
+00:52:49,440 --> 00:52:52,559
+question about and then you need to
+
+1179
+00:52:50,559 --> 00:52:55,640
+answer the question about
+
+1180
+00:52:52,559 --> 00:52:57,440
+it this is often called machine reading
+
+1181
+00:52:55,640 --> 00:52:59,960
+because you expect it to like read and
+
+1182
+00:52:57,440 --> 00:53:02,599
+answer questions about the
+
+1183
+00:52:59,960 --> 00:53:04,799
+document or it could be okay we're going
+
+1184
+00:53:02,599 --> 00:53:06,400
+to give you all of Wikipedia please
+
+1185
+00:53:04,799 --> 00:53:10,280
+provide us the answer to this question
+
+1186
+00:53:06,400 --> 00:53:11,880
+and this is uh often called uh retrieval
+
+1187
+00:53:10,280 --> 00:53:14,319
+based question answering or retrieval
+
+1188
+00:53:11,880 --> 00:53:18,000
+augmented one variety of retrieval
+
+1189
+00:53:14,319 --> 00:53:21,960
+augmented generation or RAG so this is
+
+1190
+00:53:18,000 --> 00:53:23,520
+really really important um I think most
+
+1191
+00:53:21,960 --> 00:53:25,880
+many people that I talked to who want to
+
+1192
+00:53:23,520 --> 00:53:29,079
+build actual systems
+
+1193
+00:53:25,880 --> 00:53:31,400
+from language models or NLP systems are
+
+1194
+00:53:29,079 --> 00:53:34,319
+are trying to do this sort of
+
+1195
+00:53:31,400 --> 00:53:36,680
+thing the second most popular thing that
+
+1196
+00:53:34,319 --> 00:53:39,040
+I
talk to people who are trying to build + +1197 +00:53:36,680 --> 00:53:41,960 +uh like NLP systems of some variety is + +1198 +00:53:39,040 --> 00:53:45,119 +code generation and basically this is + +1199 +00:53:41,960 --> 00:53:47,440 +simply generating code like python SQL + +1200 +00:53:45,119 --> 00:53:50,160 +from a natural language + +1201 +00:53:47,440 --> 00:53:52,799 +command uh the most popular data set for + +1202 +00:53:50,160 --> 00:53:55,359 +this is something called human ofel and + +1203 +00:53:52,799 --> 00:53:56,920 +basically it has questions about about + +1204 +00:53:55,359 --> 00:53:58,720 +the python standard how you do things + +1205 +00:53:56,920 --> 00:54:00,440 +with the python standard Library like + +1206 +00:53:58,720 --> 00:54:04,799 +return a list with elements incremented + +1207 +00:54:00,440 --> 00:54:08,160 +by one um + +1208 +00:54:04,799 --> 00:54:09,880 +the it gives you the text and several + +1209 +00:54:08,160 --> 00:54:11,119 +examples of what the inputs and outputs + +1210 +00:54:09,880 --> 00:54:12,480 +should be and you're supposed to return + +1211 +00:54:11,119 --> 00:54:14,040 +a program like this and this is a + +1212 +00:54:12,480 --> 00:54:16,680 +simpler version of this there's also + +1213 +00:54:14,040 --> 00:54:19,079 +more complex ones one thing I should + +1214 +00:54:16,680 --> 00:54:21,760 +note um this is a area that I do a lot + +1215 +00:54:19,079 --> 00:54:24,119 +of research in human Nel is a very + +1216 +00:54:21,760 --> 00:54:26,079 +simple uh example of this it doesn't use + +1217 +00:54:24,119 --> 00:54:27,280 +any external Library it doesn't use + +1218 +00:54:26,079 --> 00:54:29,920 +context and other stuff like that + +1219 +00:54:27,280 --> 00:54:31,839 +there's a lot of other more interesting + +1220 +00:54:29,920 --> 00:54:33,400 +data sets also so if you're working on + +1221 +00:54:31,839 --> 00:54:36,839 +code generation I can recommend those as + +1222 +00:54:33,400 --> 00:54:36,839 +well and I'll do that later in the class + +1223 +00:54:38,000 --> 00:54:43,599 +too cool next is uh summarization and + +1224 +00:54:41,839 --> 00:54:45,319 +summarization uh there's a couple + +1225 +00:54:43,599 --> 00:54:47,480 +varieties of this one is single document + +1226 +00:54:45,319 --> 00:54:49,359 +summarization another is multi-document + +1227 +00:54:47,480 --> 00:54:50,920 +summarization uh single document + +1228 +00:54:49,359 --> 00:54:53,240 +compresses a longer document to a + +1229 +00:54:50,920 --> 00:54:57,040 +shorter one multi-document compresses + +1230 +00:54:53,240 --> 00:54:59,799 +multiple documents into one + +1231 +00:54:57,040 --> 00:55:02,319 +um honestly right now single document + +1232 +00:54:59,799 --> 00:55:05,000 +summarization in English works pretty + +1233 +00:55:02,319 --> 00:55:07,079 +well out of the box uh it's not perfect + +1234 +00:55:05,000 --> 00:55:09,480 +but it's close enough to being perfect + +1235 +00:55:07,079 --> 00:55:10,720 +that um I've worked in summarization + +1236 +00:55:09,480 --> 00:55:12,760 +before and I don't know if there's a + +1237 +00:55:10,720 --> 00:55:15,000 +whole lot more that we can do there of + +1238 +00:55:12,760 --> 00:55:16,400 +course multilingual is interesting + +1239 +00:55:15,000 --> 00:55:18,319 +multi-document summarization is + +1240 +00:55:16,400 --> 00:55:19,920 +definitely not solved um in + +1241 +00:55:18,319 --> 00:55:22,160 +multi-document summarization is when you + +1242 +00:55:19,920 --> 00:55:23,960 +have lots of documents 
about a + +1243 +00:55:22,160 --> 00:55:25,920 +particular topic and you want to + +1244 +00:55:23,960 --> 00:55:29,039 +summarize them down into a coherent + +1245 +00:55:25,920 --> 00:55:31,480 +summary of that topic one example of + +1246 +00:55:29,039 --> 00:55:34,039 +that is wikum this is a data set where + +1247 +00:55:31,480 --> 00:55:37,319 +you're provided with all of the links to + +1248 +00:55:34,039 --> 00:55:39,680 +pages about a Wikipedia article and + +1249 +00:55:37,319 --> 00:55:41,400 +you're expected to generate the first + +1250 +00:55:39,680 --> 00:55:44,200 +paragraph or few paragraphs of the + +1251 +00:55:41,400 --> 00:55:48,039 +article and so you're expected to take + +1252 +00:55:44,200 --> 00:55:50,000 +like lots of noisy you know incoherent + +1253 +00:55:48,039 --> 00:55:52,160 +articles about Barack Obama and actually + +1254 +00:55:50,000 --> 00:55:55,039 +write about Barack OB something like + +1255 +00:55:52,160 --> 00:55:57,680 +this uh some other example interesting + +1256 +00:55:55,039 --> 00:56:00,680 +tasks for this include things like uh + +1257 +00:55:57,680 --> 00:56:02,400 +survey generation for papers or + +1258 +00:56:00,680 --> 00:56:05,480 +something like that you want to know + +1259 +00:56:02,400 --> 00:56:07,920 +everything about a scientific topic or + +1260 +00:56:05,480 --> 00:56:10,400 +um generating + +1261 +00:56:07,920 --> 00:56:12,599 +a report of all the things that happened + +1262 +00:56:10,400 --> 00:56:14,839 +in the stock market today or something + +1263 +00:56:12,599 --> 00:56:17,720 +like that you know there's lots of uh + +1264 +00:56:14,839 --> 00:56:17,720 +places where this could be + +1265 +00:56:18,240 --> 00:56:23,359 +useful another class of tasks is + +1266 +00:56:20,520 --> 00:56:25,400 +information extraction um there's lots + +1267 +00:56:23,359 --> 00:56:27,799 +of examples of this but basically they + +1268 +00:56:25,400 --> 00:56:31,319 +all boil down to extracting some sort of + +1269 +00:56:27,799 --> 00:56:33,200 +information in structured format uh from + +1270 +00:56:31,319 --> 00:56:35,240 +text and this is things like entity + +1271 +00:56:33,200 --> 00:56:37,960 +recognition identifying which words are + +1272 +00:56:35,240 --> 00:56:40,920 +entities entity linking linking entities + +1273 +00:56:37,960 --> 00:56:42,799 +to a knowledge base entity co-reference + +1274 +00:56:40,920 --> 00:56:45,319 +finding which entities in an input + +1275 +00:56:42,799 --> 00:56:47,440 +correspond to each other uh event + +1276 +00:56:45,319 --> 00:56:49,079 +recognition linking co- reference so all + +1277 +00:56:47,440 --> 00:56:50,799 +of the same things except doing it for + +1278 +00:56:49,079 --> 00:56:53,839 +events instead of + +1279 +00:56:50,799 --> 00:56:55,480 +entities um an example data set is uh + +1280 +00:56:53,839 --> 00:56:57,119 +something called ontonotes it's an older + +1281 +00:56:55,480 --> 00:56:59,280 +data set but it has all these things + +1282 +00:56:57,119 --> 00:57:00,680 +annotated and you can extract things + +1283 +00:56:59,280 --> 00:57:03,119 +from this there's lots of other data + +1284 +00:57:00,680 --> 00:57:04,839 +sets for this too Al also kind of more + +1285 +00:57:03,119 --> 00:57:07,440 +in general you can think of like what if + +1286 +00:57:04,839 --> 00:57:09,680 +I gave you an Excel sheet uh could you + +1287 +00:57:07,440 --> 00:57:11,319 +go and like Fill in Excel or Google + +1288 +00:57:09,680 --> 00:57:12,880 +sheet could you go and fill in all of + +1289 
+1289
+00:57:11,319 --> 00:57:14,760
+the columns in the sheet uh
+
+1290
+00:57:12,880 --> 00:57:18,160
+appropriately given all the information
+
+1291
+00:57:14,760 --> 00:57:22,000
+on the internet so um this is a pretty
+
+1292
+00:57:18,160 --> 00:57:22,000
+important task category as
+
+1293
+00:57:22,079 --> 00:57:26,599
+well translation so I don't really need to
+
+1294
+00:57:25,160 --> 00:57:30,319
+talk that much about it it's translating
+
+1295
+00:57:26,599 --> 00:57:32,319
+from one language to another um for both
+
+1296
+00:57:30,319 --> 00:57:34,039
+translation and summarization uh
+
+1297
+00:57:32,319 --> 00:57:35,960
+evaluation is kind of tricky I'll talk
+
+1298
+00:57:34,039 --> 00:57:38,680
+about this uh in the future but
+
+1299
+00:57:35,960 --> 00:57:41,559
+basically uh you assess quality based on
+
+1300
+00:57:38,680 --> 00:57:45,079
+similarity to some sort of reference
+
+1301
+00:57:41,559 --> 00:57:46,960
+uh using things like BLEU score or uh
+
+1302
+00:57:45,079 --> 00:57:49,680
+neural
+
+1303
+00:57:46,960 --> 00:57:51,160
+metrics an example of this uh which I
+
+1304
+00:57:49,680 --> 00:57:52,760
+think is actually a really nice example
+
+1305
+00:57:51,160 --> 00:57:56,200
+is something called the FLORES data set
+
+1306
+00:57:52,760 --> 00:57:59,480
+and this is a translation of several uh
+
+1307
+00:57:56,200 --> 00:57:59,480
+or like a thousand
+
+1308
+00:57:59,520 --> 00:58:05,799
+Wikipedia not a thousand but
+
+1309
+00:58:03,079 --> 00:58:07,960
+quite a few Wikipedia articles into 101
+
+1310
+00:58:05,799 --> 00:58:09,400
+languages the reason why I like this
+
+1311
+00:58:07,960 --> 00:58:10,960
+data set a lot is because if you could
+
+1312
+00:58:09,400 --> 00:58:12,720
+translate into all of these languages
+
+1313
+00:58:10,960 --> 00:58:14,799
+you would be able to you know aid
+
+1314
+00:58:12,720 --> 00:58:16,720
+information dissemination across the
+
+1315
+00:58:14,799 --> 00:58:20,640
+world make access to information more
+
+1316
+00:58:16,720 --> 00:58:23,799
+equitable so I like this data set
+
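+The reference-similarity scoring mentioned above can be reproduced with the sacrebleu package; a minimal sketch, with made-up hypothesis and reference strings.
+
+import sacrebleu
+
+hypotheses = ["the cat sat on the mat"]
+# One inner list per reference set, aligned with the hypotheses.
+references = [["the cat sat on a mat"]]
+
+bleu = sacrebleu.corpus_bleu(hypotheses, references)
+print(bleu.score)  # corpus-level BLEU on a 0-100 scale
+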
+1317
+00:58:20,640 --> 00:58:25,440
+well separately from this there are
+
+1318
+00:58:23,799 --> 00:58:27,480
+general purpose benchmarks these
+
+1319
+00:58:25,440 --> 00:58:31,119
+benchmarks are not really for the
+
+1320
+00:58:27,480 --> 00:58:33,559
+purpose of evaluating any specific task
+
+1321
+00:58:31,119 --> 00:58:35,280
+that people think is actually useful but
+
+1322
+00:58:33,559 --> 00:58:38,200
+rather trying to test the language
+
+1323
+00:58:35,280 --> 00:58:41,119
+abilities of language models themselves
+
+1324
+00:58:38,200 --> 00:58:44,480
+a typical example of this is BIG-bench
+
+1325
+00:58:41,119 --> 00:58:46,640
+and this contains a whole bunch of tasks
+
+1326
+00:58:44,480 --> 00:58:48,720
+that uh test you know different
+
+1327
+00:58:46,640 --> 00:58:50,240
+abilities I have some examples here
+
+1328
+00:58:48,720 --> 00:58:52,440
+these are very small so you might need
+
+1329
+00:58:50,240 --> 00:58:54,359
+to look at the slides to see them but
+
+1330
+00:58:52,440 --> 00:58:57,760
+for example this is tracking shuffled
+
+1331
+00:58:54,359 --> 00:59:00,359
+objects like um Alice Bob and Claire are
+
+1332
+00:58:57,760 --> 00:59:01,880
+friends uh who occasionally trade books
+
+1333
+00:59:00,359 --> 00:59:04,119
+at the start of the semester each one
+
+1334
+00:59:01,880 --> 00:59:05,640
+has a new book then they trade then they
+
+1335
+00:59:04,119 --> 00:59:09,039
+trade then they trade then they trade
+
+1336
+00:59:05,640 --> 00:59:11,599
+which one does Bob have um today is
+
+1337
+00:59:09,039 --> 00:59:13,640
+Christmas Eve of 1937 what is the date
+
+1338
+00:59:11,599 --> 00:59:17,599
+tomorrow and you need to write it in the
+
+1339
+00:59:13,640 --> 00:59:20,359
+appropriate format um Sherry tells the
+
+1340
+00:59:17,599 --> 00:59:22,960
+truth Vernell says Sherry tells the truth
+
+1341
+00:59:20,359 --> 00:59:25,240
+Alexis says Vernell lies Michaela says
+
+1342
+00:59:22,960 --> 00:59:26,880
+Alexis tells the truth Elanor says
+
+1343
+00:59:25,240 --> 00:59:29,880
+Michaela tells the truth does Elanor
+
+1344
+00:59:26,880 --> 00:59:31,119
+tell the truth um hope you all got that
+
+1345
+00:59:29,880 --> 00:59:34,319
+one
+
+1346
+00:59:31,119 --> 00:59:37,440
+right um so like it's just these kind of
+
+1347
+00:59:34,319 --> 00:59:38,880
+exercises and like when you look at how
+
+1348
+00:59:37,440 --> 00:59:40,520
+language models are being evaluated
+
+1349
+00:59:38,880 --> 00:59:42,559
+they're being evaluated against like
+
+1350
+00:59:40,520 --> 00:59:44,400
+many of these tasks not all of them
+
+1351
+00:59:42,559 --> 00:59:47,200
+necessarily but many of them I think
+
+1352
+00:59:44,400 --> 00:59:48,920
+Gemini evaluated with respect to
+
+1353
+00:59:47,200 --> 00:59:51,680
+all of these task categories except
+
+1354
+00:59:48,920 --> 00:59:53,799
+information extraction maybe so um these
+
+1355
+00:59:51,680 --> 00:59:56,640
+are kind of typical task categories that
+
+1356
+00:59:53,799 --> 00:59:56,640
+people look at
+
+1357
+00:59:57,039 --> 01:00:00,680
+cool um any questions about
+
+1358
+01:00:02,359 --> 01:00:06,680
+this nice okay uh yeah
+
+1359
+01:00:09,400 --> 01:00:14,400
+sorry how do they ensure that
+
+1360
+01:00:12,880 --> 01:00:18,280
+similar data does not appear in the
+
+1361
+01:00:14,400 --> 01:00:19,680
+training data so people have tried uh a
+
+1362
+01:00:18,280 --> 01:00:20,920
+bunch of different things this is
+
+1363
+01:00:19,680 --> 01:00:23,240
+actually this might be a good
+
+1364
+01:00:20,920 --> 01:00:25,480
+thing to talk about uh at some point
+
+1365
+01:00:23,240 --> 01:00:30,559
+when we talk about data curation or things
+
+1366
+01:00:25,480 --> 01:00:33,720
+like this um the first thing is uh you
+
+1367
+01:00:30,559 --> 01:00:35,920
+actually create the data so similar is
+
+1368
+01:00:33,720 --> 01:00:39,119
+actually okay right because you
+
+1369
+01:00:35,920 --> 01:00:42,160
+know if it appears everywhere on the
+
+1370
+01:00:39,119 --> 01:00:44,599
+internet GPT-4 will learn it the problem
+
+1371
+01:00:42,160 --> 01:00:47,680
+is like if the exact same thing appears
+
+1372
+01:00:44,599 --> 01:00:49,520
+um so number one how do we prevent this
+
+1373
+01:00:47,680 --> 01:00:52,520
+from happening number two how do we even
+
+1374
+01:00:49,520 --> 01:00:54,000
+tell that it did happen um so some
+
+1375
+01:00:52,520 --> 01:00:56,280
+things that people do to tell that it
+
+1376
+01:00:54,000 --> 01:01:00,319
+did happen is they make small
+
+1377
+01:00:56,280 --> 01:01:04,119
+perturbations to the test data and
+
+1378
+01:01:00,319 --> 01:01:06,200
+test whether that like drops the model
+
+1379
+01:01:04,119 --> 01:01:07,680
+score by a whole lot there was actually
+
+1380
+01:01:06,200 --> 01:01:09,119
+a paper I don't know if I can find it
+
+1381
+01:01:07,680 --> 01:01:12,280
+immediately but there was a paper
+
+1382
+01:01:09,119 --> 01:01:16,520
+recently that just like swapped the
+
+1383
+01:01:12,280 --> 01:01:18,280
+order of the outputs in MMLU and saw
+
+1384
+01:01:16,520 --> 01:01:21,880
+that the accuracy went down for some
+
+1385
+01:01:18,280 --> 01:01:22,880
+language models so that should make no
+
+1386
+01:01:21,880 --> 01:01:24,119
+difference
+
+1387
+01:01:22,880 --> 01:01:26,960
+whatsoever because you're just changing
+
+1388
+01:01:24,119 --> 01:01:29,000
+the order of answers but it caused
+
+1389
+01:01:26,960 --> 01:01:30,880
+accuracy to go down if that's the case
+
+1390
+01:01:29,000 --> 01:01:33,920
+it's a pretty clear sign that it's
+
+1391
+01:01:30,880 --> 01:01:35,760
+leaking um other things that people do
+
+1392
+01:01:33,920 --> 01:01:38,480
+are change the input a little bit so
+
+1393
+01:01:35,760 --> 01:01:39,880
+like change the number in a math problem
+
+1394
+01:01:38,480 --> 01:01:41,760
+uh to be a slightly different value and
+
+1395
+01:01:39,880 --> 01:01:43,640
+see if that hurts the accuracy overall
+
+1396
+01:01:41,760 --> 01:01:45,200
+and like making these little
+
+1397
+01:01:43,640 --> 01:01:46,280
+perturbations that shouldn't change the
+
+1398
+01:01:45,200 --> 01:01:47,640
+accuracy and then if they do
+
+1399
+01:01:46,280 --> 01:01:49,920
+significantly then you think there's a
+
+1400
+01:01:47,640 --> 01:01:53,119
+problem so I think that's a basic tool
+
+1401
+01:01:49,920 --> 01:01:56,079
+to diagnose this
+
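+A minimal sketch of that perturbation-based contamination check; eval_accuracy and the dataset record format are hypothetical stand-ins for whatever harness is in use.
+
+import random
+
+def swap_choices(example):
+    # Shuffle the answer options of a multiple-choice item, keeping
+    # track of where the gold answer moved; the content is unchanged,
+    # so a model that truly solves the task should be unaffected.
+    choices = example["choices"][:]
+    gold = choices[example["answer"]]
+    random.shuffle(choices)
+    return {**example, "choices": choices, "answer": choices.index(gold)}
+
+def contamination_probe(model, dataset, eval_accuracy):
+    # eval_accuracy(model, dataset) -> float is assumed to exist.
+    original = eval_accuracy(model, dataset)
+    perturbed = eval_accuracy(model, [swap_choices(x) for x in dataset])
+    # A large drop on a meaning-preserving perturbation suggests the
+    # benchmark leaked into the training data.
+    return original - perturbed
+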
+1402
+01:01:53,119 --> 01:01:58,160
+how do you prevent it
+from happening um there's really simple
+
+1403
+01:01:56,079 --> 01:02:04,039
+and silly things that you can do
+
+1404
+01:01:58,160 --> 01:02:07,240
+like uh zip the file and like put a
+
+1405
+01:02:04,039 --> 01:02:09,119
+password on the file um and then like a
+
+1406
+01:02:07,240 --> 01:02:12,440
+scraper even if it's scraping all of
+
+1407
+01:02:09,119 --> 01:02:14,680
+GitHub for training data it won't scrape
+
+1408
+01:02:12,440 --> 01:02:16,960
+your zipped and password-protected file
+
+1409
+01:02:14,680 --> 01:02:19,279
+right so that's kind of a first line of
+
+1410
+01:02:16,960 --> 01:02:21,799
+defense it doesn't work if someone puts
+
+1411
+01:02:19,279 --> 01:02:25,200
+it like you know someone puts it
+
+1412
+01:02:21,799 --> 01:02:26,839
+somewhere uh in an unzipped format so
+
+1413
+01:02:25,200 --> 01:02:28,039
+that's a problem but you know there
+
+1414
+01:02:26,839 --> 01:02:29,839
+are things that you can do like
+
+1415
+01:02:28,039 --> 01:02:31,799
+that another thing you can do is just
+
+1416
+01:02:29,839 --> 01:02:34,440
+not reveal your data whatsoever so you
+
+1417
+01:02:31,799 --> 01:02:36,160
+can keep a private version of the data
+
+1418
+01:02:34,440 --> 01:02:39,319
+um you can keep a private version of the
+
+1419
+01:02:36,160 --> 01:02:42,359
+data and not you know let anybody else
+
+1420
+01:02:39,319 --> 01:02:45,000
+see the outputs so yeah it's pretty
+
+1421
+01:02:42,359 --> 01:02:45,000
+tricky
+
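+A sketch of the password-protected-zip trick just mentioned, shelling out to the Info-ZIP command-line tool; the password and filenames are placeholders, and -P legacy encryption is weak, the goal being only to keep crawlers from ingesting the benchmark by accident.
+
+import subprocess
+
+# Package test data in a password-protected zip so web scrapers that
+# sweep up raw files cannot trivially read the benchmark.
+subprocess.run(
+    ["zip", "-P", "canary-password", "testset.zip", "testset.jsonl"],
+    check=True,
+)
+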
+1422
+01:02:48,279 --> 01:02:54,920
+yeah how do you control for task
+
+1423
+01:02:50,520 --> 01:02:56,480
+complexity that's a great question um
+
+1424
+01:02:54,920 --> 01:02:59,119
+I don't think there's any really good
+
+1425
+01:02:56,480 --> 01:03:01,400
+definition of task complexity yet um
+
+1426
+01:02:59,119 --> 01:03:04,559
+some things that you can do are control
+
+1427
+01:03:01,400 --> 01:03:07,520
+for like length or control
+
+1428
+01:03:04,559 --> 01:03:11,400
+for um you know the
+
+1429
+01:03:07,520 --> 01:03:15,119
+number of hops that are
+
+1430
+01:03:11,400 --> 01:03:19,839
+required in like multi-hop reasoning um
+
+1431
+01:03:15,119 --> 01:03:19,839
+there's actually one really interesting
+
+1432
+01:03:21,760 --> 01:03:26,880
+work that tries to do um not control
+
+1433
+01:03:25,440 --> 01:03:29,880
+necessarily but at
+
+1434
+01:03:26,880 --> 01:03:29,880
+least
+
+1435
+01:03:32,720 --> 01:03:39,359
+evaluate there's actually a
+
+1436
+01:03:35,640 --> 01:03:42,480
+couple so what this tries to do is this
+
+1437
+01:03:39,359 --> 01:03:44,160
+tries to break down questions into kind
+
+1438
+01:03:42,480 --> 01:03:49,039
+of like operations that you would need
+
+1439
+01:03:44,160 --> 01:03:51,640
+to do to solve them so it's like
+
+1440
+01:03:49,039 --> 01:03:53,920
+uh which keywords have been contained by
+
+1441
+01:03:51,640 --> 01:03:55,279
+more than 100 ACL papers and they say
+
+1442
+01:03:53,920 --> 01:03:56,839
+okay first you need to select then you
+
+1443
+01:03:55,279 --> 01:03:58,799
+need to filter then you need to project
+
+1444
+01:03:56,839 --> 01:04:00,760
+and stuff like that so they try to at
+
+1445
+01:03:58,799 --> 01:04:03,520
+least express the level of complexity in
+
+1446
+01:04:00,760 --> 01:04:06,680
+this way um there's also another one
+
+1447
+01:04:03,520 --> 01:04:08,920
+that's not so much on real
+
+1448
+01:04:06,680 --> 01:04:10,960
+data sorry this is called the
+
+1449
+01:04:08,920 --> 01:04:13,079
+Break benchmark if you're
+
+1450
+01:04:10,960 --> 01:04:16,440
+interested there was also a more recent
+
+1451
+01:04:13,079 --> 01:04:16,440
+paper that I
+
+1452
+01:04:17,599 --> 01:04:22,960
+liked that tried to do something
+
+1453
+01:04:20,559 --> 01:04:22,960
+somewhat
+
+1454
+01:04:23,200 --> 01:04:26,200
+similar
+
+1455
+01:04:27,160 --> 01:04:31,920
+um where they come up with
+
+1456
+01:04:29,760 --> 01:04:34,480
+like math or programming problems and
+
+1457
+01:04:31,920 --> 01:04:36,279
+try to express them as a graph and then
+
+1458
+01:04:34,480 --> 01:04:37,799
+do some examination of how Transformer
+
+1459
+01:04:36,279 --> 01:04:39,760
+models do on things of different
+
+1460
+01:04:37,799 --> 01:04:40,920
+complexity I think the problem is
+
+1461
+01:04:39,760 --> 01:04:43,039
+there's so many different things that
+
+1462
+01:04:40,920 --> 01:04:44,720
+could make something hard or easy uh
+
+1463
+01:04:43,039 --> 01:04:47,000
+there's also like is it in distribution
+
+1464
+01:04:44,720 --> 01:04:50,640
+or out of distribution um from the point
+
+1465
+01:04:47,000 --> 01:04:53,119
+of view of topic or language or speaking
+
+1466
+01:04:50,640 --> 01:04:55,640
+style or things like that um and
+
+1467
+01:04:53,119 --> 01:04:58,839
+actually I think uh we're going to talk
+
+1468
+01:04:55,640 --> 01:05:00,440
+about this in the debugging and
+
+1469
+01:04:58,839 --> 01:05:01,880
+evaluation lecture but like one of the
+
+1470
+01:05:00,440 --> 01:05:03,799
+things I really like to do is I like to
+
+1471
+01:05:01,880 --> 01:05:05,119
+sub-segment the data and look at
+
+1472
+01:05:03,799 --> 01:05:06,960
+different sub-segments of the data
+
+1473
+01:05:05,119 --> 01:05:09,920
+where I think the sub-segments will
+
+1474
+01:05:06,960 --> 01:05:11,880
+affect accuracy by a lot and basically
+
+1475
+01:05:09,920 --> 01:05:14,359
+anything that you could sub-segment on
+
+1476
+01:05:11,880 --> 01:05:17,720
+is like something that determines
+
+1477
+01:05:14,359 --> 01:05:19,279
+difficulty so um yeah lots to
+
+1478
+01:05:17,720 --> 01:05:21,960
+say about that
+
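+A minimal sketch of that sub-segmenting idea, bucketing evaluation examples by a property such as input length and comparing per-bucket accuracy; the per-example record format here is a hypothetical stand-in.
+
+from collections import defaultdict
+
+# results: one record per eval example, e.g. {"input": "...", "correct": True}
+def accuracy_by_length(results, bucket_size=100):
+    buckets = defaultdict(list)
+    for r in results:
+        buckets[len(r["input"]) // bucket_size].append(r["correct"])
+    for b in sorted(buckets):
+        acc = sum(buckets[b]) / len(buckets[b])
+        print(f"length {b*bucket_size}-{(b+1)*bucket_size}: "
+              f"acc={acc:.3f} (n={len(buckets[b])})")
+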
+1479
+01:05:19,279 --> 01:05:24,520
+basically cool any other
+
+1480
+01:05:21,960 --> 01:05:26,119
+questions okay um if not let me get on
+
+1481
+01:05:24,520 --> 01:05:27,680
+to instruction tuning I don't have a
+
+1482
+01:05:26,119 --> 01:05:30,079
+whole lot about instruction tuning
+
+1483
+01:05:27,680 --> 01:05:31,720
+because it's uh you know conceptually
+
+1484
+01:05:30,079 --> 01:05:34,799
+pretty simple but I would like to talk
+
+1485
+01:05:31,720 --> 01:05:37,640
+about all of it so basic instruction
+
+1486
+01:05:34,799 --> 01:05:39,359
+tuning uh was proposed almost
+
+1487
+01:05:37,640 --> 01:05:41,920
+simultaneously by people at Google and
+
+1488
+01:05:39,359 --> 01:05:45,799
+people at Hugging Face uh the way it
+
+1489
+01:05:41,920 --> 01:05:49,760
+works is you have
+
+1490
+01:05:45,799 --> 01:05:54,960
+tasks and you train on lots of tasks
+
+1491
+01:05:49,760 --> 01:05:57,240
+where you append the prompt and
+
+1492
+01:05:54,960 --> 01:05:58,599
+you append the input
+
+1493
+01:05:57,240 --> 01:06:01,400
+and then you just try to train to
+
+1494
+01:05:58,599 --> 01:06:03,160
+generate the output and so this
+
+1495
+01:06:01,400 --> 01:06:04,480
+contrasts with like base language model
+
+1496
+01:06:03,160 --> 01:06:06,039
+training because you're still training a
+
+1497
+01:06:04,480 --> 01:06:08,480
+language model based on a prompt and an
+
+1498
+01:06:06,039 --> 01:06:10,559
+output but you're
+
+1499
+01:06:08,480 --> 01:06:12,400
+specifically formatting them in a
+
+1500
+01:06:10,559 --> 01:06:14,680
+particular way so it corresponds to
+
+1501
+01:06:12,400 --> 01:06:16,119
+solving tasks it's essentially
+
+1502
+01:06:14,680 --> 01:06:17,640
+supervised training but supervised
+
+1503
+01:06:16,119 --> 01:06:19,200
+training over many many tasks
+
+1504
+01:06:17,640 --> 01:06:21,520
+fine-tuning over many many
+
+1505
+01:06:19,200 --> 01:06:25,480
+tasks the interesting thing that these
+
+1506
+01:06:21,520 --> 01:06:29,359
+papers showed was that basically if you
+
+1507
+01:06:25,480 --> 01:06:31,000
+do this instruction tuning you do well
+
+1508
+01:06:29,359 --> 01:06:32,920
+not only on the tasks that you trained
+
+1509
+01:06:31,000 --> 01:06:36,640
+on but also on new tasks that you didn't
+
+1510
+01:06:32,920 --> 01:06:38,720
+train on um and so this is now really
+
+1511
+01:06:36,640 --> 01:06:40,960
+important it's incorporated in
+
+1512
+01:06:38,720 --> 01:06:43,279
+every serious language model that's used
+
+1513
+01:06:40,960 --> 01:06:48,720
+in a kind of like production setting
+
+1514
+01:06:43,279 --> 01:06:52,599
+nowadays um yeah so uh I think
+
+1515
+01:06:48,720 --> 01:06:52,599
+that's the basic idea
+
+1516
+01:06:53,000 --> 01:06:57,520
+here
+
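+A minimal sketch of how such training examples can be formatted; the template and the loss-masking convention below are illustrative assumptions rather than the exact recipe from those papers.
+
+def format_example(instruction, input_text, output_text, tokenizer):
+    # Concatenate instruction, input, and output into one training
+    # sequence; only the output tokens contribute to the loss, so the
+    # model learns to complete prompts rather than to echo them.
+    prompt = f"{instruction}\n\n{input_text}\n\n"
+    prompt_ids = tokenizer.encode(prompt)
+    output_ids = tokenizer.encode(output_text)
+    input_ids = prompt_ids + output_ids
+    labels = [-100] * len(prompt_ids) + output_ids  # -100 = ignored by the loss
+    return {"input_ids": input_ids, "labels": labels}
+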
+1517
+01:06:55,160 --> 01:06:59,480
+you can also do things like learn to in
+
+1518
+01:06:57,520 --> 01:07:02,160
+context learn so we talked about in
+
+1519
+01:06:59,480 --> 01:07:05,160
+context learning so in context learning
+
+1520
+01:07:02,160 --> 01:07:07,799
+instead of uh giving just a prompt you
+
+1521
+01:07:05,160 --> 01:07:09,960
+give training examples in the context
+
+1522
+01:07:07,799 --> 01:07:12,240
+and so that's what you do in this paper
+
+1523
+01:07:09,960 --> 01:07:14,400
+here as well you sample a whole bunch of
+
+1524
+01:07:12,240 --> 01:07:17,880
+training examples you append them to the
+
+1525
+01:07:14,400 --> 01:07:19,720
+context and then you train the model and
+
+1526
+01:07:17,880 --> 01:07:21,359
+so why is this good this is good because
+
+1527
+01:07:19,720 --> 01:07:24,400
+it will train a model that's better at in
+
+1528
+01:07:21,359 --> 01:07:26,160
+context learning basically so if you
+
+1529
+01:07:24,400 --> 01:07:29,480
+want to provide these training
+
+1530
+01:07:26,160 --> 01:07:29,480
+examples then you can train it like
+
+1531
+01:07:30,039 --> 01:07:35,480
+that so these are the two basic ways of
+
+1532
+01:07:32,680 --> 01:07:37,039
+doing instruction tuning um they all came
+
+1533
+01:07:35,480 --> 01:07:40,920
+out around the same
+
+1534
+01:07:37,039 --> 01:07:43,400
+time um there are a bunch of data sets
+
+1535
+01:07:40,920 --> 01:07:45,440
+that people have compiled and you
+
+1536
+01:07:43,400 --> 01:07:47,160
+probably if you want to do any sort of
+
+1537
+01:07:45,440 --> 01:07:48,599
+instruction tuning you probably want to
+
+1538
+01:07:47,160 --> 01:07:50,680
+use one of these data sets because
+
+1539
+01:07:48,599 --> 01:07:52,920
+compiling together a bunch of data sets
+
+1540
+01:07:50,680 --> 01:07:55,720
+is just annoying it's not hard but it's
+
+1541
+01:07:52,920 --> 01:07:59,319
+annoying um and
+
+1542
+01:07:55,720 --> 01:08:01,520
+so I very highly recommend
+
+1543
+01:07:59,319 --> 01:08:03,960
+this paper on the Flan Collection
+
+1544
+01:08:01,520 --> 01:08:05,480
+because it gives a good uh summary it
+
+1545
+01:08:03,960 --> 01:08:08,079
+has this really nice table that breaks
+
+1546
+01:08:05,480 --> 01:08:10,960
+them down based on like um what's the
+
+1547
+01:08:08,079 --> 01:08:14,440
+name of the data set uh what is the size
+
+1548
+01:08:10,960 --> 01:08:17,880
+of the training data um what prompts do
+
+1549
+01:08:14,440 --> 01:08:20,040
+they use zero-shot or few-shot uh so like
+
+1550
+01:08:17,880 --> 01:08:21,799
+few-shot is in-context learning like I
+
+1551
+01:08:20,040 --> 01:08:24,719
+mentioned before how many tasks are
+
+1552
+01:08:21,799 --> 01:08:26,799
+there um and what detailed methods do
+
+1553
+01:08:24,719 --> 01:08:28,480
+they use so you can take a look at this
+
+1554
+01:08:26,799 --> 01:08:30,520
+some very popular ones that lots of
+
+1555
+01:08:28,480 --> 01:08:33,920
+people use are things like the Flan
+
+1556
+01:08:30,520 --> 01:08:36,520
+Collection from here also uh Natural
+
+1557
+01:08:33,920 --> 01:08:40,640
+Instructions is a very popular one uh
+
+1558
+01:08:36,520 --> 01:08:43,040
+that people still use a lot um and Self-
+
+1559
+01:08:40,640 --> 01:08:46,560
+Instruct is a popular one that I'll
+
+1560
+01:08:43,040 --> 01:08:46,560
+talk about in a second
+
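+A sketch of the few-shot variant described above, sampling training examples into the context ahead of the actual query; the exact template varies by paper and this one is an assumption.
+
+import random
+
+def few_shot_prompt(train_examples, query, k=4):
+    # Prepend k sampled demonstrations so the model is trained (or
+    # prompted) to pick up the task from its context.
+    demos = random.sample(train_examples, k)
+    parts = [f"Input: {x}\nOutput: {y}" for x, y in demos]
+    parts.append(f"Input: {query}\nOutput:")
+    return "\n\n".join(parts)
+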
+1561
+01:08:47,640 --> 01:08:53,159
+cool um so the next thing that I
+
+1562
+01:08:50,960 --> 01:08:55,359
+want to talk about is
+
+1563
+01:08:53,159 --> 01:08:57,359
+instruction tuned models
+
+1564
+01:08:55,359 --> 01:08:59,120
+these are examples of models
+
+1565
+01:08:57,359 --> 01:09:01,600
+that like I can
+
+1566
+01:08:59,120 --> 01:09:05,560
+recommend you use now in 2024
+
+1567
+01:09:01,600 --> 01:09:10,839
+they're like good models to use um
+
+1568
+01:09:05,560 --> 01:09:12,279
+Flan-T5 I think is a very good model
+
+1569
+01:09:10,839 --> 01:09:16,199
+especially it's a very good model for
+
+1570
+01:09:12,279 --> 01:09:19,679
+its size and it comes in various sizes
+
+1571
+01:09:16,199 --> 01:09:20,880
+uh from smaller models to uh those up to
+
+1572
+01:09:19,679 --> 01:09:23,199
+11 billion
+
+1573
+01:09:20,880 --> 01:09:25,839
+parameters and it's an encoder-decoder
+
+1574
+01:09:23,199 --> 01:09:29,279
+model based on T5 that was trained on
+
+1575
+01:09:25,839 --> 01:09:32,080
+lots of data my impression is that this
+
+1576
+01:09:29,279 --> 01:09:34,920
+is a model that's like consistently good
+
+1577
+01:09:32,080 --> 01:09:38,400
+at anything that's like a simple input
+
+1578
+01:09:34,920 --> 01:09:40,759
+output style task not like a chat task
+
+1579
+01:09:38,400 --> 01:09:43,040
+um so if you just have input output you
+
+1580
+01:09:40,759 --> 01:09:45,839
+want to do like uh code generation you
+
+1581
+01:09:43,040 --> 01:09:47,319
+want to do maybe not code generation you
+
+1582
+01:09:45,839 --> 01:09:49,199
+want to do like summarization or other
+
+1583
+01:09:47,319 --> 01:09:52,640
+things like that that's a good model to
+
+1584
+01:09:49,199 --> 01:09:55,560
+use um another one is Llama 2 Chat so
+
+1585
+01:09:52,640 --> 01:09:58,120
+Llama 2 Chat was instruction tuned and
+
+1586
+01:09:55,560 --> 01:10:02,719
+uh kind of tuned with human preferences
+
+1587
+01:09:58,120 --> 01:10:05,600
+but it is quite good at following
+
+1588
+01:10:02,719 --> 01:10:07,520
+instructions and then there's also
+
+1589
+01:10:05,600 --> 01:10:10,600
+excuse me Mixtral Instruct and these are
+
+1590
+01:10:07,520 --> 01:10:13,360
+both decoder-only models Mixtral is a
+
+1591
+01:10:10,600 --> 01:10:17,280
+decoder-only mixture-of-experts model
+
+1592
+01:10:13,360 --> 01:10:19,400
+Mixtral is smaller and quite strong so I
+
+1593
+01:10:17,280 --> 01:10:20,920
+would recommend that you consider this
+
+1594
+01:10:19,400 --> 01:10:24,480
+maybe as a default if you want a decoder
+
+1595
+01:10:20,920 --> 01:10:26,840
+only model and then Flan-T5 if you
+
+1596
+01:10:24,480 --> 01:10:26,840
+want an encoder
+
+1597
+01:10:28,840 --> 01:10:33,800
+decoder
+
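+For reference, a minimal sketch of running one of these instruction-tuned models with the Hugging Face transformers library; the checkpoint name is one of the public Flan-T5 sizes and the prompt is made up.
+
+from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
+model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
+
+# Instruction-tuned models can be given a task description directly.
+inputs = tokenizer("Summarize: The lecture covered instruction tuning, "
+                   "which trains language models on many tasks at once.",
+                   return_tensors="pt")
+outputs = model.generate(**inputs, max_new_tokens=50)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+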
+1598
+01:10:30,480 --> 01:10:35,719
+cool um the final thing I'd like to talk
+
+1599
+01:10:33,800 --> 01:10:37,000
+about a little bit um and then we're
+
+1600
+01:10:35,719 --> 01:10:39,239
+also going to talk about it a bit more
+
+1601
+01:10:37,000 --> 01:10:42,000
+in the distillation class is data set
+
+1602
+01:10:39,239 --> 01:10:43,440
+generation so it's possible to
+
+1603
+01:10:42,000 --> 01:10:46,440
+automatically generate instruction
+
+1604
+01:10:43,440 --> 01:10:48,199
+tuning data sets and the first or
+
+1605
+01:10:46,440 --> 01:10:51,560
+typical example of this is Self-
+
+1606
+01:10:48,199 --> 01:10:55,080
+Instruct and the way Self-Instruct works
+
+1607
+01:10:51,560 --> 01:10:56,840
+is you have uh a bunch of seed tasks
+
+1608
+01:10:55,080 --> 01:10:59,640
+that have one instruction and one
+
+1609
+01:10:56,840 --> 01:11:02,560
+instance per task you throw them into
+
+1610
+01:10:59,640 --> 01:11:05,960
+the task pool and then based on this you
+
+1611
+01:11:02,560 --> 01:11:07,239
+do prompting to try to generate new
+
+1612
+01:11:05,960 --> 01:11:11,159
+tasks
+
+1613
+01:11:07,239 --> 01:11:14,440
+basically and um you identify what type
+
+1614
+01:11:11,159 --> 01:11:18,640
+of task it is and then
+
+1615
+01:11:14,440 --> 01:11:19,640
+based on the task you generate uh inputs
+
+1616
+01:11:18,640 --> 01:11:22,440
+and
+
+1617
+01:11:19,640 --> 01:11:24,400
+outputs and from these inputs and
+
+1618
+01:11:22,440 --> 01:11:26,199
+outputs they do a little bit of minimal
+
+1619
+01:11:24,400 --> 01:11:27,800
+filtering to deduplicate the data set
+
+1620
+01:11:26,199 --> 01:11:29,640
+and also remove things that require like
+
+1621
+01:11:27,800 --> 01:11:31,679
+visual information and other stuff like
+
+1622
+01:11:29,640 --> 01:11:34,080
+that and then feed that back into the
+
+1623
+01:11:31,679 --> 01:11:36,560
+task pool so basically like they start
+
+1624
+01:11:34,080 --> 01:11:38,159
+with 175 examples and then they expand
+
+1625
+01:11:36,560 --> 01:11:40,520
+this data set to be very large to cover
+
+1626
+01:11:38,159 --> 01:11:45,320
+many many different tasks
+
+1627
+01:11:40,520 --> 01:11:46,520
+um so uh this is pretty influential and
+
+1628
+01:11:45,320 --> 01:11:49,679
+like one interesting thing that they
+
+1629
+01:11:46,520 --> 01:11:52,560
+showed here is that you can improve the
+
+1630
+01:11:49,679 --> 01:11:55,960
+model that was used to generate the data
+
+1631
+01:11:52,560 --> 01:11:58,600
+itself um so basically they took this
+
+1632
+01:11:55,960 --> 01:12:01,719
+and they used it to fine-tune uh GPT-3
+
+1633
+01:11:58,600 --> 01:12:04,679
+basically um they used GPT-3 to generate
+
+1634
+01:12:01,719 --> 01:12:04,679
+the tasks and they used it to
+
+1635
+01:12:04,760 --> 01:12:11,639
+fine-tune it um some other more recent examples
+
+1636
+01:12:07,920 --> 01:12:15,600
+are chain of thought um uh tuning for
+
+1637
+01:12:11,639 --> 01:12:17,320
+chain of thought so um Orca is a nice
+
+1638
+01:12:15,600 --> 01:12:20,840
+example of this this is uh something
+
+1639
+01:12:17,320 --> 01:12:23,120
+where they generated explanations for
+
+1640
+01:12:20,840 --> 01:12:24,679
+why the model made a
+
+1641
+01:12:23,120 --> 01:12:27,159
+particular decision and then they use
+
+1642
+01:12:24,679 --> 01:12:30,400
+that to train models uh and improve
+
+1643
+01:12:27,159 --> 01:12:30,400
+their essentially reasoning
+
+1644
+01:12:31,120 --> 01:12:37,280
+capabilities another interesting example
+
+1645
+01:12:34,159 --> 01:12:38,880
+is uh something called Evol-Instruct and
+
+1646
+01:12:37,280 --> 01:12:40,760
+basically the idea here is they start
+
+1647
+01:12:38,880 --> 01:12:43,440
+out with a seed set of instructions from
+
+1648
+01:12:40,760 --> 01:12:45,800
+any data set that you want to be using
+
+1649
+01:12:43,440 --> 01:12:48,239
+and they modify those instructions to
+
+1650
+01:12:45,800 --> 01:12:50,480
+make them more complex so they say okay
+
+1651
+01:12:48,239 --> 01:12:52,920
+this is too easy let's make this harder
+
+1652
+01:12:50,480 --> 01:12:55,679
+um and that makes it possible to uh
+
+1653
+01:12:52,920 --> 01:12:58,320
+improve uh the ability of models to
+
+1654
+01:12:55,679 --> 01:13:00,440
+solve complex problems so this is
+
+1655
+01:12:58,320 --> 01:13:02,120
+actually a really popular you know area
+
+1656
+01:13:00,440 --> 01:13:04,080
+overall nowadays I'm not going to do
+
+1657
+01:13:02,120 --> 01:13:06,960
+it justice in one slide so we'll talk a
+
+1658
+01:13:04,080 --> 01:13:09,199
+bit more about it later but um uh this
+
+1659
+01:13:06,960 --> 01:13:11,440
+is the general
+
+1660
+01:13:09,199 --> 01:13:14,280
+idea
+
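+A minimal sketch of the Self-Instruct loop just described; llm_generate stands in for whatever language-model call is used, and the overlap filter below is a simplified assumption rather than the paper's exact deduplication rule.
+
+import difflib, random
+
+def too_similar(new, pool, threshold=0.7):
+    # Crude overlap-based deduplication stand-in.
+    return any(difflib.SequenceMatcher(None, new, t).ratio() > threshold
+               for t in pool)
+
+def self_instruct(seed_tasks, llm_generate, rounds=1000):
+    task_pool = list(seed_tasks)  # e.g. the 175 seed instructions
+    for _ in range(rounds):
+        # Prompt the model with sampled tasks and ask for a new one.
+        demos = random.sample(task_pool, min(8, len(task_pool)))
+        prompt = "\n".join(demos) + "\nNew instruction:"
+        new_task = llm_generate(prompt)
+        if new_task and not too_similar(new_task, task_pool):
+            task_pool.append(new_task)  # feed back into the pool
+    return task_pool
+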
+1661
+01:13:11,440 --> 01:13:18,159
+cool and yeah that's all I have for
+
+1662
+01:13:14,280 --> 01:13:18,159
+today uh any questions or
+
+1663
+01:13:20,679 --> 01:13:25,360
+yeah talk about other places
+
+1664
+01:13:26,760 --> 01:13:31,199
+oh yeah yeah sorry very good
+
+1665
+01:13:29,480 --> 01:13:32,880
+question and I actually wanted to put
+
+1666
+01:13:31,199 --> 01:13:34,960
+that on my slide but I just realized I
+
+1667
+01:13:32,880 --> 01:13:36,800
+forgot so thank you for prompting me um
+
+1668
+01:13:34,960 --> 01:13:38,920
+so when would you want to do uh
+
+1669
+01:13:36,800 --> 01:13:40,760
+single-task fine-tuning versus
+
+1670
+01:13:38,920 --> 01:13:45,199
+instruction
+
+1671
+01:13:40,760 --> 01:13:47,880
+tuning if you have a very
+
+1672
+01:13:45,199 --> 01:13:49,360
+clear task definition
+
+1673
+01:13:47,880 --> 01:13:51,280
+and you have lots of training data doing
+
+1674
+01:13:49,360 --> 01:13:53,440
+full fine-tuning can be good for a
+
+1675
+01:13:51,280 --> 01:13:56,120
+number of reasons number one you can get
+
+1676
+01:13:53,440 --> 01:13:57,800
+maybe slightly superior accuracy
+
+1677
+01:13:56,120 --> 01:14:00,280
+with bigger models but you can get much
+
+1678
+01:13:57,800 --> 01:14:01,719
+better accuracy with smaller models
+
+1679
+01:14:00,280 --> 01:14:04,120
+because smaller models don't have the
+
+1680
+01:14:01,719 --> 01:14:07,960
+capacity to like do really really well
+
+1681
+01:14:04,120 --> 01:14:10,280
+on lots of different tasks so um I think
+
+1682
+01:14:07,960 --> 01:14:12,760
+you'll see you know some
+
+1683
+01:14:10,280 --> 01:14:14,840
+improvement maybe a somewhat marginal
+
+1684
+01:14:12,760 --> 01:14:17,560
+improvement on bigger models but you'll
+
+1685
+01:14:14,840 --> 01:14:20,040
+see a big improvement on smaller models
+
+1686
+01:14:17,560 --> 01:14:22,440
+and there have been some
+
+1687
+01:14:20,040 --> 01:14:24,639
+interesting results on this recently
+
+1688
+01:14:22,440 --> 01:14:27,000
+like there's a really strong text-to-SQL
+
+1689
+01:14:24,639 --> 01:14:28,760
+model that was based on Llama 7B that
+
+1690
+01:14:27,000 --> 01:14:32,639
+was just trained on tons and tons of
+
+1691
+01:14:28,760 --> 01:14:35,520
+text-to-SQL data for example um and so
+
+1692
+01:14:32,639 --> 01:14:38,520
+there's certain tasks where it's really
+
+1693
+01:14:35,520 --> 01:14:41,520
+important another example is
+
+1694
+01:14:38,520 --> 01:14:45,120
+um on translation
+
+1695
+01:14:41,520 --> 01:14:48,280
+tasks uh there's a model called NLLB which
+
+1696
+01:14:45,120 --> 01:14:50,880
+is 3.3 billion parameters and it's
+
+1697
+01:14:48,280 --> 01:14:53,560
+competitive with GPT-4 on very
+
+1698
+01:14:50,880 --> 01:14:55,000
+large languages with
+
+1699
+01:14:53,560 --> 01:14:57,199
+lots of pretraining data and way better than
+
+1700
+01:14:55,000 --> 01:15:00,080
+GPT-4 on languages with less pretraining
+
+1701
+01:14:57,199 --> 01:15:01,800
+data so um it just shows how like if you
+
+1702
+01:15:00,080 --> 01:15:03,880
+very carefully work on a special-purpose
+
+1703
+01:15:01,800 --> 01:15:05,800
+model even if it's very small compared
+
+1704
+01:15:03,880 --> 01:15:08,280
+to the bigger model you can still do a
+
+1705
+01:15:05,800 --> 01:15:10,560
+really good job so I think that's the
+
+1706
+01:15:08,280 --> 01:15:13,440
+biggest thing
+
+1707
+01:15:10,560 --> 01:15:15,199
+um another thing is
+
+1708
+01:15:13,440 --> 01:15:16,600
+if you have a very fixed format and you
+
+1709
+01:15:15,199 --> 01:15:17,880
+always want something in a format you
+
+1710
+01:15:16,600 --> 01:15:25,199
+might want to be + +1711 +01:15:17,880 --> 01:15:25,199 +doing the prev page thank only one Inu + +1712 +01:15:37,639 --> 01:15:43,840 +well it you are inputting at least 175 + +1713 +01:15:41,400 --> 01:15:47,280 +seed ones and + +1714 +01:15:43,840 --> 01:15:49,080 +um you know you're sampling from the + +1715 +01:15:47,280 --> 01:15:51,320 +model you're asking it to generate new + +1716 +01:15:49,080 --> 01:15:52,400 +instructions so if you have a a model + +1717 +01:15:51,320 --> 01:15:54,000 +that's good enough at following + +1718 +01:15:52,400 --> 01:15:56,320 +instructions it'll be be able toate + +1719 +01:15:54,000 --> 01:15:56,320 +something + +1720 +01:16:00,400 --> 01:16:05,400 +new for + +1721 +01:16:02,400 --> 01:16:07,600 +this yeah they have a class I believe + +1722 +01:16:05,400 --> 01:16:12,560 +they have a classifier that says it will + +1723 +01:16:07,600 --> 01:16:12,560 +be one of these two yeah + +1724 +01:16:15,000 --> 01:16:18,639 +dur it can be + +1725 +01:16:20,280 --> 01:16:26,760 +both yeah well but also the + +1726 +01:16:24,080 --> 01:16:28,440 +um the C test can be input first and + +1727 +01:16:26,760 --> 01:16:31,520 +output first and you're like generating + +1728 +01:16:28,440 --> 01:16:34,760 +a new instruction for the LM + +1729 +01:16:31,520 --> 01:16:36,199 +here so this this is from the task pool + +1730 +01:16:34,760 --> 01:16:37,960 +but you're asking the LM to generate a + +1731 +01:16:36,199 --> 01:16:40,120 +new + +1732 +01:16:37,960 --> 01:16:44,800 +instruction + +1733 +01:16:40,120 --> 01:16:44,800 +yeah cool and anything + +1734 +01:16:45,320 --> 01:16:51,239 +else okay um yeah that that's all we + +1735 +01:16:48,719 --> 01:16:54,239 +have for today so thank + +1736 +01:16:51,239 --> 01:16:54,239 +you \ No newline at end of file diff --git a/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning/transcript.vtt b/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning/transcript.vtt new file mode 100644 index 0000000000000000000000000000000000000000..2b2c2b397877eab63873e5a08b764567320b590b --- /dev/null +++ b/CMU Advanced NLP 2024 (8) Fine-tuning and Instruction Tuning/transcript.vtt @@ -0,0 +1,5209 @@ +WEBVTT + +00:00:02.720 --> 00:00:06.720 +yeah today I'll talk about fine tuning + +00:00:04.400 --> 00:00:09.599 +and instruction tuning uh so this is + +00:00:06.720 --> 00:00:12.679 +kind of the first step in the pipeline + +00:00:09.599 --> 00:00:14.480 +of steps that people use to prepare + +00:00:12.679 --> 00:00:16.320 +models to be ready to be used as + +00:00:14.480 --> 00:00:20.760 +chatbots like you know what you see in + +00:00:16.320 --> 00:00:22.880 +chat GPT or uh gemini or whatever else + +00:00:20.760 --> 00:00:26.240 +you want to be + +00:00:22.880 --> 00:00:28.240 +using and what + +00:00:26.240 --> 00:00:29.679 +these what this basically takes + +00:00:28.240 --> 00:00:32.160 +advantage of is that we have many many + +00:00:29.679 --> 00:00:33.200 +different tasks that we can be solving + +00:00:32.160 --> 00:00:35.960 +in + +00:00:33.200 --> 00:00:37.160 +NLP and each requires different + +00:00:35.960 --> 00:00:40.440 +varieties of + +00:00:37.160 --> 00:00:42.680 +data so we up until this point we've + +00:00:40.440 --> 00:00:46.239 +talked a lot about the varieties of + +00:00:42.680 --> 00:00:47.520 +tasks that only require text uh such as + +00:00:46.239 --> 00:00:51.600 +language + +00:00:47.520 --> 00:00:54.280 +modeling and then we also have other + +00:00:51.600 --> 00:00:56.160 +varieties of tasks 
that require only + +00:00:54.280 --> 00:00:58.160 +naturally occurring data so like data + +00:00:56.160 --> 00:01:01.600 +that we don't actually have to create by + +00:00:58.160 --> 00:01:04.560 +hand and or do that we don't have to + +00:01:01.600 --> 00:01:08.240 +create by hand for the purpose of + +00:01:04.560 --> 00:01:10.840 +training like language models or M uh + +00:01:08.240 --> 00:01:12.680 +NLP models and this includes stuff like + +00:01:10.840 --> 00:01:14.280 +machine translation and the reason why + +00:01:12.680 --> 00:01:16.240 +we have lots of machine translation data + +00:01:14.280 --> 00:01:19.479 +is people do translation anyway even if + +00:01:16.240 --> 00:01:20.799 +we didn't have like chat GPT or Google + +00:01:19.479 --> 00:01:22.600 +translate or something people would be + +00:01:20.799 --> 00:01:24.920 +doing translation a lot of this data can + +00:01:22.600 --> 00:01:27.400 +be used to train + +00:01:24.920 --> 00:01:29.640 +models then other things are hand + +00:01:27.400 --> 00:01:33.040 +labeled data and so this is like for a + +00:01:29.640 --> 00:01:35.159 +lot of things like question answering or + +00:01:33.040 --> 00:01:37.280 +um other + +00:01:35.159 --> 00:01:40.000 +tasks that you need to create data like + +00:01:37.280 --> 00:01:42.399 +name dty recognition or stuff like this + +00:01:40.000 --> 00:01:44.079 +there that data really mostly isn't + +00:01:42.399 --> 00:01:46.159 +naturally occurring so we need to go in + +00:01:44.079 --> 00:01:47.960 +and actually create it by hand in order + +00:01:46.159 --> 00:01:50.399 +to do + +00:01:47.960 --> 00:01:53.280 +training so like one of the interesting + +00:01:50.399 --> 00:01:54.840 +things about you know the whole Paradigm + +00:01:53.280 --> 00:01:57.960 +of training language models over the + +00:01:54.840 --> 00:02:00.880 +past several years is that it we have + +00:01:57.960 --> 00:02:03.439 +been remarkably successful in getting + +00:02:00.880 --> 00:02:07.640 +models to work at a very large number of + +00:02:03.439 --> 00:02:09.319 +tasks by training only on text so you + +00:02:07.640 --> 00:02:11.920 +know we train something like llama we + +00:02:09.319 --> 00:02:13.720 +train something like the early GP models + +00:02:11.920 --> 00:02:16.360 +that were trained only on text without + +00:02:13.720 --> 00:02:19.560 +uh very much supervised training + +00:02:16.360 --> 00:02:21.680 +data the and the reason why is like what + +00:02:19.560 --> 00:02:23.920 +I mentioned last class which is like + +00:02:21.680 --> 00:02:27.239 +actually a lot of data on the internet + +00:02:23.920 --> 00:02:28.760 +just occurs in this form anyway so we + +00:02:27.239 --> 00:02:31.519 +have + +00:02:28.760 --> 00:02:34.840 +uh things + +00:02:31.519 --> 00:02:36.519 +like phrase books that appear online and + +00:02:34.840 --> 00:02:38.959 +these phrase books weren't explicitly + +00:02:36.519 --> 00:02:41.000 +created it's machine translation data or + +00:02:38.959 --> 00:02:43.519 +translation data even but they appear + +00:02:41.000 --> 00:02:46.519 +online and there's actually a + +00:02:43.519 --> 00:02:46.519 +paper + +00:02:49.879 --> 00:02:53.959 +um that examines + +00:02:58.519 --> 00:03:03.159 +this did I didn't cite in the slides but + +00:03:01.440 --> 00:03:08.440 +it's it's a kind of interesting paper + +00:03:03.159 --> 00:03:10.200 +from ACL this year where they find that + +00:03:08.440 --> 00:03:12.680 +despite the fact that that there's a + +00:03:10.200 --> 00:03:14.920 
+language model that was trained on just + +00:03:12.680 --> 00:03:17.360 +you know random data from the web they + +00:03:14.920 --> 00:03:20.799 +found over 30 million translation pairs + +00:03:17.360 --> 00:03:22.959 +across at least 44 languages um in this + +00:03:20.799 --> 00:03:25.920 +data that was just like scrip in the web + +00:03:22.959 --> 00:03:28.080 +not you know explicitly for translation + +00:03:25.920 --> 00:03:32.000 +and so there's lots of other examples of + +00:03:28.080 --> 00:03:35.239 +this uh you know question pairs from FAQ + +00:03:32.000 --> 00:03:38.280 +pages on sites or other things like that + +00:03:35.239 --> 00:03:41.319 +so but anyway yeah getting back to the + +00:03:38.280 --> 00:03:43.959 +original uh the original thing here in + +00:03:41.319 --> 00:03:47.120 +many cases uh your models will have + +00:03:43.959 --> 00:03:48.640 +already been exposed to some data uh + +00:03:47.120 --> 00:03:51.319 +there's some naturally occurring data + +00:03:48.640 --> 00:03:53.239 +that you can Harvest and curate in an + +00:03:51.319 --> 00:03:54.720 +appropriate way and then sometimes if + +00:03:53.239 --> 00:03:56.720 +you really want models to do something + +00:03:54.720 --> 00:03:57.799 +well you can do handl but that's very + +00:03:56.720 --> 00:04:00.959 +expensive and you're not going to be + +00:03:57.799 --> 00:04:04.720 +able to create very much data + +00:04:00.959 --> 00:04:07.319 +so one one very funny thing is uh I was + +00:04:04.720 --> 00:04:10.079 +playing around with GPT for + +00:04:07.319 --> 00:04:11.879 +translation and I asked it to translate + +00:04:10.079 --> 00:04:15.239 +from English to + +00:04:11.879 --> 00:04:17.079 +Japanese and it did really well most of + +00:04:15.239 --> 00:04:19.639 +the time it did you know very good + +00:04:17.079 --> 00:04:23.880 +translations on English to Japanese and + +00:04:19.639 --> 00:04:25.160 +like 900 900 out of a thousand examples + +00:04:23.880 --> 00:04:26.680 +sometimes it just got it wrong because + +00:04:25.160 --> 00:04:28.199 +it's not a perfect translation system + +00:04:26.680 --> 00:04:30.560 +but every once in a while it would + +00:04:28.199 --> 00:04:32.320 +translate into Japanese uh which is in + +00:04:30.560 --> 00:04:35.280 +Japanese characters and then it would + +00:04:32.320 --> 00:04:36.280 +translate into romanized Japanese into + +00:04:35.280 --> 00:04:41.160 +like the + +00:04:36.280 --> 00:04:42.520 +pronunciation um so no Japanese + +00:04:41.160 --> 00:04:44.080 +translator that you ask to translate + +00:04:42.520 --> 00:04:45.759 +into Japanese would ever do that that + +00:04:44.080 --> 00:04:47.039 +would be like extremely unprofessional + +00:04:45.759 --> 00:04:48.639 +right you know you're saying trans + +00:04:47.039 --> 00:04:51.720 +please translate this into Japanese for + +00:04:48.639 --> 00:04:55.360 +Japanese speakers but why would GP do + +00:04:51.720 --> 00:04:55.360 +this anyone have any + +00:04:56.639 --> 00:05:02.240 +ideas yeah someone on the internet + +00:05:00.600 --> 00:05:05.240 +yeah someone on the internet did it that + +00:05:02.240 --> 00:05:07.199 +way so a lot of the times when you have + +00:05:05.240 --> 00:05:08.560 +incidental training data on the internet + +00:05:07.199 --> 00:05:10.280 +it would be from like phrase books and + +00:05:08.560 --> 00:05:12.800 +people who are trying to teach Japanese + +00:05:10.280 --> 00:05:14.880 +for example so every once in a while + +00:05:12.800 --> 00:05:16.840 +like GPD got 
the idea that it should be
+
+00:05:14.880 --> 00:05:18.840
+translating like it did in a phrase book
+
+00:05:16.840 --> 00:05:21.199
+for Japanese learners as opposed to you
+
+00:05:18.840 --> 00:05:23.759
+know like actually English to Japanese
+
+00:05:21.199 --> 00:05:26.400
+translations so the problem is if you're
+
+00:05:23.759 --> 00:05:28.560
+learning only on this language modeling
+
+00:05:26.400 --> 00:05:29.919
+based text you might get exactly what
+
+00:05:28.560 --> 00:05:31.440
+you want but every once in a while
+
+00:05:29.919 --> 00:05:34.160
+you'll get something completely crazy
+
+00:05:31.440 --> 00:05:36.919
+that you never expected to happen so uh
+
+00:05:34.160 --> 00:05:39.639
+that's the problem with just relying on
+
+00:05:36.919 --> 00:05:39.639
+base language
+
+00:05:41.280 --> 00:05:47.240
+models so all the methods that I'm
+
+00:05:44.880 --> 00:05:50.560
+going to be talking about here uh fall
+
+00:05:47.240 --> 00:05:52.600
+under the class of multitask learning
+
+00:05:50.560 --> 00:05:54.600
+and so multitask learning is training
+
+00:05:52.600 --> 00:05:57.759
+models to do well on multiple tasks at
+
+00:05:54.600 --> 00:05:59.160
+once um just to give an example uh you
+
+00:05:57.759 --> 00:06:01.400
+could have this as an example and you
+
+00:05:59.160 --> 00:06:02.919
+could be doing language modeling on it
+
+00:06:01.400 --> 00:06:04.720
+you could also be training a model to do
+
+00:06:02.919 --> 00:06:06.720
+tagging on it and other things like this
+
+00:06:04.720 --> 00:06:10.319
+and exactly how you do this can be
+
+00:06:06.720 --> 00:06:13.560
+different but the important thing is
+
+00:06:10.319 --> 00:06:15.599
+that you have some shared parameters
+
+00:06:13.560 --> 00:06:17.840
+between the models that are trained on
+
+00:06:15.599 --> 00:06:19.280
+all tasks and if you're just training a
+
+00:06:17.840 --> 00:06:21.360
+big language model then you'll probably
+
+00:06:19.280 --> 00:06:25.440
+be sharing all of the parameters if
+
+00:06:21.360 --> 00:06:27.199
+you're training uh something like BERT
+
+00:06:25.440 --> 00:06:29.080
+or like you're pre-training and
+
+00:06:27.199 --> 00:06:31.000
+then fine-tuning you might train the
+
+00:06:29.080 --> 00:06:32.800
+body of the model on multiple tasks but
+
+00:06:31.000 --> 00:06:35.479
+have a separate classification head for
+
+00:06:32.800 --> 00:06:37.520
+different tasks so there's different
+
+00:06:35.479 --> 00:06:40.880
+ways you can do that but the basic idea
+
+00:06:37.520 --> 00:06:40.880
+is that you need to have lots of shared
+
+00:06:40.960 --> 00:06:46.280
+parameters um one easy way to do this uh
+
+00:06:44.160 --> 00:06:49.479
+the very simplest way to do this is to
+
+00:06:46.280 --> 00:06:51.800
+train the model and sample one mini
+
+00:06:49.479 --> 00:06:53.520
+batch for one task another mini batch
+
+00:06:51.800 --> 00:06:55.720
+for another task and just alternate
+
+00:06:53.520 --> 00:06:58.400
+between them or alternate between them
+
+00:06:55.720 --> 00:07:01.400
+and sample four from one task and one
+
+00:06:58.400 --> 00:07:03.879
+from another task so uh it's often as
+
+00:07:01.400 --> 00:07:03.879
+simple as
+
+00:07:04.199 --> 00:07:08.599
+that or you can
+
+00:07:06.840 --> 00:07:11.319
+just mix all of the data together so if
+
+00:07:08.599 --> 00:07:12.639
+you're doing like text um everything is
+text based then you don't even need to
+
+00:07:12.639 --> 00:07:15.280
+worry about mini batches
+
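+To make the shared-parameters idea concrete, here is a minimal PyTorch sketch with a toy encoder and one head per task; the architecture and sizes are illustrative assumptions, and the training step mirrors the alternating mini-batch scheme just described.
+
+import torch
+import torch.nn as nn
+
+class MultiTaskModel(nn.Module):
+    # One shared encoder, one small output head per task.
+    def __init__(self, vocab_size=1000, hidden=128, num_tags=10):
+        super().__init__()
+        self.embed = nn.Embedding(vocab_size, hidden)
+        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
+        self.heads = nn.ModuleDict({
+            "lm": nn.Linear(hidden, vocab_size),   # language modeling head
+            "tag": nn.Linear(hidden, num_tags),    # tagging head
+        })
+
+    def forward(self, tokens, task):
+        states, _ = self.encoder(self.embed(tokens))
+        return self.heads[task](states)
+
+model = MultiTaskModel()
+opt = torch.optim.Adam(model.parameters())
+loss_fn = nn.CrossEntropyLoss()
+
+def train_step(tokens, labels, task):
+    # Alternate calls with task="lm" and task="tag" so the shared
+    # encoder is updated by both objectives.
+    logits = model(tokens, task)
+    loss = loss_fn(logits.flatten(0, 1), labels.flatten())
+    opt.zero_grad(); loss.backward(); opt.step()
+    return loss.item()
+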
+00:07:18.759 --> 00:07:23.960
+cool so separately from this uh
+pre-train and fine-tune so in pre-train
+
+00:07:21.440 --> 00:07:26.360
+and fine-tune you first train on one
+
+00:07:23.960 --> 00:07:28.240
+task and then on another and the way
+
+00:07:26.360 --> 00:07:30.599
+this works is you first train for
+
+00:07:28.240 --> 00:07:31.960
+example a language modeling objective and
+
+00:07:30.599 --> 00:07:35.199
+then after you're done training the
+
+00:07:31.960 --> 00:07:37.440
+language modeling objective you
+
+00:07:35.199 --> 00:07:41.360
+train on something else like
+
+00:07:37.440 --> 00:07:43.520
+tagging and there are several reasons why
+
+00:07:41.360 --> 00:07:45.199
+you might want to do this does anyone
+
+00:07:43.520 --> 00:07:48.479
+have an idea about why you might want to
+
+00:07:45.199 --> 00:07:50.720
+do this as opposed to something like
+
+00:07:48.479 --> 00:07:53.319
+standard multitask learning where you do
+
+00:07:50.720 --> 00:07:57.000
+both of them at the same time this is a
+
+00:07:53.319 --> 00:07:57.000
+straightforward question perhaps
+
+00:07:57.479 --> 00:08:03.520
+but when I say straightforward
+
+00:08:00.039 --> 00:08:03.520
+I don't mean easy I mean not a trick
+
+00:08:03.599 --> 00:08:06.800
+question any
+
+00:08:09.039 --> 00:08:15.120
+ideas um okay how many of you have
+
+00:08:11.800 --> 00:08:17.960
+trained uh a 70 billion parameter
+
+00:08:15.120 --> 00:08:17.960
+language model from
+
+00:08:18.960 --> 00:08:23.080
+scratch I see somebody
+
+00:08:21.360 --> 00:08:27.680
+saying maybe so that's actually pretty
+
+00:08:23.080 --> 00:08:27.680
+impressive but um why
+
+00:08:27.720 --> 00:08:31.240
+not yeah
+
+00:08:31.800 --> 00:08:35.440
+yeah it's unbelievably
+
+00:08:33.680 --> 00:08:37.320
+expensive and a waste of resources yeah
+
+00:08:35.440 --> 00:08:39.440
+so like if everybody was doing it it
+
+00:08:37.320 --> 00:08:41.240
+would be a waste of resources so we
+
+00:08:39.440 --> 00:08:42.479
+actually benefit a lot by a very small
+
+00:08:41.240 --> 00:08:45.600
+number of people doing this pre-
+
+00:08:42.479 --> 00:08:48.240
+training and then the rest of us doing
+
+00:08:45.600 --> 00:08:50.560
+you know fine-tuning uh on a smaller
+
+00:08:48.240 --> 00:08:53.320
+amount of data so if you were doing all
+
+00:08:50.560 --> 00:08:55.040
+the multitask learning uh from scratch then
+
+00:08:53.320 --> 00:08:57.600
+that could be a
+
+00:08:55.040 --> 00:09:01.200
+waste does anyone have an idea why you
+
+00:08:57.600 --> 00:09:01.200
+might not want to do this
+
+00:09:02.640 --> 00:09:06.800
+or actually there are some other
+
+00:09:04.079 --> 00:09:08.240
+reasons why you might want to do this um
+
+00:09:06.800 --> 00:09:10.480
+another reason why you might want to do
+
+00:09:08.240 --> 00:09:13.240
+this is for example if your pre-training
+
+00:09:10.480 --> 00:09:15.040
+data is big and messy uh like for
+
+00:09:13.240 --> 00:09:17.600
+example if your pre-training data is all
+
+00:09:15.040 --> 00:09:20.600
+of the internet and the internet
+
+00:09:17.600 --> 00:09:22.000
+contains like lots of toxic text and
+
+00:09:20.600 --> 00:09:23.640
+text that's in a format that you don't
+
+00:09:22.000 --> 00:09:25.959
+want you can still train on it and learn
+
+00:09:23.640 --> 00:09:28.800
+from it but then fine-tuning can you
+
+00:09:25.959 --> 
00:09:32.000 +know make your model safer or uh remove + +00:09:28.800 --> 00:09:33.360 +tox or other like that as well so does + +00:09:32.000 --> 00:09:34.480 +anyone have an idea why you might not + +00:09:33.360 --> 00:09:38.480 +want to do + +00:09:34.480 --> 00:09:38.480 +this this is a trickier + +00:09:40.200 --> 00:09:43.440 +question any + +00:09:45.320 --> 00:09:49.720 +ideas or or to put it in a different way + +00:09:48.079 --> 00:09:52.880 +while you might want to do standard + +00:09:49.720 --> 00:09:56.000 +multitasking instead of this yeah just + +00:09:52.880 --> 00:09:59.279 +again so if you don't have much teching + +00:09:56.000 --> 00:10:01.480 +data for example then you might consider + +00:09:59.279 --> 00:10:01.480 +like + +00:10:02.399 --> 00:10:10.200 +doing uh so if you have lots of tagging + +00:10:06.480 --> 00:10:12.320 +data I I think you're you're yeah so I I + +00:10:10.200 --> 00:10:13.560 +think you're basically this is a good + +00:10:12.320 --> 00:10:15.880 +point so if you don't have lots of + +00:10:13.560 --> 00:10:17.240 +tagging data um you might have much much + +00:10:15.880 --> 00:10:18.800 +more language modeling data than you + +00:10:17.240 --> 00:10:21.200 +have tagging data so it's a better idea + +00:10:18.800 --> 00:10:24.959 +to train more on it that is true but you + +00:10:21.200 --> 00:10:26.519 +could sample like 99 mini batches of uh + +00:10:24.959 --> 00:10:29.480 +of language modeling data and One Mini + +00:10:26.519 --> 00:10:31.399 +batch of train data or 999 of language + +00:10:29.480 --> 00:10:34.480 +model in data + +00:10:31.399 --> 00:10:37.040 +so it's a good you're in going in a good + +00:10:34.480 --> 00:10:40.040 +direction anything + +00:10:37.040 --> 00:10:40.040 +else + +00:10:44.639 --> 00:10:50.800 +yeah uh so if your pre-training data has + +00:10:48.959 --> 00:10:52.000 +certain biases you might inherit it do + +00:10:50.800 --> 00:10:54.240 +you think that's a bigger problem with + +00:10:52.000 --> 00:10:56.839 +pre-training or pre- tring and F traing + +00:10:54.240 --> 00:10:56.839 +or standard + +00:10:58.040 --> 00:11:01.040 +Ming + +00:11:12.660 --> 00:11:15.750 +[Music] + +00:11:18.600 --> 00:11:23.240 +yeah um so you might you might lose some + +00:11:21.320 --> 00:11:25.560 +of the information that exists in the + +00:11:23.240 --> 00:11:27.480 +multitask data set I think that's pretty + +00:11:25.560 --> 00:11:29.560 +close to what I'm going to say so let me + +00:11:27.480 --> 00:11:30.920 +um let me just go ahead and give the + +00:11:29.560 --> 00:11:35.160 +hopefully everybody had time to think + +00:11:30.920 --> 00:11:37.279 +about it but um this is a paper that we + +00:11:35.160 --> 00:11:40.320 +wrote previously and basically one + +00:11:37.279 --> 00:11:41.320 +interesting thing is that you actually + +00:11:40.320 --> 00:11:44.560 +do + +00:11:41.320 --> 00:11:47.320 +better um you do better if you train on + +00:11:44.560 --> 00:11:50.160 +multiple tasks at the same time and our + +00:11:47.320 --> 00:11:51.480 +hypothesis about why the reason uh you + +00:11:50.160 --> 00:11:53.279 +do better on the end task that you + +00:11:51.480 --> 00:11:55.200 +finally want to do well on compared to + +00:11:53.279 --> 00:11:58.079 +pre-training and fine tuning and our + +00:11:55.200 --> 00:12:01.079 +hypothesis about this um which I've also + +00:11:58.079 --> 00:12:03.160 +seen a few other works is if you + +00:12:01.079 --> 00:12:05.040 +pre-train on the task that you finally + +00:12:03.160 
--> 00:12:07.959 +want to solve while you're also solving + +00:12:05.040 --> 00:12:12.120 +the language modeling task + +00:12:07.959 --> 00:12:14.279 +the essentially the model is learning + +00:12:12.120 --> 00:12:17.000 +representations that are useful for both + +00:12:14.279 --> 00:12:18.839 +at the same time as opposed to if you're + +00:12:17.000 --> 00:12:20.680 +training on the language modeling task + +00:12:18.839 --> 00:12:22.079 +it will be learning representations that + +00:12:20.680 --> 00:12:24.440 +are useful for the language modeling + +00:12:22.079 --> 00:12:26.079 +task but not necessarily focusing on the + +00:12:24.440 --> 00:12:28.639 +representations that would be useful for + +00:12:26.079 --> 00:12:31.360 +the end so like for example if you're + +00:12:28.639 --> 00:12:33.040 +joining training on sentiment analysis + +00:12:31.360 --> 00:12:34.600 +and language modeling the + +00:12:33.040 --> 00:12:36.600 +representations that are useful for + +00:12:34.600 --> 00:12:39.600 +sentiment analysis will be more Salient + +00:12:36.600 --> 00:12:43.199 +than the like in the model essentially + +00:12:39.600 --> 00:12:45.160 +and so we um that will be particularly a + +00:12:43.199 --> 00:12:46.639 +problem when there's multiple when you + +00:12:45.160 --> 00:12:49.560 +have a + +00:12:46.639 --> 00:12:51.519 +like a varied optimization landscape in + +00:12:49.560 --> 00:12:53.199 +multiple local Optima and the language + +00:12:51.519 --> 00:12:55.199 +modeling might not get you into the + +00:12:53.199 --> 00:12:57.480 +global Optimum uh that you want for the + +00:12:55.199 --> 00:12:59.279 +end task that you're solving there's + +00:12:57.480 --> 00:13:02.519 +also another interesting paper from + +00:12:59.279 --> 00:13:04.120 +anthropic more recently than ours that + +00:13:02.519 --> 00:13:05.399 +shows something a little bit similar + +00:13:04.120 --> 00:13:06.279 +specifically from the point of view of + +00:13:05.399 --> 00:13:08.760 +safety + +00:13:06.279 --> 00:13:12.000 +training and they demonstrate that if + +00:13:08.760 --> 00:13:14.040 +you start out by having a concept of + +00:13:12.000 --> 00:13:17.279 +safety early in your training you're + +00:13:14.040 --> 00:13:19.600 +able to reach better um better final + +00:13:17.279 --> 00:13:21.000 +results than if you start safety + +00:13:19.600 --> 00:13:23.760 +training after you trained your model + +00:13:21.000 --> 00:13:26.480 +for a while so um and this is + +00:13:23.760 --> 00:13:28.880 +particularly for things like toxicity to + +00:13:26.480 --> 00:13:30.920 +so there are downsides to pre-training + +00:13:28.880 --> 00:13:32.720 +find tuning but the upsides of you know + +00:13:30.920 --> 00:13:34.360 +spending lots of compute once and then F + +00:13:32.720 --> 00:13:36.440 +tuning for lots of different you know + +00:13:34.360 --> 00:13:40.839 +Downstream tests is like large enough + +00:13:36.440 --> 00:13:40.839 +that that's still the standard not + +00:13:41.160 --> 00:13:49.720 +this um any questions about + +00:13:44.920 --> 00:13:49.720 +that okay cool let's uh let's move + +00:13:49.959 --> 00:13:55.040 +on um so we talked about prompting + +00:13:53.199 --> 00:13:57.399 +before I'm just going to go over that + +00:13:55.040 --> 00:13:59.920 +very quick you know just say it for + +00:13:57.399 --> 00:14:03.079 +completeness but when we're prompting uh + +00:13:59.920 --> 00:14:04.839 +we have an encoder uh we train it on + +00:14:03.079 --> 00:14:07.000 +language modeling or 
whatever else but + +00:14:04.839 --> 00:14:10.399 +then we freeze it and then we specify + +00:14:07.000 --> 00:14:13.240 +the task by a prefix like + +00:14:10.399 --> 00:14:15.000 +this and and what instruction tuning + +00:14:13.240 --> 00:14:17.240 +does is instruction tuning is like a + +00:14:15.000 --> 00:14:20.839 +combination of fine-tuning and prompting + +00:14:17.240 --> 00:14:23.160 +and so what we do is we pre-train and + +00:14:20.839 --> 00:14:27.360 +then we + +00:14:23.160 --> 00:14:29.040 +oh sorry I uh guess I I failed to update + +00:14:27.360 --> 00:14:31.440 +the the figure here so this is just a + +00:14:29.040 --> 00:14:37.199 +figure for f tuning so normally what you + +00:14:31.440 --> 00:14:39.519 +do is you um you have a prompt for one + +00:14:37.199 --> 00:14:42.480 +task a prompt for another task a prompt + +00:14:39.519 --> 00:14:45.440 +for another task and then you uh train + +00:14:42.480 --> 00:14:47.040 +your model specifically so that it does + +00:14:45.440 --> 00:14:49.079 +good completions of those prps and it'll + +00:14:47.040 --> 00:14:51.680 +give some actual examples of that right + +00:14:49.079 --> 00:14:54.199 +so yeah sorry the the figure uh I I will + +00:14:51.680 --> 00:14:54.199 +need to fix + +00:14:57.680 --> 00:15:03.079 +it + +00:15:00.399 --> 00:15:03.079 +just taking a + +00:15:03.800 --> 00:15:11.399 +note + +00:15:06.560 --> 00:15:14.560 +um okay so we haven't really covered + +00:15:11.399 --> 00:15:16.279 +um fine tuning yet in general so I want + +00:15:14.560 --> 00:15:18.240 +to talk a little bit about what we do uh + +00:15:16.279 --> 00:15:20.160 +for fine tuning and particularly what we + +00:15:18.240 --> 00:15:22.079 +do for fine tuning very large models + +00:15:20.160 --> 00:15:23.639 +because I think that's what a lot of + +00:15:22.079 --> 00:15:27.680 +people want to do + +00:15:23.639 --> 00:15:30.360 +nowadays so for full fine tuning um full + +00:15:27.680 --> 00:15:31.120 +fine tuning is relative L easy uh what + +00:15:30.360 --> 00:15:35.120 +we + +00:15:31.120 --> 00:15:36.920 +do um easy conceptually hard in practice + +00:15:35.120 --> 00:15:40.360 +so what we do is we simply continue + +00:15:36.920 --> 00:15:43.480 +training the language model on uh + +00:15:40.360 --> 00:15:45.839 +whatever data we want to be fitting to + +00:15:43.480 --> 00:15:47.240 +so this could be like translation pairs + +00:15:45.839 --> 00:15:49.199 +it could be question answering pairs it + +00:15:47.240 --> 00:15:52.000 +could be anything else like + +00:15:49.199 --> 00:15:53.839 +that um but the issue is depending on + +00:15:52.000 --> 00:15:56.720 +the method that you're using to optimize + +00:15:53.839 --> 00:15:59.120 +your model uh the method can take lots + +00:15:56.720 --> 00:16:00.959 +of memory and also in some some cases it + +00:15:59.120 --> 00:16:02.319 +can be relatively unstable compared to + +00:16:00.959 --> 00:16:04.240 +some other Alternatives that I'm going + +00:16:02.319 --> 00:16:07.079 +to talk about in a + +00:16:04.240 --> 00:16:10.440 +bit and just to give an example uh + +00:16:07.079 --> 00:16:13.560 +training a 65 billion parameter model uh + +00:16:10.440 --> 00:16:16.319 +which is the largest version of llama 1 + +00:16:13.560 --> 00:16:18.880 +uh with 16 bit mixed Precision actually + +00:16:16.319 --> 00:16:21.759 +takes uh much more memory than you would + +00:16:18.880 --> 00:16:26.440 +expect uh if you haven't done this + +00:16:21.759 --> 00:16:29.240 +before so if you look at the 
00:15:49.199 --> 00:16:04.240
But the issue is, depending on the method that you're using to optimize your model, the method can take lots of memory, and in some cases it can be relatively unstable compared to some other alternatives that I'm going to talk about in a bit.

00:16:04.240 --> 00:16:29.240
And just to give an example, training a 65-billion-parameter model, which is the largest version of LLaMA 1, with 16-bit mixed precision actually takes much more memory than you would expect if you haven't done this before.

00:16:26.440 --> 00:16:43.160
So if you look at the amount of memory required for holding the model in the first place: if we have 65 billion parameters, times two bytes, that would be 130 gigabytes of memory already. So that's already a lot of memory, right?

00:16:43.160 --> 00:17:04.160
But if we want to hold both the parameters and the gradients of the model, obviously we need to double the number of bytes here, so we also have 130 gigabytes for the, sorry, for the gradients.

00:17:01.880 --> 00:17:25.280
Then we have the optimizer, and this could be an optimizer like Adam. If people remember, Adam has first moments and second moments, so it has the mean and something that looks like the variance.

00:17:21.240 --> 00:17:43.960
And these, at least according to this paper from 2019, needed to be stored in 32 bits, because if you stored them in smaller amounts of memory they would have underflow issues, overflow issues, and basically the numerical precision would destabilize your training.

00:17:40.640 --> 00:17:54.280
And then in addition, the parameters also needed to be kept in 32 bits, so you needed a 32-bit copy of the parameters.

00:17:51.760 --> 00:18:14.679
This is just the parameters of the model, and then separately from that you also need to do the forward and backward passes. And depending on how big your batch size is and how many tokens you have in each instance, these could take significant amounts of memory too, like 100 to 200 gigabytes.

00:18:11.559 --> 00:18:27.360
So overall this would take around 1,000 to 1,400 gigabytes of GPU memory in the very naive scenario, and this is not that great.
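As a back-of-the-envelope check on those numbers (the 100 to 200 gigabytes for activations is the lecturer's rough figure, so treat this as a sketch, not an exact account):

    # Naive mixed-precision fine-tuning, per the 2019-era recipe described above:
    # 2 bytes fp16 params + 2 bytes fp16 grads + 12 bytes fp32 Adam state and param copy.
    def full_finetune_gb(n_params, optimizer_bytes=12, activations_gb=(100, 200)):
        fixed = n_params * (2 + 2 + optimizer_bytes) / 1e9
        return fixed + activations_gb[0], fixed + activations_gb[1]

    print(full_finetune_gb(65e9))  # -> about (1140, 1240) GB, in the 1,000-1,400 GB range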
00:18:27.360 --> 00:18:39.400
Now, this paper was written in 2019, and there have been some advances since then in optimizing models. So to give some examples of things that can be fixed:

00:18:39.400 --> 00:18:57.840
previously, when we were using fp16, the regular floating-point numbers like we use on our CPU, you needed 32-bit floats to make this stable.

00:18:55.400 --> 00:19:15.919
Now it's pretty standard to use BF16, brain float 16, like I talked about earlier in the class, and because of that this can be made more stable, so you can reduce this to two bytes instead of four bytes.

00:19:11.880 --> 00:19:29.000
And if we do that, we also don't need this extra copy of the parameters, so we can get away with about eight bytes per parameter we want to optimize. But that's still, you know, a lot of memory, that's 130 gigabytes of memory for a 65-billion-parameter model, and the forward and backward passes still take space as well.

00:19:29.000 --> 00:19:38.159
So basically what I want to say is: full fine-tuning is pretty memory intensive.

00:19:38.159 --> 00:19:58.280
And if we look at how big a standard GPU is, I took some specs here. The memory is just the memory on the GPU; for the cost, I did a very unscientific thing of Googling the price on Amazon and taking a look at the price of the GPU; and then on the right side, these are the types of cloud machines that support these GPUs.

00:20:04.000 --> 00:20:20.080
And in this class, a lot of people are using Google Colab, I think, for your current assignment, and soon we'll have AWS credits for everybody, so you can use AWS machines.

00:20:17.640 --> 00:20:40.000
So if you look at the GPUs that are available, we have things everywhere from 24 gigabytes, 32 gigabytes, 40 to 80 gigabytes, 48 gigabytes. Or, on your Mac, the GPU and CPU memory is shared.

00:20:35.760 --> 00:20:47.039
And basically what we can see is that there's no GPU with 130 gigabytes of memory, right? So none of them can do this with a single GPU.

00:20:47.039 --> 00:21:07.679
There are also a bunch of other hardware options, like AMD GPUs, Google TPUs, special-purpose training hardware like Cerebras, AWS Trainium, etc., but I think for the purposes of this class you're probably going to use standard hardware like this.
00:21:01.120 --> 00:21:15.000
So anyway, that model, or that fine-tuning, will not fit on any GPU that you have access to. Any questions about this?

00:21:21.360 --> 00:21:32.799
[Student question about the special-purpose hardware.] Yeah, so a lot of these are created specifically for training neural networks, so they're really, really good at the things you need for training neural networks.

00:21:32.799 --> 00:21:48.320
I haven't actually used any of these, so I can't endorse or disendorse any of them, but they're made to be really good at training Transformer language models, like, the specific thing that everybody wants to train.

00:21:48.320 --> 00:22:00.159
The disadvantage is, if you start wanting to be a little bit more creative than what they imagined, it might not support that.

00:21:57.840 --> 00:22:15.360
That's also a problem with TPUs. TPUs are very good at certain things, like large batched operations, but they're less good at nimbly executing dynamic computation graphs and stuff. So from that point of view, I think most people in research still default to GPUs.

00:22:15.679 --> 00:22:39.919
One thing I should mention is the AMD GPUs: a lot of people have started using them in, like, 2023, 2024. I think previously it was kind of a one-horse race for NVIDIA, but I've heard of more and more people using AMDs, and they're not priced up quite as much. Any other questions?

00:22:47.919 --> 00:23:05.279
[Student question about what the big labs train on.] So, training models: if they're pre-training models, they're using, like, a thousand, 2,000, 4,000 GPUs or something. Like, Meta just announced that they got 350,000 H100s or something like this.

00:23:12.480 --> 00:23:24.360
And in case you are too lazy to calculate, that's about 10 to 20 billion dollars. It's a lot of money.

00:23:24.360 --> 00:23:34.240
And I'm sure not all of them are being used to train a model; you know, a lot of them are used for model serving and stuff like that.
00:23:32.520 --> 00:23:43.640
But there's a reason why we're not all pre-training models, right? It's a big effort, it's very expensive. So, cool, any other questions?

00:23:44.320 --> 00:24:02.600
Cool, okay. So how can we overcome this? The first way we can overcome this is using things like multi-GPU training, and one solution is just to throw more hardware at the models and distribute the models over multiple places.

00:24:02.600 --> 00:24:19.720
And the canonical, or the most well-known, version of this, which still many, many people use when they're pre-training or fine-tuning language models, is something called DeepSpeed ZeRO. And the way DeepSpeed ZeRO works is by partitioning optimization over different devices.

00:24:19.720 --> 00:24:44.600
So there are different stages of DeepSpeed ZeRO. The first one is this one right here, and this says 2 + 2 + K, where K is the size of the optimizer state that I had here: two bytes for the parameters, two bytes for the gradients, plus all of the bytes required for the optimizer state.

00:24:44.880 --> 00:24:54.279
And the blue is the first two, the orange is the second two, and the green is the third one.

00:24:50.720 --> 00:25:10.440
And so basically, in the baseline you hold all of these on each GPU. The second option is you partition the optimizer state across different GPUs, and because the optimizer state is generally larger than, or at least as large as, all of the others, this can reduce memory requirements significantly.

00:25:10.440 --> 00:25:36.640
So this went from 120 gigabytes, for whatever model they were doing there, to 31 gigabytes. This was a 7.5-billion-parameter model, and they had 64 devices, so they went down from 120 to 31. This is with 12 bytes for their optimizer state, like I said here.

00:25:36.640 --> 00:25:55.200
But actually we can get away with four bytes for the optimizer state, so you can actually train a 7-billion-parameter model reasonably easily on one or several devices now with this.
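The per-GPU arithmetic behind these numbers can be written out in a few lines; the stage semantics here are simplified, and the 64-way partitioning is an assumption that happens to reproduce the figures quoted above:

    # Bytes per parameter per GPU under the 2 + 2 + K breakdown.
    def zero_bytes_per_param(stage, n_gpus, k=12):
        params, grads, opt = 2.0, 2.0, float(k)
        if stage >= 1: opt /= n_gpus      # stage 1: partition optimizer state
        if stage >= 2: grads /= n_gpus    # stage 2: also partition gradients
        if stage >= 3: params /= n_gpus   # stage 3: also partition parameters
        return params + grads + opt

    print(7.5e9 * zero_bytes_per_param(0, 64) / 1e9)  # ~120 GB baseline
    print(7.5e9 * zero_bytes_per_param(1, 64) / 1e9)  # ~31 GB with stage 1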
00:25:52.360 --> 00:26:06.880
So, this is called stage 1; this is partitioning the optimizer state. Stage 2, this is partitioning the optimizer state and the gradients.

00:26:02.440 --> 00:26:22.320
Partitioning the optimizer state is actually relatively harmless; it doesn't slow things down too much. Partitioning the gradients gets a little bit more tricky, because you start having to move things between devices a lot.

00:26:20.520 --> 00:26:37.919
And then if you do this for the parameters as well, you can do even more, so you can get it down to, like, ridiculously small values here, but this is going to be very expensive in terms of, you know, moving things around so that you can calculate your gradients.

00:26:37.919 --> 00:26:52.940
So I'd say that by default, if you can go to DeepSpeed with, like, stage 1 or stage 2, you can spread this out across different devices when you train.

00:26:52.940 --> 00:27:16.520
Yeah? [Inaudible student question.] So, your central device, your central device can basically be your CPU. When you say multi-device, sorry, do you mean multi-GPU or do you mean multi-node? [Inaudible reply.]

00:27:20.640 --> 00:27:43.600
I don't think so. I mean, it depends on the implementation, but not theoretically, anyway, and DeepSpeed can do that for you. Otherwise you'd have lots of trouble, like, getting a machine that had, you know, a thousand gigabytes in it.
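In practice, choosing a ZeRO stage mostly means setting one field in the DeepSpeed config. A minimal sketch (the key names follow DeepSpeed's documented JSON schema, but check the docs for your version):

    ds_config = {
        "train_micro_batch_size_per_gpu": 1,
        "bf16": {"enabled": True},      # bf16 training, as discussed earlier
        "zero_optimization": {
            "stage": 2,                 # 1 = optimizer state; 2 = + gradients; 3 = + parameters
            "overlap_comm": True,       # overlap communication with computation
        },
    }
    # Typically passed in via deepspeed.initialize(model=model, config=ds_config, ...)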
00:27:40.159 --> 00:27:59.720
So yeah, I would suggest definitely using something like DeepSpeed. But actually, a lot of libraries use DeepSpeed under the hood also, so things like Hugging Face Accelerate, GPT-NeoX, other things like this; many of them interface to DeepSpeed or something similar to it.

00:27:59.720 --> 00:28:12.000
So whatever library you're using for training like this, you can do it. I don't have a list, but there's a whole bunch of them.

00:28:12.000 --> 00:28:26.000
You can either use DeepSpeed directly, or things like Hugging Face Accelerate or TRL; I think we might have a TRL recitation later.

00:28:23.640 --> 00:28:33.799
Also, I haven't used it myself or worked with people who used it, but Axolotl, a lot of people are using. So maybe we could come up with a list of those later.

00:28:39.760 --> 00:28:57.799
So the other option that you can use is: don't tune all of the parameters of the model, but just some of them. And this is really popular nowadays, because this further improves your ability to train on many different data sets without huge GPUs, or without many, many GPU devices.

00:29:03.120 --> 00:29:21.640
And so the first one is something like prefix tuning. I already talked about this last time: prefix tuning is like a bridge between parameter-efficient fine-tuning and prompting, right? So it tunes one prefix for each of the layers.
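In terms of what actually gets trained, prefix tuning boils down to something like the following sketch. The shapes are illustrative, and real implementations usually also reparameterize the prefix through a small MLP during training:

    import torch
    import torch.nn as nn

    # One trainable key/value prefix per layer, prepended to the attention inputs.
    n_layers, prefix_len, d_model = 24, 10, 1024       # illustrative sizes
    prefix_kv = nn.Parameter(torch.randn(n_layers, 2, prefix_len, d_model))
    # The base model stays frozen; only prefix_kv receives gradients.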
00:29:22.799 --> 00:29:47.480
So the next one that I'd like to talk about is adapters, and adapters basically look like this. What you do is, you have your standard Transformer architecture, which has, you know, multi-headed attention and other things like this.

00:29:47.760 --> 00:30:06.679
And yeah, this is written in a slightly different way than I wrote the Transformer diagram, but it's saying the same things. So, multi-headed attention: this is kind of your Q, K, and V matrices, and then this is your O matrix in the Transformer architecture; this is what we were calling multi-head attention in the previous diagram.

00:30:06.679 --> 00:30:21.039
This says "2x feed-forward layer"; it's basically two linear layers with a sandwiched nonlinearity, so it's basically a feed-forward block. So this is just the standard Transformer.

00:30:21.039 --> 00:30:44.399
So what adapters do is they add yet another layer right here, and you freeze the things that are in gray here, like the feed-forward layers and the multi-headed attention, but only train this adapter.

00:30:44.399 --> 00:31:02.000
And the way the adapter works is: you have a standard large representation vector here, and you have a feed-forward down-projection that down-projects to a very small number of nodes here, then you have a nonlinearity, and then you have a feed-forward up-projection that projects it back to the standard space.

00:31:02.000 --> 00:31:13.840
And this is included within the residual connection here.

00:31:14.440 --> 00:31:41.399
So ideally this will project down from, like, 512 to something like 16, and then back up to 512. If it was just a 512-by-512 matrix, that would be 2^9 by 2^9, right, so you get 2^18 parameters.

00:31:49.200 --> 00:31:59.159
[In response to a question:] Yeah, this dimension is only 2^4.

00:31:59.440 --> 00:32:17.799
So if you have this instead, that would be 2^(9+4+1), which is 2^14, so you would have 16 times fewer parameters for the adapters than you would have for the full matrix.

00:32:17.799 --> 00:32:44.360
And then if, instead of using 16, we just did two or one or something like that, it would be, you know, much, much less. So basically, by making these matrices very skinny, this allows us to minimize the additional parameters.

00:32:47.080 --> 00:32:59.039
So, are there any questions about this? Yeah?
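As a sketch, the bottleneck just described (512 down to 16 and back, inside the residual) looks like this in PyTorch; this is a paraphrase of the idea, not the original adapter code:

    import torch
    import torch.nn as nn

    class Adapter(nn.Module):
        """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
        def __init__(self, d_model=512, bottleneck=16):
            super().__init__()
            self.down = nn.Linear(d_model, bottleneck)   # 512 -> 16
            self.act = nn.ReLU()                         # the sandwiched nonlinearity
            self.up = nn.Linear(bottleneck, d_model)     # 16 -> 512

        def forward(self, h):
            return h + self.up(self.act(self.down(h)))   # inside the residual connection

    print(sum(p.numel() for p in Adapter().parameters()))  # about 2**14, vs 2**18 for 512x512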
00:33:02.200 --> 00:33:21.639
So, why do they make it smaller and then larger? The main reason why they make it smaller and then larger is because that's a way to reduce the parameter count. If they kept it the same size, it would be 2^18 per matrix, and you would actually have two of them, so you'd have even more parameters.

00:33:24.399 --> 00:33:36.320
Would it hurt the performance? So, making them smaller would hurt the performance if you had lots and lots of training data. If you have lots and lots of training data, you would benefit by making the adapter dimension larger and just, you know, fitting very well.

00:33:34.440 --> 00:33:53.960
But if you have lots and lots of training data, and you have the memory that allows you to train a larger model, then you might as well just train the whole model itself; you might as well do full fine-tuning.

00:33:51.679 --> 00:34:08.960
There are two advantages to parameter-efficient fine-tuning methods. The first one is that they reduce memory, like I mentioned here: they reduce the memory for the parameters you're training.

00:34:07.320 --> 00:34:21.040
Also, because there are fewer parameters, it's harder to, like, overfit. So if you have very small training data, full fine-tuning can overfit and become unstable, but because this has fewer parameters, it is essentially less easy to overfit and will often generalize better.

00:34:24.599 --> 00:34:36.480
So when you fine-tune, you only fine-tune the parameters of the adapters. We assume that we have a pre-trained model, like, the gray parts are pre-trained, and then we fine-tune just that.
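Freezing the pre-trained parts is typically a one-line loop; a sketch, assuming the adapter modules' parameter names contain the string "adapter":

    # Train only the adapters; everything else stays frozen.
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name   # the naming convention is an assumption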
00:34:43.720 --> 00:34:59.280
Okay, so, very good question.

00:34:53.760 --> 00:35:04.760
So the question was: even though we are only fine-tuning the adapter layers, we still need to store the gradients of the other layers, right? So we still need to store this part. That's actually not the case.

00:35:09.480 --> 00:35:25.800
So when you are doing backprop, you only need to do backprop into the parts of the model that are on the path to the gradients that you want to be updated. So, like, for example, if I write the computation graph...

00:36:22.599 --> 00:36:41.160
So this is, like, the computation graph of an attention block. So we get our loss, like, the gradient from the loss is flowing in here.

00:36:36.400 --> 00:36:56.160
And so it goes back to the feed-forward network, to the adapter, to the attention, and then here. So we definitely need to pass it back through the layers, so we get to, you know, the earlier layers and stuff. But we don't actually need to pass it into the weights of the attention, because we're not updating them, so we don't really need to even calculate the gradients of the weights of the attention.

00:37:02.520 --> 00:37:14.240
We also don't need to calculate the gradient of this, but we do need to calculate the gradient of this, because we're updating it.

00:37:11.280 --> 00:37:25.440
So basically, you don't even need to do backprop in the parts that you can just cut off without updating. For the forward pass, you do need to use them, obviously, to calculate the forward path. So by, like, being smart about that, you can fix that.

00:37:27.560 --> 00:37:45.280
There's also something called checkpointing, like computation-graph checkpointing, or forward-pass or backward-pass checkpointing, where basically what you do is you calculate part of the way through the graph and then throw out the intermediate calculations.

00:37:41.359 --> 00:38:01.920
And so, for example, you might do the forward pass all the way up to here, and then throw out all the intermediate states, and then recalculate them when you're doing the backward pass. So there are lots of tricky things that we can do to, like, squeeze your memory.

00:38:02.839 --> 00:38:12.079
Yeah? [Question about how these new layers are initialized.] Great question. Do I have that on the slide? Maybe not.

00:38:12.079 --> 00:38:30.599
So, one way that you can do it, this is from LoRA, but the same idea is basically there. In LoRA, you do the upscaling with a zero matrix; you initialize it to a zero matrix.

00:38:28.920 --> 00:38:42.480
And the downscaling, you can initialize it to zero or some random, no, actually, this one needs to be random.

00:38:40.000 --> 00:38:51.119
And the reason why this one is zero is because then, if you don't do anything, it will just stay the same, right? So that is the standard way.
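Written out, that initialization scheme is just the following; the a=math.sqrt(5) choice matches what common LoRA implementations use for the random matrix:

    import math
    import torch
    import torch.nn as nn

    d_model, rank = 512, 8                              # illustrative sizes
    lora_A = nn.Parameter(torch.empty(rank, d_model))   # down-projection: random
    lora_B = nn.Parameter(torch.zeros(d_model, rank))   # up-projection: zeros
    nn.init.kaiming_uniform_(lora_A, a=math.sqrt(5))
    # Since B = 0, the update B @ A is zero at step 0, so the model starts unchanged.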
00:38:54.520 --> 00:39:08.880
Cool. Okay, so another thing that I want to mention, this is a kind of interesting technique, it's not super standard, but I like it, so I'm going to talk about it anyway: this is something called AdapterFusion.

00:39:08.880 --> 00:39:16.040
And the basic idea is to learn an adapter for various tasks and combine them together.

00:39:13.240 --> 00:39:26.680
And so instead of having just your adapter layer, you have multiple adapters, and then you have AdapterFusion up here.

00:39:22.400 --> 00:39:36.000
And the basic idea is: an adapter is just, you know, what I wrote on the previous slide, but AdapterFusion is attention over adapters, so you can decide which adapter to use in which case.

00:39:36.000 --> 00:39:57.560
And each of the adapters is trained separately on, like, task-specific data. So you have data from lots of question answering data sets and you train a question answering adapter; you have data from, I don't know, translation data sets and you train a translation adapter; you have other things like that.

00:39:54.880 --> 00:40:10.520
And so then, when you actually use them, you do attention over which adapter to use, and then take the value from that adapter.

00:40:08.880 --> 00:40:24.599
And I kind of like this idea, because it allows you to, you know, train modules that are useful for a particular task and then decide which one to use at any particular point. So I think there are lots of creative things that we could do with this.

00:40:19.319 --> 00:40:36.319
There are also multilingual versions, so you train adapters for individual languages, and you train adapters for individual tasks, and then you combine them together too. So if that's interesting, you can take a look at that paper.

00:40:34.079 --> 00:40:48.079
In a way, this is kind of like a mixture-of-experts model, if you've heard of that. We're going to talk about that in a future class, so I won't go into lots of detail, but I wanted to mention it here, and we will talk more about it then.
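A much-simplified sketch of "attention over adapters" (the real AdapterFusion also has a value projection and careful initialization; this only shows the mechanism):

    import torch
    import torch.nn as nn

    class AdapterFusion(nn.Module):
        """Attend over the outputs of several task adapters."""
        def __init__(self, d_model):
            super().__init__()
            self.q = nn.Linear(d_model, d_model)
            self.k = nn.Linear(d_model, d_model)

        def forward(self, h, adapter_outs):
            # h: (batch, seq, d); adapter_outs: (n_adapters, batch, seq, d)
            scores = (self.q(h).unsqueeze(0) * self.k(adapter_outs)).sum(-1)
            weights = scores.softmax(dim=0).unsqueeze(-1)   # which adapter to use where
            return (weights * adapter_outs).sum(dim=0)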
00:40:48.079 --> 00:41:02.000
Cool. Okay, so now I want to go into talking about LoRA. LoRA is very popular; it's very likely that you've heard of it nowadays.

00:41:02.000 --> 00:41:14.680
The way LoRA works is very similar conceptually to adapters, but it has an important implementation difference.

00:41:14.680 --> 00:41:36.440
And the difference is that, in contrast to adapters, which had a nonlinear layer here, LoRA has no nonlinear layer. So basically what it is doing is, it is taking a downscale matrix and an upscale matrix and just doing a linear transform with them.

00:41:38.319 --> 00:41:56.079
And so in this figure here, which I took from the LoRA paper, it's actually showing them as, like, separate computation paths; it's showing that you use the normal matrix and then you use the LoRA matrices separately.

00:41:54.119 --> 00:42:07.960
But actually, you can just add them together and you get the equivalent result: you add this matrix times this matrix into the pre-trained weights, and that gives you the same result as if you calculated them separately and then added them afterwards.

00:42:07.960 --> 00:42:19.920
So why is LoRA so popular? I would say LoRA is so popular because it's super convenient after you've finished training with LoRA.

00:42:18.760 --> 00:42:31.839
Because after you've finished training with LoRA, you can just add the learned matrices back into the original weight matrix, and you have a model that's exactly the same shape. It doesn't have any other components, you don't need any different code path, you just have updated parameters.

00:42:31.839 --> 00:42:48.359
And that contrasts with adapters, because with adapters you actually need to add extra model components; you have to have different PyTorch code to implement this. So I think that's the big reason why LoRA is so popular.

00:42:48.880 --> 00:42:58.800
It's not actually that complicated, it's pretty simple, but it's important to know.
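The merge step that makes LoRA so convenient is one line of linear algebra. A sketch, where the alpha / r scaling is the hyperparameter mentioned a bit later in the lecture:

    import torch

    def merge_lora(W, lora_A, lora_B, alpha=16.0, r=8):
        """Fold the low-rank update into the frozen weight: W' = W + (alpha / r) * B @ A."""
        return W + (alpha / r) * (lora_B @ lora_A)

    W = torch.randn(512, 512)
    W_merged = merge_lora(W, lora_A=torch.randn(8, 512), lora_B=torch.zeros(512, 8))
    # Same shape as W: no extra modules and no special code path at inference time.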
00:42:58.800 --> 00:43:13.480
Cool. So another popular thing that you might have heard of is QLoRA, and QLoRA combines quantization together with parameter-efficient tuning. We're going to talk a lot more about quantization in a future class, in maybe a week or so.

00:43:13.480 --> 00:43:31.720
But basically, there are ways to compress the model down to not be in, like, 16 bits, but be in, like, four bits. And if each parameter is in four bits, that makes the model very, very compact.

00:43:32.240 --> 00:43:56.880
And so if we go back to our calculation on this previous slide: if we had a 16-bit model, to fit LLaMA 65B in memory you needed 130 gigabytes. But let's say we have a 4-bit model: suddenly it's not 130, it's something closer to 32 and a half, I guess.

00:43:56.880 --> 00:44:12.119
And 32 and a half actually fits on a lot of hardware. It fits on A100s or H100s easily; it also fits on these, like, less expensive GPUs.

00:44:12.119 --> 00:44:22.960
I mean, "less expensive" might be relative, it's still very expensive, but it'll also fit on your Mac, probably, if you have a Mac with a fair amount of memory, so you could just run it on a local machine in your CPU memory also.

00:44:27.480 --> 00:44:48.280
So basically the idea is: we compress the model down to be much smaller, so the parameters are small, and then we have a very, very compact LoRA layer, which doesn't take very much memory itself.

00:44:48.280 --> 00:45:07.400
And that allows us to basically train a model on, you know, commodity hardware, like a 48-gigabyte GPU, or something like your MacBook. It also has paging, to page things from CPU to GPU memory, to make it even more efficient. But basically, that's the general idea.

00:45:12.359 --> 00:45:31.400
So I'd definitely recommend this if you want to train a large model on limited hardware. If you're not training a super large model like 65 billion, I think just LoRA should be fine; like, you can probably train a 7B model or a 1B model with just LoRA, and that should be fine on a single GPU.

00:45:31.400 --> 00:45:36.000
Cool. Any questions about this?
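With the Hugging Face stack, the QLoRA recipe looks roughly like this. This is a sketch against the transformers, peft, and bitsandbytes APIs as commonly documented (the model id and target modules are illustrative), so check the current docs before relying on it:

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                      # 4-bit base model, as described above
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "huggyllama/llama-7b",                  # illustrative model id
        quantization_config=bnb_config,
    )
    model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                             target_modules=["q_proj", "v_proj"]))
    model.print_trainable_parameters()          # only the LoRA weights are trainable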
00:45:41.079 --> 00:45:49.559
Does low precision not cause any problems?

00:45:48.000 --> 00:46:03.599
You definitely need to be a little bit concerned about it, but you're not doing optimization in low precision; you're just keeping the original model in low precision. So from that point of view, it's, you know, manageable, I guess.

00:46:01.119 --> 00:46:10.680
And you can also look at the QLoRA paper; they have very extensive experiments.

00:46:08.400 --> 00:46:26.119
Cool. A final one that I'd like to talk about is BitFit. This is very, very simple: you basically just train the biases of the model, for any model that has biases. So this also can fit models in limited memory.

00:46:24.520 --> 00:46:36.160
It's very simple because you don't even need to change anything; you don't need to add any extra code, you just need to freeze all the parameters except the biases. So from that point of view, it's very easy.
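BitFit really is just one loop; a sketch:

    # Freeze everything except the bias terms.
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")

    n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"training {n_trainable} bias parameters")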
feeding in so adapters generally + +00:48:09.079 --> 00:48:15.040 +get it from after the the module that + +00:48:11.839 --> 00:48:17.160 +you're uh adapting prefix tuning gets it + +00:48:15.040 --> 00:48:19.800 +from before Laura also gets it from + +00:48:17.160 --> 00:48:23.559 +before also what's nonlinearity it's a + +00:48:19.800 --> 00:48:25.440 +relu A softmax or nothing um Laura + +00:48:23.559 --> 00:48:27.599 +actually this isn't really mentioned in + +00:48:25.440 --> 00:48:29.200 +the paper but it is uh like actually + +00:48:27.599 --> 00:48:31.920 +implemented in the code there's also a + +00:48:29.200 --> 00:48:33.680 +scalar scaling Factor here uh which is a + +00:48:31.920 --> 00:48:36.280 +hyper parameter so that's something to + +00:48:33.680 --> 00:48:37.640 +be aware of um and so basically by + +00:48:36.280 --> 00:48:40.079 +breaking these down you can number one + +00:48:37.640 --> 00:48:42.359 +better understand each of the uh modules + +00:48:40.079 --> 00:48:44.280 +and how they or each of the methods and + +00:48:42.359 --> 00:48:47.200 +how they interact with each + +00:48:44.280 --> 00:48:48.760 +other and also uh what we show in this + +00:48:47.200 --> 00:48:51.680 +paper is that this understanding can + +00:48:48.760 --> 00:48:53.119 +lead you to you know new variants that + +00:48:51.680 --> 00:48:56.400 +can be more effective than any of the + +00:48:53.119 --> 00:48:59.160 +existing variants and so we proposed two + +00:48:56.400 --> 00:49:00.880 +things called The Parallel adapter and + +00:48:59.160 --> 00:49:04.400 +uh the scaled parallel adapter and we + +00:49:00.880 --> 00:49:06.559 +demonstrate that they get better + +00:49:04.400 --> 00:49:09.760 +results so then the question is which + +00:49:06.559 --> 00:49:11.200 +one to choose um for convenience Laura + +00:49:09.760 --> 00:49:13.799 +and bitfit don't change the model + +00:49:11.200 --> 00:49:15.920 +architecture so if you don't really care + +00:49:13.799 --> 00:49:17.319 +about like the absolute best accuracy + +00:49:15.920 --> 00:49:20.079 +out of these tuning methods I would + +00:49:17.319 --> 00:49:22.119 +definitely recommend um you use + +00:49:20.079 --> 00:49:24.960 +something like this it's definitely the + +00:49:22.119 --> 00:49:27.640 +easiest thing after you're done training + +00:49:24.960 --> 00:49:29.960 +for AC accy uh one thing that we found + +00:49:27.640 --> 00:49:31.920 +in our paper for simpler tasks it really + +00:49:29.960 --> 00:49:33.559 +actually doesn't matter very much so if + +00:49:31.920 --> 00:49:35.480 +you're just doing classification tasks + +00:49:33.559 --> 00:49:37.440 +even something super simple like bitfit + +00:49:35.480 --> 00:49:38.280 +is rather competitive with all of the + +00:49:37.440 --> 00:49:41.319 +other + +00:49:38.280 --> 00:49:43.880 +methods for more complex tasks and a + +00:49:41.319 --> 00:49:46.680 +small parameter budget uh we found + +00:49:43.880 --> 00:49:49.960 +prefix tuning to do a pretty good job uh + +00:49:46.680 --> 00:49:52.359 +this is not a like Universal finding but + +00:49:49.960 --> 00:49:54.319 +it's what we found in our paper and then + +00:49:52.359 --> 00:49:57.319 +for more complex tasks plus larger + +00:49:54.319 --> 00:50:00.079 +parameter budgets um adapters or some + +00:49:57.319 --> 00:50:03.400 +sort of mixture of multiple methods can + +00:50:00.079 --> 00:50:04.720 +be can give you better results so again + +00:50:03.400 --> 00:50:07.160 +all of this is into paper if you want to + 
00:49:06.559 --> 00:49:24.960
So then the question is: which one to choose? For convenience, LoRA and BitFit don't change the model architecture, so if you don't really care about, like, the absolute best accuracy out of these tuning methods, I would definitely recommend you use something like this; it's definitely the easiest thing after you're done training.

00:49:24.960 --> 00:49:41.319
For accuracy, one thing that we found in our paper is that for simpler tasks it really actually doesn't matter very much. So if you're just doing classification tasks, even something super simple like BitFit is rather competitive with all of the other methods.

00:49:38.280 --> 00:49:54.319
For more complex tasks and a small parameter budget, we found prefix tuning to do a pretty good job; this is not, like, a universal finding, but it's what we found in our paper.

00:49:52.359 --> 00:50:07.160
And then for more complex tasks plus larger parameter budgets, adapters, or some sort of mixture of multiple methods, can give you better results. So again, all of this is in the paper if you want to look at more details.

00:50:10.200 --> 00:50:18.880
Cool. Okay, so, any questions about that?

00:50:16.000 --> 00:50:35.640
Okay. Next, I'm going to go through some NLP tasks, and the reason why I'm going to go through some NLP tasks is because when we're fine-tuning, we need to be fine-tuning towards individual tasks we want to solve. With basic fine-tuning, we build a model that's good at performing a single task; with instruction tuning, we build a generalist model that is good at many tasks.

00:50:37.240 --> 00:51:02.960
And what I want to go through now is some tasks that I've seen people use: number one, ones that are really important in, like, actual applications of NLP models in industry; but number two, the set of tasks that people use to evaluate general models. So if you look at the GPT papers, or you look at the Gemini paper, what is the set of tasks that they're using to demonstrate that their models work well?

00:51:02.960 --> 00:51:22.799
So the first one is context-free question answering, also called closed-book QA. Basically, this requires answering a question without any specific grounding into documents. It's also what happens when ChatGPT answers your questions without looking something up on the web, for example.

00:51:22.799 --> 00:51:38.559
An example data set that lots of people use is something called MMLU. This is a massive multitask language understanding data set, and it has questions in a number of relatively difficult areas, like professional law.

00:51:38.559 --> 00:51:58.319
So this one is asking: what happens when a salesman ignores a "trespassers will be prosecuted" sign and enters a hermit's house? He drives up the driveway, and an explosive charge explodes, and the salesman is injured. Can the salesman recover damages from the hermit?

00:51:58.319 --> 00:52:13.240
So I would not be able to answer this with, you know, certainty, because I'm not a lawyer. The answer is: yes, if the hermit was responsible for the explosive charge under the driveway.
00:52:10.720 --> 00:52:20.559
So now you know: you can collect damages if somebody tries to blow you up when you trespass on their property.

00:52:17.559 --> 00:52:25.079
But yeah, and this has lots and lots of categories like this.

00:52:22.880 --> 00:52:34.440
The next thing is contextual question answering, and this is question answering grounded in actual context.

00:52:30.839 --> 00:52:52.559
One example data set that a lot of people use is something called Natural Questions, and this is questions grounded in a Wikipedia document, or the Wikipedia document collection. Grounded in a Wikipedia document means they give you the actual document you should be answering the question about, and then you need to answer the question about it.

00:52:52.559 --> 00:53:02.599
This is often called machine reading, because you expect the model to, like, read and answer questions about the document.

00:52:59.960 --> 00:53:18.000
Or it could be: okay, we're going to give you all of Wikipedia, please provide us the answer to this question. And this is often called retrieval-based question answering, or one variety of retrieval-augmented generation, or RAG.

00:53:18.000 --> 00:53:34.319
So this is really, really important. I think many people that I talk to who want to build actual systems from language models, or NLP systems, are trying to do this sort of thing.

00:53:31.400 --> 00:53:47.440
The second most popular thing among the people I talk to who are trying to build, like, NLP systems of some variety is code generation, and basically this is simply generating code, like Python or SQL, from a natural language command.

00:53:47.440 --> 00:54:14.040
The most popular data set for this is something called HumanEval, and basically it has questions about how you do things with the Python standard library, like "return a list with elements incremented by one". It gives you the text and several examples of what the inputs and outputs should be, and you're supposed to return a program like this.
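For reference, a passing solution to that example would look something like this (paraphrased; HumanEval scores the generated function against held-out input/output tests):

    def incr_list(l: list) -> list:
        """Return list with elements incremented by 1.
        >>> incr_list([1, 2, 3])
        [2, 3, 4]
        """
        return [x + 1 for x in l]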
00:54:12.480 --> 00:54:16.680
And this is a simpler version of the task; there are also more complex ones.

00:54:14.040 --> 00:54:36.839
One thing I should note, this is an area that I do a lot of research in: HumanEval is a very simple example of this. It doesn't use any external library, it doesn't use context, and other stuff like that. There are a lot of other, more interesting data sets too, so if you're working on code generation I can recommend those as well, and I'll do that later in the class too.

00:54:38.000 --> 00:54:53.240
Cool. Next is summarization, and for summarization there are a couple of varieties. One is single-document summarization, another is multi-document summarization: single-document compresses a longer document into a shorter one, multi-document compresses multiple documents into one.

00:54:57.040 --> 00:55:12.760
Honestly, right now, single-document summarization in English works pretty well out of the box. It's not perfect, but it's close enough to being perfect that, and I've worked in summarization before, I don't know if there's a whole lot more that we can do there. Of course, multilingual is interesting.

00:55:15.000 --> 00:55:25.920
Multi-document summarization is definitely not solved. Multi-document summarization is when you have lots of documents about a particular topic and you want to summarize them down into a coherent summary of that topic.

00:55:25.920 --> 00:55:50.000
One example of that is WikiSum. This is a data set where you're provided with all of the linked pages about a Wikipedia article, and you're expected to generate the first paragraph or few paragraphs of the article. And so you're expected to take, like, lots of noisy, you know, incoherent articles about Barack Obama and actually write an article about Barack Obama, something like this.

00:55:52.160 --> 00:56:07.920
Some other interesting example tasks for this include things like survey generation for papers, or something like that; you want to know everything about a scientific topic. Or,
+
+00:56:07,920 --> 00:56:12,599
+a report of all the things that happened
+
+00:56:10,400 --> 00:56:14,839
+in the stock market today or something
+
+00:56:12,599 --> 00:56:17,720
+like that you know there's lots of uh
+
+00:56:14,839 --> 00:56:17,720
+places where this could be
+
+00:56:18,240 --> 00:56:23,359
+useful another class of tasks is
+
+00:56:20,520 --> 00:56:25,400
+information extraction um there's lots
+
+00:56:23,359 --> 00:56:27,799
+of examples of this but basically they
+
+00:56:25,400 --> 00:56:31,319
+all boil down to extracting some sort of
+
+00:56:27,799 --> 00:56:33,200
+information in structured format uh from
+
+00:56:31,319 --> 00:56:35,240
+text and this is things like entity
+
+00:56:33,200 --> 00:56:37,960
+recognition identifying which words are
+
+00:56:35,240 --> 00:56:40,920
+entities entity linking linking entities
+
+00:56:37,960 --> 00:56:42,799
+to a knowledge base entity co-reference
+
+00:56:40,920 --> 00:56:45,319
+finding which entities in an input
+
+00:56:42,799 --> 00:56:47,440
+correspond to each other uh event
+
+00:56:45,319 --> 00:56:49,079
+recognition linking co-reference so all
+
+00:56:47,440 --> 00:56:50,799
+of the same things except doing it for
+
+00:56:49,079 --> 00:56:53,839
+events instead of
+
+00:56:50,799 --> 00:56:55,480
+entities um an example data set is uh
+
+00:56:53,839 --> 00:56:57,119
+something called OntoNotes it's an older
+
+00:56:55,480 --> 00:56:59,280
+data set but it has all these things
+
+00:56:57,119 --> 00:57:00,680
+annotated and you can extract things
+
+00:56:59,280 --> 00:57:03,119
+from this there's lots of other data
+
+00:57:00,680 --> 00:57:04,839
+sets for this too also kind of more
+
+00:57:03,119 --> 00:57:07,440
+in general you can think of like what if
+
+00:57:04,839 --> 00:57:09,680
+I gave you an Excel sheet uh could you
+
+00:57:07,440 --> 00:57:11,319
+go and like fill in an Excel or Google
+
+00:57:09,680 --> 00:57:12,880
+sheet could you go and fill in all of
+
+00:57:11,319 --> 00:57:14,760
+the columns in the sheet uh
+
+00:57:12,880 --> 00:57:18,160
+appropriately given all the information
+
+00:57:14,760 --> 00:57:22,000
+on the internet so um this is a pretty
+
+00:57:18,160 --> 00:57:22,000
+important task category as
+
+00:57:22,079 --> 00:57:26,599
+well translation so I don't really need to
+
+00:57:25,160 --> 00:57:30,319
+talk that much about it it's translating
+
+00:57:26,599 --> 00:57:32,319
+from one language to another um for both
+
+00:57:30,319 --> 00:57:34,039
+translation and summarization uh
+
+00:57:32,319 --> 00:57:35,960
+evaluation is kind of tricky I'll talk
+
+00:57:34,039 --> 00:57:38,680
+about this uh in the future but
+
+00:57:35,960 --> 00:57:41,559
+basically uh you assess quality based on
+
+00:57:38,680 --> 00:57:45,079
+similarity to some sort of reference
+
+00:57:41,559 --> 00:57:46,960
+uh using things like BLEU score or uh
+
+00:57:45,079 --> 00:57:49,680
+neural metrics
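+
+As a concrete illustration of reference-based evaluation, here is a
+minimal sketch using the sacrebleu Python package (assuming it is
+installed; the sentences are invented):
+
+import sacrebleu
+
+hypotheses = ["The cat sat on the mat."]
+references = [["The cat is sitting on the mat."]]  # one reference stream
+bleu = sacrebleu.corpus_bleu(hypotheses, references)
+print(bleu.score)  # higher means more n-gram overlap with the reference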
+
+00:57:46,960 --> 00:57:51,160
+an example of this uh which I
+
+00:57:49,680 --> 00:57:52,760
+think is actually a really nice example
+
+00:57:51,160 --> 00:57:56,200
+is something called the FLORES data set
+
+00:57:52,760 --> 00:57:59,480
+and this is a translation of several uh
+
+00:57:56,200 --> 00:57:59,480
+or like a thousand
+
+00:57:59,520 --> 00:58:05,799
+Wikipedia not a thousand but like several
+
+00:58:03,079 --> 00:58:07,960
+quite a few Wikipedia articles into 101
+
+00:58:05,799 --> 00:58:09,400
+languages the reason why I like this
+
+00:58:07,960 --> 00:58:10,960
+data set a lot is because if you could
+
+00:58:09,400 --> 00:58:12,720
+translate into all of these languages
+
+00:58:10,960 --> 00:58:14,799
+you would be able to you know aid
+
+00:58:12,720 --> 00:58:16,720
+information dissemination across the
+
+00:58:14,799 --> 00:58:20,640
+world make access to information more
+
+00:58:16,720 --> 00:58:23,799
+equitable so I like this data set
+
+00:58:20,640 --> 00:58:25,440
+well separately from this there are
+
+00:58:23,799 --> 00:58:27,480
+general purpose benchmarks these
+
+00:58:25,440 --> 00:58:31,119
+benchmarks are not really for the
+
+00:58:27,480 --> 00:58:33,559
+purpose of evaluating any specific task
+
+00:58:31,119 --> 00:58:35,280
+that people think is actually useful but
+
+00:58:33,559 --> 00:58:38,200
+rather trying to test the language
+
+00:58:35,280 --> 00:58:41,119
+abilities of language models themselves
+
+00:58:38,200 --> 00:58:44,480
+a typical example of this is BIG-bench
+
+00:58:41,119 --> 00:58:46,640
+and this contains a whole bunch of tasks
+
+00:58:44,480 --> 00:58:48,720
+that uh test you know different
+
+00:58:46,640 --> 00:58:50,240
+abilities I have some examples here
+
+00:58:48,720 --> 00:58:52,440
+these are very small so you might need
+
+00:58:50,240 --> 00:58:54,359
+to look at the slides to see them but
+
+00:58:52,440 --> 00:58:57,760
+for example this is tracking shuffled
+
+00:58:54,359 --> 00:59:00,359
+objects like um Alice Bob and Claire are
+
+00:58:57,760 --> 00:59:01,880
+friends uh who occasionally trade books
+
+00:59:00,359 --> 00:59:04,119
+at the start of the semester each one
+
+00:59:01,880 --> 00:59:05,640
+has a new book then they trade then they
+
+00:59:04,119 --> 00:59:09,039
+trade then they trade then they trade
+
+00:59:05,640 --> 00:59:11,599
+which one does Bob have um today is
+
+00:59:09,039 --> 00:59:13,640
+Christmas Eve of 1937 what is the date
+
+00:59:11,599 --> 00:59:17,599
+tomorrow and you need to write it in the
+
+00:59:13,640 --> 00:59:20,359
+appropriate format um Sherry tells the
+
+00:59:17,599 --> 00:59:22,960
+truth Vernal says Sherry tells the truth
+
+00:59:20,359 --> 00:59:25,240
+Alexis says Vernal lies Michaela says
+
+00:59:22,960 --> 00:59:26,880
+Alexis tells the truth Eleanor says
+
+00:59:25,240 --> 00:59:29,880
+Michaela tells the truth does Eleanor
+
+00:59:26,880 --> 00:59:31,119
+tell the truth um hope you all got that
+
+00:59:29,880 --> 00:59:34,319
+one
+
+00:59:31,119 --> 00:59:37,440
+right um so like it's just these kind of
+
+00:59:34,319 --> 00:59:38,880
+exercises and like when you look at how
+
+00:59:37,440 --> 00:59:40,520
+language models are being evaluated
+
+00:59:38,880 --> 00:59:42,559
+they're being evaluated against like
+
+00:59:40,520 --> 00:59:44,400
+many of these tasks not all of them
+
+00:59:42,559 --> 00:59:47,200
+necessarily but many of them I think
+
+00:59:44,400 --> 00:59:48,920
+Gemini evaluated with respect to
+
+00:59:47,200 --> 00:59:51,680
+all of these task categories except
+
+00:59:48,920 --> 00:59:53,799
+information extraction maybe so um these
+
+00:59:51,680 --> 00:59:56,640
+are kind of typical task categories that
+
+00:59:53,799 --> 00:59:56,640
+people look at
+
+00:59:57,039 --> 01:00:00,680
+cool um any questions about
+
+01:00:02,359 --> 01:00:06,680
+this nice okay uh yeah
+
+01:00:09,400 --> 01:00:14,400
+sorry how do they ensure that
+
+01:00:12,880 --> 01:00:18,280
+similar data does not appear in the
+
+01:00:14,400 --> 01:00:19,680
+training data so people have tried uh a
+
+01:00:18,280 --> 01:00:20,920
+bunch of different things this is
+
+01:00:19,680 --> 01:00:23,240
+actually this actually might be a good
+
+01:00:20,920 --> 01:00:25,480
+thing to talk about uh at some point
+
+01:00:23,240 --> 01:00:30,559
+when we talk about data curation or things
+
+01:00:25,480 --> 01:00:33,720
+like this um the first thing is uh you
+
+01:00:30,559 --> 01:00:35,920
+actually create the data so similar is
+
+01:00:33,720 --> 01:00:39,119
+actually okay right because you
+
+01:00:35,920 --> 01:00:42,160
+know if it appears everywhere on the
+
+01:00:39,119 --> 01:00:44,599
+internet GPT-4 will learn it the problem
+
+01:00:42,160 --> 01:00:47,680
+is like if the exact same thing appears
+
+01:00:44,599 --> 01:00:49,520
+um so number one how do we prevent this
+
+01:00:47,680 --> 01:00:52,520
+from happening number two how do we even
+
+01:00:49,520 --> 01:00:54,000
+tell that it did happen um so some
+
+01:00:52,520 --> 01:00:56,280
+things that people do to tell that it
+
+01:00:54,000 --> 01:01:00,319
+did happen is they make small
+
+01:00:56,280 --> 01:01:04,119
+perturbations to the test data and
+
+01:01:00,319 --> 01:01:06,200
+test whether that like drops the model
+
+01:01:04,119 --> 01:01:07,680
+score by a whole lot there was actually
+
+01:01:06,200 --> 01:01:09,119
+a paper I don't know if I can find it
+
+01:01:07,680 --> 01:01:12,280
+immediately but there was a paper
+
+01:01:09,119 --> 01:01:16,520
+recently that just like swapped the
+
+01:01:12,280 --> 01:01:18,280
+order of uh the outputs in MMLU and saw
+
+01:01:16,520 --> 01:01:21,880
+that the accuracy went down for some
+
+01:01:18,280 --> 01:01:22,880
+language models so that should have no
+
+01:01:21,880 --> 01:01:24,119
+that should make no difference
+
+01:01:22,880 --> 01:01:26,960
+whatsoever because you're just changing
+
+01:01:24,119 --> 01:01:29,000
+the order of answers but it caused
+
+01:01:26,960 --> 01:01:30,880
+accuracy to go down if that's the case
+
+01:01:29,000 --> 01:01:33,920
+it's a pretty clear sign that it's
+
+01:01:30,880 --> 01:01:35,760
+leaking um other things that people do
+
+01:01:33,920 --> 01:01:38,480
+are change the input a little bit so
+
+01:01:35,760 --> 01:01:39,880
+like change the number in a math problem
+
+01:01:38,480 --> 01:01:41,760
+uh to be a slightly different value and
+
+01:01:39,880 --> 01:01:43,640
+see if that hurts the accuracy overall
+
+01:01:41,760 --> 01:01:45,200
+and like making these little
+
+01:01:43,640 --> 01:01:46,280
+perturbations that shouldn't change the
+
+01:01:45,200 --> 01:01:47,640
+accuracy and then if they do
+
+01:01:46,280 --> 01:01:49,920
+significantly then you think there's a
+
+01:01:47,640 --> 01:01:53,119
+problem so I think that's a basic tool
+
+01:01:49,920 --> 01:01:56,079
+to diagnose this how do you prevent it
+
+01:01:53,119 --> 01:01:58,160
+from happening um there's really simple
+
+01:01:56,079 --> 01:02:04,039
+and silly things that you can do
+
+01:01:58,160 --> 01:02:07,240
+like uh zip the file and like put a
+
+01:02:04,039 --> 01:02:09,119
+password on the file um and then like a
+
+01:02:07,240 --> 01:02:12,440
+scraper even if it's scraping all of
+
+01:02:09,119 --> 01:02:14,680
+GitHub for training data it won't scrape
+
+01:02:12,440 --> 01:02:16,960
+your zipped and password protected file
+
+01:02:14,680 --> 01:02:19,279
+right so that's kind of a first line of
+
+01:02:16,960 --> 01:02:21,799
+defense it doesn't work if someone puts
+
+01:02:19,279 --> 01:02:25,200
+it like you know someone puts it
+
+01:02:21,799 --> 01:02:26,839
+somewhere uh in unzipped format so
+
+01:02:25,200 --> 01:02:28,039
+that's a problem but you know there
+
+01:02:26,839 --> 01:02:29,839
+are things that you can do like that
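+
+A minimal sketch of the perturbation check described a moment ago; the
+function names and item format are my own, not from the paper being
+referenced:
+
+import random
+
+def accuracy(answer_fn, items):
+    # items: dicts with "question", "options", and the gold "answer"
+    correct = sum(answer_fn(it["question"], it["options"]) == it["answer"]
+                  for it in items)
+    return correct / len(items)
+
+def shuffle_options(items, seed=0):
+    rng = random.Random(seed)
+    shuffled = []
+    for it in items:
+        opts = it["options"][:]
+        rng.shuffle(opts)
+        shuffled.append({**it, "options": opts})
+    return shuffled
+
+# If accuracy(model, items) - accuracy(model, shuffle_options(items))
+# is large, that is a sign the benchmark leaked into training data.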
+
+01:02:28,039 --> 01:02:31,799
+another thing you can do is just
+
+01:02:29,839 --> 01:02:34,440
+not reveal your data whatsoever so you
+
+01:02:31,799 --> 01:02:36,160
+can keep a private version of the data
+
+01:02:34,440 --> 01:02:39,319
+um you can keep a private version of the
+
+01:02:36,160 --> 01:02:42,359
+data and not you know let anybody else
+
+01:02:39,319 --> 01:02:45,000
+see the outputs so yeah it's pretty
+
+01:02:42,359 --> 01:02:45,000
+tricky
+
+01:02:48,279 --> 01:02:54,920
+yeah how do you control for task
+
+01:02:50,520 --> 01:02:56,480
+complexity that's a great question um
+
+01:02:54,920 --> 01:02:59,119
+I don't think there's any really good
+
+01:02:56,480 --> 01:03:01,400
+definition of task complexity yet um
+
+01:02:59,119 --> 01:03:04,559
+some things that you can do are control
+
+01:03:01,400 --> 01:03:07,520
+for like length or control
+
+01:03:04,559 --> 01:03:11,400
+for um you know the
+
+01:03:07,520 --> 01:03:15,119
+number the number of hops that are
+
+01:03:11,400 --> 01:03:19,839
+required in like multihop reasoning um
+
+01:03:15,119 --> 01:03:19,839
+there's actually one really interesting
+
+01:03:21,760 --> 01:03:26,880
+work that tries to do um not control
+
+01:03:25,440 --> 01:03:29,880
+necessarily but at
+
+01:03:26,880 --> 01:03:29,880
+least
+
+01:03:32,720 --> 01:03:39,359
+evaluate there's actually a
+
+01:03:35,640 --> 01:03:42,480
+couple so what this tries to do is this
+
+01:03:39,359 --> 01:03:44,160
+tries to break down questions into kind
+
+01:03:42,480 --> 01:03:49,039
+of like operations that you would need
+
+01:03:44,160 --> 01:03:51,640
+to do to solve them so it's like
+
+01:03:49,039 --> 01:03:53,920
+uh which keywords have been contained by
+
+01:03:51,640 --> 01:03:55,279
+more than 100 ACL papers and they say
+
+01:03:53,920 --> 01:03:56,839
+okay first you need to select then you
+
+01:03:55,279 --> 01:03:58,799
+need to filter then you need to project
+
+01:03:56,839 --> 01:04:00,760
+and stuff like that so they try to at
+
+01:03:58,799 --> 01:04:03,520
+least express the level of complexity in
+
+01:04:00,760 --> 01:04:06,680
+this way um there's also another one
+
+01:04:03,520 --> 01:04:08,920
+that's not so much on real
+
+01:04:06,680 --> 01:04:10,960
+data sorry this is called the
+
+01:04:08,920 --> 01:04:13,079
+Break benchmark if you're
+
+01:04:10,960 --> 01:04:16,440
+interested there was also a more recent
+
+01:04:13,079 --> 01:04:16,440
+paper that I
+
+01:04:17,599 --> 01:04:22,960
+liked that tried to do something
+
+01:04:20,559 --> 01:04:22,960
+somewhat
+
+01:04:23,200 --> 01:04:26,200
+similar
+
+01:04:27,160 --> 01:04:31,920
+um where they come up with
+
+01:04:29,760 --> 01:04:34,480
+like math or programming problems and
+
+01:04:31,920 --> 01:04:36,279
+try to express them as a graph and then
+
+01:04:34,480 --> 01:04:37,799
+do some examination of how Transformer
+
+01:04:36,279 --> 01:04:39,760
+models do on things of different complexity
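+
+A hand-written illustration (mine, not an actual entry from the Break
+data set) of decomposing a question into atomic operations, which gives
+one rough, countable measure of complexity:
+
+question = "Which keywords have been contained by more than 100 ACL papers?"
+decomposition = [
+    ("SELECT", "keywords"),
+    ("FILTER", "keywords that appear in ACL papers"),
+    ("GROUP", "count of papers per keyword"),
+    ("COMPARATIVE", "keywords whose paper count is more than 100"),
+]
+num_steps = len(decomposition)  # more steps, roughly harder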
+
+01:04:37,799 --> 01:04:40,920
+I think the problem is
+
+01:04:39,760 --> 01:04:43,039
+there's so many different things that
+
+01:04:40,920 --> 01:04:44,720
+could make something hard or easy uh
+
+01:04:43,039 --> 01:04:47,000
+there's also like is it in distribution
+
+01:04:44,720 --> 01:04:50,640
+or out of distribution um from the point
+
+01:04:47,000 --> 01:04:53,119
+of view of topic or language or speaking
+
+01:04:50,640 --> 01:04:55,640
+style or things like that um and
+
+01:04:53,119 --> 01:04:58,839
+actually I think uh we're going to talk
+
+01:04:55,640 --> 01:05:00,440
+about this in the debugging and
+
+01:04:58,839 --> 01:05:01,880
+evaluation lecture but like one of the
+
+01:05:00,440 --> 01:05:03,799
+things I really like to do is I like to
+
+01:05:01,880 --> 01:05:05,119
+sub-segment the data and look at
+
+01:05:03,799 --> 01:05:06,960
+different sub-segments of the data
+
+01:05:05,119 --> 01:05:09,920
+where I think the sub-segments will
+
+01:05:06,960 --> 01:05:11,880
+affect accuracy by a lot and basically
+
+01:05:09,920 --> 01:05:14,359
+anything that you could sub-segment on
+
+01:05:11,880 --> 01:05:17,720
+is like something that determines
+
+01:05:14,359 --> 01:05:19,279
+difficulty so um yeah lots to lots to
+
+01:05:17,720 --> 01:05:21,960
+say about that
+
+01:05:19,279 --> 01:05:24,520
+basically cool any other
+
+01:05:21,960 --> 01:05:26,119
+questions okay um if not let me get on
+
+01:05:24,520 --> 01:05:27,680
+to instruction tuning I don't have a
+
+01:05:26,119 --> 01:05:30,079
+whole lot about instruction tuning
+
+01:05:27,680 --> 01:05:31,720
+because it's uh you know conceptually
+
+01:05:30,079 --> 01:05:34,799
+pretty simple but I would like to talk
+
+01:05:31,720 --> 01:05:37,640
+about all of it so basic instruction
+
+01:05:34,799 --> 01:05:39,359
+tuning uh was proposed almost
+
+01:05:37,640 --> 01:05:41,920
+simultaneously by people at Google and
+
+01:05:39,359 --> 01:05:45,799
+people at Hugging Face uh the way it
+
+01:05:41,920 --> 01:05:49,760
+works is you have
+
+01:05:45,799 --> 01:05:54,960
+tasks and you train on lots of tasks
+
+01:05:49,760 --> 01:05:57,240
+where you append a prompt
+
+01:05:54,960 --> 01:05:58,599
+you append the input
+
+01:05:57,240 --> 01:06:01,400
+and then you just try to train to
+
+01:05:58,599 --> 01:06:03,160
+generate the output and so this
+
+01:06:01,400 --> 01:06:04,480
+contrasts with like base language model
+
+01:06:03,160 --> 01:06:06,039
+training because you're still training a
+
+01:06:04,480 --> 01:06:08,480
+language model based on a prompt and an
+
+01:06:06,039 --> 01:06:10,559
+output but you're
+
+01:06:08,480 --> 01:06:12,400
+specifically formatting them in a
+
+01:06:10,559 --> 01:06:14,680
+particular way so it corresponds to
+
+01:06:12,400 --> 01:06:16,119
+solving tasks it's essentially
+
+01:06:14,680 --> 01:06:17,640
+supervised training but supervised
+
+01:06:16,119 --> 01:06:19,200
+training over many many tasks fine
+
+01:06:17,640 --> 01:06:21,520
+tuning over many many
+
+01:06:19,200 --> 01:06:25,480
+tasks the interesting thing that these
+
+01:06:21,520 --> 01:06:29,359
+papers showed was that basically if you
+
+01:06:25,480 --> 01:06:31,000
+do this instruction tuning you do well
+
+01:06:29,359 --> 01:06:32,920
+not only on the tasks that you trained
+
+01:06:31,000 --> 01:06:36,640
+on but also on new tasks that you didn't
+
+01:06:32,920 --> 01:06:38,720
+train on um and so this is now really
+
+01:06:36,640 --> 01:06:40,960
+like important it's incorporated in
+
+01:06:38,720 --> 01:06:43,279
+every serious language model that's used
+
+01:06:40,960 --> 01:06:48,720
+in a kind of like production setting
+
+01:06:43,279 --> 01:06:52,599
+nowadays and all um yeah so uh I think
+
+01:06:48,720 --> 01:06:52,599
+that's the basic idea
+
+01:06:53,000 --> 01:06:57,520
+here
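+
+A minimal sketch of the formatting step just described; the template
+string is my own, and the papers differ in the exact template they use:
+
+def format_example(instruction, input_text, output_text):
+    # One labeled task instance becomes ordinary LM training text;
+    # the loss is typically computed only on the output portion.
+    prompt = f"Instruction: {instruction}\nInput: {input_text}\nOutput:"
+    return prompt, " " + output_text
+
+prompt, target = format_example(
+    "Summarize the article in one sentence.",
+    "The stock market rose sharply today after ...",
+    "Markets rallied on upbeat economic news.",
+)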
+
+01:06:55,160 --> 01:06:59,480
+you can also do things like learn to in
+
+01:06:57,520 --> 01:07:02,160
+context learn so we talked about in
+
+01:06:59,480 --> 01:07:05,160
+context learning so in context learning
+
+01:07:02,160 --> 01:07:07,799
+instead of uh giving just a prompt you
+
+01:07:05,160 --> 01:07:09,960
+give training examples in the context
+
+01:07:07,799 --> 01:07:12,240
+and so that's what you do in this paper
+
+01:07:09,960 --> 01:07:14,400
+here as well you sample a whole bunch of
+
+01:07:12,240 --> 01:07:17,880
+training examples you append them to the
+
+01:07:14,400 --> 01:07:19,720
+context and then you train the model and
+
+01:07:17,880 --> 01:07:21,359
+so why is this good this is good because
+
+01:07:19,720 --> 01:07:24,400
+it will train a model that's better at in
+
+01:07:21,359 --> 01:07:26,160
+context learning basically so if you um
+
+01:07:24,400 --> 01:07:29,480
+if you want to provide these training
+
+01:07:26,160 --> 01:07:29,480
+examples then you can train it like
+
+01:07:30,039 --> 01:07:35,480
+that so these are the two basic ways of
+
+01:07:32,680 --> 01:07:37,039
+doing instruction tuning um all came
+
+01:07:35,480 --> 01:07:40,920
+out around the same
+
+01:07:37,039 --> 01:07:43,400
+time um there are a bunch of data sets
+
+01:07:40,920 --> 01:07:45,440
+that people have compiled and you
+
+01:07:43,400 --> 01:07:47,160
+probably if you want to do any sort of
+
+01:07:45,440 --> 01:07:48,599
+instruction tuning you probably want to
+
+01:07:47,160 --> 01:07:50,680
+use one of these data sets because
+
+01:07:48,599 --> 01:07:52,920
+compiling together a bunch of data sets
+
+01:07:50,680 --> 01:07:55,720
+is just annoying it's not hard but it's
+
+01:07:52,920 --> 01:07:59,319
+annoying um and
+
+01:07:55,720 --> 01:08:01,520
+so I very highly recommend
+
+01:07:59,319 --> 01:08:03,960
+this paper on the Flan collection
+
+01:08:01,520 --> 01:08:05,480
+because it gives a good uh summary it
+
+01:08:03,960 --> 01:08:08,079
+has this really nice table that breaks
+
+01:08:05,480 --> 01:08:10,960
+them down based on like um what's the
+
+01:08:08,079 --> 01:08:14,440
+name of the data set uh what is the size
+
+01:08:10,960 --> 01:08:17,880
+of the training data um what prompts do
+
+01:08:14,440 --> 01:08:20,040
+they use zero-shot or few-shot uh so like
+
+01:08:17,880 --> 01:08:21,799
+few-shot is in-context learning like I
+
+01:08:20,040 --> 01:08:24,719
+mentioned before how many tasks are
+
+01:08:21,799 --> 01:08:26,799
+there um and what detailed methods do
+
+01:08:24,719 --> 01:08:28,480
+they use so you can take a look at this
+
+01:08:26,799 --> 01:08:30,520
+some very popular ones that lots of
+
+01:08:28,480 --> 01:08:33,920
+people use are things like the Flan
+
+01:08:30,520 --> 01:08:36,520
+collection from here also uh Natural
+
+01:08:33,920 --> 01:08:40,640
+Instructions is a very popular one uh
+
+01:08:36,520 --> 01:08:43,040
+that still people use a lot um and Self-
+
+01:08:40,640 --> 01:08:46,560
+Instruct is a popular one that I'll
+
+01:08:43,040 --> 01:08:46,560
+talk about in a
+
+01:08:47,640 --> 01:08:53,159
+second cool um so the second thing that
+
+01:08:50,960 --> 01:08:55,359
+I want to talk about is instruction
+
+01:08:53,159 --> 01:08:57,359
+tuned models or the next thing I want to
+
+01:08:55,359 --> 01:08:59,120
+talk about is instruction-tuned models
+
+01:08:57,359 --> 01:09:01,600
+these are examples of models that like I
+
+01:08:59,120 --> 01:09:05,560
+can recommend you use now in 2024
+
+01:09:01,600 --> 01:09:10,839
+they're like good models to use um
+
+01:09:05,560 --> 01:09:12,279
+Flan-T5 I think is a very good model
+
+01:09:10,839 --> 01:09:16,199
+especially it's a very good model for
+
+01:09:12,279 --> 01:09:19,679
+its size and it comes in various sizes
+
+01:09:16,199 --> 01:09:20,880
+uh from smaller models to uh those up to
+
+01:09:19,679 --> 01:09:23,199
+11 billion
+
+01:09:20,880 --> 01:09:25,839
+parameters and it's an encoder decoder
+
+01:09:23,199 --> 01:09:29,279
+model based on T5 that was trained on
+
+01:09:25,839 --> 01:09:32,080
+lots of data my impression is that this
+
+01:09:29,279 --> 01:09:34,920
+is a model that's like consistently good
+
+01:09:32,080 --> 01:09:38,400
+at anything that's like a simple input
+
+01:09:34,920 --> 01:09:40,759
+output style task not like a chat task
+
+01:09:38,400 --> 01:09:43,040
+um so if you just have input output you
+
+01:09:40,759 --> 01:09:45,839
+want to do like uh code generation you
+
+01:09:43,040 --> 01:09:47,319
+want to do maybe not code generation you
+
+01:09:45,839 --> 01:09:49,199
+want to do like summarization or other
+
+01:09:47,319 --> 01:09:52,640
+things like that that's a good model to
+
+01:09:49,199 --> 01:09:55,560
+use um another one is Llama 2 Chat so
+
+01:09:52,640 --> 01:09:58,120
+Llama 2 Chat was instruction tuned and
+
+01:09:55,560 --> 01:10:02,719
+uh kind of tuned with human preferences
+
+01:09:58,120 --> 01:10:05,600
+but it is quite good at following
+
+01:10:02,719 --> 01:10:07,520
+instructions and then there's also
+
+01:10:05,600 --> 01:10:10,600
+excuse me Mixtral Instruct and these are
+
+01:10:07,520 --> 01:10:13,360
+both decoder only models Mixtral is a
+
+01:10:10,600 --> 01:10:17,280
+decoder only mixture of experts model
+
+01:10:13,360 --> 01:10:19,400
+Mistral is smaller and quite strong so I
+
+01:10:17,280 --> 01:10:20,920
+would recommend that you consider this
+
+01:10:19,400 --> 01:10:24,480
+maybe as a default if you want a decoder
+
+01:10:20,920 --> 01:10:26,840
+only model and then Flan-T5 if you
+
+01:10:24,480 --> 01:10:26,840
+want an encoder
+
+01:10:28,840 --> 01:10:33,800
+decoder
+
+01:10:30,480 --> 01:10:35,719
+cool um the final thing I'd like to talk
+
+01:10:33,800 --> 01:10:37,000
+about a little bit um and then we're
+
+01:10:35,719 --> 01:10:39,239
+also going to talk about it a bit more
+
+01:10:37,000 --> 01:10:42,000
+in the distillation class is data set
+
+01:10:39,239 --> 01:10:43,440
+generation so it's possible to
+
+01:10:42,000 --> 01:10:46,440
+automatically generate instruction
+
+01:10:43,440 --> 01:10:48,199
+tuning data sets and the first or
+
+01:10:46,440 --> 01:10:51,560
+typical example of this is Self-
+
+01:10:48,199 --> 01:10:55,080
+Instruct and the way Self-Instruct works
+
+01:10:51,560 --> 01:10:56,840
+is you have uh a bunch of seed tasks
+
+01:10:55,080 --> 01:10:59,640
+that have one instruction and one
+
+01:10:56,840 --> 01:11:02,560
+instance per task you throw them into
+
+01:10:59,640 --> 01:11:05,960
+the task pool and then based on this you
+
+01:11:02,560 --> 01:11:07,239
+do prompting to try to generate new
+
+01:11:05,960 --> 01:11:11,159
+tasks
+
+01:11:07,239 --> 01:11:14,440
+basically and um you identify
+
+01:11:11,159 --> 01:11:18,640
+uh what type of task it is and then
+
+01:11:14,440 --> 01:11:19,640
+based on the task you generate uh inputs
+
+01:11:18,640 --> 01:11:22,440
+and outputs
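+
+A schematic sketch of the loop just described; generate_task and
+generate_instance stand in for calls to the language model, the pool
+holds (instruction, instance) pairs, and the filtering here only
+de-duplicates, so treat the details as assumptions rather than the
+paper's exact pipeline:
+
+import random
+
+def self_instruct(seed_tasks, generate_task, generate_instance, rounds=10):
+    pool = list(seed_tasks)  # starts from the ~175 seed tasks
+    for _ in range(rounds):
+        demos = random.sample(pool, k=min(4, len(pool)))
+        instruction = generate_task(demos)         # LM proposes a new task
+        instance = generate_instance(instruction)  # LM writes input/output
+        if all(instruction != t for t, _ in pool):  # minimal filtering
+            pool.append((instruction, instance))
+    return pool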
+
+01:11:22,440 --> 01:11:26,199
+and from these inputs and outputs they do a little bit of minimal
+
+01:11:24,400 --> 01:11:27,800
+filtering to de-duplicate the data set
+
+01:11:26,199 --> 01:11:29,640
+and also remove things that require like
+
+01:11:27,800 --> 01:11:31,679
+visual information and other stuff like
+
+01:11:29,640 --> 01:11:34,080
+that and then feed that back into the
+
+01:11:31,679 --> 01:11:36,560
+task pool so basically like they start
+
+01:11:34,080 --> 01:11:38,159
+with 175 examples and then they expand
+
+01:11:36,560 --> 01:11:40,520
+this data set to be very large to cover
+
+01:11:38,159 --> 01:11:45,320
+many many different tasks
+
+01:11:40,520 --> 01:11:46,520
+um so uh this is pretty influential and
+
+01:11:45,320 --> 01:11:49,679
+like one interesting thing that they
+
+01:11:46,520 --> 01:11:52,560
+showed here is that you can improve the
+
+01:11:49,679 --> 01:11:55,960
+model that was used to generate uh these
+
+01:11:52,560 --> 01:11:58,600
+itself um so basically they took this
+
+01:11:55,960 --> 01:12:01,719
+and they used it to fine-tune uh GPT-3
+
+01:11:58,600 --> 01:12:04,679
+basically um they used GPT-3 to generate
+
+01:12:01,719 --> 01:12:04,679
+the tasks and they used it to
+
+01:12:04,760 --> 01:12:11,639
+fine-tune um some other more recent examples
+
+01:12:07,920 --> 01:12:15,600
+are Chain of Thought um uh tuning for
+
+01:12:11,639 --> 01:12:17,320
+Chain of Thought so um Orca is a nice
+
+01:12:15,600 --> 01:12:20,840
+example of this this is uh something
+
+01:12:17,320 --> 01:12:23,120
+where they generated explanations for
+
+01:12:20,840 --> 01:12:24,679
+why um for why the model made a
+
+01:12:23,120 --> 01:12:27,159
+particular decision and then they use
+
+01:12:24,679 --> 01:12:30,400
+that to train models uh and improve
+
+01:12:27,159 --> 01:12:30,400
+their essentially reasoning
+
+01:12:31,120 --> 01:12:37,280
+capabilities another interesting example
+
+01:12:34,159 --> 01:12:38,880
+is uh something called Evol-Instruct and
+
+01:12:37,280 --> 01:12:40,760
+basically the idea here is they start
+
+01:12:38,880 --> 01:12:43,440
+out with a seed set of instructions from
+
+01:12:40,760 --> 01:12:45,800
+any data set that you want to be using
+
+01:12:43,440 --> 01:12:48,239
+and they modify those instructions to
+
+01:12:45,800 --> 01:12:50,480
+make them more complex so they say okay
+
+01:12:48,239 --> 01:12:52,920
+this is too easy let's make this harder
+
+01:12:50,480 --> 01:12:55,679
+um and that makes it possible to uh
+
+01:12:52,920 --> 01:12:58,320
+improve uh the ability of models to
+
+01:12:55,679 --> 01:13:00,440
+solve complex problems so this is
+
+01:12:58,320 --> 01:13:02,120
+actually a really popular you know area
+
+01:13:00,440 --> 01:13:04,080
+overall nowadays I'm not going to do
+
+01:13:02,120 --> 01:13:06,960
+it justice in one slide so we'll talk a
+
+01:13:04,080 --> 01:13:09,199
+bit more about it later but um uh this
+
+01:13:06,960 --> 01:13:11,440
+is the general
+
+01:13:09,199 --> 01:13:14,280
+idea
+
+01:13:11,440 --> 01:13:18,159
+cool and yeah that's all I have for
+
+01:13:14,280 --> 01:13:18,159
+today uh any questions or
+
+01:13:20,679 --> 01:13:25,360
+yeah talk about other places
+
+01:13:26,760 --> 01:13:31,199
+oh yeah yeah sorry sorry very very good
+
+01:13:29,480 --> 01:13:32,880
+question and I actually wanted to put
+
+01:13:31,199 --> 01:13:34,960
+that on my slide but I just realized I
+
+01:13:32,880 --> 01:13:36,800
+forgot so thank you for prompting me um
+
+01:13:34,960 --> 01:13:38,920
+so when would you want to do basic uh
+
+01:13:36,800 --> 01:13:40,760
+single task fine tuning versus
+
+01:13:38,920 --> 01:13:45,199
+instruction
+
+01:13:40,760 --> 01:13:47,880
+tuning if you have a very carefully like
+
+01:13:45,199 --> 01:13:49,360
+if you have a very clear task definition
+
+01:13:47,880 --> 01:13:51,280
+and you have lots of training data doing
+
+01:13:49,360 --> 01:13:53,440
+full fine tuning can be good for a
+
+01:13:51,280 --> 01:13:56,120
+number of reasons number one you can
+
+01:13:53,440 --> 01:13:57,800
+get maybe slightly superior accuracy
+
+01:13:56,120 --> 01:14:00,280
+with bigger models but you can get much
+
+01:13:57,800 --> 01:14:01,719
+superior accuracy with smaller models
+
+01:14:00,280 --> 01:14:04,120
+because smaller models don't have the
+
+01:14:01,719 --> 01:14:07,960
+capacity to like do really really well
+
+01:14:04,120 --> 01:14:10,280
+on lots of different tasks so um I think
+
+01:14:07,960 --> 01:14:12,760
+you'll see you know some
+
+01:14:10,280 --> 01:14:14,840
+improvement maybe a somewhat marginal
+
+01:14:12,760 --> 01:14:17,560
+improvement on bigger models but you'll
+
+01:14:14,840 --> 01:14:20,040
+see a big improvement on smaller models
+
+01:14:17,560 --> 01:14:22,440
+and there have been some
+
+01:14:20,040 --> 01:14:24,639
+interesting results on this recently
+
+01:14:22,440 --> 01:14:27,000
+like there's a really strong text-to-SQL
+
+01:14:24,639 --> 01:14:28,760
+model that was based on Llama 7B that
+
+01:14:27,000 --> 01:14:32,639
+was just trained on tons and tons of
+
+01:14:28,760 --> 01:14:35,520
+text-to-SQL data for example um and so
+
+01:14:32,639 --> 01:14:38,520
+there's certain tasks where it's really
+
+01:14:35,520 --> 01:14:41,520
+important another example is
+
+01:14:38,520 --> 01:14:45,120
+um on translation
+
+01:14:41,520 --> 01:14:48,280
+tasks uh there's a model called NLLB which
+
+01:14:45,120 --> 01:14:50,880
+is 3.3 billion parameters and it's
+
+01:14:48,280 --> 01:14:53,560
+competitive with GPT-4 on very
+
+01:14:50,880 --> 01:14:55,000
+large uh on very large languages with
+
+01:14:53,560 --> 01:14:57,199
+lots of pretraining data and way better than
+
+01:14:55,000 --> 01:15:00,080
+GPT-4 on languages with less pretraining
+
+01:14:57,199 --> 01:15:01,800
+data so um it just shows how like if you
+
+01:15:00,080 --> 01:15:03,880
+very carefully work on a special purpose
+
+01:15:01,800 --> 01:15:05,800
+model even if it's very small compared
+
+01:15:03,880 --> 01:15:08,280
+to the bigger model you can still do a
+
+01:15:05,800 --> 01:15:10,560
+really good job so I think that's the
+
+01:15:08,280 --> 01:15:13,440
+biggest
+
+01:15:10,560 --> 01:15:15,199
+benefit another thing is um another thing is
+
+01:15:13,440 --> 01:15:16,600
+if you have a very fixed format and you
+
+01:15:15,199 --> 01:15:17,880
+always want something in a format you
+
+01:15:16,600 --> 01:15:25,199
+might want to be
+
+01:15:17,880 --> 01:15:25,199
+doing the prev page thank only one Inu
+
+01:15:37,639 --> 01:15:43,840
+well it you are inputting at least 175
+
+01:15:41,400 --> 01:15:47,280
+seed ones and
+
+01:15:43,840 --> 01:15:49,080
+um you know you're sampling from the
+
+01:15:47,280 --> 01:15:51,320
+model you're asking it to generate new
+
+01:15:49,080 --> 01:15:52,400
+instructions so if you have a model
+
+01:15:51,320 --> 01:15:54,000
+that's good enough at following
+
+01:15:52,400 --> 01:15:56,320
+instructions it'll be able to generate
+
+01:15:54,000 --> 01:15:56,320
+something
+
+01:16:00,400 --> 01:16:05,400
+new for
+
+01:16:02,400 --> 01:16:07,600
+this yeah they have a class I believe
+
+01:16:05,400 --> 01:16:12,560
+they have a classifier that says it will
+
+01:16:07,600 --> 01:16:12,560
+be one of these two yeah
+
+01:16:15,000 --> 01:16:18,639
+dur it can be
+
+01:16:20,280 --> 01:16:26,760
+both yeah well but also the
+
+01:16:24,080 --> 01:16:28,440
+um the seed task can be input first and
+
+01:16:26,760 --> 01:16:31,520
+output first and you're like generating
+
+01:16:28,440 --> 01:16:34,760
+a new instruction for the LM
+
+01:16:31,520 --> 01:16:36,199
+here so this is from the task pool
+
+01:16:34,760 --> 01:16:37,960
+but you're asking the LM to generate a
+
+01:16:36,199 --> 01:16:40,120
+new
+
+01:16:37,960 --> 01:16:44,800
+instruction
+
+01:16:40,120 --> 01:16:44,800
+yeah cool and anything
+
+01:16:45,320 --> 01:16:51,239
+else okay um yeah that's all we
+
+01:16:48,719 --> 01:16:54,239
+have for today so thank
+
+01:16:51,239 --> 01:16:54,239
+you
diff --git a/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation.mp4 b/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation.mp4
new file mode 100644
index 0000000000000000000000000000000000000000..45b448eb6a7d84af670ca4fe7024e08fe8d51f4f
--- /dev/null
+++ b/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation.mp4
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac59ef9485c4b152c37764fd4719eb7913c695aff4ff999684daf53970df4309
+size 74350423
diff --git a/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation/metadata.json b/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation/metadata.json
new file mode 100644
index 0000000000000000000000000000000000000000..a387bdb3656b07c270403ae3f5dd9cec65040ec7
--- /dev/null
+++ b/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation/metadata.json
@@ -0,0 +1,4 @@
+{
+  "url": "https://www.youtube.com/watch?v=IEFPSsu0Obg",
+  "title": "CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation"
+}
\ No newline at end of file
diff --git a/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation/transcript.srt b/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation/transcript.srt
new file mode 100644
index 0000000000000000000000000000000000000000..0f5212e97d49856ab1d7651d7d54a9fdc5759f9d
--- /dev/null
+++ b/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation/transcript.srt
@@ -0,0 +1,6759 @@
+1
+00:00:00,719 --> 00:00:07,480
+so to get started I want to show an
+
+2
+00:00:04,120 --> 00:00:10,320
+example of the scientific method I took
+
+3
+00:00:07,480 --> 00:00:12,920
+this directly from Wikipedia but it's
+
+4
+00:00:10,320 --> 00:00:15,320
+actually uh pretty nice it's a pretty
+
+5
+00:00:12,920 --> 00:00:17,480
+nice and concise summary of what we
+
+6
+00:00:15,320 --> 00:00:19,439
+should do when we're coming up with new
+
+7
+00:00:17,480 --> 00:00:22,160
+uh kind of research
+
+8
+00:00:19,439 --> 00:00:24,039
+projects and we start with an
+
+9
+00:00:22,160 --> 00:00:26,840
+observation or question we do research
+
+10
+00:00:24,039 --> 00:00:28,599
+of the topic area we form a hypothesis
+
+11
+00:00:26,840 --> 00:00:31,439 +we test it with an experiment analyze + +12 +00:00:28,599 --> 00:00:33,600 +data and Report conclusions + +13 +00:00:31,439 --> 00:00:35,640 +and even if we're doing kind of an + +14 +00:00:33,600 --> 00:00:37,480 +engineering based project still this + +15 +00:00:35,640 --> 00:00:42,079 +thinking of the stuff that we're doing + +16 +00:00:37,480 --> 00:00:44,399 +in a framework like this can help you a + +17 +00:00:42,079 --> 00:00:46,079 +lot so uh the first thing I'd like to + +18 +00:00:44,399 --> 00:00:49,120 +talk about is identifying good research + +19 +00:00:46,079 --> 00:00:51,800 +directions and so I'm going to look at + +20 +00:00:49,120 --> 00:00:53,640 +that from the observation question + +21 +00:00:51,800 --> 00:00:56,320 +perspective + +22 +00:00:53,640 --> 00:00:58,480 +here so if we think about why we do + +23 +00:00:56,320 --> 00:01:01,160 +research uh particularly why we do + +24 +00:00:58,480 --> 00:01:04,199 +research on natural language process in + +25 +00:01:01,160 --> 00:01:07,159 +um there's a couple reasons why the + +26 +00:01:04,199 --> 00:01:09,439 +first is application driven research and + +27 +00:01:07,159 --> 00:01:13,159 +usually this is I would like to make a + +28 +00:01:09,439 --> 00:01:15,040 +useful system or make one work better so + +29 +00:01:13,159 --> 00:01:18,479 +uh you know this is probably the great + +30 +00:01:15,040 --> 00:01:20,280 +majority of NLP research then separately + +31 +00:01:18,479 --> 00:01:21,960 +from that there's curiosity driven + +32 +00:01:20,280 --> 00:01:24,560 +research which is like I would like to + +33 +00:01:21,960 --> 00:01:27,360 +know more about language or the world + +34 +00:01:24,560 --> 00:01:29,159 +viewed through language and so this + +35 +00:01:27,360 --> 00:01:31,840 +doesn't necessarily have to be + +36 +00:01:29,159 --> 00:01:31,840 +immediately + +37 +00:01:32,000 --> 00:01:37,280 +like a downstream application that users + +38 +00:01:35,399 --> 00:01:39,159 +are using will immediately get better + +39 +00:01:37,280 --> 00:01:40,439 +it's more like we have a burning + +40 +00:01:39,159 --> 00:01:43,159 +question that we would like to answer + +41 +00:01:40,439 --> 00:01:47,399 +and we want to answer + +42 +00:01:43,159 --> 00:01:48,640 +it so NLP encompasses both uh sometimes + +43 +00:01:47,399 --> 00:01:50,479 +if you read a paper you'll have + +44 +00:01:48,640 --> 00:01:54,360 +something that's doing both uh + +45 +00:01:50,479 --> 00:01:56,439 +especially like analyzing the internals + +46 +00:01:54,360 --> 00:01:58,079 +or training dynamics of a a neural + +47 +00:01:56,439 --> 00:01:59,920 +network to answer a curiosity-driven + +48 +00:01:58,079 --> 00:02:02,439 +question and then applying that to come + +49 +00:01:59,920 --> 00:02:04,840 +up with a better method that makes work + +50 +00:02:02,439 --> 00:02:06,560 +better I I would like to say though that + +51 +00:02:04,840 --> 00:02:09,119 +it's kind of rare that there's a paper + +52 +00:02:06,560 --> 00:02:10,879 +that does both of them really well uh + +53 +00:02:09,119 --> 00:02:13,160 +and so usually one of them is kind of + +54 +00:02:10,879 --> 00:02:14,599 +the main focus and I think you can be + +55 +00:02:13,160 --> 00:02:17,680 +well served by choosing which one is + +56 +00:02:14,599 --> 00:02:20,560 +your main focus and then kind of uh the + +57 +00:02:17,680 --> 00:02:23,560 +other might come as a additional uh + +58 +00:02:20,560 --> 00:02:23,560 +bonus on top of + +59 +00:02:23,920 --> 
00:02:28,760
+that so here are a few examples of
+
+60
+00:02:27,160 --> 00:02:32,800
+application driven
+
+61
+00:02:28,760 --> 00:02:35,239
+research so for example Pang et al. uh
+
+62
+00:02:32,800 --> 00:02:37,840
+they proposed the task of sentiment
+
+63
+00:02:35,239 --> 00:02:39,879
+analysis um so actually there was a
+
+64
+00:02:37,840 --> 00:02:41,879
+paper 22 years ago that proposed the
+
+65
+00:02:39,879 --> 00:02:44,879
+task of sentiment analysis it might seem
+
+66
+00:02:41,879 --> 00:02:46,760
+very you know normal nowadays but uh
+
+67
+00:02:44,879 --> 00:02:49,519
+there was a paper that proposed it back
+
+68
+00:02:46,760 --> 00:02:52,840
+then and they proposed sentiment
+
+69
+00:02:49,519 --> 00:02:54,200
+analysis because um labeling articles
+
+70
+00:02:52,840 --> 00:02:57,480
+with their sentiment would provide
+
+71
+00:02:54,200 --> 00:02:59,760
+succinct summaries to the readers um so
+
+72
+00:02:57,480 --> 00:03:03,319
+they basically wanted to provide
+
+73
+00:02:59,760 --> 00:03:03,319
+information to readers and that would be
+
+74
+00:03:03,400 --> 00:03:09,000
+useful another paper by Reddy et al.
+
+75
+00:03:06,440 --> 00:03:11,519
+2019 proposes a task of conversational
+
+76
+00:03:09,000 --> 00:03:13,640
+question answering uh because an
+
+77
+00:03:11,519 --> 00:03:15,599
+inability to build and maintain common
+
+78
+00:03:13,640 --> 00:03:17,680
+ground is part of the reason why virtual
+
+79
+00:03:15,599 --> 00:03:20,159
+assistants usually don't seem like
+
+80
+00:03:17,680 --> 00:03:22,040
+competent conversational partners so
+
+81
+00:03:20,159 --> 00:03:24,519
+when you're talking to your Alexa or
+
+82
+00:03:22,040 --> 00:03:27,000
+your Google uh Home or something like
+
+83
+00:03:24,519 --> 00:03:28,599
+this you might ask it a question and
+
+84
+00:03:27,000 --> 00:03:30,120
+then after you asked it a question you
+
+85
+00:03:28,599 --> 00:03:31,480
+ask it another question but it doesn't
+
+86
+00:03:30,120 --> 00:03:32,879
+go back to the context that you had
+
+87
+00:03:31,480 --> 00:03:34,519
+before and they wanted to solve this
+
+88
+00:03:32,879 --> 00:03:36,040
+problem so they proposed this data set
+
+89
+00:03:34,519 --> 00:03:40,000
+for
+
+90
+00:03:36,040 --> 00:03:41,720
+it um Gehrmann et al. proposed a method for bottom
+
+91
+00:03:40,000 --> 00:03:43,159
+up abstractive summarization because
+
+92
+00:03:41,720 --> 00:03:44,760
+neural network-based methods for
+
+93
+00:03:43,159 --> 00:03:46,879
+abstractive summarization produce
+
+94
+00:03:44,760 --> 00:03:49,000
+outputs that are fluent but perform
+
+95
+00:03:46,879 --> 00:03:51,120
+poorly at content selection so they had a
+
+96
+00:03:49,000 --> 00:03:53,000
+problem they had a task already in mind
+
+97
+00:03:51,120 --> 00:03:54,239
+they weren't proposing a new task and
+
+98
+00:03:53,000 --> 00:03:56,040
+there was a problem with the
+
+99
+00:03:54,239 --> 00:03:58,760
+existing system so they fixed
+
+100
+00:03:56,040 --> 00:04:00,400
+it and then Kudo and Richardson proposed
+
+101
+00:03:58,760 --> 00:04:02,920
+a method for unsupervised word
+
+102
+00:04:00,400 --> 00:04:04,799
+segmentation namely SentencePiece uh
+
+103
+00:04:02,920 --> 00:04:06,439
+because language dependent processing
+
+104
+00:04:04,799 --> 00:04:08,920
+makes it hard to train multilingual
+
+105
+00:04:06,439 --> 00:04:10,360
+models as we have to carefully manage
+
+106
+00:04:08,920 --> 00:04:12,720
+the configurations of pre- and
+
+107
+00:04:10,360 --> 00:04:15,879
+post-processors per language so they
+
+108
+00:04:12,720 --> 00:04:17,519
+tried to make things easier uh so like
+
+109
+00:04:15,879 --> 00:04:19,600
+you can see all of these things like the
+
+110
+00:04:17,519 --> 00:04:21,919
+first two are proposing new tasks to
+
+111
+00:04:19,600 --> 00:04:23,880
+solve and they're doing it from the
+
+112
+00:04:21,919 --> 00:04:25,919
+point of view of uh creating something
+
+113
+00:04:23,880 --> 00:04:29,120
+useful for users the second two are
+
+114
+00:04:25,919 --> 00:04:30,440
+proposing new methods the first one is
+
+115
+00:04:29,120 --> 00:04:34,360
+like improving
+
+116
+00:04:30,440 --> 00:04:36,320
+accuracy um so this is the most
+
+117
+00:04:34,360 --> 00:04:37,639
+common most commonly people say I have a
+
+118
+00:04:36,320 --> 00:04:39,120
+task that I want to solve there's a
+
+119
+00:04:37,639 --> 00:04:41,280
+problem with accuracy I want to improve
+
+120
+00:04:39,120 --> 00:04:43,960
+it but you can also improve other things
+
+121
+00:04:41,280 --> 00:04:45,880
+so you can improve like convenience or
+
+122
+00:04:43,960 --> 00:04:47,320
+uh you can improve efficiency or
+
+123
+00:04:45,880 --> 00:04:51,720
+other things like that so all of those
+
+124
+00:04:47,320 --> 00:04:51,720
+are you know perfectly reasonable
+
+125
+00:04:52,120 --> 00:04:57,320
+things I also have some examples of
+
+126
+00:04:54,639 --> 00:04:59,120
+curiosity driven research these are
+
+127
+00:04:57,320 --> 00:05:00,360
+actually harder to find in the ACL
+
+128
+00:04:59,120 --> 00:05:03,120
+Anthology
+
+129
+00:05:00,360 --> 00:05:06,400
+it's definitely the minority case but
+
+130
+00:05:03,120 --> 00:05:09,160
+they still do exist um so for example
+
+131
+00:05:06,400 --> 00:05:10,960
+Rashkin et al. 2017 asked what is the
+
+132
+00:05:09,160 --> 00:05:13,800
+difference between the language of real
+
+133
+00:05:10,960 --> 00:05:17,000
+news with that of satire hoaxes and
+
+134
+00:05:13,800 --> 00:05:18,800
+propaganda so they were not attempting
+
+135
+00:05:17,000 --> 00:05:21,039
+to create a system for fake news
+
+136
+00:05:18,800 --> 00:05:23,199
+detection that was not their goal here
+
+137
+00:05:21,039 --> 00:05:24,600
+their goal was just to figure
+
+138
+00:05:23,199 --> 00:05:26,240
+out what were the different linguistic
+
+139
+00:05:24,600 --> 00:05:28,000
+characteristics and they found that
+
+140
+00:05:26,240 --> 00:05:29,720
+scientifically interesting maybe
+
+141
+00:05:28,000 --> 00:05:31,280
+downstream that would be useful but that
+
+142
+00:05:29,720 --> 00:05:35,080
+wasn't the point of their
+
+143
+00:05:31,280 --> 00:05:36,960
+paper another one uh Cotterell et al. ask
+
+144
+00:05:35,080 --> 00:05:38,960
+are all languages equally hard to
+
+145
+00:05:36,960 --> 00:05:41,000
+language model and so basically they
+
+146
+00:05:38,960 --> 00:05:42,440
+wanted to know are all languages just
+
+147
+00:05:41,000 --> 00:05:45,520
+character strings and so language
+
+148
+00:05:42,440 --> 00:05:47,479
+modeling them is uh similarly easy or
+
+149
+00:05:45,520 --> 00:05:49,120
+are there certain characteristics of
+
+150
+00:05:47,479 --> 00:05:51,080
+language that make them easier or harder
+
+151
+00:05:49,120 --> 00:05:54,000
+to model with the current architectures
+
+152
+00:05:51,080 --> 00:05:55,520
+that we have um and so they didn't
+
+153
+00:05:54,000 --> 00:05:57,039
+propose a new architecture they didn't
+
+154
+00:05:55,520 --> 00:06:00,479
+propose to improve anything they just
+
+155
+00:05:57,039 --> 00:06:02,400
+proposed to examine this question
+
+156
+00:06:00,479 --> 00:06:04,280
+um and also Tenney et al. this is
+
+157
+00:06:02,400 --> 00:06:06,880
+actually an extremely impactful work
+
+158
+00:06:04,280 --> 00:06:09,319
+downstream but uh they weren't improving
+
+159
+00:06:06,880 --> 00:06:11,520
+anything they just quantified where
+
+160
+00:06:09,319 --> 00:06:14,440
+specific types of linguistic information
+
+161
+00:06:11,520 --> 00:06:16,720
+are encoded in BERT so they found that
+
+162
+00:06:14,440 --> 00:06:18,840
+for example syntax was encoded better in
+
+163
+00:06:16,720 --> 00:06:20,560
+the early layers semantics in the later
+
+164
+00:06:18,840 --> 00:06:22,520
+layers and then if you go further you
+
+165
+00:06:20,560 --> 00:06:25,280
+have other fine grained things like
+
+166
+00:06:22,520 --> 00:06:27,599
+pragmatics-style
+
+167
+00:06:25,280 --> 00:06:30,400
+information so I think you can kind of
+
+168
+00:06:27,599 --> 00:06:32,120
+see the difference between these two um
+
+169
+00:06:30,400 --> 00:06:34,800
+are there any questions
+
+170
+00:06:32,120 --> 00:06:40,199
+about
+
+171
+00:06:34,800 --> 00:06:41,720
+this no okay let's leave it at that so the next
+
+172
+00:06:40,199 --> 00:06:43,680
+question which I think a lot of people
+
+173
+00:06:41,720 --> 00:06:46,240
+might be asking particularly with
+
+174
+00:06:43,680 --> 00:06:47,720
+respect to assignment 4 which requires
+
+175
+00:06:46,240 --> 00:06:51,039
+you to come up with something novel to
+
+176
+00:06:47,720 --> 00:06:53,240
+do is how do we uh get research
+
+177
+00:06:51,039 --> 00:06:57,360
+ideas
+
+178
+00:06:53,240 --> 00:07:02,280
+and the way we can do this is uh twofold
+
+179
+00:06:57,360 --> 00:07:04,479
+so um one is kind of we want to turn a
+
+180
+00:07:02,280 --> 00:07:07,120
+concrete understanding of existing
+
+181
+00:07:04,479 --> 00:07:10,120
+research's failings into a higher level
+
+182
+00:07:07,120 --> 00:07:12,560
+experimental question and the two ways
+
+183
+00:07:10,120 --> 00:07:15,240
+that I normally characterize doing this
+
+184
+00:07:12,560 --> 00:07:19,319
+are bottom up discovery of research
+
+185
+00:07:15,240 --> 00:07:21,080
+ideas um or the way I
+
+186
+00:07:19,319 --> 00:07:24,479
+characterize this is bottom up discovery
+
+187
+00:07:21,080 --> 00:07:27,000
+of research ideas and this is a great
+
+188
+00:07:24,479 --> 00:07:29,120
+tool for making incremental progress on
+
+189
+00:07:27,000 --> 00:07:32,039
+existing systems on tasks that we really
+
+190
+00:07:29,120 --> 00:07:35,400
+care about or expanding the scope of a
+
+191
+00:07:32,039 --> 00:07:37,680
+task that we care about so uh some
+
+192
+00:07:35,400 --> 00:07:41,879
+examples of this would be like in
+
+193
+00:07:37,680 --> 00:07:45,639
+assignment number three you uh look
+
+194
+00:07:41,879 --> 00:07:47,720
+let's say you're looking at
+
+195
+00:07:45,639 --> 00:07:50,159
+um let's say you're looking at the
+
+196
+00:07:47,720 --> 00:07:53,840
+question answering performance
+
+197
+00:07:50,159 --> 00:07:58,280
+of multilingual models on
+
+198
+00:07:53,840 --> 00:08:01,479
+different languages um and for
+
+199
+00:07:58,280 --> 00:08:03,159
+assignment three you implement a couple
+
+200
+00:08:01,479 --> 00:08:05,240
+multilingual models on different
+
+201
+00:08:03,159 --> 00:08:06,560
+languages you run them you look at the
+
+202
+00:08:05,240 --> 00:08:08,400
+results and you identify that
+
+203
+00:08:06,560 --> 00:08:10,080
+multilingual models are particularly bad
+
+204
+00:08:08,400 --> 00:08:12,919
+at answering questions about named
+
+205
+00:08:10,080 --> 00:08:14,680
+entities and so now you have looked at
+
+206
+00:08:12,919 --> 00:08:17,759
+the output you have decided that that's
+
+207
+00:08:14,680 --> 00:08:20,199
+a big problem um you can go in and
+
+208
+00:08:17,759 --> 00:08:22,080
+improve it so this is a great tool for
+
+209
+00:08:20,199 --> 00:08:23,720
+incremental progress and like in fact
+
+210
+00:08:22,080 --> 00:08:26,520
+doing this really effectively has been
+
+211
+00:08:23,720 --> 00:08:31,000
+very effective in my own research career
+
+212
+00:08:26,520 --> 00:08:34,680
+like uh I like to
+
+213
+00:08:31,000 --> 00:08:36,279
+look at data I try to do that a lot and
+
+214
+00:08:34,680 --> 00:08:38,440
+by doing that I identify the most
+
+215
+00:08:36,279 --> 00:08:40,200
+frequent problems and because of that
+
+216
+00:08:38,440 --> 00:08:42,039
+when I fix those problems my accuracy
+
+217
+00:08:40,200 --> 00:08:44,560
+goes up a lot more than people who pick
+
+218
+00:08:42,039 --> 00:08:46,880
+the less good problems right and so if
+
+219
+00:08:44,560 --> 00:08:49,440
+we want our accuracy to go up uh I'm
+
+220
+00:08:46,880 --> 00:08:51,360
+more efficient at you know improving
+
+221
+00:08:49,440 --> 00:08:53,240
+things on the other hand there's
+
+222
+00:08:51,360 --> 00:08:55,399
+something uh from the opposite direction
+
+223
+00:08:53,240 --> 00:08:57,080
+is moving from a higher level question
+
+224
+00:08:55,399 --> 00:08:57,800
+to a lower level concrete testing of
+
+225
+00:08:57,080 --> 00:09:00,120
+that
+
+226
+00:08:57,800 --> 00:09:01,760
+question um so this could be top down
+
+227
+00:09:00,120 --> 00:09:02,760
+design this is top down design of
+
+228
+00:09:01,760 --> 00:09:06,360
+research
+
+229
+00:09:02,760 --> 00:09:08,399
+ideas this favors bigger ideas but these
+
+230
+00:09:06,360 --> 00:09:10,240
+ideas can be disconnected from reality
+
+231
+00:09:08,399 --> 00:09:13,880
+or they could be not solving the right
+
+232
+00:09:10,240 --> 00:09:17,079
+problems so the typical like very very
+
+233
+00:09:13,880 --> 00:09:18,800
+successful example of this is um neural
+
+234
+00:09:17,079 --> 00:09:20,800
+machine translation or something like
+
+235
+00:09:18,800 --> 00:09:22,720
+this neural machine translation neural
+
+236
+00:09:20,800 --> 00:09:26,399
+sequence-to-sequence
+
+237
+00:09:22,720 --> 00:09:30,040
+models this came out of a few people
+
+238
+00:09:26,399 --> 00:09:32,040
+like Geoff Hinton and Yoshua Bengio
+
+239
+00:09:30,040 --> 00:09:33,480
+believing for a very long time that
+
+240
+00:09:32,040 --> 00:09:35,760
+neural networks were the right way to
+
+241
+00:09:33,480 --> 00:09:37,800
+solve lots of problems uh despite the
+
+242
+00:09:35,760 --> 00:09:39,640
+fact that there wasn't like super
+
+243
+00:09:37,800 --> 00:09:42,279
+concrete evidence of that for a long
+
+244
+00:09:39,640 --> 00:09:43,399
+time and so they had this idea which was
+
+245
+00:09:42,279 --> 00:09:47,399
+like we should be doing things with
+
+246
+00:09:43,399 --> 00:09:49,440
+neural networks and uh they you know
+
+247
+00:09:47,399 --> 00:09:50,720
+they successfully executed that and now
+
+248
+00:09:49,440 --> 00:09:52,200
+everybody is doing things with neural
+
+249
+00:09:50,720 --> 00:09:56,560
+networks so they made a really huge
+
+250
+00:09:52,200 --> 00:09:58,160
+revolution in the research space um that
+
+251
+00:09:56,560 --> 00:09:59,720
+that's great that's a great example of a
+
+252
+00:09:58,160 --> 00:10:02,839
+successful top down idea but the
+
+253
+00:09:59,720 --> 00:10:05,519
+problem is uh for every example like
+
+254
+00:10:02,839 --> 00:10:07,560
+that there's a thousand uh top down
+
+255
+00:10:05,519 --> 00:10:10,760
+ideas in the graveyard of not being very
+
+256
+00:10:07,560 --> 00:10:12,600
+you know effective so I think um in
+
+257
+00:10:10,760 --> 00:10:14,519
+order to do something like this you
+
+258
+00:10:12,600 --> 00:10:16,200
+better have a very strong conviction or
+
+259
+00:10:14,519 --> 00:10:18,079
+you better have maybe some initial
+
+260
+00:10:16,200 --> 00:10:20,920
+evidence or a very strong intuition
+
+261
+00:10:18,079 --> 00:10:22,320
+about why this might be a good idea and
+
+262
+00:10:20,920 --> 00:10:25,240
+uh you would be able to test that
+
+263
+00:10:22,320 --> 00:10:27,240
+intuition through intermediate steps uh
+
+264
+00:10:25,240 --> 00:10:31,040
+to demonstrate like through toy data
+
+265
+00:10:27,240 --> 00:10:31,040
+or other stuff like that
+
+266
+00:10:31,720 --> 00:10:38,360
+um cool so these are kind of the general
+
+267
+00:10:36,360 --> 00:10:40,839
+ways that we can come up with research
+
+268
+00:10:38,360 --> 00:10:42,519
+ideas the next thing that we want to do
+
+269
+00:10:40,839 --> 00:10:44,480
+is research our topic area were there
+
+270
+00:10:42,519 --> 00:10:46,720
+any questions about bottom up versus top
+
+271
+00:10:44,480 --> 00:10:49,120
+down I'm going to talk about effective
+
+272
+00:10:46,720 --> 00:10:51,920
+strategies for bottom up stuff in uh in
+
+273
+00:10:49,120 --> 00:10:54,360
+two weeks uh so we can talk more about
+
+274
+00:10:51,920 --> 00:10:56,800
+that then
+
+275
+00:10:54,360 --> 00:11:00,959
+but okay if not I'll move
+
+276
+00:10:56,800 --> 00:11:05,079
+on so next uh we have research topic
+
+277
+00:11:00,959 --> 00:11:07,360
+areas so this is about how you will do
+
+278
+00:11:05,079 --> 00:11:10,320
+assignment three which is researching uh
+
+279
+00:11:07,360 --> 00:11:13,240
+a topic area forming a very good
+
+280
+00:11:10,320 --> 00:11:15,680
+understanding of the topic that you're
+
+281
+00:11:13,240 --> 00:11:18,800
+trying to handle and so there's a bunch
+
+282
+00:11:15,680 --> 00:11:22,800
+of different ways you can do this uh the
+
+283
+00:11:18,800 --> 00:11:25,680
+first one is keyword search and so you
+
+284
+00:11:22,800 --> 00:11:27,839
+look something up on Google Scholar or
+
+285
+00:11:25,680 --> 00:11:29,480
+something uh finding older and newer
+
+286
+00:11:27,839 --> 00:11:32,880
+papers so this is like following the
+
+287
+00:11:29,480 --> 00:11:35,360
+tracks of papers you can uh read the
+
+288
+00:11:32,880 --> 00:11:39,160
+abstract and intro uh read the details
+
+289
+00:11:35,360 --> 00:11:43,760
+of most relevant papers and I don't do
+
+290
+00:11:39,160 --> 00:11:45,440
+this as much now but um when I was a
+
+291
+00:11:43,760 --> 00:11:47,360
+graduate student I would often make a
+
+292
+00:11:45,440 --> 00:11:49,800
+short summary of the paper to make sure
+
+293
+00:11:47,360 --> 00:11:54,680
+I really understood the details uh
+
+294
+00:11:49,800 --> 00:11:56,000
+because also now I teach a class um and
+
+295
+00:11:54,680 --> 00:11:58,240
+actually making these slides is very
+
+296
+00:11:56,000 --> 00:12:00,120
+useful for me so going back into the
+
+297
+00:11:58,240 --> 00:12:03,440
+Transformer slides you know that
+
+298
+00:12:00,120 --> 00:12:05,160
+kind of serves as my um you know my way
+
+299
+00:12:03,440 --> 00:12:06,800
+of digesting papers and making sure that
+
+300
+00:12:05,160 --> 00:12:08,160
+I can explain them and if you're not
+
+301
+00:12:06,800 --> 00:12:10,480
+teaching a class you can go in and
+
+302
+00:12:08,160 --> 00:12:13,560
+make a summary of it yourselves so
+
+303
+00:12:10,480 --> 00:12:16,480
+that can confirm uh solidify your memory
+
+304
+00:12:13,560 --> 00:12:19,360
+and like confirm your uh ability to
+
+305
+00:12:16,480 --> 00:12:19,360
+understand everything that's in
+
+306
+00:12:20,639 --> 00:12:27,120
+there cool um so next I'd like to talk
+
+307
+00:12:23,639 --> 00:12:29,600
+about some sources of papers in NLP um
+
+308
+00:12:27,120 --> 00:12:31,800
+one really good source uh is the ACL
+
+309
+00:12:29,600 --> 00:12:33,720
+Anthology another good source is Google
+
+310
+00:12:31,800 --> 00:12:36,120
+Scholar um they both have their
+
+311
+00:12:33,720 --> 00:12:37,959
+advantages and their disadvantages um
+
+312
+00:12:36,120 --> 00:12:39,800
+increasingly actually I realized now
+
+313
+00:12:37,959 --> 00:12:41,959
+that I should add this to my slides but
+
+314
+00:12:39,800 --> 00:12:43,639
+increasingly a lot of good uh papers in
+
+315
+00:12:41,959 --> 00:12:47,120
+NLP are also published in machine
+
+316
+00:12:43,639 --> 00:12:51,199
+learning conferences so like ICML or NeurIPS
+
+317
+00:12:47,120 --> 00:12:53,040
+or um uh ICLR or things like that the
+
+318
+00:12:51,199 --> 00:12:54,920
+problem is the ACL Anthology is way
+
+319
+00:12:53,040 --> 00:12:56,600
+better than any of them at like
+
+320
+00:12:54,920 --> 00:13:00,360
+organizing the papers in an easy to
+
+321
+00:12:56,600 --> 00:13:03,560
+process way so I think um I'll talk
+
+322
+00:13:00,360 --> 00:13:06,000
+about this uh for now and so the ACL
+
+323
+00:13:03,560 --> 00:13:08,800
+Anthology covers many uh prestigious
+
+324
+00:13:06,000 --> 00:13:11,639
+venues in NLP it has all of these ones
+
+325
+00:13:08,800 --> 00:13:15,160
+here this figure is a little bit old uh
+
+326
+00:13:11,639 --> 00:13:18,839
+I made it in 2021 but you know it
+
+327
+00:13:15,160 --> 00:13:22,959
+reaches up to the present day and what I
+
+328
+00:13:18,839 --> 00:13:25,880
+do often is I can start with the past 3
+
+329
+00:13:22,959 --> 00:13:30,160
+to 5 years of several top venues in here
+
+330
+00:13:25,880 --> 00:13:33,880
+like ACL EMNLP uh NAACL and TACL and
+
+331
+00:13:30,160 --> 00:13:36,360
+go in and do uh keyword search and so
+
+332
+00:13:33,880 --> 00:13:36,360
+like let's
+
+333
+00:13:38,760 --> 00:13:43,600
+say let's say I was interested in
+
+334
+00:13:44,639 --> 00:13:49,519
+multilingual multilingual large language
+
+335
+00:13:47,600 --> 00:13:52,079
+models and evaluating them or some way
+
+336
+00:13:49,519 --> 00:13:54,279
+so I would go to ACL and then I would
+
+337
+00:13:52,079 --> 00:13:57,560
+just put in multi
+
+338
+00:13:54,279 --> 00:14:01,360
+lingual um and you get a wonderful paper
+
+339
+00:13:57,560 --> 00:14:01,360
+by some researcher
+
+340
+00:14:01,480 --> 00:14:06,440
+named that was not intentional I didn't
+
+341
+00:14:03,639 --> 00:14:08,800
+know that was going to happen but um so
+
+342
+00:14:06,440 --> 00:14:11,240
+on the fly crosslingual masking for
+00:14:08,800 --> 00:14:12,959
+Multilingual Pre-training um Scaling
+
+344
+00:14:11,240 --> 00:14:15,040
+Multilingual Corpora and Language Models
+
+345
+00:14:12,959 --> 00:14:18,120
+to 500 Languages that seems pretty
+
+346
+00:14:15,040 --> 00:14:19,880
+pretty relevant Evaluating Multilingual
+
+347
+00:14:18,120 --> 00:14:22,000
+Compositional Generalization so you can
+
+348
+00:14:19,880 --> 00:14:27,680
+just go through here and see a bunch of
+
+349
+00:14:22,000 --> 00:14:30,680
+papers that like um that could be
+
+350
+00:14:27,680 --> 00:14:30,680
+useful
+
+351
+00:14:32,240 --> 00:14:35,199
+and you could uh if you're doing a more
+
+352
+00:14:33,800 --> 00:14:36,920
+machine learning oriented thing you can
+
+353
+00:14:35,199 --> 00:14:38,920
+do the same thing for like the NeurIPS
+
+354
+00:14:36,920 --> 00:14:41,480
+proceedings or the ICML proceedings or
+
+355
+00:14:38,920 --> 00:14:41,480
+something like
+
+356
+00:14:41,800 --> 00:14:48,120
+that um separately from this you can go
+
+357
+00:14:44,839 --> 00:14:50,920
+through Google Scholar um this allows
+
+358
+00:14:48,120 --> 00:14:52,560
+for a search of papers by keyword and so
+
+359
+00:14:50,920 --> 00:14:54,440
+if I write like neural entity
+
+360
+00:14:52,560 --> 00:14:56,360
+recognition it will give Neural
+
+361
+00:14:54,440 --> 00:15:00,040
+Architectures for Entity Recognition
+
+362
+00:14:56,360 --> 00:15:03,399
+all of these things like this um you can
+
+363
+00:15:00,040 --> 00:15:06,800
+view the more recent papers so like for
+
+364
+00:15:03,399 --> 00:15:10,120
+example uh if you're researching a kind
+
+365
+00:15:06,800 --> 00:15:12,759
+of generic topic that a lot of people
+
+366
+00:15:10,120 --> 00:15:14,639
+uh a lot of people do research on
+
+367
+00:15:12,759 --> 00:15:18,399
+you might be getting papers from like
+
+368
+00:15:14,639 --> 00:15:19,920
+1998 or something like this and you know
+
+369
+00:15:18,399 --> 00:15:21,639
+they might be useful but honestly the
+
+370
+00:15:19,920 --> 00:15:23,519
+methodology has changed so much since
+
+371
+00:15:21,639 --> 00:15:24,680
+then that most methodological papers from
+
+372
+00:15:23,519 --> 00:15:26,959
+that long ago are probably not going to
+
+373
+00:15:24,680 --> 00:15:29,480
+be very useful um so you can view the
+
+374
+00:15:26,959 --> 00:15:31,079
+recent papers another really useful
+
+375
+00:15:29,480 --> 00:15:33,759
+thing that you can do is view papers
+
+376
+00:15:31,079 --> 00:15:35,319
+that cite the current paper and you can
+
+377
+00:15:33,759 --> 00:15:39,560
+even click on this and then you can
+
+378
+00:15:35,319 --> 00:15:42,519
+search within the citing papers so
+
+379
+00:15:39,560 --> 00:15:44,399
+um like let's say I want to know about
+
+380
+00:15:42,519 --> 00:15:45,620
+how
+
+381
+00:15:44,399 --> 00:15:48,730
+people
+
+382
+00:15:45,620 --> 00:15:48,730
+[Music]
+
+383
+00:15:50,720 --> 00:15:55,720
+do let's say I want to see if anybody
+
+384
+00:15:53,199 --> 00:15:59,639
+does neural entity recognition with uh
+
+385
+00:15:55,720 --> 00:16:02,160
+state space models so I type like state
+
+386
+00:15:59,639 --> 00:16:05,399
+space
+
+387
+00:16:02,160 --> 00:16:09,040
+model and then I search within the
+
+388
+00:16:05,399 --> 00:16:12,279
+citing articles and I'm able to find
+
+389
+00:16:09,040 --> 00:16:14,319
+three articles that at least cite this
+
+390
+00:16:12,279 --> 00:16:17,759
+paper and talk about state space
+
+391
+00:16:14,319 --> 00:16:20,319
+models so + +392 +00:16:17,759 --> 00:16:21,600 +um none of these seem particularly + +393 +00:16:20,319 --> 00:16:23,240 +relevant to what I was looking for but + +394 +00:16:21,600 --> 00:16:26,800 +you get the idea like this can be a + +395 +00:16:23,240 --> 00:16:26,800 +useful tool for finding more recent + +396 +00:16:27,519 --> 00:16:30,519 +things + +397 +00:16:33,639 --> 00:16:40,480 +and then finding older papers this is + +398 +00:16:36,279 --> 00:16:42,839 +also relatively easy um so you read the + +399 +00:16:40,480 --> 00:16:44,319 +papers that you're interested in and + +400 +00:16:42,839 --> 00:16:45,480 +then it will have back blinks to older + +401 +00:16:44,319 --> 00:16:47,519 +papers and you look them up in the + +402 +00:16:45,480 --> 00:16:50,000 +references this is how I I find older + +403 +00:16:47,519 --> 00:16:53,600 +papers that might be + +404 +00:16:50,000 --> 00:16:57,800 +relevant um and so the these are the + +405 +00:16:53,600 --> 00:16:59,720 +tools that I use um some other so I I'd + +406 +00:16:57,800 --> 00:17:03,600 +like to give a few caveats about Google + +407 +00:16:59,720 --> 00:17:06,120 +Scholar and uh things like Twitter or + +408 +00:17:03,600 --> 00:17:08,360 +LinkedIn or something like this they + +409 +00:17:06,120 --> 00:17:10,720 +give you very biased views on all the + +410 +00:17:08,360 --> 00:17:14,600 +papers that are out there um because + +411 +00:17:10,720 --> 00:17:16,919 +they sort for popularity basically so um + +412 +00:17:14,600 --> 00:17:19,439 +actually if you're looking at like + +413 +00:17:16,919 --> 00:17:22,000 +Twitter or LinkedIn or something like + +414 +00:17:19,439 --> 00:17:23,679 +that you can actually get a pretty bleak + +415 +00:17:22,000 --> 00:17:25,360 +view on natural language processing and + +416 +00:17:23,679 --> 00:17:28,000 +say all anybody is doing is training + +417 +00:17:25,360 --> 00:17:30,080 +large language models because you know + +418 +00:17:28,000 --> 00:17:31,720 +these things tend to become you know + +419 +00:17:30,080 --> 00:17:33,520 +popular and then they get Amplified by + +420 +00:17:31,720 --> 00:17:35,840 +algorithms and stuff like that when in + +421 +00:17:33,520 --> 00:17:37,440 +fact like the landscape is much richer + +422 +00:17:35,840 --> 00:17:40,400 +which is why I do definitely suggest + +423 +00:17:37,440 --> 00:17:42,000 +that you like actually look through uh + +424 +00:17:40,400 --> 00:17:43,880 +conference proceedings and stuff and + +425 +00:17:42,000 --> 00:17:46,720 +find papers that are not you know + +426 +00:17:43,880 --> 00:17:48,520 +Amplified as much so um I I definitely + +427 +00:17:46,720 --> 00:17:50,840 +highly recommend doing this in addition + +428 +00:17:48,520 --> 00:17:52,480 +to you know Google Scholar or social + +429 +00:17:50,840 --> 00:17:54,640 +media or other things like that that + +430 +00:17:52,480 --> 00:17:54,640 +might + +431 +00:17:56,600 --> 00:18:01,760 +be cool um I'd also like to mention a + +432 +00:18:00,200 --> 00:18:04,000 +thing about the ups and downs of + +433 +00:18:01,760 --> 00:18:07,559 +preemptive surveys + +434 +00:18:04,000 --> 00:18:10,440 +so um surveying extensively before doing + +435 +00:18:07,559 --> 00:18:12,840 +research uh has a bunch of good sides so + +436 +00:18:10,440 --> 00:18:14,000 +it prevents you from duplicating work so + +437 +00:18:12,840 --> 00:18:15,039 +somebody else might have done a very + +438 +00:18:14,000 --> 00:18:18,080 +similar + +439 +00:18:15,039 --> 00:18:20,480 +thing um it 
also increases your toolbox
+
+440
+00:18:18,080 --> 00:18:21,600
+of methods so you know if it's a problem
+
+441
+00:18:20,480 --> 00:18:25,400
+that a lot of people have worked on
+
+442
+00:18:21,600 --> 00:18:27,120
+before then you know it helps uh give
+
+443
+00:18:25,400 --> 00:18:30,320
+you ideas of methods that you could be
+
+444
+00:18:27,120 --> 00:18:35,600
+using um however in a way it also kind
+
+445
+00:18:30,320 --> 00:18:38,720
+of constrains your thinking so um if you
+
+446
+00:18:35,600 --> 00:18:42,480
+like, once you have built up a very
+
+447
+00:18:38,720 --> 00:18:45,440
+extensive survey of like ways to do
+
+448
+00:18:42,480 --> 00:18:47,240
+things you tend to like move away from
+
+449
+00:18:45,440 --> 00:18:48,799
+there when in fact like if you just
+
+450
+00:18:47,240 --> 00:18:50,080
+thought of ways to solve problems
+
+451
+00:18:48,799 --> 00:18:52,360
+without looking at everything you might
+
+452
+00:18:50,080 --> 00:18:54,799
+come up with something over here that might
+
+453
+00:18:52,360 --> 00:18:56,400
+actually be a good idea right um and so
+
+454
+00:18:54,799 --> 00:18:58,600
+there's this really nice essay it was
+
+455
+00:18:56,400 --> 00:19:00,799
+actually shared uh with me by
+
+456
+00:18:58,600 --> 00:19:02,440
+Chris Manning from Stanford um it's
+
+457
+00:19:00,799 --> 00:19:04,720
+called How to Build an Economic Model
+
+458
+00:19:02,440 --> 00:19:06,679
+in Your Spare Time and it's from
+
+459
+00:19:04,720 --> 00:19:08,880
+a Nobel Prize winner in economics but
+
+460
+00:19:06,679 --> 00:19:10,480
+he's talking about how when he tries to
+
+461
+00:19:08,880 --> 00:19:13,039
+come up with new and like important
+
+462
+00:19:10,480 --> 00:19:15,840
+ideas he doesn't look at economics
+
+463
+00:19:13,039 --> 00:19:19,679
+journals he looks at the newspaper and
+
+464
+00:19:15,840 --> 00:19:21,919
+tries to uh you know
+
+465
+00:19:19,679 --> 00:19:23,480
+like look at problems that people are
+
+466
+00:19:21,919 --> 00:19:24,840
+talking about in the newspaper and think
+
+467
+00:19:23,480 --> 00:19:27,159
+about whether there's an economic
+
+468
+00:19:24,840 --> 00:19:29,919
+solution to them and so if we think
+
+469
+00:19:27,159 --> 00:19:32,880
+about the analog of how we can do this in
+
+470
+00:19:29,919 --> 00:19:35,600
+natural language processing you know
+
+471
+00:19:32,880 --> 00:19:37,360
+maybe you don't necessarily right away
+
+472
+00:19:35,600 --> 00:19:38,799
+want to do a really extensive survey
+
+473
+00:19:37,360 --> 00:19:41,080
+first you might just think about like
+
+474
+00:19:38,799 --> 00:19:44,080
+what's bothering you like when you're
+
+475
+00:19:41,080 --> 00:19:46,799
+using ChatGPT what is really
+
+476
+00:19:44,080 --> 00:19:49,600
+frustrating to you uh about how it gives
+
+477
+00:19:46,799 --> 00:19:51,280
+responses or um what are the things you
+
+478
+00:19:49,600 --> 00:19:53,159
+wish it were possible to do through
+
+479
+00:19:51,280 --> 00:19:56,240
+natural language processing but are
+
+480
+00:19:53,159 --> 00:19:57,640
+not possible to do and um then you can
+
+481
+00:19:56,240 --> 00:20:00,679
+start from there you can look at you
+
+482
+00:19:57,640 --> 00:20:03,440
+know what companies are doing in their
+
+483
+00:20:00,679 --> 00:20:05,799
+tech demos uh because the tech demos
+
+484
+00:20:03,440 --> 00:20:08,640
+might be nice but they almost never work
+
+485
+00:20:05,799 --> 00:20:11,240
+as well as the tech demo makes them seem
+
+486
+00:20:08,640 --> 00:20:13,840
+like they work so that could be another
+
+487
+00:20:11,240 --> 00:20:15,720
+place to get ideas um or you can look at
+
+488
+00:20:13,840 --> 00:20:17,039
+papers in a related field like machine
+
+489
+00:20:15,720 --> 00:20:18,760
+learning like let's say you're a machine
+
+490
+00:20:17,039 --> 00:20:21,280
+learning oriented person and you really
+
+491
+00:20:18,760 --> 00:20:23,000
+love like math and stuff like that it's
+
+492
+00:20:21,280 --> 00:20:25,799
+like well there's this good mathematical
+
+493
+00:20:23,000 --> 00:20:27,760
+tool that I think could be applicable to
+
+494
+00:20:25,799 --> 00:20:30,440
+um a certain problem in NLP or something
+
+495
+00:20:27,760 --> 00:20:31,960
+like that so you could do that too um
+
+496
+00:20:30,440 --> 00:20:33,960
+the final one you know comes with
+
+497
+00:20:31,960 --> 00:20:35,799
+all the caveats of doing top-down research
+
+498
+00:20:33,960 --> 00:20:37,320
+of course so you know you need to make
+
+499
+00:20:35,799 --> 00:20:39,799
+sure that that really is the correct
+
+500
+00:20:37,320 --> 00:20:42,159
+tool for whatever you want to solve but
+
+501
+00:20:39,799 --> 00:20:45,280
+um definitely this is something to think
+
+502
+00:20:42,159 --> 00:20:48,240
+about um however for assignment three
+
+503
+00:20:45,280 --> 00:20:49,559
+you need to do a survey so I'm
+
+504
+00:20:48,240 --> 00:20:50,720
+forcing you to do a survey for
+
+505
+00:20:49,559 --> 00:20:52,200
+assignment three so if you're going to
+
+506
+00:20:50,720 --> 00:20:53,640
+do something like this you can do it
+
+507
+00:20:52,200 --> 00:20:56,600
+before assignment 3 and start thinking
+
+508
+00:20:53,640 --> 00:21:00,000
+about what you want to be doing so um
+
+509
+00:20:56,600 --> 00:21:01,520
+that's something
+
+510
+00:21:00,000 --> 00:21:03,200
+uh any questions or discussion about
+
+511
+00:21:01,520 --> 00:21:06,799
+that
+
+512
+00:21:03,200 --> 00:21:07,840
+part this is hard I'm happy to uh
+
+513
+00:21:06,799 --> 00:21:11,120
+happy to
+
+514
+00:21:07,840 --> 00:21:14,039
+discuss either now or in office hours or
+
+515
+00:21:11,120 --> 00:21:14,039
+anything like this
+
+516
+00:21:14,200 --> 00:21:19,720
+but okay
+
+517
+00:21:17,080 --> 00:21:24,279
+cool so the next thing is forming a
+
+518
+00:21:19,720 --> 00:21:25,640
+hypothesis so uh once you
+
+519
+00:21:24,279 --> 00:21:28,600
+have a general idea of what you want to
+
+520
+00:21:25,640 --> 00:21:31,240
+do um and you have surveyed related
+
+521
+00:21:28,600 --> 00:21:32,480
+work you can devise a final research
+
+522
+00:21:31,240 --> 00:21:34,159
+question or
+
+523
+00:21:32,480 --> 00:21:37,760
+hypothesis
+
+524
+00:21:34,159 --> 00:21:40,039
+and so a research question is one or
+
+525
+00:21:37,760 --> 00:21:43,400
+several explicit questions regarding the
+
+526
+00:21:40,039 --> 00:21:45,919
+thing that you want to know um
+
+527
+00:21:43,400 --> 00:21:47,400
+and this is actually pretty hard for
+
+528
+00:21:45,919 --> 00:21:49,080
+people like I ask people to write
+
+529
+00:21:47,400 --> 00:21:50,880
+research questions and very often they
+
+530
+00:21:49,080 --> 00:21:53,080
+don't write research questions in this
+
+531
+00:21:50,880 --> 00:21:57,720
+format and I have to ask people to try
+
+532
+00:21:53,080 --> 00:21:57,720
+to change them and what I
+
+533
+00:21:57,720 --> 00:22:03,159
+think they in general should be are yes-
+
+534
+00:21:59,919 --> 00:22:08,120
+no questions so
+
+535
+00:22:03,159 --> 00:22:10,400
+um yes-no questions, and you have a
+
+536
+00:22:08,120 --> 00:22:13,120
+hypothesis uh about what you think the
+
+537
+00:22:10,400 --> 00:22:14,600
+answer to the question may be a priori
+
+538
+00:22:13,120 --> 00:22:17,520
+and that hypothesis should be
+
+539
+00:22:14,600 --> 00:22:19,919
+falsifiable so basically it's if you get
+
+540
+00:22:17,520 --> 00:22:21,240
+a certain result you can demonstrate
+
+541
+00:22:19,919 --> 00:22:23,120
+that the answer to this question is
+
+542
+00:22:21,240 --> 00:22:24,679
+probably yes if you get a different
+
+543
+00:22:23,120 --> 00:22:27,520
+result you can demonstrate that the
+
+544
+00:22:24,679 --> 00:22:29,640
+answer to the question is probably no
+
+545
+00:22:27,520 --> 00:22:32,400
+and just to make this a little bit more
+
+546
+00:22:29,640 --> 00:22:34,360
+concrete I can give a few curiosity-
+
+547
+00:22:32,400 --> 00:22:36,880
+driven questions and
+
+548
+00:22:34,360 --> 00:22:40,720
+hypotheses so the curiosity-driven
+
+549
+00:22:36,880 --> 00:22:43,480
+questions are a little bit easier so um
+
+550
+00:22:40,720 --> 00:22:45,600
+we have the curiosity-driven question of
+
+551
+00:22:43,480 --> 00:22:49,679
+are all language models... are all
+
+552
+00:22:45,600 --> 00:22:53,559
+languages equally hard to language model
+
+553
+00:22:49,679 --> 00:22:55,400
+and they say uh it is unlikely that all
+
+554
+00:22:53,559 --> 00:22:56,760
+languages are equally easy or that
+
+555
+00:22:55,400 --> 00:22:58,799
+methods are equally good at all
+
+556
+00:22:56,760 --> 00:23:01,159
+languages um so that's their
+
+557
+00:22:58,799 --> 00:23:04,120
+hypothesis so they think a priori that
+
+558
+00:23:01,159 --> 00:23:05,919
+that's the case um but that might be
+
+559
+00:23:04,120 --> 00:23:08,400
+falsified by getting a very strong
+
+560
+00:23:05,919 --> 00:23:10,679
+result that says like no matter which
+
+561
+00:23:08,400 --> 00:23:13,760
+language you're modeling many models
+
+562
+00:23:10,679 --> 00:23:18,120
+that we use get similar results
+
+563
+00:23:13,760 --> 00:23:20,400
+on um what makes a particular podcast
+
+564
+00:23:18,120 --> 00:23:21,320
+broadly engaging so this was an analysis
+
+565
+00:23:20,400 --> 00:23:24,400
+of
+
+566
+00:23:21,320 --> 00:23:27,960
+podcasts uh where they compared popular
+
+567
+00:23:24,400 --> 00:23:29,720
+podcasts and unpopular podcasts or
+
+568
+00:23:27,960 --> 00:23:32,400
+engaging and unengaging
+
+569
+00:23:29,720 --> 00:23:34,400
+podcasts and it says uh tips such as
+
+570
+00:23:32,400 --> 00:23:37,039
+reducing filler words and disfluencies
+
+571
+00:23:34,400 --> 00:23:38,840
+or incorporating emotion are things that
+
+572
+00:23:37,039 --> 00:23:41,400
+people had anecdotally written on the
+
+573
+00:23:38,840 --> 00:23:43,039
+internet as tips to make a good podcast
+
+574
+00:23:41,400 --> 00:23:45,760
+but nobody had actually empirically
+
+575
+00:23:43,039 --> 00:23:48,440
+validated that so they wanted to
+
+576
+00:23:45,760 --> 00:23:50,000
+like actually go validate that so they
+
+577
+00:23:48,440 --> 00:23:51,679
+came up with hypotheses and they could
+
+578
+00:23:50,000 --> 00:23:55,720
+demonstrate that those had good or bad
+
+579
+00:23:51,679 --> 00:23:55,720
+correlation with a podcast being judged as
+
+580
+00:23:56,880 --> 00:24:03,600
+engaging application-driven questions
+
+581
+00:23:59,039 --> 00:24:03,600
+and hypotheses are a little bit harder
+
+582
+00:24:04,520 --> 00:24:10,480
+so here is an
+
+583
+00:24:07,640 --> 00:24:13,039
+example this is an example from a paper
+
+584
+00:24:10,480 --> 00:24:18,720
+that I wrote previously which
+
+585
+00:24:13,039 --> 00:24:22,080
+was when and why, or how and why, do
+
+586
+00:24:18,720 --> 00:24:22,960
+pre-trained word embeddings help neural
+
+587
+00:24:22,080 --> 00:24:25,080
+machine
+
+588
+00:24:22,960 --> 00:24:26,760
+translation and this was back when
+
+589
+00:24:25,080 --> 00:24:28,279
+pre-training was mostly like word
+
+590
+00:24:26,760 --> 00:24:31,880
+embeddings we weren't pre-training the whole
+
+591
+00:24:28,279 --> 00:24:34,480
+body of the neural net so
+
+592
+00:24:31,880 --> 00:24:36,640
+now the answers to this question are a
+
+593
+00:24:34,480 --> 00:24:37,919
+little bit different but basically the
+
+594
+00:24:36,640 --> 00:24:40,080
+questions that we asked is, is the
+
+595
+00:24:37,919 --> 00:24:42,360
+behavior of pre-training affected by
+
+596
+00:24:40,080 --> 00:24:45,960
+language families and other linguistic
+
+597
+00:24:42,360 --> 00:24:49,520
+features of source and target languages
+
+598
+00:24:45,960 --> 00:24:51,360
+so uh we expected that the answer to
+
+599
+00:24:49,520 --> 00:24:53,640
+this would be yes it would vary across
+
+600
+00:24:51,360 --> 00:24:54,960
+them do pre-trained embeddings help more
+
+601
+00:24:53,640 --> 00:24:57,760
+when the size of the training data is
+
+602
+00:24:54,960 --> 00:24:59,039
+small we expected that this would be yes
+
+603
+00:24:57,760 --> 00:25:00,640
+how much does the similarity of the
+
+604
+00:24:59,039 --> 00:25:03,720
+source and target languages affect the
+
+605
+00:25:00,640 --> 00:25:06,200
+efficacy of using pre-trained embeddings uh
+
+606
+00:25:03,720 --> 00:25:08,399
+we didn't have a hypothesis about
+
+607
+00:25:06,200 --> 00:25:10,600
+whether it would or not and is it
+
+608
+00:25:08,399 --> 00:25:12,320
+helpful to align the embedding spaces
+
+609
+00:25:10,600 --> 00:25:14,520
+between the source and target languages
+
+610
+00:25:12,320 --> 00:25:16,039
+we assumed this would be yes and do
+
+611
+00:25:14,520 --> 00:25:17,640
+pre-trained embeddings help more in
+
+612
+00:25:16,039 --> 00:25:19,360
+multilingual systems as compared to
+
+613
+00:25:17,640 --> 00:25:22,679
+bilingual systems and we didn't have a
+
+614
+00:25:19,360 --> 00:25:26,279
+good hypothesis about that
+
+615
+00:25:22,679 --> 00:25:29,559
+um another one is although recent stud- uh
+
+616
+00:25:26,279 --> 00:25:32,760
+sorry the question of whether and how
+
+617
+00:25:29,559 --> 00:25:35,039
+contextual information benefits end-to-end
+
+618
+00:25:32,760 --> 00:25:38,960
+speech translation has received little
+
+619
+00:25:35,039 --> 00:25:42,480
+attention and so their guess was that it
+
+620
+00:25:38,960 --> 00:25:44,880
+probably would help so application-
+
+621
+00:25:42,480 --> 00:25:47,120
+oriented questions are a little bit
+
+622
+00:25:44,880 --> 00:25:49,200
+tricky because the obvious one is like
+
+623
+00:25:47,120 --> 00:25:52,200
+does X make Y
+
+624
+00:25:49,200 --> 00:25:54,080
+better and so you have a method you
+
+625
+00:25:52,200 --> 00:25:55,559
+think it's going to make the output
+
+626
+00:25:54,080 --> 00:25:58,120
+better and so that's kind of your
+
+627
+00:25:55,559 --> 00:26:00,000
+obvious research question but the
+
+628
+00:25:58,120 --> 00:26:02,080
+problem is the above question or
+
+629
+00:26:00,000 --> 00:26:04,279
+hypothesis is natural but it's very
+
+630
+00:26:02,080 --> 00:26:06,679
+indirect so normally you also have a
+
+631
+00:26:04,279 --> 00:26:09,760
+hypothesis about like why it will help
+
+632
+00:26:06,679 --> 00:26:13,279
+or something like this and so if the
+
+633
+00:26:09,760 --> 00:26:15,440
+answer is no after your experiments why
+
+634
+00:26:13,279 --> 00:26:18,080
+is the answer
+
+635
+00:26:15,440 --> 00:26:20,640
+no it could be that your original
+
+636
+00:26:18,080 --> 00:26:23,720
+assumption about why a particular method
+
+637
+00:26:20,640 --> 00:26:25,039
+would help was wrong which is the worst
+
+638
+00:26:23,720 --> 00:26:28,360
+case scenario but you also could just
+
+639
+00:26:25,039 --> 00:26:30,559
+have a bug in your code or uh your
+
+640
+00:26:28,360 --> 00:26:32,000
+data set your test set might not be
+
+641
+00:26:30,559 --> 00:26:34,279
+large enough so you wouldn't be able to
+
+642
+00:26:32,000 --> 00:26:35,840
+get a statistically significant result
+
+643
+00:26:34,279 --> 00:26:40,039
+based on the amount that it helped you
+
+644
+00:26:35,840 --> 00:26:42,960
+improve or other things like that so
+
+645
+00:26:40,039 --> 00:26:44,960
+what I like to do in this case is try to
+
+646
+00:26:42,960 --> 00:26:48,399
+come up with the intuition about why X
+
+647
+00:26:44,960 --> 00:26:50,360
+will make Y better and can you think of
+
+648
+00:26:48,399 --> 00:26:52,080
+other research questions or hypotheses
+
+649
+00:26:50,360 --> 00:26:54,240
+that confirm or falsify these
+
+650
+00:26:52,080 --> 00:26:56,640
+assumptions
+
+651
+00:26:54,240 --> 00:26:59,559
+so uh some things that you can do are
+
+652
+00:26:56,640 --> 00:27:01,240
+come up with like toy data or come up
+
+653
+00:26:59,559 --> 00:27:03,840
+with a subset of the data where you
+
+654
+00:27:01,240 --> 00:27:06,600
+think this might be correct so just to
+
+655
+00:27:03,840 --> 00:27:09,279
+give an example let's say we have a
+
+656
+00:27:06,600 --> 00:27:12,159
+translation model and we have a
+
+657
+00:27:09,279 --> 00:27:14,279
+hypothesis that improving entity
+
+658
+00:27:12,159 --> 00:27:16,520
+translation in low-resource languages
+
+659
+00:27:14,279 --> 00:27:18,799
+will improve translation accuracy and we
+
+660
+00:27:16,520 --> 00:27:21,399
+run an experiment or actually maybe this
+
+661
+00:27:18,799 --> 00:27:23,760
+is an even better one we have a
+
+662
+00:27:21,399 --> 00:27:26,240
+hypothesis that incorporating contextual
+
+663
+00:27:23,760 --> 00:27:28,799
+information in speech translation will
+
+664
+00:27:26,240 --> 00:27:31,760
+help translation results
+
+665
+00:27:28,799 --> 00:27:36,480
+so incorporating context in machine
+
+666
+00:27:31,760 --> 00:27:37,600
+translation has been a very old topic
+
+667
+00:27:36,480 --> 00:27:41,279
+like people have been trying to do this
+
+668
+00:27:37,600 --> 00:27:43,559
+for a very long time but for a long time
+
+669
+00:27:41,279 --> 00:27:45,200
+the conclusion was that it essentially
+
+670
+00:27:43,559 --> 00:27:46,519
+wasn't helping translation people would
+
+671
+00:27:45,200 --> 00:27:48,039
+incorporate context through neural
+
+672
+00:27:46,519 --> 00:27:50,960
+networks or other things like that and
+
+673
+00:27:48,039 --> 00:27:53,320
+it just wasn't improving the results
+
+674
+00:27:50,960 --> 00:27:55,320
+significantly and in the end the reason
+
+675
+00:27:53,320 --> 00:27:57,960
+why was because there just weren't
+
+676
+00:27:55,320 --> 00:27:59,799
+enough examples where contextual
+
+677
+00:27:57,960 --> 00:28:02,200
+information was useful in the data sets
+
+678
+00:27:59,799 --> 00:28:06,360
+that everybody was using so people were
+
+679
+00:28:02,200 --> 00:28:09,080
+using really long news sentences to try
+
+680
+00:28:06,360 --> 00:28:10,880
+to figure out where uh whether context
+
+681
+00:28:09,080 --> 00:28:12,440
+was helping but really long news
+
+682
+00:28:10,880 --> 00:28:14,000
+sentences have so much information
+
+683
+00:28:12,440 --> 00:28:16,080
+included in them that you can mostly
+
+684
+00:28:14,000 --> 00:28:20,120
+translate sentence by sentence and get
+
+685
+00:28:16,080 --> 00:28:21,880
+it right like 95% of the time so the
+
+686
+00:28:20,120 --> 00:28:23,600
+problem wasn't that any of the methods
+
+687
+00:28:21,880 --> 00:28:26,799
+that people were proposing were bad it
+
+688
+00:28:23,600 --> 00:28:29,559
+was just that they weren't effective
+
+689
+00:28:26,799 --> 00:28:31,440
+enough to see big enough uh results and
+
+690
+00:28:29,559 --> 00:28:33,159
+so then people changed the data set to
+
+691
+00:28:31,440 --> 00:28:34,720
+like conversations or something like
+
+692
+00:28:33,159 --> 00:28:37,399
+that and in conversations they're very
+
+693
+00:28:34,720 --> 00:28:39,159
+contextual, there are very short utterances
+
+694
+00:28:37,399 --> 00:28:41,440
+and once you started doing things like
+
+695
+00:28:39,159 --> 00:28:45,840
+that then the same methods like exactly
+
+696
+00:28:41,440 --> 00:28:48,640
+the same methods were um were helping
+
+697
+00:28:45,840 --> 00:28:51,120
+when they weren't helping before and
+
+698
+00:28:48,640 --> 00:28:52,720
+so the underlying assumption about
+
+699
+00:28:51,120 --> 00:28:56,240
+incorporating context information is
+
+700
+00:28:52,720 --> 00:28:58,159
+that context will be helpful and or
+
+701
+00:28:56,240 --> 00:29:01,760
+context is necessary
+
+702
+00:28:58,159 --> 00:29:03,880
+to you know do translation well so does
+
+703
+00:29:01,760 --> 00:29:06,880
+anyone have an idea about how you could
+
+704
+00:29:03,880 --> 00:29:06,880
+like actually verify that
+
+705
+00:29:10,880 --> 00:29:16,519
+assumption any idea yeah simplest way
+
+706
+00:29:14,000 --> 00:29:19,120
+would be just give an eval data set and
+
+707
+00:29:16,519 --> 00:29:21,000
+then have a measure of okay if it improves
+
+708
+00:29:19,120 --> 00:29:23,679
+more than
+
+709
+00:29:21,000 --> 00:29:25,519
+x% um and how would that verify the
+
+710
+00:29:23,679 --> 00:29:28,480
+assumption that context is
+
+711
+00:29:25,519 --> 00:29:30,720
+necessary so we're asking a question
+
+712
+00:29:28,480 --> 00:29:33,480
+whether context is helpful in the project
+
+713
+00:29:30,720 --> 00:29:36,000
+you're doing that uh we're asking
+
+714
+00:29:33,480 --> 00:29:39,240
+whether
+
+715
+00:29:36,000 --> 00:29:40,840
+so we're asking kind of a two-part question the
+
+716
+00:29:39,240 --> 00:29:44,080
+main question is whether context is
+
+717
+00:29:40,840 --> 00:29:45,559
+helpful given a particular you know
+
+718
+00:29:44,080 --> 00:29:47,240
+experimental setup right so like
+
+719
+00:29:45,559 --> 00:29:50,440
+training data
+
+720
+00:29:47,240 --> 00:29:52,039
+set modeling method and training
+
+721
+00:29:50,440 --> 00:29:54,679
+algorithm and evaluation algorithm
+
+722
+00:29:52,039 --> 00:29:56,480
+that's kind of the big final result that
+
+723
+00:29:54,679 --> 00:29:58,840
+you want to get in your paper but
+
+724
+00:29:56,480 --> 00:30:01,399
+there's kind of a sub-question which is
+
+725
+00:29:58,840 --> 00:30:04,360
+is context even necessary to translate
+
+726
+00:30:01,399 --> 00:30:06,559
+well you train a model with context and
+
+727
+00:30:04,360 --> 00:30:08,200
+one without context you train a model
+
+728
+00:30:06,559 --> 00:30:10,679
+with context and one without context but
+
+729
+00:30:08,200 --> 00:30:14,080
+what if your model of context is really
+
+730
+00:30:10,679 --> 00:30:15,399
+bad yeah the same model you have the same
+
+731
+00:30:14,080 --> 00:30:16,840
+model architecture but let's say your
+
+732
+00:30:15,399 --> 00:30:18,559
+model architecture is really bad at
+
+733
+00:30:16,840 --> 00:30:19,919
+capturing context so then maybe it's a
+
+734
+00:30:18,559 --> 00:30:22,399
+problem of your model architecture and
+
+735
+00:30:19,919 --> 00:30:24,720
+context is necessary or helpful but your
+
+736
+00:30:22,399 --> 00:30:27,399
+model just isn't very good at capturing
+
+737
+00:30:24,720 --> 00:30:29,720
+it yeah exactly so this is one thing
+
+738
+00:30:27,399 --> 00:30:31,960
+that people can do so there was an
+
+739
+00:30:29,720 --> 00:30:34,240
+interesting paper um let me see if I can
+
+740
+00:30:31,960 --> 00:30:34,240
+find
+
+741
+00:30:39,960 --> 00:30:49,080
+it so this is a paper from a long time
+
+742
+00:30:45,760 --> 00:30:51,600
+ago where they did something like
+
+743
+00:30:49,080 --> 00:30:53,360
+this um it's evaluating machine
+
+744
+00:30:51,600 --> 00:30:54,480
+translation systems with second language
+
+745
+00:30:53,360 --> 00:30:57,399
+proficiency
+
+746
+00:30:54,480 --> 00:31:01,240
+tests and basically what they did is
+
+747
+00:30:57,399 --> 00:31:03,519
+they had these English proficiency tests
+
+748
+00:31:01,240 --> 00:31:05,320
+for uh I think it was like middle
+
+749
+00:31:03,519 --> 00:31:07,480
+schoolers or high schoolers or something
+
+750
+00:31:05,320 --> 00:31:09,600
+like this and then they used machine
+
+751
+00:31:07,480 --> 00:31:11,240
+translation systems to translate them
+
+752
+00:31:09,600 --> 00:31:13,600
+into Japanese and then they asked
+
+753
+00:31:11,240 --> 00:31:19,720
+Japanese students to solve them in
+
+754
+00:31:13,600 --> 00:31:19,720
+Japanese and so what they did is they
+
+755
+00:31:20,000 --> 00:31:26,159
+asked uh anonymous system G and
+
+756
+00:31:23,679 --> 00:31:28,200
+anonymous system Y which are Google and
+
+757
+00:31:26,159 --> 00:31:32,360
+Yahoo
+
+758
+00:31:28,200 --> 00:31:34,720
+and uh and a human without context and a
+
+759
+00:31:32,360 --> 00:31:36,279
+human with context to translate them so
+
+760
+00:31:34,720 --> 00:31:38,720
+they asked humans to translate each
+
+761
+00:31:36,279 --> 00:31:40,880
+sentence without giving any context and
+
+762
+00:31:38,720 --> 00:31:44,320
+they asked humans to translate each uh
+
+763
+00:31:40,880 --> 00:31:46,399
+sentence while giving context and what
+
+764
+00:31:44,320 --> 00:31:48,960
+they were able to find was in this case
+
+765
+00:31:46,399 --> 00:31:50,080
+humans with context the Japanese
+
+766
+00:31:48,960 --> 00:31:53,080
+students were able to answer the
+
+767
+00:31:50,080 --> 00:31:55,360
+questions most of the time um whereas if
+
+768
+00:31:53,080 --> 00:31:57,559
+they translated without context like G
+
+769
+00:31:55,360 --> 00:31:59,039
+and Y were doing at that time actually
+
+770
+00:31:57,559 --> 00:32:01,320
+Y was almost as good as human
+
+771
+00:31:59,039 --> 00:32:04,080
+translators at you know achieving
+
+772
+00:32:01,320 --> 00:32:05,440
+the task so but basically like the
+
+773
+00:32:04,080 --> 00:32:09,159
+important thing here is they were able
+
+774
+00:32:05,440 --> 00:32:11,039
+to confirm their you know idea that in
+
+775
+00:32:09,159 --> 00:32:12,519
+this case humans with context were much
+
+776
+00:32:11,039 --> 00:32:13,799
+better than humans without context so
+
+777
+00:32:12,519 --> 00:32:16,279
+that would verify your like sub-
+
+778
+00:32:13,799 --> 00:32:18,080
+assumption right and so this is just
+
+779
+00:32:16,279 --> 00:32:20,279
+like one
+
+780
+00:32:18,080 --> 00:32:22,240
+example this is just one example of
+
+781
+00:32:20,279 --> 00:32:25,960
+something that you can
+
+782
+00:32:22,240 --> 00:32:27,480
+do uh but the basic idea is like your
+
+783
+00:32:25,960 --> 00:32:29,320
+final result is that you want to build a
+
+784
+00:32:27,480 --> 00:32:30,799
+system that does better on some
+
+785
+00:32:29,320 --> 00:32:32,159
+benchmark that you care about there's a
+
+786
+00:32:30,799 --> 00:32:33,600
+bunch of things that go into whether it
+
+787
+00:32:32,159 --> 00:32:36,159
+does better or not your evaluation
+
+788
+00:32:33,600 --> 00:32:38,960
+system your model your training data
+
+789
+00:32:36,159 --> 00:32:41,559
+your training and your evaluation data set
+
+790
+00:32:38,960 --> 00:32:43,080
+um and things like that so can you break
+
+791
+00:32:41,559 --> 00:32:45,360
+that down into sub-questions that you
+
+792
+00:32:43,080 --> 00:32:48,039
+could ask where you could verify that
+
+793
+00:32:45,360 --> 00:32:49,720
+it's working or not uh based on whether
+
+794
+00:32:48,039 --> 00:32:51,600
+those things are happening another thing
+
+795
+00:32:49,720 --> 00:32:53,159
+people do in ML-oriented work is
+
+796
+00:32:51,600 --> 00:32:54,919
+create a toy data set where they know
+
+797
+00:32:53,159 --> 00:32:57,200
+the phenomenon they're interested in
+
+798
+00:32:54,919 --> 00:32:59,679
+exists and train their models on there
+
+799
+00:32:57,200 --> 00:33:02,919
+and make sure that they work there um so
+
+800
+00:32:59,679 --> 00:33:02,919
+that's another thing that you can take
+
+801
+00:33:03,120 --> 00:33:07,639
+there cool um any questions about
+
+802
+00:33:08,080 --> 00:33:12,760
+this okay
+
+803
+00:33:10,200 --> 00:33:16,519
+so the next thing is running
+
+804
+00:33:12,760 --> 00:33:19,000
+experiments um so in order to do this
+
+805
+00:33:16,519 --> 00:33:21,399
+you'll find data that will answer your
+
+806
+00:33:19,000 --> 00:33:23,639
+research question uh run experiments and
+
+807
+00:33:21,399 --> 00:33:25,720
+calculate numbers uh calculate
+
+808
+00:33:23,639 --> 00:33:28,279
+significant differences and analyze
+
+809
+00:33:25,720 --> 00:33:31,080
+effects whoops
+
+810
+00:33:28,279 --> 00:33:35,519
+and so this is a basic pipeline that we
+
+811
+00:33:31,080 --> 00:33:37,760
+want to follow so obtaining test data so
+
+812
+00:33:35,519 --> 00:33:41,200
+in order to obtain test data uh we would
+
+813
+00:33:37,760 --> 00:33:42,799
+like to find data sets um so if you're
+
+814
+00:33:41,200 --> 00:33:46,200
+building on previous work the safest
+
+815
+00:33:42,799 --> 00:33:48,960
+thing that you can do um is start with
+
+816
+00:33:46,200 --> 00:33:51,919
+the same data sets if you're answering a
+
+817
+00:33:48,960 --> 00:33:53,799
+new question um you can think about can
+
+818
+00:33:51,919 --> 00:33:55,399
+you repurpose other data sets to answer
+
+819
+00:33:53,799 --> 00:33:57,679 +the question so very often there will be + +820 +00:33:55,399 --> 00:34:00,080 +a data set that is uh appropriate for + +821 +00:33:57,679 --> 00:34:03,360 +answer answering your question um and + +822 +00:34:00,080 --> 00:34:05,760 +you can go and find that um actually our + +823 +00:34:03,360 --> 00:34:06,919 +our wonderful TJ has created a system + +824 +00:34:05,760 --> 00:34:08,800 +called datafinder that will + +825 +00:34:06,919 --> 00:34:11,159 +automatically find it for you so if you + +826 +00:34:08,800 --> 00:34:13,679 +want to uh search for data sets you can + +827 +00:34:11,159 --> 00:34:16,760 +use his system or ask him about it but + +828 +00:34:13,679 --> 00:34:20,359 +um uh but if no appropriate data set + +829 +00:34:16,760 --> 00:34:24,359 +exists you can uh create your own and + +830 +00:34:20,359 --> 00:34:25,879 +particularly for industry use cases it's + +831 +00:34:24,359 --> 00:34:28,119 +very common that you need to go in and + +832 +00:34:25,879 --> 00:34:30,040 +create your own or if you're planning on + +833 +00:34:28,119 --> 00:34:31,639 +doing research in Academia afterwards + +834 +00:34:30,040 --> 00:34:33,119 +very often you'll come up with a + +835 +00:34:31,639 --> 00:34:34,639 +research question where no data set + +836 +00:34:33,119 --> 00:34:36,679 +exists so you'll have to create your own + +837 +00:34:34,639 --> 00:34:38,960 +anyway so this is something that's + +838 +00:34:36,679 --> 00:34:41,639 +really important to be able to do well + +839 +00:34:38,960 --> 00:34:44,639 +uh in most + +840 +00:34:41,639 --> 00:34:49,240 +cases um so I'll be talking about how to + +841 +00:34:44,639 --> 00:34:53,280 +do all of these so data set lists um the + +842 +00:34:49,240 --> 00:34:55,159 +best one I think by far in uh natural + +843 +00:34:53,280 --> 00:34:58,359 +language processing nowadays is hugging + +844 +00:34:55,159 --> 00:35:02,960 +face data sets um there's also other + +845 +00:34:58,359 --> 00:35:05,359 +data resources like um elra is uh + +846 +00:35:02,960 --> 00:35:07,240 +another one kind of by the more + +847 +00:35:05,359 --> 00:35:09,800 +traditional natural language processing + +848 +00:35:07,240 --> 00:35:12,960 +Community there's also the LDC the + +849 +00:35:09,800 --> 00:35:15,680 +linguistic data uh Consortium and there + +850 +00:35:12,960 --> 00:35:17,119 +are some older heavily annotated data + +851 +00:35:15,680 --> 00:35:20,040 +sets that are only available through + +852 +00:35:17,119 --> 00:35:22,000 +those at CMU you have the ability to + +853 +00:35:20,040 --> 00:35:24,520 +download things from LDC so if you find + +854 +00:35:22,000 --> 00:35:26,960 +an LDC data set in any papers that + +855 +00:35:24,520 --> 00:35:29,640 +you're doing or online um you need + +856 +00:35:26,960 --> 00:35:31,000 +register for that and I I'm the person + +857 +00:35:29,640 --> 00:35:33,280 +who's in charge of it so I'll give you + +858 +00:35:31,000 --> 00:35:35,520 +access and then uh and then you can use + +859 +00:35:33,280 --> 00:35:37,400 +it um there's also things like papers + +860 +00:35:35,520 --> 00:35:39,680 +with code and papers with code basically + +861 +00:35:37,400 --> 00:35:41,359 +automatically extracts uh kind of like + +862 +00:35:39,680 --> 00:35:42,839 +the names of data sets so even some + +863 +00:35:41,359 --> 00:35:45,599 +things that don't appear on a hug and + +864 +00:35:42,839 --> 00:35:45,599 +place will appear + +865 +00:35:46,359 --> 00:35:52,440 +there so annotating data um 
when you + +866 +00:35:50,640 --> 00:35:54,599 +annotate data you first need to decide + +867 +00:35:52,440 --> 00:35:57,599 +how much to annotate sample appropriate + +868 +00:35:54,599 --> 00:36:00,240 +data create annotation guidelines + +869 +00:35:57,599 --> 00:36:03,160 +uh either annotate yourself or hire and + +870 +00:36:00,240 --> 00:36:05,839 +supervis annotators and evaluate + +871 +00:36:03,160 --> 00:36:07,720 +quality so a very common problem that a + +872 +00:36:05,839 --> 00:36:10,240 +lot of people ask me is how much test + +873 +00:36:07,720 --> 00:36:12,800 +data do you need + +874 +00:36:10,240 --> 00:36:14,800 +and I'm going to talk about uh + +875 +00:36:12,800 --> 00:36:17,520 +statistical significance tests in a + +876 +00:36:14,800 --> 00:36:19,520 +second but um basically you need to have + +877 +00:36:17,520 --> 00:36:23,240 +enough to have a statistically + +878 +00:36:19,520 --> 00:36:28,119 +significant difference um between + +879 +00:36:23,240 --> 00:36:32,079 +methods and the way you do this actually + +880 +00:36:28,119 --> 00:36:32,079 +sorry very quickly let me + +881 +00:36:33,240 --> 00:36:37,599 +check I rearrange my slides and I want + +882 +00:36:35,560 --> 00:36:40,359 +to make sure that I didn't accidentally + +883 +00:36:37,599 --> 00:36:42,280 +um I didn't accidentally remove the + +884 +00:36:40,359 --> 00:36:44,520 +slides on statistical significance which + +885 +00:36:42,280 --> 00:36:44,520 +would be + +886 +00:36:51,680 --> 00:36:57,880 +a okay + +887 +00:36:55,240 --> 00:36:59,200 +um sorry hang on one second I just + +888 +00:36:57,880 --> 00:37:02,240 +realized that I don't have the slides + +889 +00:36:59,200 --> 00:37:03,839 +for a statistical significance on this + +890 +00:37:02,240 --> 00:37:05,280 +presentation so let me grab them from + +891 +00:37:03,839 --> 00:37:09,440 +the + +892 +00:37:05,280 --> 00:37:09,440 +last uh the last + +893 +00:37:10,520 --> 00:37:14,640 +us this is is pretty + +894 +00:37:25,599 --> 00:37:28,599 +important + +895 +00:37:33,160 --> 00:37:38,599 +okay so yeah let me explain statistical + +896 +00:37:35,560 --> 00:37:40,319 +significance here um so basically when + +897 +00:37:38,599 --> 00:37:43,319 +we're doing statistical + +898 +00:37:40,319 --> 00:37:44,680 +testing um let's say we have two models + +899 +00:37:43,319 --> 00:37:47,800 +with similar + +900 +00:37:44,680 --> 00:37:50,160 +accuracies and these models with similar + +901 +00:37:47,800 --> 00:37:52,240 +accuracies let's say model one is a + +902 +00:37:50,160 --> 00:37:56,880 +generative model model two is a + +903 +00:37:52,240 --> 00:37:58,520 +discriminative model and we say uh data + +904 +00:37:56,880 --> 00:38:00,200 +set one we have this result on data set + +905 +00:37:58,520 --> 00:38:02,480 +two we have another result on data set + +906 +00:38:00,200 --> 00:38:04,720 +three we have uh another + +907 +00:38:02,480 --> 00:38:06,440 +result and so then the question is how + +908 +00:38:04,720 --> 00:38:09,480 +can we tell if the differences are due + +909 +00:38:06,440 --> 00:38:13,839 +to consistent trends that uh will hold + +910 +00:38:09,480 --> 00:38:16,119 +on other data sets or um if they are + +911 +00:38:13,839 --> 00:38:18,480 +kind of random noise due to the fact + +912 +00:38:16,119 --> 00:38:21,000 +that we have one + +913 +00:38:18,480 --> 00:38:24,200 +uh due to the fact that you know data + +914 +00:38:21,000 --> 00:38:25,640 +sets vary models vary um and so the way + +915 +00:38:24,200 --> 00:38:28,319 +we do 
this is through statistical
+
+916
+00:38:25,640 --> 00:38:31,839
+significance testing
+
+917
+00:38:28,319 --> 00:38:34,319
+um so I'm going to cover this briefly in
+
+918
+00:38:31,839 --> 00:38:36,920
+this class but you can see Dror et
+
+919
+00:38:34,319 --> 00:38:38,640
+al. for an overview and also we're going
+
+920
+00:38:36,920 --> 00:38:41,520
+to have a recitation on how to actually
+
+921
+00:38:38,640 --> 00:38:44,280
+run statistical significance tests so um
+
+922
+00:38:41,520 --> 00:38:47,920
+you can take a look at that
+
+923
+00:38:44,280 --> 00:38:51,680
+there and so the basic idea is given a
+
+924
+00:38:47,920 --> 00:38:54,280
+quantity we test um certain values of
+
+925
+00:38:51,680 --> 00:38:57,880
+uncertainty with respect to the quantity
+
+926
+00:38:54,280 --> 00:38:59,960
+so number one is a p-value and the p-
+
+927
+00:38:57,880 --> 00:39:02,240
+value is what is the probability that a
+
+928
+00:38:59,960 --> 00:39:06,119
+difference with another quantity is by
+
+929
+00:39:02,240 --> 00:39:08,359
+chance and so a lower uh p-value means
+
+930
+00:39:06,119 --> 00:39:11,839
+more likelihood of having a significant
+
+931
+00:39:08,359 --> 00:39:13,200
+difference usually the threshold for
+
+932
+00:39:11,839 --> 00:39:16,520
+saying that we have a significant
+
+933
+00:39:13,200 --> 00:39:20,280
+difference is there's a 5% chance,
+
+934
+00:39:16,520 --> 00:39:22,160
+0.05, that this difference between the
+
+935
+00:39:20,280 --> 00:39:25,760
+models was due to chance or like data
+
+936
+00:39:22,160 --> 00:39:28,520
+sampling or things like that uh so p
+
+937
+00:39:25,760 --> 00:39:30,880
+less than 0.05 is kind of a threshold
+
+938
+00:39:28,520 --> 00:39:30,880
+for
+
+939
+00:39:31,119 --> 00:39:35,680
+significance another thing that we can
+
+940
+00:39:33,040 --> 00:39:38,720
+measure is confidence intervals and the
+
+941
+00:39:35,680 --> 00:39:40,760
+confidence interval is um what is the
+
+942
+00:39:38,720 --> 00:39:42,560
+range under which we could expect
+
+943
+00:39:40,760 --> 00:39:44,760
+another trial to fall and I'll talk
+
+944
+00:39:42,560 --> 00:39:47,359
+about both of
+
+945
+00:39:44,760 --> 00:39:49,280
+these um there's another concept called
+
+946
+00:39:47,359 --> 00:39:53,880
+paired versus unpaired
+
+947
+00:39:49,280 --> 00:39:56,680
+tests and an unpaired test this
+
+948
+00:39:53,880 --> 00:39:59,480
+means um we compare the means of a
+
+949
+00:39:56,680 --> 00:40:02,359
+quantity on two unrelated
+
+950
+00:39:59,480 --> 00:40:04,040
+groups so an example could be the test
+
+951
+00:40:02,359 --> 00:40:07,040
+of the significance of a difference of
+
+952
+00:40:04,040 --> 00:40:09,160
+accuracies of a model on two data sets
+
+953
+00:40:07,040 --> 00:40:12,400
+so like let's say I have data set number
+
+954
+00:40:09,160 --> 00:40:16,440
+one and data set number two what is the
+
+955
+00:40:12,400 --> 00:40:18,000
+likelihood that the um there's actually
+
+956
+00:40:16,440 --> 00:40:20,839
+a real difference in the data sets as
+
+957
+00:40:18,000 --> 00:40:23,400
+opposed to just random uh random
+
+958
+00:40:20,839 --> 00:40:26,599
+sampling errors between
+
+959
+00:40:23,400 --> 00:40:28,560
+them in contrast a paired test compares the
+
+960
+00:40:26,599 --> 00:40:31,400
+means of a quantity on one data set
+
+961
+00:40:28,560 --> 00:40:32,480
+under two conditions and so an example
+
+962
+00:40:31,400 --> 00:40:33,760
+of this could be testing the
+
+963
+00:40:32,480 --> 00:40:37,319
+significance of a difference of
+
+964
+00:40:33,760 --> 00:40:39,640
+accuracies of two models on one data set
+
+965
+00:40:37,319 --> 00:40:42,000
+so this is a really important difference
+
+966
+00:40:39,640 --> 00:40:43,960
+and the reason why it's a really
+
+967
+00:40:42,000 --> 00:40:45,520
+important difference well number one
+
+968
+00:40:43,960 --> 00:40:49,119
+we're most commonly interested in the
+
+969
+00:40:45,520 --> 00:40:51,839
+latter number two if we can make
+
+970
+00:40:49,119 --> 00:40:54,280
+assumptions about
+
+971
+00:40:51,839 --> 00:40:56,079
+the association of the points in the
+
+972
+00:40:54,280 --> 00:40:58,680
+data set we're much much more likely to
+
+973
+00:40:56,079 --> 00:41:00,440
+get a significant result because we can
+
+974
+00:40:58,680 --> 00:41:02,240
+um we can look at the difference of the
+
+975
+00:41:00,440 --> 00:41:06,000
+models on individual data points as
+
+976
+00:41:02,240 --> 00:41:10,400
+opposed to um uh as opposed to looking
+
+977
+00:41:06,000 --> 00:41:10,400
+at just the difference in the
+
+978
+00:41:10,520 --> 00:41:16,839
+means so one example of a statistical
+
+979
+00:41:13,760 --> 00:41:18,280
+significance test is a bootstrap test
+
+980
+00:41:16,839 --> 00:41:19,760
+and the bootstrap test is really
+
+981
+00:41:18,280 --> 00:41:21,680
+convenient because you can implement it
+
+982
+00:41:19,760 --> 00:41:25,160
+for any evaluation metric that you want
+
+983
+00:41:21,680 --> 00:41:26,880
+to be using and so in NLP we can use
+
+984
+00:41:25,160 --> 00:41:29,560
+lots of different evaluation metrics we
+
+985
+00:41:26,880 --> 00:41:31,119
+can use an evaluation metric like um
+
+986
+00:41:29,560 --> 00:41:34,160
+accuracy but we can also use an
+
+987
+00:41:31,119 --> 00:41:37,400
+evaluation metric like F-measure for
+
+988
+00:41:34,160 --> 00:41:40,560
+classification or a BLEU score or
+
+989
+00:41:37,400 --> 00:41:43,599
+character F-score or word error rate or
+
+990
+00:41:40,560 --> 00:41:48,440
+something like that for um for various
+
+991
+00:41:43,599 --> 00:41:50,720
+tasks and this is applicable to any
+
+992
+00:41:48,440 --> 00:41:54,000
+metric you want to use uh any quantity
+
+993
+00:41:50,720 --> 00:41:57,319
+you want to measure also so the basic
+
+994
+00:41:54,000 --> 00:41:59,079
+idea of a bootstrap test is a method
+
+995
+00:41:57,319 --> 00:42:02,520
+that can measure p-values and confidence
+
+996
+00:41:59,079 --> 00:42:06,040
+intervals by resampling data and so the
+
+997
+00:42:02,520 --> 00:42:08,480
+way you do this is you sample subsets
+
+998
+00:42:06,040 --> 00:42:11,960
+from your dev/test set with
+
+999
+00:42:08,480 --> 00:42:14,720
+replacement so you might sample 10,000
+
+1000
+00:42:11,960 --> 00:42:19,599
+times and you measure accuracy on these
+
+1001
+00:42:14,720 --> 00:42:22,520
+many subsets and then you take
+
+1002
+00:42:19,599 --> 00:42:25,640
+you look at all of the accuracies
+
+1003
+00:42:22,520 --> 00:42:27,680
+that you got on these subsampled data
+
+1004
+00:42:25,640 --> 00:42:31,079
+sets and then you take the middle
+
+1005
+00:42:27,680 --> 00:42:32,640
+percentile range like 2.5 to 97.5 and
+
+1006
+00:42:31,079 --> 00:42:34,960
+you can treat that as a confidence
+
+1007
+00:42:32,640 --> 00:42:37,640
+interval the 95% confidence interval
+
+1008
+00:42:34,960 --> 00:42:40,720
+about where you're like 95% certain that
+
+1009
+00:42:37,640 --> 00:42:40,720
+your results will fall in here
+
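+[A minimal Python sketch of the percentile-bootstrap confidence interval just
+described, assuming a per-example 1/0 correctness array; the 10,000 resamples
+and the 2.5-97.5 percentile range follow the lecture, while the toy numbers
+and function name are illustrative assumptions.]
+
+import numpy as np
+
+def bootstrap_ci(correct, n_boot=10000, alpha=0.05, seed=0):
+    rng = np.random.default_rng(seed)
+    correct = np.asarray(correct)
+    n = len(correct)
+    accs = []
+    for _ in range(n_boot):
+        # Resample the dev/test set with replacement and re-measure accuracy.
+        idx = rng.integers(0, n, size=n)
+        accs.append(correct[idx].mean())
+    # The middle 95%: treat the 2.5th-97.5th percentiles as the interval.
+    return tuple(np.percentile(accs, [100 * alpha / 2, 100 * (1 - alpha / 2)]))
+
+# Toy usage: 80% accuracy on a 50-example test set.
+print(bootstrap_ci([1] * 40 + [0] * 10))
+
+[Any per-example metric works the same way: swap the accuracy computation for
+BLEU, F-measure, etc. computed on each subsample.]
+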
+1010
+00:42:40,880 --> 00:42:48,240
+another thing that you can do is
+
+1011
+00:42:45,119 --> 00:42:50,040
+you can do a paired test and what the
+
+1012
+00:42:48,240 --> 00:42:51,200
+paired test does is it measures the
+
+1013
+00:42:50,040 --> 00:42:53,359
+number of
+
+1014
+00:42:51,200 --> 00:42:55,839
+wins um
+
+1015
+00:42:53,359 --> 00:42:57,720
+and you measure the percentage of
+
+1016
+00:42:55,839 --> 00:43:00,920
+wins and this is the confidence that a
+
+1017
+00:42:57,720 --> 00:43:03,280
+gain in accuracy is not by chance um and
+
+1018
+00:43:00,920 --> 00:43:05,920
+so this could be one minus the p-value
+
+1019
+00:43:03,280 --> 00:43:07,960
+of the paired test so this is easy to
+
+1020
+00:43:05,920 --> 00:43:09,960
+implement applicable to any evaluation
+
+1021
+00:43:07,960 --> 00:43:13,480
+measure but somewhat biased on small
+
+1022
+00:43:09,960 --> 00:43:17,240
+data sets um just to maybe I can give a
+
+1023
+00:43:13,480 --> 00:43:19,920
+more concrete example so let's say we
+
+1024
+00:43:17,240 --> 00:43:27,520
+have a classification data set what you
+
+1025
+00:43:19,920 --> 00:43:30,400
+can do is um let's say we have A B C D E
+
+1026
+00:43:27,520 --> 00:43:36,960
+or
+
+1027
+00:43:30,400 --> 00:43:39,559
+um X1 X2 X3 X4
+
+1028
+00:43:36,960 --> 00:43:44,520
+X5 so this is our classification
+
+1029
+00:43:39,559 --> 00:43:47,440
+data set and um we have system
+
+1030
+00:43:44,520 --> 00:43:52,000
+one system
+
+1031
+00:43:47,440 --> 00:43:53,760
+two and we have right right wrong right
+
+1032
+00:43:52,000 --> 00:43:56,599
+wrong
+
+1033
+00:43:53,760 --> 00:44:00,440
+and right uh right right wrong
+
+1034
+00:43:56,599 --> 00:44:03,040
+right or something like this and so
+
+1035
+00:44:00,440 --> 00:44:07,079
+what we do is we randomly sample a sub
+
+1036
+00:44:03,040 --> 00:44:08,760
+data set um and let's say this is like
+
+1037
+00:44:07,079 --> 00:44:10,440
+X3
+
+1038
+00:44:08,760 --> 00:44:13,599
+X2
+
+1039
+00:44:10,440 --> 00:44:17,599
+X4 X1
+
+1040
+00:44:13,599 --> 00:44:20,440
+X2 and so this is our sub data set uh
+
+1041
+00:44:17,599 --> 00:44:20,440
+what we do
+
+1042
+00:44:20,640 --> 00:44:28,920
+is um so X3 would be
+
+1043
+00:44:23,520 --> 00:44:34,559
+0/1 X2 would be 1/1 X4 would be 1/0
+
+1044
+00:44:28,920 --> 00:44:39,079
+X1 would be 1/1 and
+
+1045
+00:44:34,559 --> 00:44:42,319
+then uh X2 would be 1/1 and so the
+
+1046
+00:44:39,079 --> 00:44:45,319
+overall accuracy here
+
+1047
+00:44:42,319 --> 00:44:45,319
+is
+
+1048
+00:44:45,480 --> 00:44:50,240
+60% and
+
+1049
+00:44:47,440 --> 00:44:51,880
+80% so if we didn't do any statistical
+
+1050
+00:44:50,240 --> 00:44:55,400
+significance test we might say oh system
+
+1051
+00:44:51,880 --> 00:44:57,680
+2 is better obviously um but if we do
+
+1052
+00:44:55,400 --> 00:45:01,079
+the significance test this is one sample
+
+1053
+00:44:57,680 --> 00:45:03,119
+from the bootstrap test in
+
+1054
+00:45:01,079 --> 00:45:07,040
+here
+
+1055
+00:45:03,119 --> 00:45:09,079
+now we get like 80% and 80% and it's
+
+1056
+00:45:07,040 --> 00:45:11,079
+like okay actually maybe in some cases
+
+1057
+00:45:09,079 --> 00:45:13,480
+these systems are equally good maybe
+
+1058
+00:45:11,079 --> 00:45:16,079
+there's a tie or if we sampled another
+
+1059
+00:45:13,480 --> 00:45:19,079
+one uh let's say we
+
+1060
+00:45:16,079 --> 00:45:19,079
+sampled
+
+1061
+00:45:19,359 --> 00:45:27,319
+uh
+
+1062
+00:45:20,960 --> 00:45:30,680
+X4 X1 X2 X4 X1
+
+1063
+00:45:27,319 --> 00:45:36,160
+um then we would get something like
+
+1064
+00:45:30,680 --> 00:45:37,559
+1/0 1/1 1/1 1/0 1/1 this
+
+1065
+00:45:36,160 --> 00:45:40,440
+would be
+
+1066
+00:45:37,559 --> 00:45:42,559
+100% and this would be
+
+1067
+00:45:40,440 --> 00:45:44,960
+60% and
+
+1068
+00:45:42,559 --> 00:45:47,000
+so in some cases depending on how we
+
+1069
+00:45:44,960 --> 00:45:48,440
+sample actually system one wins and so
+
+1070
+00:45:47,000 --> 00:45:51,440
+you count the number of times that
+
+1071
+00:45:48,440 --> 00:45:52,880
+system two wins based on um based on
+
+1072
+00:45:51,440 --> 00:45:54,280
+these subsamples you count the number
+
+1073
+00:45:52,880 --> 00:45:56,400
+of times that system one wins and you
+
+1074
+00:45:54,280 --> 00:45:59,000
+count the number of times you get a tie
+
+1075
+00:45:56,400 --> 00:46:00,920
+and only in the case where system two or
+
+1076
+00:45:59,000 --> 00:46:03,680
+like the better system wins more than
+
+1077
+00:46:00,920 --> 00:46:06,280
+95% of the time you say that there's a
+
+1078
+00:46:03,680 --> 00:46:08,599
+significant difference between these or
+
+1079
+00:46:06,280 --> 00:46:10,720
+alternatively you could also look at the
+
+1080
+00:46:08,599 --> 00:46:15,960
+confidence intervals by saying okay I
+
+1081
+00:46:10,720 --> 00:46:19,000
+sampled um like 95% of the time uh
+
+1082
+00:46:15,960 --> 00:46:20,920
+the accuracy of system one is uh like
+
+1083
+00:46:19,000 --> 00:46:23,640
+80% or lower and so that would give you
+
+1084
+00:46:20,920 --> 00:46:23,640
+the upper bound
+
+1085
+00:46:23,760 --> 00:46:29,599
+calculation so yeah sorry this is a very
+
+1086
+00:46:27,480 --> 00:46:31,760
+uh very quick overview of this but the
+
+1087
+00:46:29,599 --> 00:46:34,240
+reason why this is useful is let's say
+
+1088
+00:46:31,760 --> 00:46:36,160
+you create a very small data set if you
+
+1089
+00:46:34,240 --> 00:46:38,400
+create a very small data set this is
+
+1090
+00:46:36,160 --> 00:46:39,880
+going to give you a very, it's going to
+
+1091
+00:46:38,400 --> 00:46:41,319
+be very hard to get a statistically
+
+1092
+00:46:39,880 --> 00:46:44,319
+significant result on this data set
+
+1093
+00:46:41,319 --> 00:46:47,200
+because it's tiny right and you know
+
+1094
+00:46:44,319 --> 00:46:50,640
+quite frequently you're going to be
+
+1095
+00:46:47,200 --> 00:46:53,400
+sampling um you're going to be sampling
+
+1096
+00:46:50,640 --> 00:46:55,400
+data sets like this where the model like
+
+1097
+00:46:53,400 --> 00:46:56,640
+where model one wins quite frequently
+
+1098
+00:46:55,400 --> 00:46:58,520
+you're going to be sampling other data
+
+1099
+00:46:56,640 --> 00:47:00,359
+sets where model two wins and basically you're
+
+1100
+00:46:58,520 --> 00:47:02,920
+not going to be able to say with
+
+1101
+00:47:00,359 --> 00:47:04,480
+confidence which model is better because
+
+1102
+00:47:02,920 --> 00:47:06,359
+you just don't have enough data to say
+
+1103
+00:47:04,480 --> 00:47:07,880
+that but as you make your data set
+
+1104
+00:47:06,359 --> 00:47:11,119
+bigger and bigger it becomes easier and
+
+1105
+00:47:07,880 --> 00:47:14,240
+easier to get a significant result and
+
+1106
+00:47:11,119 --> 00:47:17,400
+so uh because you're more sure that you
+
+1107
+00:47:14,240 --> 00:47:20,960
+didn't just randomly pick data that
+
+1108
+00:47:17,400 --> 00:47:25,400
+model two is better at
+
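+[A sketch of the paired bootstrap "win counting" worked through above, using
+the same toy right/wrong (1/0) judgments for the two systems; the 95% win
+threshold is the one stated in the lecture, while the function name and the
+rest of the code are illustrative assumptions.]
+
+import numpy as np
+
+def paired_bootstrap(sys1, sys2, n_boot=10000, seed=0):
+    rng = np.random.default_rng(seed)
+    sys1, sys2 = np.asarray(sys1), np.asarray(sys2)
+    n = len(sys1)
+    wins1 = wins2 = ties = 0
+    for _ in range(n_boot):
+        # The same random subsample is scored under both systems (paired).
+        idx = rng.integers(0, n, size=n)
+        acc1, acc2 = sys1[idx].mean(), sys2[idx].mean()
+        if acc1 > acc2:
+            wins1 += 1
+        elif acc2 > acc1:
+            wins2 += 1
+        else:
+            ties += 1
+    # Call the difference significant only if the better system wins on
+    # at least 95% of the subsamples.
+    return wins1 / n_boot, wins2 / n_boot, ties / n_boot
+
+# The 5-example toy set: system 1 is right on 3/5 (60%), system 2 on 4/5 (80%).
+print(paired_bootstrap([1, 1, 0, 1, 0], [1, 1, 1, 0, 1]))
+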
+1109
+00:47:20,960 --> 00:47:28,440
+so um there's also other varieties
+
+1110
+00:47:25,400 --> 00:47:31,240
+of tests there's things like t-tests for
+
+1111
+00:47:28,440 --> 00:47:34,720
+unpaired outputs and paired t-
+
+1112
+00:47:31,240 --> 00:47:38,079
+tests for paired outputs those work when
+
+1113
+00:47:34,720 --> 00:47:40,440
+your um outputs are additive so they work
+
+1114
+00:47:38,079 --> 00:47:43,599
+for accuracy because the accuracy is
+
+1115
+00:47:40,440 --> 00:47:46,440
+just you add all the ones
+
+1116
+00:47:43,599 --> 00:47:48,680
+and then divide by the um the number of
+
+1117
+00:47:46,440 --> 00:47:50,960
+instances and that gives you an accuracy
+
+1118
+00:47:48,680 --> 00:47:57,880
+that doesn't work for something like
+
+1119
+00:47:50,960 --> 00:48:03,599
+F-measure um because F-measure is 2 *
+
+1120
+00:47:57,880 --> 00:48:07,319
+precision * recall / (precision +
+
+1121
+00:48:03,599 --> 00:48:08,040
+recall) um and precision and recall uh
+
+1122
+00:48:07,319 --> 00:48:10,640
+you
+
+1123
+00:48:08,040 --> 00:48:12,920
+can like a t-test works for those but
+
+1124
+00:48:10,640 --> 00:48:15,160
+there's a non-additive component of F-
+
+1125
+00:48:12,920 --> 00:48:16,680
+measure so you can't calculate
+
+1126
+00:48:15,160 --> 00:48:19,280
+statistically significant differences in
+
+1127
+00:48:16,680 --> 00:48:21,079
+F-measure using a t-test in that case
+
+1128
+00:48:19,280 --> 00:48:23,000
+basically you have to use a
+
+1129
+00:48:21,079 --> 00:48:24,920
+bootstrap method like this in order to
+
+1130
+00:48:23,000 --> 00:48:29,040
+get it to work or you need to do some
+
+1131
+00:48:24,920 --> 00:48:29,040
+really complex math but I just
+
+1132
+00:48:29,760 --> 00:48:33,920
+use it cool um are there any questions
+
+1133
+00:48:32,680 --> 00:48:35,520
+about this I guess we'll have a code
+
+1134
+00:48:33,920 --> 00:48:37,680
+example in the recitation so you can go
+
+1135
+00:48:35,520 --> 00:48:39,599
+in and take a look at that there's also
+
+1136
+00:48:37,680 --> 00:48:42,599
+tons of code examples
+
+1137
+00:48:39,599 --> 00:48:42,599
+online
+
+1138
+00:48:42,960 --> 00:48:49,440
+um is that
+
+1139
+00:48:45,720 --> 00:48:52,400
+okay okay sounds good um so now let me
+
+1140
+00:48:49,440 --> 00:48:54,599
+uh let me go back to the actual slides
+
+1141
+00:48:52,400 --> 00:48:57,400
+for
+
+1142
+00:48:54,599 --> 00:49:00,559
+today and given those uh the
+
+1143
+00:48:57,400 --> 00:49:04,119
+results about statistical significance um
+
+1144
+00:49:00,559 --> 00:49:06,040
+how can we estimate how much testing
+
+1145
+00:49:04,119 --> 00:49:07,920
+data is enough and there's a method
+
+1146
+00:49:06,040 --> 00:49:11,079
+called power analysis that allows you to
+
+1147
+00:49:07,920 --> 00:49:13,359
+do this and basically the idea of power
+
+1148
+00:49:11,079 --> 00:49:16,680
+analysis is that you make an assumption
+
+1149
+00:49:13,359 --> 00:49:18,880
+about the effect size between settings
+
+1150
+00:49:16,680 --> 00:49:20,680
+um for example the expected accuracy
+
+1151
+00:49:18,880 --> 00:49:23,480
+difference between tested
+
+1152
+00:49:20,680 --> 00:49:26,480
+models and given the effect size and a
+
+1153
+00:49:23,480 --> 00:49:28,880
+significance threshold
+
+1154
+00:49:26,480 --> 00:49:30,839
+you can determine how much
+
+1155
+00:49:28,880 --> 00:49:32,680
+data is necessary to get a significant
+
+1156
+00:49:30,839 --> 00:49:36,680
+effect in most cases
+
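+[A sketch of simulation-based power analysis for the example that follows:
+a baseline at around 90% accuracy, a proposed model at around 93%, and a
+p < 0.05 threshold. Simulating examples as independent coin flips and using
+a paired bootstrap as the inner test are simplifying assumptions for
+illustration, not the exact algorithm from the paper shown in the lecture.]
+
+import numpy as np
+
+def estimated_power(n_test, acc_base=0.90, acc_new=0.93,
+                    n_trials=100, n_boot=500, alpha=0.05, seed=0):
+    rng = np.random.default_rng(seed)
+    significant = 0
+    for _ in range(n_trials):
+        # Simulate per-example right/wrong outcomes for both models.
+        base = rng.random(n_test) < acc_base
+        new = rng.random(n_test) < acc_new
+        wins = 0
+        for _ in range(n_boot):
+            idx = rng.integers(0, n_test, size=n_test)
+            wins += new[idx].mean() > base[idx].mean()
+        # Count the trial if the gain is significant in the right direction.
+        if wins / n_boot >= 1 - alpha:
+            significant += 1
+    # Power = fraction of simulated experiments reaching significance.
+    return significant / n_trials
+
+# Grow the test set until the power is acceptably high (conventionally 0.8).
+for n in (100, 250, 500, 1000):
+    print(n, estimated_power(n))
+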
+1157
+00:49:32,680 --> 00:49:39,319
+and so to give an example
+
+1158
+00:49:36,680 --> 00:49:41,559
+again let's say we're talking about the
+
+1159
+00:49:39,319 --> 00:49:45,880
+accuracy let's say we have a baseline
+
+1160
+00:49:41,559 --> 00:49:49,079
+model
+
+1161
+00:49:45,880 --> 00:49:52,280
+and then we also have our
+
+1162
+00:49:49,079 --> 00:49:54,000
+uh proposed model and we know kind of from
+
+1163
+00:49:52,280 --> 00:49:55,599
+experience that the baseline model is
+
+1164
+00:49:54,000 --> 00:49:58,400
+probably going to get around 90%
+
+1165
+00:49:55,599 --> 00:50:00,559
+accuracy we know by like eyeballing
+
+1166
+00:49:58,400 --> 00:50:06,240
+the data or something like
+
+1167
+00:50:00,559 --> 00:50:09,599
+that and then we think
+
+1168
+00:50:06,240 --> 00:50:13,799
+our model is going to get 93%
+
+1169
+00:50:09,599 --> 00:50:17,160
+accuracy uh and we want a
+
+1170
+00:50:13,799 --> 00:50:19,440
+significance threshold of p
+
+1171
+00:50:17,160 --> 00:50:22,319
+less than
+
+1172
+00:50:19,440 --> 00:50:26,000
+0.05 given these
+
+1173
+00:50:22,319 --> 00:50:30,559
+two quantities we can basically go in
+
+1174
+00:50:26,000 --> 00:50:33,720
+and say okay now we need uh
+
+1175
+00:50:30,559 --> 00:50:36,200
+500 test examples in order to say with
+
+1176
+00:50:33,720 --> 00:50:38,920
+confidence that we will be able
+
+1177
+00:50:36,200 --> 00:50:40,599
+to um
+
+1178
+00:50:38,920 --> 00:50:42,640
+distinguish between two models with 90
+
+1179
+00:50:40,599 --> 00:50:44,400
+and 93%
+
+1180
+00:50:42,640 --> 00:50:48,240
+accuracy
+
+1181
+00:50:44,400 --> 00:50:51,079
+and I can show the algorithm
+
+1182
+00:50:48,240 --> 00:50:51,079
+that they have in this
+
+1183
+00:50:54,440 --> 00:50:57,440
+paper
+
+1184
+00:51:01,760 --> 00:51:04,960
+but basically the way this
+
+1185
+00:51:13,040 --> 00:51:19,720
+works um is you sample a data set um
+
+1186
+00:51:17,799 --> 00:51:22,960
+compute the effect of interest on the
+
+1187
+00:51:19,720 --> 00:51:25,880
+sample compute the p-value and then
+
+1188
+00:51:22,960 --> 00:51:29,319
+you can calculate the power uh
+
+1189
+00:51:25,880 --> 00:51:31,520
+by basically um checking the number of
+
+1190
+00:51:29,319 --> 00:51:34,480
+times that the p-value is less than your
+
+1191
+00:51:31,520 --> 00:51:36,319
+threshold um multiplied by uh the fact
+
+1192
+00:51:34,480 --> 00:51:38,920
+that the sign is in a particular
+
+1193
+00:51:36,319 --> 00:51:41,200
+direction and by doing this you can
+
+1194
+00:51:38,920 --> 00:51:43,280
+essentially um
+
+1195
+00:51:41,200 --> 00:51:46,200
+calculate how much data you would need
+
+1196
+00:51:43,280 --> 00:51:48,319
+or sorry you can calculate the uh the
+
+1197
+00:51:46,200 --> 00:51:50,319
+statistical power and then you can do
+
+1198
+00:51:48,319 --> 00:51:52,000
+this for various sizes of data set so
+
+1199
+00:51:50,319 --> 00:51:53,559
+you can gradually increase the size of
+
+1200
+00:51:52,000 --> 00:51:57,160
+the data set or decrease the size of the
+
+1201
+00:51:53,559 --> 00:51:59,040
+data set and that allows you to figure
+
+1202
+00:51:57,160 --> 00:52:02,200
+out how big your data set needs to be in
+
+1203
+00:51:59,040 --> 00:52:04,640
+order to get a statistically significant
+
+1204
+00:52:02,200 --> 00:52:08,839
+effect on the data set
+
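+Here is one way to run that kind of power analysis yourself as a small simulation in pure Python; the 90%/93% accuracies are the illustrative numbers from above, and the two-proportion z-test is an assumed simplification rather than the exact procedure from the paper:
+
+import random
+from math import sqrt
+from statistics import NormalDist
+
+def estimated_power(n_test, acc_base=0.90, acc_new=0.93, alpha=0.05, n_sims=2000, seed=0):
+    """Fraction of simulated test sets where the better system wins significantly."""
+    rng = random.Random(seed)
+    norm = NormalDist()
+    hits = 0
+    for _ in range(n_sims):
+        acc1 = sum(rng.random() < acc_base for _ in range(n_test)) / n_test
+        acc2 = sum(rng.random() < acc_new for _ in range(n_test)) / n_test
+        pooled = (acc1 + acc2) / 2
+        se = sqrt(2 * pooled * (1 - pooled) / n_test)
+        if se > 0:
+            p_value = 2 * (1 - norm.cdf(abs(acc2 - acc1) / se))
+            if p_value < alpha and acc2 > acc1:  # significant, and in the right direction
+                hits += 1
+    return hits / n_sims
+
+# Grow the test set until the estimated power crosses a conventional 0.8.
+for n in (100, 250, 500, 1000, 2000):
+    print(n, estimated_power(n))
+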
+1205
+00:52:04,640 --> 00:52:10,720
+and so like many people ask me
+
+1206
+00:52:08,839 --> 00:52:12,599
+the question like how big of a data set
+
+1207
+00:52:10,720 --> 00:52:14,440
+do we need to make this is basically the
+
+1208
+00:52:12,599 --> 00:52:17,280
+statistically like quote unquote correct
+
+1209
+00:52:14,440 --> 00:52:19,520
+answer for how you can do this and also
+
+1210
+00:52:17,280 --> 00:52:20,440
+uh for assignment two we're going to ask
+
+1211
+00:52:19,520 --> 00:52:24,559
+you to
+
+1212
+00:52:20,440 --> 00:52:26,720
+justify uh your choice of creation of a
+
+1213
+00:52:24,559 --> 00:52:30,359
+data set of a particular size for testing
+
+1214
+00:52:26,720 --> 00:52:31,799
+based on this so um uh pay attention
+
+1215
+00:52:30,359 --> 00:52:34,720
+and please look at the references here
+
+1216
+00:52:31,799 --> 00:52:38,760
+and you should be able to
+
+1217
+00:52:34,720 --> 00:52:41,280
+do that cool um any
+
+1218
+00:52:38,760 --> 00:52:43,119
+questions I didn't go like really
+
+1219
+00:52:41,280 --> 00:52:44,319
+deeply into the formulas here you'll
+
+1220
+00:52:43,119 --> 00:52:45,720
+probably have to look them up in
+
+1221
+00:52:44,319 --> 00:52:48,119
+the paper but hopefully that gives you
+
+1222
+00:52:45,720 --> 00:52:51,799
+the general
+
+1223
+00:52:48,119 --> 00:52:52,680
+idea okay next um how much training data
+
+1224
+00:52:51,799 --> 00:52:55,599
+do I
+
+1225
+00:52:52,680 --> 00:52:58,160
+need so in general more is usually
+
+1226
+00:52:55,599 --> 00:53:00,760
+better if you're fine-tuning a model um
+
+1227
+00:52:58,160 --> 00:53:02,880
+so I can't tell you like you don't need
+
+1228
+00:53:00,760 --> 00:53:05,480
+to make more data because
+
+1229
+00:53:02,880 --> 00:53:06,280
+probably you do if you're not happy with
+
+1230
+00:53:05,480 --> 00:53:10,799
+your
+
+1231
+00:53:06,280 --> 00:53:12,599
+performance um but recently you can get
+
+1232
+00:53:10,799 --> 00:53:14,680
+very reasonable performance with few-
+
+1233
+00:53:12,599 --> 00:53:17,319
+shot or zero-shot pre-trained models
+
+1234
+00:53:14,680 --> 00:53:19,760
+and prompting and because of this in
+
+1235
+00:53:17,319 --> 00:53:21,240
+some cases maybe the answer is zero
+
+1236
+00:53:19,760 --> 00:53:22,960
+maybe you don't need any training data
+
+1237
+00:53:21,240 --> 00:53:26,559
+and you could just use a zero-shot pre-trained
+
+1238
+00:53:22,960 --> 00:53:29,240
+model so um you need to choose like
+
+1239
+00:53:26,559 --> 00:53:31,319
+what your accuracy threshold is um you
+
+1240
+00:53:29,240 --> 00:53:32,720
+need to decide whether you want to be
+
+1241
+00:53:31,319 --> 00:53:34,480
+fine-tuning a model to improve
+
+1242
+00:53:32,720 --> 00:53:36,319
+performance or doing other things like
+
+1243
+00:53:34,480 --> 00:53:39,119
+prompt engineering or other stuff like
+
+1244
+00:53:36,319 --> 00:53:41,520
+that so basically there's no uh correct
+
+1245
+00:53:39,119 --> 00:53:45,440
+answer to this
+
+1246
+00:53:41,520 --> 00:53:47,359
+um one thing to be aware of is uh
+
+1247
+00:53:45,440 --> 00:53:51,440
+sometimes if you select data
+
+1248
+00:53:47,359 --> 00:53:52,880
+intelligently you can uh improve more
+
+1249
+00:53:51,440 --> 00:53:54,359
+quickly with something like active
+
+1250
+00:53:52,880 --> 00:53:56,520
+learning and active learning chooses
+
+1251
+00:53:54,359 --> 00:54:00,000
+representative and difficult data that
+
+1252
+00:53:56,520 --> 00:54:02,559
+you can um be using
+
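+Active learning comes in many flavors; one of the simplest, uncertainty sampling, just routes the examples the current model is least sure about to the annotators. A minimal sketch, with a made-up pool format:
+
+import math
+
+def entropy(probs):
+    """Predictive entropy of one class-probability distribution."""
+    return -sum(p * math.log(p) for p in probs if p > 0)
+
+def select_for_annotation(pool, k=100):
+    """pool: list of (example_id, class_probabilities) from the current model."""
+    ranked = sorted(pool, key=lambda item: entropy(item[1]), reverse=True)
+    return [example_id for example_id, _ in ranked[:k]]
+
+# Toy usage: the model is most unsure about example "b", so annotate it first.
+print(select_for_annotation([("a", [0.9, 0.1]), ("b", [0.5, 0.5]), ("c", [0.8, 0.2])], k=1))
+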
+1253
+00:54:00,000 --> 00:54:04,839
+so when you sample data for fine
+
+1254
+00:54:02,559 --> 00:54:07,440
+tuning uh what you want to be doing is
+
+1255
+00:54:04,839 --> 00:54:08,839
+you want to be sampling data that has
+
+1256
+00:54:07,440 --> 00:54:10,040
+good coverage of the domains that you
+
+1257
+00:54:08,839 --> 00:54:12,760
+want to
+
+1258
+00:54:10,040 --> 00:54:15,079
+cover um you also want to be covering
+
+1259
+00:54:12,760 --> 00:54:18,599
+for example uh languages or
+
+1260
+00:54:15,079 --> 00:54:23,200
+language varieties or demographics of
+
+1261
+00:54:18,599 --> 00:54:25,520
+users um and another thing is uh when
+
+1262
+00:54:23,200 --> 00:54:29,440
+you're doing this it's often a good idea
+
+1263
+00:54:25,520 --> 00:54:31,400
+to document how you're creating data and
+
+1264
+00:54:29,440 --> 00:54:34,079
+uh there's this paper Data Statements
+
+1265
+00:54:31,400 --> 00:54:35,520
+for NLP by Bender and Friedman uh which
+
+1266
+00:54:34,079 --> 00:54:37,440
+suggests a bunch of different things
+
+1267
+00:54:35,520 --> 00:54:39,520
+that you can use to document your data
+
+1268
+00:54:37,440 --> 00:54:41,520
+collection and like why and how you
+
+1269
+00:54:39,520 --> 00:54:44,960
+collected the data and this gives you
+
+1270
+00:54:41,520 --> 00:54:47,200
+some pieces of information that uh could
+
+1271
+00:54:44,960 --> 00:54:49,359
+be useful this has been incorporated
+
+1272
+00:54:47,200 --> 00:54:51,880
+into the Hugging Face datasets data set
+
+1273
+00:54:49,359 --> 00:54:53,520
+cards and now Hugging Face datasets
+
+1274
+00:54:51,880 --> 00:54:56,040
+actually has lots of metadata that's
+
+1275
+00:54:53,520 --> 00:54:58,359
+kind of inspired by uh this although
+
+1276
+00:54:56,040 --> 00:55:01,799
+it's been adjusted for more kind of like
+
+1277
+00:54:58,359 --> 00:55:01,799
+practical industry use
+
+1278
+00:55:02,119 --> 00:55:06,480
+cases another thing is annotation
+
+1279
+00:55:04,400 --> 00:55:09,160
+guidelines so if you're asking humans to
+
+1280
+00:55:06,480 --> 00:55:11,319
+do anything um or for that matter if
+
+1281
+00:55:09,160 --> 00:55:16,119
+you're asking GPT-4 to generate data for
+
+1282
+00:55:11,319 --> 00:55:21,480
+you um you need to tell people or GPT-4 in
+
+1283
+00:55:16,119 --> 00:55:24,440
+um you know a clear manner
+
+1284
+00:55:21,480 --> 00:55:28,119
+how it should be creating data
+
+1285
+00:55:24,440 --> 00:55:29,920
+so the first thing
+
+1286
+00:55:28,119 --> 00:55:32,960
+that you can do is
+
+1287
+00:55:29,920 --> 00:55:34,240
+try to annotate yourself um and
+
+1288
+00:55:32,960 --> 00:55:37,039
+if you actually try to solve the
+
+1289
+00:55:34,240 --> 00:55:38,440
+annotation task yourself then you'll
+
+1290
+00:55:37,039 --> 00:55:41,160
+realize that there's lots of corner
+
+1291
+00:55:38,440 --> 00:55:43,799
+cases that are hard to decide on and
+
+1292
+00:55:41,160 --> 00:55:45,440
+other things like that so like if you're
+
+1293
+00:55:43,799 --> 00:55:47,520
+annotating sentiment what is the
+
+1294
+00:55:45,440 --> 00:55:49,799
+boundary between very positive and
+
+1295
+00:55:47,520 --> 00:55:50,880
+positive um if you're annotating
+
+1296
+00:55:49,799 --> 00:55:54,000
+question
+
+1297
+00:55:50,880 --> 00:55:56,280
+answering um like for
+
+1298
+00:55:54,000 --> 00:55:57,720
+example do you want to answer in a whole
+
+1299
+00:55:56,280 --> 00:56:01,119
+sentence or do you want to answer with
+
+1300
+00:55:57,720 --> 00:56:03,760
+only a short concise answer like these
+
+1301
+00:56:01,119 --> 00:56:05,400
+sorts of things you'll need to tell uh
+
+1302
+00:56:03,760 --> 00:56:07,839
+either an annotator or a model that
+
+1303
+00:56:05,400 --> 00:56:10,960
+you're asking to do annotation to give
+
+1304
+00:56:07,839 --> 00:56:12,760
+some examples from the Penn Treebank uh
+
+1305
+00:56:10,960 --> 00:56:15,440
+part of speech annotation guidelines
+
+1306
+00:56:12,760 --> 00:56:18,079
+this is very old it's from 1990 but
+
+1307
+00:56:15,440 --> 00:56:21,200
+basically they have uh like adverb this
+
+1308
+00:56:18,079 --> 00:56:25,559
+category includes most words that end in
+
+1309
+00:56:21,200 --> 00:56:30,680
+um -ly as well as degree words like
+
+1310
+00:56:25,559 --> 00:56:33,079
+quite um etc etc it has other things for
+
+1311
+00:56:30,680 --> 00:56:36,200
+adverbs and then it has like confusing
+
+1312
+00:56:33,079 --> 00:56:38,039
+parts of speech with examples uh one
+
+1313
+00:56:36,200 --> 00:56:39,640
+thing that I found like really really
+
+1314
+00:56:38,039 --> 00:56:42,640
+interesting is like if you look at these
+
+1315
+00:56:39,640 --> 00:56:46,160
+annotation guidelines it's like uh
+
+1316
+00:56:42,640 --> 00:56:48,319
+prompts so if you look at this it's like
+
+1317
+00:56:46,160 --> 00:56:49,880
+these are your prompts your zero-
+
+1318
+00:56:48,319 --> 00:56:52,359
+shot prompts and these are few-shot
+
+1319
+00:56:49,880 --> 00:56:54,480
+examples so like even for humans we were
+
+1320
+00:56:52,359 --> 00:56:56,520
+doing few-shot prompting with examples
+
+1321
+00:56:54,480 --> 00:57:00,880
+when they were doing annotations so uh
+
+1322
+00:56:56,520 --> 00:57:03,119
+it's kind of fun um hiring
+
+1323
+00:57:00,880 --> 00:57:05,000
+annotators so like let's say you want to
+
+1324
+00:57:03,119 --> 00:57:08,319
+actually build a data set and pay
+
+1325
+00:57:05,000 --> 00:57:10,359
+people to do things um for smaller scale
+
+1326
+00:57:08,319 --> 00:57:13,359
+projects uh very often you can just
+
+1327
+00:57:10,359 --> 00:57:15,240
+annotate yourself and that's fine um
+
+1328
+00:57:13,359 --> 00:57:16,720
+there's a fixed amount of overhead to get
+
+1329
+00:57:15,240 --> 00:57:19,480
+other people to do something and train
+
+1330
+00:57:16,720 --> 00:57:23,200
+them and stuff so you know I often just
+
+1331
+00:57:19,480 --> 00:57:25,079
+annotate things myself um you can also
+
+1332
+00:57:23,200 --> 00:57:26,520
+find friends or other students or
+
+1333
+00:57:25,079 --> 00:57:29,559
+co-workers who can help you out with
+
+1334
+00:57:26,520 --> 00:57:33,359
+things you can bribe them with uh
+
+1335
+00:57:29,559 --> 00:57:37,280
+pizza or whatever favorite uh food or
+
+1336
+00:57:33,359 --> 00:57:39,400
+beverage that they like um then for
+
+1337
+00:57:37,280 --> 00:57:42,440
+finding people online there's a lot of
+
+1338
+00:57:39,400 --> 00:57:45,160
+things that you can do um I very often
+
+1339
+00:57:42,440 --> 00:57:46,000
+hire freelancers uh through platforms
+
+1340
+00:57:45,160 --> 00:57:50,400
+such as
+
+1341
+00:57:46,000 --> 00:57:51,799
+Upwork um this is good and bad the bad
+
+1342
+00:57:50,400 --> 00:57:53,760
+thing about it is that this is often
+
+1343
+00:57:51,799 --> 00:57:56,280
+more expensive the good thing about it
+
+1344
+00:57:53,760 --> 00:57:58,640
+is um you get people who have pride in
+
+1345
+00:57:56,280 --> 00:58:00,440
+their work and accountability and
+
+1346
+00:57:58,640 --> 00:58:02,440
+motivation because like if they get
+
+1347
+00:58:00,440 --> 00:58:04,480
+rated poorly it's going to be
+
+1348
+00:58:02,440 --> 00:58:06,720
+harder to get work and often they're
+
+1349
+00:58:04,480 --> 00:58:08,160
+professionals in their fields so like if
+
+1350
+00:58:06,720 --> 00:58:12,079
+you want to get a code generation data
+
+1351
+00:58:08,160 --> 00:58:15,880
+set you can hire good um freelancers
+
+1352
+00:58:12,079 --> 00:58:18,520
+I've actually heard rumors that uh
+
+1353
+00:58:15,880 --> 00:58:20,119
+people like OpenAI they hire people and
+
+1354
+00:58:18,520 --> 00:58:21,599
+pay them $60 an hour to do the
+
+1355
+00:58:20,119 --> 00:58:23,599
+annotation because they really want
+
+1356
+00:58:21,599 --> 00:58:27,119
+people who are very professional and do
+
+1357
+00:58:23,599 --> 00:58:30,000
+a very good job um I don't pay that
+
+1358
+00:58:27,119 --> 00:58:34,240
+much but I do pay well more than minimum
+
+1359
+00:58:30,000 --> 00:58:35,880
+wage and uh you know I pay a
+
+1360
+00:58:34,240 --> 00:58:38,039
+competitive price on these freelancing
+
+1361
+00:58:35,880 --> 00:58:40,319
+sites when I get people to do
+
+1362
+00:58:38,039 --> 00:58:42,000
+that another thing you can do is crowd
+
+1363
+00:58:40,319 --> 00:58:44,400
+workers and this could be through
+
+1364
+00:58:42,000 --> 00:58:45,960
+sites like Mechanical Turk or Prolific
+
+1365
+00:58:44,400 --> 00:58:48,960
+or other things like this so that's
+
+1366
+00:58:45,960 --> 00:58:51,680
+another option um here quality control
+
+1367
+00:58:48,960 --> 00:58:55,240
+becomes very difficult and um we're
+
+1368
+00:58:51,680 --> 00:58:57,799
+getting to the point where number one
+
+1369
+00:58:55,240 --> 00:58:59,400
+um if you aren't very careful with
+
+1370
+00:58:57,799 --> 00:59:01,920
+quality control language models actually
+
+1371
+00:58:59,400 --> 00:59:03,400
+do a similarly good job as crowd workers
+
+1372
+00:59:01,920 --> 00:59:06,960
+and number two all the crowd workers are
+
+1373
+00:59:03,400 --> 00:59:10,000
+using GPT-4 anyway so um you do need to be
+
+1374
+00:59:06,960 --> 00:59:12,319
+careful about that um one thing that I
+
+1375
+00:59:10,000 --> 00:59:14,039
+often do is I hire for a small job first
+
+1376
+00:59:12,319 --> 00:59:16,880
+to gauge timeliness and accuracy and
+
+1377
+00:59:14,039 --> 00:59:18,920
+then hire for a bigger job so um just
+
+1378
+00:59:16,880 --> 00:59:21,720
+hire people to do you know 50 examples
+
+1379
+00:59:18,920 --> 00:59:23,319
+or 20 examples first and then uh you
+
+1380
+00:59:21,720 --> 00:59:26,240
+know if they do a good job with it then
+
+1381
+00:59:23,319 --> 00:59:27,960
+I hire them to do 2,000
+
+1382
+00:59:26,240 --> 00:59:30,799
+examples
+
+1383
+00:59:27,960 --> 00:59:34,720
+um one thing to note is that if you're
+
+1384
+00:59:30,799 --> 00:59:36,599
+doing research in a university um you
+
+1385
+00:59:34,720 --> 00:59:39,400
+might need to get approval from an
+
+1386
+00:59:36,599 --> 00:59:41,480
+institutional review board and this is
+
+1387
+00:59:39,400 --> 00:59:43,000
+in particular the case for subjective
+
+1388
+00:59:41,480 --> 00:59:45,880
+tasks so this is when you're asking
+
+1389
+00:59:43,000 --> 00:59:47,440
+people how do you feel about this output
+
+1390
+00:59:45,880 --> 00:59:50,039
+um do you think this output is
+
+1391
+00:59:47,440 --> 00:59:51,720
+representative of your beliefs or things
+
+1392
+00:59:50,039 --> 00:59:54,760
+like that where it doesn't have a
+
+1393
+00:59:51,720 --> 00:59:56,319
+correct answer a yes-or-no answer if
+
+1394
+00:59:54,760 --> 00:59:58,680
+it's something that does have a
+
+1395
+00:59:56,319 --> 01:00:03,640
+yes-or-no answer like how many
+
+1396
+00:59:58,680 --> 01:00:05,640
+verbs are in this sentence or um how do
+
+1397
+01:00:03,640 --> 01:00:07,280
+you translate the sentence into another
+
+1398
+01:00:05,640 --> 01:00:09,880
+language or something like that then you
+
+1399
+01:00:07,280 --> 01:00:12,039
+don't need IRB approval um but if
+
+1400
+01:00:09,880 --> 01:00:15,000
+it's borderline you might want to check
+
+1401
+01:00:12,039 --> 01:00:17,280
+anyway um so that's something to be
+
+1402
+01:00:15,000 --> 01:00:17,280
+aware
+
+1403
+01:00:18,640 --> 01:00:26,240
+of next is assessing annotation quality
+
+1404
+01:00:22,640 --> 01:00:27,680
+so um one of my favorite ways to do this
+
+1405
+01:00:26,240 --> 01:00:30,039
+is to assess human
+
+1406
+01:00:27,680 --> 01:00:32,240
+performance and so the way we do this is
+
+1407
+01:00:30,039 --> 01:00:34,119
+you double-annotate some data and then
+
+1408
+01:00:32,240 --> 01:00:37,160
+you measure whatever metric you want to
+
+1409
+01:00:34,119 --> 01:00:39,200
+measure for machines just with respect
+
+1410
+01:00:37,160 --> 01:00:41,039
+to human agreement and so for
+
+1411
+01:00:39,200 --> 01:00:43,839
+translation if you're using BLEU score
+
+1412
+01:00:41,039 --> 01:00:45,440
+or chrF score or something like this then
+
+1413
+01:00:43,839 --> 01:00:47,079
+you would want to use this for
+
+1414
+01:00:45,440 --> 01:00:50,440
+assessment of the
+
+1415
+01:00:47,079 --> 01:00:56,039
+outputs um the advantage of doing this
+
+1416
+01:00:50,440 --> 01:00:58,760
+is that you get a human quality score
+
+1417
+01:00:56,039 --> 01:01:00,960
+and the human quality score is directly
+
+1418
+01:00:58,760 --> 01:01:02,480
+comparable to the machine quality score
+
+1419
+01:01:00,960 --> 01:01:04,599
+and so you can say well humans got the
+
+1420
+01:01:02,480 --> 01:01:07,280
+task right 90% of the time and GPT-4 got
+
+1421
+01:01:04,599 --> 01:01:11,280
+the task right 16% of the time so humans
+
+1422
+01:01:07,280 --> 01:01:13,760
+are way better than GPT-4 or um you know
+
+1423
+01:01:11,280 --> 01:01:16,559
+humans got it right 80% of the time and
+
+1424
+01:01:13,760 --> 01:01:19,599
+GPT-4 got it right 78% of the time so this
+
+1425
+01:01:16,559 --> 01:01:21,000
+task or maybe not
+
+1426
+01:01:19,599 --> 01:01:23,640
+necessarily the task but at least the
+
+1427
+01:01:21,000 --> 01:01:25,079
+data set has more or less been solved by
+
+1428
+01:01:23,640 --> 01:01:26,640
+the strongest language models so now we
+
+1429
+01:01:25,079 --> 01:01:28,920
+need to catch up with open-source models or
+
+1430
+01:01:26,640 --> 01:01:31,680
+smaller ones or something like
+
+1431
+01:01:28,920 --> 01:01:32,880
+that um there are things that you can
+
+1432
+01:01:31,680 --> 01:01:34,880
+measure you can measure things like
+
+1433
+01:01:32,880 --> 01:01:36,880
+kappa statistics this is particularly
+
+1434
+01:01:34,880 --> 01:01:39,799
+useful for um kind of just
+
+1435
+01:01:36,880 --> 01:01:41,799
+classification tasks and what this tells
+
+1436
+01:01:39,799 --> 01:01:43,880
+you is how much higher
+
+1437
+01:01:41,799 --> 01:01:48,000
+the agreement is than what you would get by chance
+
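+Concretely, Cohen's kappa compares the observed agreement p_o against the chance agreement p_e as kappa = (p_o - p_e) / (1 - p_e). A minimal sketch for two annotators:
+
+from collections import Counter
+
+def cohens_kappa(labels_a, labels_b):
+    """Chance-corrected agreement between two annotators' label lists."""
+    n = len(labels_a)
+    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
+    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
+    # Chance agreement: probability both pick the same label independently.
+    expected = sum(counts_a[lab] / n * counts_b[lab] / n for lab in counts_a)
+    if expected == 1:
+        return 1.0  # degenerate case: only one label appears on both sides
+    return (observed - expected) / (1 - expected)
+
+# Toy usage: 98% raw agreement on a 99%-skewed label set gives a kappa near 0.5.
+a = ["ok"] * 97 + ["toxic"] * 3
+b = ["ok"] * 99 + ["toxic"] * 1
+print(cohens_kappa(a, b))
+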
+1438
+01:01:43,880 --> 01:01:49,920
+and so for example
+
+1439
+01:01:48,000 --> 01:01:53,279
+let's say you're classifying
+
+1440
+01:01:49,920 --> 01:01:54,760
+spam uh or you're classifying you know
+
+1441
+01:01:53,279 --> 01:01:59,520
+toxic content or something
+
+1442
+01:01:54,760 --> 01:02:03,400
+like that and 99% of the
+
+1443
+01:01:59,520 --> 01:02:07,480
+time the content is not toxic and 1% of
+
+1444
+01:02:03,400 --> 01:02:11,799
+the time the content is toxic and then
+
+1445
+01:02:07,480 --> 01:02:14,079
+you hire some annotators and you get 98%
+
+1446
+01:02:11,799 --> 01:02:16,279
+accuracy that's kind of bad right you
+
+1447
+01:02:14,079 --> 01:02:19,200
+know if you just said not toxic all the
+
+1448
+01:02:16,279 --> 01:02:20,880
+time you would get 99% um what the kappa
+
+1449
+01:02:19,200 --> 01:02:24,599
+statistic does is it accounts for this
+
+1450
+01:02:20,880 --> 01:02:26,559
+basically it says um how much better
+
+1451
+01:02:24,599 --> 01:02:28,440
+the agreement is than chance and if you just had
+
+1452
+01:02:26,559 --> 01:02:30,720
+chance accuracy you would get zero if
+
+1453
+01:02:28,440 --> 01:02:33,200
+you had perfect accuracy you would get
+
+1454
+01:02:30,720 --> 01:02:34,920
+one and you normally get something in
+
+1455
+01:02:33,200 --> 01:02:37,359
+between
+
+1456
+01:02:34,920 --> 01:02:39,200
+um so if it's low you may need to
+
+1457
+01:02:37,359 --> 01:02:41,319
+revisit guidelines hire better
+
+1458
+01:02:39,200 --> 01:02:44,480
+annotators or rethink whether the task
+
+1459
+01:02:41,319 --> 01:02:46,559
+is possible at all or not um and you
+
+1460
+01:02:44,480 --> 01:02:48,599
+know some tasks are just impossible
+
+1461
+01:02:46,559 --> 01:02:51,599
+um
+
+1462
+01:02:48,599 --> 01:02:51,599
+or
+
+1463
+01:02:52,240 --> 01:02:58,160
+uh they're just very hard for
+
+1464
+01:02:55,960 --> 01:03:00,039
+annotators so like to give one example
+
+1465
+01:02:58,160 --> 01:03:04,039
+um annotators are really horrible at
+
+1466
+01:03:00,039 --> 01:03:06,200
+identifying fake reviews um and so like
+
+1467
+01:03:04,039 --> 01:03:07,640
+even if you hire annotators to
+
+1468
+01:03:06,200 --> 01:03:09,279
+identify fake reviews they're bad at
+
+1469
+01:03:07,640 --> 01:03:11,359
+doing that so you're not likely to get
+
+1470
+01:03:09,279 --> 01:03:14,680
+high
+
+1471
+01:03:11,359 --> 01:03:17,920
+agreement um cool I'm going to skip over
+
+1472
+01:03:14,680 --> 01:03:23,279
+this part because I already talked about
+
+1473
+01:03:17,920 --> 01:03:26,640
+it okay um any questions
+
+1474
+01:03:23,279 --> 01:03:29,079
+here okay sounds good uh next I'd like
+
+1475
+01:03:26,640 --> 01:03:30,640
+to get into running experiments so
+
+1476
+01:03:29,079 --> 01:03:34,359
+for running experiments one thing I find
+
+1477
+01:03:30,640 --> 01:03:37,200
+very helpful is workflow automation um
+
+1478
+01:03:34,359 --> 01:03:40,079
+and basically what I like to do is I
+
+1479
+01:03:37,200 --> 01:03:41,839
+like to modularize each step of an
+
+1480
+01:03:40,079 --> 01:03:44,119
+experiment into a
+
+1481
+01:03:41,839 --> 01:03:47,240
+directory
+
+1482
+01:03:44,119 --> 01:03:51,039
+um where uh you have like a directory as
+
+1483
+01:03:47,240 --> 01:03:53,279
+input and a directory as output
+
+1484
+01:03:51,039 --> 01:03:54,559
+um this is my personal way of doing
+
+1485
+01:03:53,279 --> 01:03:56,799
+things there are other ways of doing
+
+1486
+01:03:54,559 --> 01:03:58,640
+things that are also good but um very
+
+1487
+01:03:56,799 --> 01:04:00,760
+often like just to give an example
+
+1488
+01:03:58,640 --> 01:04:04,680
+first you'll need to do
+
+1489
+01:04:00,760 --> 01:04:07,480
+uh data selection so you'll
+
+1490
+01:04:04,680 --> 01:04:09,119
+need to select
+
+1491
+01:04:07,480 --> 01:04:11,039
+which data sets you're training on
+
+1492
+01:04:09,119 --> 01:04:13,520
+you'll need to do pre-processing of them
+
+1493
+01:04:11,039 --> 01:04:16,160
+with a tokenization model and then you
+
+1494
+01:04:13,520 --> 01:04:18,359
+will need to run an
+
+1495
+01:04:16,160 --> 01:04:20,000
+experiment and then you'll need to do
+
+1496
+01:04:18,359 --> 01:04:23,240
+evaluation and those are all kind of
+
+1497
+01:04:20,000 --> 01:04:25,079
+like discrete steps where the data
+
+1498
+01:04:23,240 --> 01:04:27,760
+selection takes in your big pool of data
+
+1499
+01:04:25,079 --> 01:04:31,200
+and outputs a data set that's been
+
+1500
+01:04:27,760 --> 01:04:33,680
+selected the tokenization
+
+1501
+01:04:31,200 --> 01:04:35,480
+will uh take a tokenizer model maybe
+
+1502
+01:04:33,680 --> 01:04:38,599
+train a tokenizer model and split it
+
+1503
+01:04:35,480 --> 01:04:40,400
+up into different tokens um the training
+
+1504
+01:04:38,599 --> 01:04:42,079
+will train and might output a whole bunch
+
+1505
+01:04:40,400 --> 01:04:44,720
+of checkpoints and the evaluation will
+
+1506
+01:04:42,079 --> 01:04:47,039
+evaluate one checkpoint and so those are
+
+1507
+01:04:44,720 --> 01:04:48,400
+all kind of modular and you can actually
+
+1508
+01:04:47,039 --> 01:04:50,039
+think of each one of them as like a
+
+1509
+01:04:48,400 --> 01:04:52,760
+function in your Python
+
+1510
+01:04:50,039 --> 01:04:56,400
+program
+
+1511
+01:04:52,760 --> 01:04:58,160
+and you kind of want to avoid rerunning
+
+1512
+01:04:56,400 --> 01:05:00,200
+data set selection and tokenization
+
+1513
+01:04:58,160 --> 01:05:01,720
+every time you do a new evaluation right
+
+1514
+01:05:00,200 --> 01:05:03,359
+like that would be kind of silly you
+
+1515
+01:05:01,720 --> 01:05:04,680
+definitely want to avoid rerunning
+
+1516
+01:05:03,359 --> 01:05:09,119
+training every time you evaluate a
+
+1517
+01:05:04,680 --> 01:05:11,200
+checkpoint so um what I do is I often
+
+1518
+01:05:09,119 --> 01:05:12,799
+name directories by parameters where
+
+1519
+01:05:11,200 --> 01:05:16,079
+it's like
+
+1520
+01:05:12,799 --> 01:05:18,640
+transformer layer 8 node 512
+
+1521
+01:05:16,079 --> 01:05:21,279
+dropout 0.5 label smooth
+
+1522
+01:05:18,640 --> 01:05:25,880
+0.02 um and so I have all the parameters
+
+1523
+01:05:21,279 --> 01:05:26,880
+in there and then
+
+1524
+01:05:25,880 --> 01:05:29,680
+the
+
+1525
+01:05:26,880 --> 01:05:31,960
+training process will output a whole
+
+1526
+01:05:29,680 --> 01:05:33,960
+bunch of checkpoints in here and then
+
+1527
+01:05:31,960 --> 01:05:35,520
+for my evaluation I have evaluation
+
+1528
+01:05:33,960 --> 01:05:38,119
+metrics and I have the checkpoint I'm
+
+1529
+01:05:35,520 --> 01:05:41,680
+evaluating so uh when I do
+
+1530
+01:05:38,119 --> 01:05:45,119
+evaluation I will then append checkpoint
+
+1531
+01:05:41,680 --> 01:05:47,279
+6 uh metric F-measure or something like
+
+1532
+01:05:45,119 --> 01:05:49,079
+that and so I keep around all of the
+
+1533
+01:05:47,279 --> 01:05:52,520
+previous information and just keep appending
+
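+A minimal sketch of this directory-per-step pattern in Python; the step names, parameter encoding, and commented-out pipeline below are hypothetical, and the point is just that each step maps an input directory to an output directory and is skipped when its output already exists:
+
+from pathlib import Path
+
+def run_step(out_dir, step_fn, *args):
+    """Run step_fn(out_dir, *args) only if out_dir wasn't completed earlier."""
+    out_dir = Path(out_dir)
+    done_marker = out_dir / "DONE"
+    if done_marker.exists():
+        return out_dir                # cached: reuse the previous result
+    out_dir.mkdir(parents=True, exist_ok=True)
+    step_fn(out_dir, *args)
+    done_marker.touch()               # mark completion only at the very end
+    return out_dir
+
+# Hypothetical pipeline, with parameters encoded in the directory names:
+params = "transformer_layer8_node512_dropout0.5_labelsmooth0.02"
+# data_dir = run_step("exp/data_selected", select_data, raw_pool)
+# tok_dir = run_step("exp/tokenized", tokenize, data_dir)
+# train_dir = run_step(f"exp/train_{params}", train, tok_dir)
+# eval_dir = run_step(f"exp/train_{params}/eval_checkpoint6_fmeasure", evaluate, train_dir)
+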
+1534
+01:05:49,079 --> 01:05:54,599
+and so um this
+
+1535
+01:05:52,520 --> 01:05:56,680
+allows you to avoid rerunning things
+
+1536
+01:05:54,599 --> 01:05:58,359
+because you can uh just have your Python
+
+1537
+01:05:56,680 --> 01:06:00,520
+code check if the directory already
+
+1538
+01:05:58,359 --> 01:06:01,839
+exists and has already been completed
+
+1539
+01:06:00,520 --> 01:06:03,559
+and then read in the result if it
+
+1540
+01:06:01,839 --> 01:06:06,319
+has been or run the experiment
+
+1541
+01:06:03,559 --> 01:06:08,079
+if it hasn't been so um you can write
+
+1542
+01:06:06,319 --> 01:06:10,279
+this in pure Python by
+
+1543
+01:06:08,079 --> 01:06:11,599
+just adding like some if statements at
+
+1544
+01:06:10,279 --> 01:06:14,079
+the beginning of the function and some
+
+1545
+01:06:11,599 --> 01:06:16,799
+like output
+
+1546
+01:06:14,079 --> 01:06:19,440
+statements at the end of the function um
+
+1547
+01:06:16,799 --> 01:06:22,000
+there are more sophisticated
+
+1548
+01:06:19,440 --> 01:06:24,200
+methods so there's like a toolkit called
+
+1549
+01:06:22,000 --> 01:06:28,079
+ducttape that was originally created
+
+1550
+01:06:24,200 --> 01:06:31,760
+here at CMU and um my student Patrick
+
+1551
+01:06:28,079 --> 01:06:33,079
+is maintaining now at this link um so you
+
+1552
+01:06:31,760 --> 01:06:34,960
+can either just roll something on your
+
+1553
+01:06:33,079 --> 01:06:36,880
+own or look into one of these more
+
+1554
+01:06:34,960 --> 01:06:39,359
+complex workflow automation things
+
+1555
+01:06:36,880 --> 01:06:39,359
+to save you
+
+1556
+01:06:39,400 --> 01:06:47,279
+time okay evaluation um so I talked
+
+1557
+01:06:43,400 --> 01:06:49,000
+about this to some extent um so yeah
+
+1558
+01:06:47,279 --> 01:06:51,000
+I'll just skip over
+
+1559
+01:06:49,000 --> 01:06:54,559
+that
+
+1560
+01:06:51,000 --> 01:06:57,200
+and result reporting um
+
+1561
+01:06:54,559 --> 01:06:59,160
+for papers one thing that I really like
+
+1562
+01:06:57,200 --> 01:07:01,960
+to do is plan the result section in
+
+1563
+01:06:59,160 --> 01:07:07,039
+advance or at least imagine the result
+
+1564
+01:07:01,960 --> 01:07:07,039
+section in advance um
+
+1565
+01:07:07,200 --> 01:07:11,640
+so what I think of is like what
+
+1566
+01:07:09,559 --> 01:07:14,520
+experimental claims would I like to make
+
+1567
+01:07:11,640 --> 01:07:15,760
+and how am I going to support them with the
+
+1568
+01:07:14,520 --> 01:07:19,039
+experiments that I'm going to show in the
+
+1569
+01:07:15,760 --> 01:07:21,160
+result section um and this identifies
+
+1570
+01:07:19,039 --> 01:07:24,640
+unjustified experimental claims so
+
+1571
+01:07:21,160 --> 01:07:27,119
+let's say you're saying
+
+1572
+01:07:24,640 --> 01:07:29,000
+something like uh this method improves
+
+1573
+01:07:27,119 --> 01:07:30,440
+across a wide variety of languages and
+
+1574
+01:07:29,000 --> 01:07:32,520
+then you realize that you only have one
+
+1575
+01:07:30,440 --> 01:07:34,720
+language in your
+
+1576
+01:07:32,520 --> 01:07:37,960
+experiment section that's a problem
+
+1577
+01:07:34,720 --> 01:07:40,640
+obviously um also I really enjoy like
+
+1578
+01:07:37,960 --> 01:07:43,599
+assuming that all of my experiments are
+
+1579
+01:07:40,640 --> 01:07:46,520
+going really really well um and you know
+
+1580
+01:07:43,599 --> 01:07:49,440
+none
of my runs crash with
+
+1581
+01:07:46,520 --> 01:07:52,000
+CUDA out-of-memory errors and you know
+
+1582
+01:07:49,440 --> 01:07:55,319
+all of the experiments turn out as
+
+1583
+01:07:52,000 --> 01:07:57,960
+expected and if you do something like
+
+1584
+01:07:55,319 --> 01:07:59,960
+that you can be ambitious and say okay
+
+1585
+01:07:57,960 --> 01:08:03,119
+how can I make this research project
+
+1586
+01:07:59,960 --> 01:08:04,960
+really impactful um and another
+
+1587
+01:08:03,119 --> 01:08:08,240
+thing that I like to ask my students or
+
+1588
+01:08:04,960 --> 01:08:11,200
+people I'm working with recently is like
+
+1589
+01:08:08,240 --> 01:08:13,440
+who are like three people in the world
+
+1590
+01:08:11,200 --> 01:08:17,440
+who will be really excited by your paper
+
+1591
+01:08:13,440 --> 01:08:19,040
+like name actual people um and where do
+
+1592
+01:08:17,440 --> 01:08:20,839
+those people work what do they care
+
+1593
+01:08:19,040 --> 01:08:22,359
+about what sort of evidence would you
+
+1594
+01:08:20,839 --> 01:08:24,560
+need in your paper to make them really
+
+1595
+01:08:22,359 --> 01:08:26,560
+excited about your paper or something
+
+1596
+01:08:24,560 --> 01:08:29,679
+like that and very often people will
+
+1597
+01:08:26,560 --> 01:08:31,480
+reply to me like oh I think people
+
+1598
+01:08:29,679 --> 01:08:32,799
+at Google will be very excited about
+
+1599
+01:08:31,480 --> 01:08:34,440
+this and they're going to use it and I'm
+
+1600
+01:08:32,799 --> 01:08:38,719
+like well you're writing all your code
+
+1601
+01:08:34,440 --> 01:08:39,839
+in PyTorch and they don't use PyTorch so
+
+1602
+01:08:38,719 --> 01:08:41,000
+how are you going to convince them to
+
+1603
+01:08:39,839 --> 01:08:42,640
+use your method they're going to have to
+
+1604
+01:08:41,000 --> 01:08:46,120
+reimplement it in JAX and that's going
+
+1605
+01:08:42,640 --> 01:08:47,520
+to suck for them so like uh you know
+
+1606
+01:08:46,120 --> 01:08:49,040
+what are the barriers for them actually
+
+1607
+01:08:47,520 --> 01:08:50,799
+using it and then maybe the people are
+
+1608
+01:08:49,040 --> 01:08:52,159
+like oh well maybe actually I don't want
+
+1609
+01:08:50,799 --> 01:08:54,199
+people at Google to use this and I can
+
+1610
+01:08:52,159 --> 01:08:56,560
+think of somebody else and it's like
+
+1611
+01:08:54,199 --> 01:08:58,920
+well great so now release it open source
+
+1612
+01:08:56,560 --> 01:09:00,520
+and people will have it open source
+
+1613
+01:08:58,920 --> 01:09:01,920
+so you can kind of think about like the
+
+1614
+01:09:00,520 --> 01:09:03,719
+types of evidence that you would need to
+
+1615
+01:09:01,920 --> 01:09:05,440
+convince people to use your work and
+
+1616
+01:09:03,719 --> 01:09:08,040
+that can result in your work being more
+
+1617
+01:09:05,440 --> 01:09:09,319
+impactful in the long run and if you
+
+1618
+01:09:08,040 --> 01:09:10,400
+think about it from the very beginning
+
+1619
+01:09:09,319 --> 01:09:11,839
+that also helps you plan your
+
+1620
+01:09:10,400 --> 01:09:13,520
+experiments like what sort of evidence
+
+1621
+01:09:11,839 --> 01:09:15,359
+is necessary for people to get excited
+
+1622
+01:09:13,520 --> 01:09:18,440
+about it in this
+
+1623
+01:09:15,359 --> 01:09:20,120
+space um another thing that I like to do
+
+1624
+01:09:18,440 --> 01:09:24,000
+with result reporting is result
+
+1625
+01:09:20,120 --> 01:09:26,880
+generation scripts um so uh I
often
+
+1626
+01:09:24,000 --> 01:09:29,159
+generate paper LaTeX directly from log
+
+1627
+01:09:26,880 --> 01:09:31,799
+files uh there's two reasons why I do
+
+1628
+01:09:29,159 --> 01:09:34,480
+this um number one it's efficient and
+
+1629
+01:09:31,799 --> 01:09:36,719
+minimizes errors number two it allows
+
+1630
+01:09:34,480 --> 01:09:39,080
+you to preemptively plan experiments
+
+1631
+01:09:36,719 --> 01:09:41,120
+that you want to run so like for example
+
+1632
+01:09:39,080 --> 01:09:44,440
+if we go back to um the
+
+1633
+01:09:41,120 --> 01:09:46,199
+directory that I talked about before um
+
+1634
+01:09:44,440 --> 01:09:50,359
+I can write
+
+1635
+01:09:46,199 --> 01:09:52,719
+a script that reads in 20 evaluation
+
+1636
+01:09:50,359 --> 01:09:54,800
+results from 20 different directories
+
+1637
+01:09:52,719 --> 01:09:56,920
+and fills in a table and if that
+
+1638
+01:09:54,800 --> 01:09:58,600
+directory doesn't exist yet it will put
+
+1639
+01:09:56,920 --> 01:10:01,239
+like TBD or something like that in the
+
+1640
+01:09:58,600 --> 01:10:03,960
+table so I can very quickly see okay
+
+1641
+01:10:01,239 --> 01:10:05,880
+these things are TBD um oh this thing
+
+1642
+01:10:03,960 --> 01:10:07,480
+has been TBD for a very long time is my
+
+1643
+01:10:05,880 --> 01:10:09,400
+experiment crashed do I need to go back
+
+1644
+01:10:07,480 --> 01:10:12,239
+and like restart my experiment or
+
+1645
+01:10:09,400 --> 01:10:13,719
+something like that so um it's an
+
+1646
+01:10:12,239 --> 01:10:17,280
+efficient way and when you finish the
+
+1647
+01:10:13,719 --> 01:10:17,280
+last TBD it's a very good feeling
+
+1648
+01:10:18,280 --> 01:10:23,719
+also cool um next computational
+
+1649
+01:10:21,760 --> 01:10:26,159
+resources actually I kind of already
+
+1650
+01:10:23,719 --> 01:10:28,600
+talked about this a little bit um but on
+
+1651
+01:10:26,159 --> 01:10:30,280
+Amazon Web Services we have uh class
+
+1652
+01:10:28,600 --> 01:10:32,080
+credits that we're going to be issuing
+
+1653
+01:10:30,280 --> 01:10:34,880
+as soon as uh the assignment one
+
+1654
+01:10:32,080 --> 01:10:37,560
+deadline is over um there's also Google
+
+1655
+01:10:34,880 --> 01:10:39,440
+Cloud and Colab um you can get
+
+1656
+01:10:37,560 --> 01:10:44,000
+commodity GPUs and other things like
+
+1657
+01:10:39,440 --> 01:10:47,800
+that so um you can also consider
+
+1658
+01:10:44,000 --> 01:10:53,159
+that okay let me get into data analysis
+
+1659
+01:10:47,800 --> 01:10:55,440
+um so I'm going to cover this a lot more
+
+1660
+01:10:53,159 --> 01:10:58,480
+in an interpretation lecture and this is
+
+1661
+01:10:55,440 --> 01:10:59,520
+going to be in three classes so this is
+
+1662
+01:10:58,480 --> 01:11:02,239
+going to
+
+1663
+01:10:59,520 --> 01:11:07,000
+be the
+
+1664
+01:11:02,239 --> 01:11:09,719
+Tuesday after next um so uh very
+
+1665
+01:11:07,000 --> 01:11:11,000
+important things though uh look at data
+
+1666
+01:11:09,719 --> 01:11:13,679
+um you'll want to do quantitative
+
+1667
+01:11:11,000 --> 01:11:16,239
+analysis and qualitative analysis um you
+
+1668
+01:11:13,679 --> 01:11:17,440
+can also look at model explanations so
+
+1669
+01:11:16,239 --> 01:11:18,719
+I'm going to cover how to do all of
+
+1670
+01:11:17,440 --> 01:11:21,520
+these things in that lecture I don't
+
+1671
+01:11:18,719 --> 01:11:24,440
+have enough time to do it
+
+1672
+01:11:21,520 --> 01:11:26,960
+today
+
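+Returning to the result-generation scripts described a moment ago, here is a minimal sketch of a table filler that writes TBD for experiments whose output directories don't exist yet; the layout and file names are hypothetical:
+
+import os
+
+# Hypothetical layout: one directory per finished run, each holding a "score" file.
+EXPERIMENTS = [f"exp/train_layer{l}_dropout{d}/eval_f1" for l in (4, 8) for d in (0.1, 0.5)]
+
+def latex_row(path):
+    score_file = os.path.join(path, "score")
+    if os.path.exists(score_file):        # finished run: report the number
+        with open(score_file) as f:
+            return f"{path} & {float(f.read()):.1f} \\\\"
+    return f"{path} & TBD \\\\"           # unfinished run: leave a marker
+
+print("\n".join(latex_row(p) for p in EXPERIMENTS))
+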
+1673
+01:11:24,440 --> 01:11:30,840
+then the final thing is reporting conclusions um this is also too much for
+
+1674
+01:11:26,960 --> 01:11:34,000
+a single class but um I very highly
+
+1675
+01:11:30,840 --> 01:11:35,920
+recommend this lecture um sorry these
+
+1676
+01:11:34,000 --> 01:11:39,320
+lecture slides they don't take that long
+
+1677
+01:11:35,920 --> 01:11:40,880
+to look through they're maybe um 20
+
+1678
+01:11:39,320 --> 01:11:42,880
+minutes or so but they're very very
+
+1679
+01:11:40,880 --> 01:11:45,480
+helpful um they talk about how to
+
+1680
+01:11:42,880 --> 01:11:48,199
+structure a paper and other things like
+
+1681
+01:11:45,480 --> 01:11:51,440
+this and if you follow this advice for
+
+1682
+01:11:48,199 --> 01:11:53,239
+writing your reports for
+
+1683
+01:11:51,440 --> 01:11:54,960
+assignment three and assignment
+
+1684
+01:11:53,239 --> 01:11:57,800
+four even assignment two I think you
+
+1685
+01:11:54,960 --> 01:11:59,400
+can't really go wrong uh actually three
+
+1686
+01:11:57,800 --> 01:12:00,840
+and four is probably better uh than
+
+1687
+01:11:59,400 --> 01:12:03,320
+assignment two since assignment two can be
+
+1688
+01:12:00,840 --> 01:12:05,360
+more descriptive so definitely take a
+
+1689
+01:12:03,320 --> 01:12:08,600
+look at that if
+
+1690
+01:12:05,360 --> 01:12:08,600
+you can cool
\ No newline at end of file
diff --git a/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation/transcript.vtt b/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation/transcript.vtt
new file mode 100644
index 0000000000000000000000000000000000000000..e1e7464165f3121483ce786e055a2a070258a739
--- /dev/null
+++ b/CMU Advanced NLP 2024 (9) Experimental Design and Human Annotation/transcript.vtt
@@ -0,0 +1,5071 @@
+WEBVTT
+
+00:00:00.719 --> 00:00:07.480
+so to get started I want to show an
+
+00:00:04.120 --> 00:00:10.320
+example of the scientific method I took
+
+00:00:07.480 --> 00:00:12.920
+this directly from Wikipedia but it's
+
+00:00:10.320 --> 00:00:15.320
+actually uh pretty nice it's a pretty
+
+00:00:12.920 --> 00:00:17.480
+nice and concise summary of what we
+
+00:00:15.320 --> 00:00:19.439
+should do when we're coming up with new
+
+00:00:17.480 --> 00:00:22.160
+uh kind of research
+
+00:00:19.439 --> 00:00:24.039
+projects and we start with an
+
+00:00:22.160 --> 00:00:26.840
+observation or question we do research
+
+00:00:24.039 --> 00:00:28.599
+on the topic area we form a hypothesis
+
+00:00:26.840 --> 00:00:31.439
+we test it with an experiment analyze
+
+00:00:28.599 --> 00:00:33.600
+data and report conclusions
+
+00:00:31.439 --> 00:00:35.640
+and even if we're doing kind of an
+
+00:00:33.600 --> 00:00:37.480
+engineering-based project still
+
+00:00:35.640 --> 00:00:42.079
+thinking of the stuff that we're doing
+
+00:00:37.480 --> 00:00:44.399
+in a framework like this can help you a
+
+00:00:42.079 --> 00:00:46.079
+lot so uh the first thing I'd like to
+
+00:00:44.399 --> 00:00:49.120
+talk about is identifying good research
+
+00:00:46.079 --> 00:00:51.800
+directions and so I'm going to look at
+
+00:00:49.120 --> 00:00:53.640
+that from the observation and question
+
+00:00:51.800 --> 00:00:56.320
+perspective
+
+00:00:53.640 --> 00:00:58.480
+here so if we think about why we do
+
+00:00:56.320 --> 00:01:01.160
+research uh particularly why we do
+
+00:00:58.480 --> 00:01:04.199
+research on natural language processing
+
+00:01:01.160 --> 00:01:07.159
+um there's a couple of reasons why the
+
+00:01:04.199 --> 00:01:09.439
+first is application driven research and
+
+00:01:07.159 --> 00:01:13.159
+usually this is I would like to make a
+
+00:01:09.439 --> 00:01:15.040
+useful system or make one work better so
+
+00:01:13.159 --> 00:01:18.479
+uh you know this is probably the great
+
+00:01:15.040 --> 00:01:20.280
+majority of NLP research then separately
+
+00:01:18.479 --> 00:01:21.960
+from that there's curiosity driven
+
+00:01:20.280 --> 00:01:24.560
+research which is like I would like to
+
+00:01:21.960 --> 00:01:27.360
+know more about language or the world
+
+00:01:24.560 --> 00:01:29.159
+viewed through language and so this
+
+00:01:27.360 --> 00:01:31.840
+doesn't necessarily have to mean
+
+00:01:29.159 --> 00:01:31.840
+that
+
+00:01:32.000 --> 00:01:37.280
+a downstream application that users
+
+00:01:35.399 --> 00:01:39.159
+are using will immediately get better
+
+00:01:37.280 --> 00:01:40.439
+it's more like we have a burning
+
+00:01:39.159 --> 00:01:43.159
+question that we would like to answer
+
+00:01:40.439 --> 00:01:47.399
+and we want to answer
+
+00:01:43.159 --> 00:01:48.640
+it so NLP encompasses both uh sometimes
+
+00:01:47.399 --> 00:01:50.479
+if you read a paper you'll have
+
+00:01:48.640 --> 00:01:54.360
+something that's doing both uh
+
+00:01:50.479 --> 00:01:56.439
+especially like analyzing the internals
+
+00:01:54.360 --> 00:01:58.079
+or training dynamics of a neural
+
+00:01:56.439 --> 00:01:59.920
+network to answer a curiosity-driven
+
+00:01:58.079 --> 00:02:02.439
+question and then applying that to come
+
+00:01:59.920 --> 00:02:04.840
+up with a better method that makes work
+
+00:02:02.439 --> 00:02:06.560
+better I would like to say though that
+
+00:02:04.840 --> 00:02:09.119
+it's kind of rare that there's a paper
+
+00:02:06.560 --> 00:02:10.879
+that does both of them really well uh
+
+00:02:09.119 --> 00:02:13.160
+and so usually one of them is kind of
+
+00:02:10.879 --> 00:02:14.599
+the main focus and I think you can be
+
+00:02:13.160 --> 00:02:17.680
+well served by choosing which one is
+
+00:02:14.599 --> 00:02:20.560
+your main focus and then kind of uh the
+
+00:02:17.680 --> 00:02:23.560
+other might come as an additional uh
+
+00:02:20.560 --> 00:02:23.560
+bonus on top of
+
+00:02:23.920 --> 00:02:28.760
+that so here are a few examples of
+
+00:02:27.160 --> 00:02:32.800
+application driven
+
+00:02:28.760 --> 00:02:35.239
+research so for example Pang et al uh
+
+00:02:32.800 --> 00:02:37.840
+they proposed the task of sentiment
+
+00:02:35.239 --> 00:02:39.879
+analysis um so actually there was a
+
+00:02:37.840 --> 00:02:41.879
+paper 22 years ago that proposed the
+
+00:02:39.879 --> 00:02:44.879
+task of sentiment analysis it might seem
+
+00:02:41.879 --> 00:02:46.760
+very you know normal nowadays but uh
+
+00:02:44.879 --> 00:02:49.519
+there was a paper that proposed it back
+
+00:02:46.760 --> 00:02:52.840
+then and they proposed sentiment
+
+00:02:49.519 --> 00:02:54.200
+analysis because um labeling articles
+
+00:02:52.840 --> 00:02:57.480
+with their sentiment would provide
+
+00:02:54.200 --> 00:02:59.760
+succinct summaries to the readers um so
+
+00:02:57.480 --> 00:03:03.319
+they basically wanted to provide
+
+00:02:59.760 --> 00:03:03.319
+information to readers and that would be
+
+00:03:03.400 --> 00:03:09.000
+useful another paper by Reddy et al
+
+00:03:06.440 --> 00:03:11.519
+2019 proposes a task of conversational
+
+00:03:09.000 --> 00:03:13.640
+question answering uh because an
+00:03:11.519 --> 00:03:15.599
+inability to build and maintain common
+
+00:03:13.640 --> 00:03:17.680
+ground is part of the reason why virtual
+
+00:03:15.599 --> 00:03:20.159
+assistants usually don't seem like
+
+00:03:17.680 --> 00:03:22.040
+competent conversational partners so
+
+00:03:20.159 --> 00:03:24.519
+when you're talking to your Alexa or
+
+00:03:22.040 --> 00:03:27.000
+your Google uh Home or something like
+
+00:03:24.519 --> 00:03:28.599
+this you might ask it a question and
+
+00:03:27.000 --> 00:03:30.120
+then after you've asked it a question you
+
+00:03:28.599 --> 00:03:31.480
+ask it another question but it doesn't
+
+00:03:30.120 --> 00:03:32.879
+go back to the context that you had
+
+00:03:31.480 --> 00:03:34.519
+before and they wanted to solve this
+
+00:03:32.879 --> 00:03:36.040
+problem so they proposed this data set
+
+00:03:34.519 --> 00:03:40.000
+for
+
+00:03:36.040 --> 00:03:41.720
+it um Gehrmann et al proposed a method for bottom-
+
+00:03:40.000 --> 00:03:43.159
+up abstractive summarization because
+
+00:03:41.720 --> 00:03:44.760
+neural network-based methods for
+
+00:03:43.159 --> 00:03:46.879
+abstractive summarization produce
+
+00:03:44.760 --> 00:03:49.000
+outputs that are fluent but perform
+
+00:03:46.879 --> 00:03:51.120
+poorly at content selection so they had a
+
+00:03:49.000 --> 00:03:53.000
+problem they had a task already in mind
+
+00:03:51.120 --> 00:03:54.239
+they weren't proposing a new task and
+
+00:03:53.000 --> 00:03:56.040
+there was a problem with the
+
+00:03:54.239 --> 00:03:58.760
+existing system so they fixed
+
+00:03:56.040 --> 00:04:00.400
+it and then Kudo and Richardson proposed
+
+00:03:58.760 --> 00:04:02.920
+a method for unsupervised word
+
+00:04:00.400 --> 00:04:04.799
+segmentation namely SentencePiece uh
+
+00:04:02.920 --> 00:04:06.439
+because language-dependent processing
+
+00:04:04.799 --> 00:04:08.920
+makes it hard to train multilingual
+
+00:04:06.439 --> 00:04:10.360
+models as we have to carefully manage
+
+00:04:08.920 --> 00:04:12.720
+the configurations of pre- and
+
+00:04:10.360 --> 00:04:15.879
+post-processors per language so they
+
+00:04:12.720 --> 00:04:17.519
+tried to make things easier uh so like
+
+00:04:15.879 --> 00:04:19.600
+you can see all of these things like the
+
+00:04:17.519 --> 00:04:21.919
+first two are proposing new tasks to
+
+00:04:19.600 --> 00:04:23.880
+solve and they're doing it from the
+
+00:04:21.919 --> 00:04:25.919
+point of view of uh creating something
+
+00:04:23.880 --> 00:04:29.120
+useful for users the second two are
+
+00:04:25.919 --> 00:04:30.440
+proposing new methods the first one is
+
+00:04:29.120 --> 00:04:34.360
+like improving
+
+00:04:30.440 --> 00:04:36.320
+accuracy um so this is the most
+
+00:04:34.360 --> 00:04:37.639
+common case most commonly people say I have a
+
+00:04:36.320 --> 00:04:39.120
+task that I want to solve there's a
+
+00:04:37.639 --> 00:04:41.280
+problem with accuracy I want to improve
+
+00:04:39.120 --> 00:04:43.960
+it but you can also improve other things
+
+00:04:41.280 --> 00:04:45.880
+so you can improve like convenience or
+
+00:04:43.960 --> 00:04:47.320
+uh you can improve efficiency or
+
+00:04:45.880 --> 00:04:51.720
+other things like that so all of those
+
+00:04:47.320 --> 00:04:51.720
+are you know perfectly reasonable
+
+00:04:52.120 --> 00:04:57.320
+things I also have some examples of
+
+00:04:54.639 --> 00:04:59.120
+curiosity-driven research these are
+
+00:04:57.320 --> 00:05:00.360
+actually
harder to find in the ACL
+
+00:04:59.120 --> 00:05:03.120
+Anthology
+
+00:05:00.360 --> 00:05:06.400
+it's definitely the minority case but
+
+00:05:03.120 --> 00:05:09.160
+they still do exist um so for example
+
+00:05:06.400 --> 00:05:10.960
+Rashkin et al 2017 asked what is the
+
+00:05:09.160 --> 00:05:13.800
+difference between the language of real
+
+00:05:10.960 --> 00:05:17.000
+news and that of satire hoaxes and
+
+00:05:13.800 --> 00:05:18.800
+propaganda so they were not attempting
+
+00:05:17.000 --> 00:05:21.039
+to create a system for fake news
+
+00:05:18.800 --> 00:05:23.199
+detection that was not their goal here
+
+00:05:21.039 --> 00:05:24.600
+their goal was just to figure
+
+00:05:23.199 --> 00:05:26.240
+out what were the different linguistic
+
+00:05:24.600 --> 00:05:28.000
+characteristics and they found that
+
+00:05:26.240 --> 00:05:29.720
+scientifically interesting maybe
+
+00:05:28.000 --> 00:05:31.280
+downstream that would be useful but that
+
+00:05:29.720 --> 00:05:35.080
+wasn't the point of their
+
+00:05:31.280 --> 00:05:36.960
+paper another one uh Cotterell et al ask
+
+00:05:35.080 --> 00:05:38.960
+are all languages equally hard to
+
+00:05:36.960 --> 00:05:41.000
+language model and so basically they
+
+00:05:38.960 --> 00:05:42.440
+wanted to know are all languages just
+
+00:05:41.000 --> 00:05:45.520
+character strings and so language
+
+00:05:42.440 --> 00:05:47.479
+modeling them is uh similarly easy or
+
+00:05:45.520 --> 00:05:49.120
+are there certain characteristics of
+
+00:05:47.479 --> 00:05:51.080
+language that make them easier or harder
+
+00:05:49.120 --> 00:05:54.000
+to model with the current architectures
+
+00:05:51.080 --> 00:05:55.520
+that we have um and so they didn't
+
+00:05:54.000 --> 00:05:57.039
+propose a new architecture they didn't
+
+00:05:55.520 --> 00:06:00.479
+propose to improve anything they just
+
+00:05:57.039 --> 00:06:02.400
+proposed to examine this question
+
+00:06:00.479 --> 00:06:04.280
+um and also Tenney et al this is
+
+00:06:02.400 --> 00:06:06.880
+actually an extremely impactful work
+
+00:06:04.280 --> 00:06:09.319
+downstream but uh they weren't improving
+
+00:06:06.880 --> 00:06:11.520
+anything they just quantified where
+
+00:06:09.319 --> 00:06:14.440
+specific types of linguistic information
+
+00:06:11.520 --> 00:06:16.720
+are encoded in BERT so they found that
+
+00:06:14.440 --> 00:06:18.840
+for example syntax was encoded better in
+
+00:06:16.720 --> 00:06:20.560
+the early layers semantics in the later
+
+00:06:18.840 --> 00:06:22.520
+layers and then if you go further you
+
+00:06:20.560 --> 00:06:25.280
+have other fine-grained things like
+
+00:06:22.520 --> 00:06:27.599
+pragmatic and style
+
+00:06:25.280 --> 00:06:30.400
+information so I think you can kind of
+
+00:06:27.599 --> 00:06:32.120
+see the difference between these two um
+
+00:06:30.400 --> 00:06:34.800
+are there any questions
+
+00:06:32.120 --> 00:06:40.199
+about
+
+00:06:34.800 --> 00:06:41.720
+this no okay let's leave it at that so the next
+
+00:06:40.199 --> 00:06:43.680
+question which I think a lot of people
+
+00:06:41.720 --> 00:06:46.240
+might be asking particularly with
+
+00:06:43.680 --> 00:06:47.720
+respect to assignment 4 which requires
+
+00:06:46.240 --> 00:06:51.039
+you to come up with something novel to
+
+00:06:47.720 --> 00:06:53.240
+do is how do we uh get research
+
+00:06:51.039 --> 00:06:57.360
+ideas
+
+00:06:53.240 --> 00:07:02.280
+and the way we can do this is uh twofold
+
+00:06:57.360 --> 00:07:04.479
+so um one is kind of we want to turn a
+
+00:07:02.280 --> 00:07:07.120
+concrete understanding of existing
+
+00:07:04.479 --> 00:07:10.120
+research's failings into a higher level
+
+00:07:07.120 --> 00:07:12.560
+experimental question and the two ways
+
+00:07:10.120 --> 00:07:15.240
+that I normally characterize doing this
+
+00:07:12.560 --> 00:07:19.319
+are bottom-up discovery of research
+
+00:07:15.240 --> 00:07:21.080
+ideas um or the way I
+
+00:07:19.319 --> 00:07:24.479
+characterize this is bottom-up discovery
+
+00:07:21.080 --> 00:07:27.000
+of research ideas and this is a great
+
+00:07:24.479 --> 00:07:29.120
+tool for making incremental progress on
+
+00:07:27.000 --> 00:07:32.039
+existing systems on tasks that we really
+
+00:07:29.120 --> 00:07:35.400
+care about or expanding the scope of a
+
+00:07:32.039 --> 00:07:37.680
+task that we care about so uh some
+
+00:07:35.400 --> 00:07:41.879
+examples of this would be like in
+
+00:07:37.680 --> 00:07:45.639
+assignment number three uh look
+
+00:07:41.879 --> 00:07:47.720
+let's say you're looking at
+
+00:07:45.639 --> 00:07:50.159
+um at the
+
+00:07:47.720 --> 00:07:53.840
+question answering performance
+
+00:07:50.159 --> 00:07:58.280
+of multilingual models on
+
+00:07:53.840 --> 00:08:01.479
+different languages um and for
+
+00:07:58.280 --> 00:08:03.159
+assignment three you implement a couple
+
+00:08:01.479 --> 00:08:05.240
+multilingual models on different
+
+00:08:03.159 --> 00:08:06.560
+languages you run them you look at the
+
+00:08:05.240 --> 00:08:08.400
+results and you identify that
+
+00:08:06.560 --> 00:08:10.080
+multilingual models are particularly bad
+
+00:08:08.400 --> 00:08:12.919
+at answering questions about named
+
+00:08:10.080 --> 00:08:14.680
+entities and so now you have looked at
+
+00:08:12.919 --> 00:08:17.759
+the output you have decided that that's
+
+00:08:14.680 --> 00:08:20.199
+a big problem um you can go in and
+
+00:08:17.759 --> 00:08:22.080
+improve it so this is a great tool for
+
+00:08:20.199 --> 00:08:23.720
+incremental progress and like in fact
+
+00:08:22.080 --> 00:08:26.520
+doing this really effectively has been
+
+00:08:23.720 --> 00:08:31.000
+very effective in my own research career
+
+00:08:26.520 --> 00:08:34.680
+like uh I feel like I like to
+
+00:08:31.000 --> 00:08:36.279
+look at data I try to do that a lot and
+
+00:08:34.680 --> 00:08:38.440
+by doing that I identify the most
+
+00:08:36.279 --> 00:08:40.200
+frequent problems and because of that
+
+00:08:38.440 --> 00:08:42.039
+when I fix those problems my accuracy
+
+00:08:40.200 --> 00:08:44.560
+goes up a lot more than people who pick
+
+00:08:42.039 --> 00:08:46.880
+the less good problems right and so if
+
+00:08:44.560 --> 00:08:49.440
+we want our accuracy to go up uh I'm
+
+00:08:46.880 --> 00:08:51.360
+more efficient at you know improving
+
+00:08:49.440 --> 00:08:53.240
+things on the other hand there's
+
+00:08:51.360 --> 00:08:55.399
+something coming from the opposite direction
+
+00:08:53.240 --> 00:08:57.080
+which is moving from a higher level question
+
+00:08:55.399 --> 00:08:57.800
+to a lower level concrete testing of
+
+00:08:57.080 --> 00:09:00.120
+that
+
+00:08:57.800 --> 00:09:01.760
+question um so this could be top-down
+
+00:09:00.120 --> 00:09:02.760
+design this is top-down design of
+
+00:09:01.760 --> 00:09:06.360
+research
+
+00:09:02.760 --> 00:09:08.399
+ideas this favors bigger
ideas but these
+
+00:09:06.360 --> 00:09:10.240
+ideas can be disconnected from reality
+
+00:09:08.399 --> 00:09:13.880
+or they could be not solving the right
+
+00:09:10.240 --> 00:09:17.079
+problems so the typical like very very
+
+00:09:13.880 --> 00:09:18.800
+successful example of this is um neural
+
+00:09:17.079 --> 00:09:20.800
+machine translation or something like
+
+00:09:18.800 --> 00:09:22.720
+this neural machine translation or neural
+
+00:09:20.800 --> 00:09:26.399
+sequence-to-sequence
+
+00:09:22.720 --> 00:09:30.040
+models this came out of a few people
+
+00:09:26.399 --> 00:09:32.040
+like Geoff Hinton and Yoshua Bengio
+
+00:09:30.040 --> 00:09:33.480
+believing for a very long time that
+
+00:09:32.040 --> 00:09:35.760
+neural networks were the right way to
+
+00:09:33.480 --> 00:09:37.800
+solve lots of problems uh despite the
+
+00:09:35.760 --> 00:09:39.640
+fact that there wasn't like super
+
+00:09:37.800 --> 00:09:42.279
+concrete evidence of that for a long
+
+00:09:39.640 --> 00:09:43.399
+time and so they had this idea which was
+
+00:09:42.279 --> 00:09:47.399
+like we should be doing things with
+
+00:09:43.399 --> 00:09:49.440
+neural networks and uh they you know
+
+00:09:47.399 --> 00:09:50.720
+they successfully executed that and now
+
+00:09:49.440 --> 00:09:52.200
+everybody is doing things with neural
+
+00:09:50.720 --> 00:09:56.560
+networks so they made a really huge
+
+00:09:52.200 --> 00:09:58.160
+revolution in the research space um
+
+00:09:56.560 --> 00:09:59.720
+that's great that's a great example of a
+
+00:09:58.160 --> 00:10:02.839
+successful top-down idea but the
+
+00:09:59.720 --> 00:10:05.519
+problem is uh for every example like
+
+00:10:02.839 --> 00:10:07.560
+that there's a thousand uh top-down
+
+00:10:05.519 --> 00:10:10.760
+ideas in the graveyard of not being very
+
+00:10:07.560 --> 00:10:12.600
+you know effective so I think um in
+
+00:10:10.760 --> 00:10:14.519
+order to do something like this you
+
+00:10:12.600 --> 00:10:16.200
+better have a very strong conviction or
+
+00:10:14.519 --> 00:10:18.079
+you better have maybe some initial
+
+00:10:16.200 --> 00:10:20.920
+evidence or a very strong intuition
+
+00:10:18.079 --> 00:10:22.320
+about why this might be a good idea and
+
+00:10:20.920 --> 00:10:25.240
+uh you would be able to test that
+
+00:10:22.320 --> 00:10:27.240
+intuition through intermediate steps uh
+
+00:10:25.240 --> 00:10:31.040
+to demonstrate like through toy data
+
+00:10:27.240 --> 00:10:31.040
+or other stuff like that
+
+00:10:31.720 --> 00:10:38.360
+um cool so these are kind of the general
+
+00:10:36.360 --> 00:10:40.839
+ways that we can come up with research
+
+00:10:38.360 --> 00:10:42.519
+ideas the next thing that we want to do
+
+00:10:40.839 --> 00:10:44.480
+is research our topic area were there
+
+00:10:42.519 --> 00:10:46.720
+any questions about bottom-up versus top-
+
+00:10:44.480 --> 00:10:49.120
+down I'm going to talk about effective
+
+00:10:46.720 --> 00:10:51.920
+strategies for bottom-up stuff in uh in
+
+00:10:49.120 --> 00:10:54.360
+two weeks uh so we can talk more about
+
+00:10:51.920 --> 00:10:56.800
+that then
+
+00:10:54.360 --> 00:11:00.959
+but okay if not I'll move
+
+00:10:56.800 --> 00:11:05.079
+on so next uh we have research topic
+
+00:11:00.959 --> 00:11:07.360
+areas so this is about how you will do
+
+00:11:05.079 --> 00:11:10.320
+assignment three which is researching uh
+
+00:11:07.360 --> 00:11:13.240
+topic area and forming a very good
+00:11:10.320 --> 00:11:15.680 +understanding of the topic that you're + +00:11:13.240 --> 00:11:18.800 +trying to handle and so there's a bunch + +00:11:15.680 --> 00:11:22.800 +of different ways you can do this uh the + +00:11:18.800 --> 00:11:25.680 +first one is keyword search and so you + +00:11:22.800 --> 00:11:27.839 +look something up on Google Scholar or + +00:11:25.680 --> 00:11:29.480 +something uh finding older and newer + +00:11:27.839 --> 00:11:32.880 +papers so this is like following the + +00:11:29.480 --> 00:11:35.360 +tracks of papers you can uh read the + +00:11:32.880 --> 00:11:39.160 +abstract and intro uh read the details + +00:11:35.360 --> 00:11:43.760 +of most relevant papers and I don't do + +00:11:39.160 --> 00:11:45.440 +this as much now but um when I was a + +00:11:43.760 --> 00:11:47.360 +graduate student I would often make a + +00:11:45.440 --> 00:11:49.800 +short summary of the paper to make sure + +00:11:47.360 --> 00:11:54.680 +I really understood the details uh + +00:11:49.800 --> 00:11:56.000 +because also now I teach a class um and + +00:11:54.680 --> 00:11:58.240 +actually making these slides is very + +00:11:56.000 --> 00:12:00.120 +useful for me so going back into the + +00:11:58.240 --> 00:12:03.440 +Transformer slide slides you know that + +00:12:00.120 --> 00:12:05.160 +kind of serves as my um you know my way + +00:12:03.440 --> 00:12:06.800 +of digesting papers and making sure that + +00:12:05.160 --> 00:12:08.160 +I can explain them and if you're not + +00:12:06.800 --> 00:12:10.480 +teaching a class and you can go in and + +00:12:08.160 --> 00:12:13.560 +make a summary into it yourselves so + +00:12:10.480 --> 00:12:16.480 +that can confirm uh solidify your memory + +00:12:13.560 --> 00:12:19.360 +and like confirm your uh ability to + +00:12:16.480 --> 00:12:19.360 +understand everything that's in + +00:12:20.639 --> 00:12:27.120 +there cool um so next I'd like to talk + +00:12:23.639 --> 00:12:29.600 +about some sources of papers in NLP um + +00:12:27.120 --> 00:12:31.800 +one really good source uh is the ACL + +00:12:29.600 --> 00:12:33.720 +Anthology another good source is Google + +00:12:31.800 --> 00:12:36.120 +Scholar um they both have their + +00:12:33.720 --> 00:12:37.959 +advantages and their disadvantages um + +00:12:36.120 --> 00:12:39.800 +increasingly actually I realized now + +00:12:37.959 --> 00:12:41.959 +that I should add this to my slides but + +00:12:39.800 --> 00:12:43.639 +increasingly a lot of good uh papers in + +00:12:41.959 --> 00:12:47.120 +NLP are also published in machine + +00:12:43.639 --> 00:12:51.199 +learning conferences so like icml or NPS + +00:12:47.120 --> 00:12:53.040 +or um uh I clear or things like that the + +00:12:51.199 --> 00:12:54.920 +problem is the ACL Anthology is way + +00:12:53.040 --> 00:12:56.600 +better than any of them at like + +00:12:54.920 --> 00:13:00.360 +organizing the papers in an easy to + +00:12:56.600 --> 00:13:03.560 +process way so I I think um I I'll talk + +00:13:00.360 --> 00:13:06.000 +about this uh for now and so the ACL + +00:13:03.560 --> 00:13:08.800 +Anthology covers many uh prestigious + +00:13:06.000 --> 00:13:11.639 +venues in NLP it has all of these ones + +00:13:08.800 --> 00:13:15.160 +here this figure is a little bit old uh + +00:13:11.639 --> 00:13:18.839 +I I made it in 21 2021 but you know it + +00:13:15.160 --> 00:13:22.959 +reaches up to the present day and what I + +00:13:18.839 --> 00:13:25.880 +do often is I can start with the past 3 + +00:13:22.959 --> 
00:13:30.160
+to 5 years of several top venues in here
+
+00:13:25.880 --> 00:13:33.880
+like ACL EMNLP uh NAACL and TACL and
+
+00:13:30.160 --> 00:13:36.360
+go in and do uh keyword search and so
+
+00:13:33.880 --> 00:13:36.360
+like let's
+
+00:13:38.760 --> 00:13:43.600
+say let's say I was interested in
+
+00:13:44.639 --> 00:13:49.519
+multilingual multilingual large language
+
+00:13:47.600 --> 00:13:52.079
+models and evaluating them or some way
+
+00:13:49.519 --> 00:13:54.279
+so I would go to ACL and then I would
+
+00:13:52.079 --> 00:13:57.560
+just put in multi
+
+00:13:54.279 --> 00:14:01.360
+lingual um and you get a wonderful paper
+
+00:13:57.560 --> 00:14:01.360
+by a certain researcher named
+
+00:14:01.480 --> 00:14:06.440
+that was not intentional I didn't
+
+00:14:03.639 --> 00:14:08.800
+know that was going to happen but um so
+
+00:14:06.440 --> 00:14:11.240
+On-the-Fly Cross-lingual Masking for
+
+00:14:08.800 --> 00:14:12.959
+Multilingual Pre-training um Scaling
+
+00:14:11.240 --> 00:14:15.040
+Multilingual Corpora and Language Models
+
+00:14:12.959 --> 00:14:18.120
+to 500 Languages that seems pretty
+
+00:14:15.040 --> 00:14:19.880
+pretty relevant Evaluating Multilingual
+
+00:14:18.120 --> 00:14:22.000
+Compositional Generalization so you can
+
+00:14:19.880 --> 00:14:27.680
+just go through here and see a bunch of
+
+00:14:22.000 --> 00:14:30.680
+papers that like um that could be
+
+00:14:27.680 --> 00:14:30.680
+useful
+
+00:14:32.240 --> 00:14:35.199
+and you could uh if you're doing a more
+
+00:14:33.800 --> 00:14:36.920
+machine learning oriented thing you can
+
+00:14:35.199 --> 00:14:38.920
+do the same thing for like the NeurIPS
+
+00:14:36.920 --> 00:14:41.480
+proceedings or the ICML proceedings or
+
+00:14:38.920 --> 00:14:41.480
+something like
+
+00:14:41.800 --> 00:14:48.120
+that um separately from this you can go
+
+00:14:44.839 --> 00:14:50.920
+through Google Scholar um this allows
+
+00:14:48.120 --> 00:14:52.560
+for a search of papers by keyword and so
+
+00:14:50.920 --> 00:14:54.440
+if I write like neural entity
+
+00:14:52.560 --> 00:14:56.360
+recognition it will give Neural
+
+00:14:54.440 --> 00:15:00.040
+Architectures for Named Entity Recognition
+
+00:14:56.360 --> 00:15:03.399
+all of these things like this um you can
+
+00:15:00.040 --> 00:15:06.800
+view the more recent papers so like for
+
+00:15:03.399 --> 00:15:10.120
+example uh if you're researching a kind
+
+00:15:06.800 --> 00:15:12.759
+of generic topic that a lot of people
+
+00:15:10.120 --> 00:15:14.639
+uh a lot of people do research on
+
+00:15:12.759 --> 00:15:18.399
+you might be getting papers from like
+
+00:15:14.639 --> 00:15:19.920
+1998 or something like this and you know
+
+00:15:18.399 --> 00:15:21.639
+they might be useful but honestly the
+
+00:15:19.920 --> 00:15:23.519
+methodology has changed so much since
+
+00:15:21.639 --> 00:15:24.680
+then that most methods papers from
+
+00:15:23.519 --> 00:15:26.959
+that long ago are probably not going to
+
+00:15:24.680 --> 00:15:29.480
+be very useful um so you can view the
+
+00:15:26.959 --> 00:15:31.079
+recent papers another really useful
+
+00:15:29.480 --> 00:15:33.759
+thing that you can do is view papers
+
+00:15:31.079 --> 00:15:35.319
+that cite the current paper and you can
+
+00:15:33.759 --> 00:15:39.560
+even click on this and then you can
+
+00:15:35.319 --> 00:15:42.519
+search within the citing papers so
+
+00:15:39.560 --> 00:15:44.399
+um like let's say I want to know about
+
+00:15:42.519 --> 00:15:45.620
+how
+
+00:15:44.399 --> 00:15:48.730
+people
+
+00:15:50.720 --> 00:15:55.720
+do let's say I want to see if anybody
+
+00:15:53.199 --> 00:15:59.639
+does neural entity recognition with uh
+
+00:15:55.720 --> 00:16:02.160
+state space models so I type in state
+
+00:15:59.639 --> 00:16:05.399
+space
+
+00:16:02.160 --> 00:16:09.040
+model and then I search within the
+
+00:16:05.399 --> 00:16:12.279
+citing articles and I'm able to find
+
+00:16:09.040 --> 00:16:14.319
+three articles that at least cite this
+
+00:16:12.279 --> 00:16:17.759
+paper and talk about state space
+
+00:16:14.319 --> 00:16:20.319
+models so
+
+00:16:17.759 --> 00:16:21.600
+um none of these seem particularly
+
+00:16:20.319 --> 00:16:23.240
+relevant to what I was looking for but
+
+00:16:21.600 --> 00:16:26.800
+you get the idea like this can be a
+
+00:16:23.240 --> 00:16:26.800
+useful tool for finding more recent
+
+00:16:27.519 --> 00:16:30.519
+things
+
+00:16:33.639 --> 00:16:40.480
+and then finding older papers this is
+
+00:16:36.279 --> 00:16:42.839
+also relatively easy um so you read the
+
+00:16:40.480 --> 00:16:44.319
+papers that you're interested in and
+
+00:16:42.839 --> 00:16:45.480
+they will have backlinks to older
+
+00:16:44.319 --> 00:16:47.519
+papers and you look them up in the
+
+00:16:45.480 --> 00:16:50.000
+references this is how I find older
+
+00:16:47.519 --> 00:16:53.600
+papers that might be
+
+00:16:50.000 --> 00:16:57.800
+relevant um and so these are the
+
+00:16:53.600 --> 00:16:59.720
+tools that I use um I'd
+
+00:16:57.800 --> 00:17:03.600
+like to give a few caveats about Google
+
+00:16:59.720 --> 00:17:06.120
+Scholar and uh things like Twitter or
+
+00:17:03.600 --> 00:17:08.360
+LinkedIn or something like this they
+
+00:17:06.120 --> 00:17:10.720
+give you very biased views on all the
+
+00:17:08.360 --> 00:17:14.600
+papers that are out there um because
+
+00:17:10.720 --> 00:17:16.919
+they sort for popularity basically so um
+
+00:17:14.600 --> 00:17:19.439
+actually if you're looking at like
+
+00:17:16.919 --> 00:17:22.000
+Twitter or LinkedIn or something like
+
+00:17:19.439 --> 00:17:23.679
+that you can actually get a pretty bleak
+
+00:17:22.000 --> 00:17:25.360
+view of natural language processing and
+
+00:17:23.679 --> 00:17:28.000
+say all anybody is doing is training
+
+00:17:25.360 --> 00:17:30.080
+large language models because you know
+
+00:17:28.000 --> 00:17:31.720
+these things tend to become you know
+
+00:17:30.080 --> 00:17:33.520
+popular and then they get amplified by
+
+00:17:31.720 --> 00:17:35.840
+algorithms and stuff like that when in
+
+00:17:33.520 --> 00:17:37.440
+fact like the landscape is much richer
+
+00:17:35.840 --> 00:17:40.400
+which is why I do definitely suggest
+
+00:17:37.440 --> 00:17:42.000
+that you like actually look through uh
+
+00:17:40.400 --> 00:17:43.880
+conference proceedings and stuff and
+
+00:17:42.000 --> 00:17:46.720
+find papers that are not you know
+
+00:17:43.880 --> 00:17:48.520
+amplified as much so um I definitely
+
+00:17:46.720 --> 00:17:50.840
+highly recommend doing this in addition
+
+00:17:48.520 --> 00:17:52.480
+to you know Google Scholar or social
+
+00:17:50.840 --> 00:17:54.640
+media or other things like that that
+
+00:17:52.480 --> 00:17:54.640
+might
+
+00:17:56.600 --> 00:18:01.760
+be cool um I'd also like to mention a
+
+00:18:00.200 --> 00:18:04.000
+thing about the ups
and downs of + +00:18:01.760 --> 00:18:07.559 +preemptive surveys + +00:18:04.000 --> 00:18:10.440 +so um surveying extensively before doing + +00:18:07.559 --> 00:18:12.840 +research uh has a bunch of good sides so + +00:18:10.440 --> 00:18:14.000 +it prevents you from duplicating work so + +00:18:12.840 --> 00:18:15.039 +somebody else might have done a very + +00:18:14.000 --> 00:18:18.080 +similar + +00:18:15.039 --> 00:18:20.480 +thing um it also increases your toolbox + +00:18:18.080 --> 00:18:21.600 +of methods so you know if it's a problem + +00:18:20.480 --> 00:18:25.400 +that a lot of people have worked on + +00:18:21.600 --> 00:18:27.120 +before then you know it helps uh give + +00:18:25.400 --> 00:18:30.320 +you ideas of methods that you could be + +00:18:27.120 --> 00:18:35.600 +using um however in a way it also kind + +00:18:30.320 --> 00:18:38.720 +of constrains your thinking so um if you + +00:18:35.600 --> 00:18:42.480 +like on once you have built up a very + +00:18:38.720 --> 00:18:45.440 +extensive survey of like ways to do + +00:18:42.480 --> 00:18:47.240 +things you tend to like move away from + +00:18:45.440 --> 00:18:48.799 +there when in fact like if you thought + +00:18:47.240 --> 00:18:50.080 +just thought of ways to solve problems + +00:18:48.799 --> 00:18:52.360 +without looking at everything you might + +00:18:50.080 --> 00:18:54.799 +come up with something over here might + +00:18:52.360 --> 00:18:56.400 +actually be a good idea right um and so + +00:18:54.799 --> 00:18:58.600 +there's this really nice essay it was + +00:18:56.400 --> 00:19:00.799 +actually shared uh shared with me by + +00:18:58.600 --> 00:19:02.440 +Chris Manning from Sanford um it's + +00:19:00.799 --> 00:19:04.720 +called how to build an economics model + +00:19:02.440 --> 00:19:06.679 +in your spare time it's about it's from + +00:19:04.720 --> 00:19:08.880 +a Nobel Prize winner in economics but + +00:19:06.679 --> 00:19:10.480 +he's talking about how when he tries to + +00:19:08.880 --> 00:19:13.039 +come up with new and like important + +00:19:10.480 --> 00:19:15.840 +ideas he doesn't look at economics + +00:19:13.039 --> 00:19:19.679 +journals he looks at the newspaper and + +00:19:15.840 --> 00:19:21.919 +tries to uh you know + +00:19:19.679 --> 00:19:23.480 +like look at problems that people are + +00:19:21.919 --> 00:19:24.840 +talking about in the newspaper and think + +00:19:23.480 --> 00:19:27.159 +about whether there's an economic + +00:19:24.840 --> 00:19:29.919 +solution to them and so if we think + +00:19:27.159 --> 00:19:32.880 +about the anal of how we can do this in + +00:19:29.919 --> 00:19:35.600 +natural language processing you know + +00:19:32.880 --> 00:19:37.360 +maybe you don't necessarily right away + +00:19:35.600 --> 00:19:38.799 +want to do a really extensive survey + +00:19:37.360 --> 00:19:41.080 +first you might just think about like + +00:19:38.799 --> 00:19:44.080 +what's bothering you like when you're + +00:19:41.080 --> 00:19:46.799 +using chat GPT what is really + +00:19:44.080 --> 00:19:49.600 +frustrating to you uh about how it gives + +00:19:46.799 --> 00:19:51.280 +responses or um what are the things you + +00:19:49.600 --> 00:19:53.159 +wish it were possible to do through + +00:19:51.280 --> 00:19:56.240 +natural language processing but not are + +00:19:53.159 --> 00:19:57.640 +not possible to do and um then you can + +00:19:56.240 --> 00:20:00.679 +start from there you can look at you + +00:19:57.640 --> 00:20:03.440 +know what companies are doing 
in their + +00:20:00.679 --> 00:20:05.799 +Tech demos uh because the tech demos + +00:20:03.440 --> 00:20:08.640 +might be nice but they almost never work + +00:20:05.799 --> 00:20:11.240 +as well as the tech demo makes them seem + +00:20:08.640 --> 00:20:13.840 +like they work so that could be another + +00:20:11.240 --> 00:20:15.720 +place to get ideas um or you can look at + +00:20:13.840 --> 00:20:17.039 +papers in a related field like machine + +00:20:15.720 --> 00:20:18.760 +learning like let's say you're a machine + +00:20:17.039 --> 00:20:21.280 +learning oriented person and you really + +00:20:18.760 --> 00:20:23.000 +love like math and stuff like that it's + +00:20:21.280 --> 00:20:25.799 +like well there's this good mathematical + +00:20:23.000 --> 00:20:27.760 +tool that I think could be applicable to + +00:20:25.799 --> 00:20:30.440 +um a certain problem in NLP or something + +00:20:27.760 --> 00:20:31.960 +like that so you could do that too um + +00:20:30.440 --> 00:20:33.960 +the the final one you know comes with + +00:20:31.960 --> 00:20:35.799 +all the caveats of doing topown research + +00:20:33.960 --> 00:20:37.320 +of course so you know you need to make + +00:20:35.799 --> 00:20:39.799 +sure that that really is the correct + +00:20:37.320 --> 00:20:42.159 +tool for whatever you want to sell but + +00:20:39.799 --> 00:20:45.280 +um definitely this is something to think + +00:20:42.159 --> 00:20:48.240 +about um however for assignment three + +00:20:45.280 --> 00:20:49.559 +you need to do a survey so I'm I'm + +00:20:48.240 --> 00:20:50.720 +forcing you to do a survey for + +00:20:49.559 --> 00:20:52.200 +assignment three so if you're going to + +00:20:50.720 --> 00:20:53.640 +do something like this you can do it + +00:20:52.200 --> 00:20:56.600 +before assignment 3 and start thinking + +00:20:53.640 --> 00:21:00.000 +about what you want to be doing so um + +00:20:56.600 --> 00:21:01.520 +that's something + +00:21:00.000 --> 00:21:03.200 +uh any questions or discussion about + +00:21:01.520 --> 00:21:06.799 +that + +00:21:03.200 --> 00:21:07.840 +part this is hard I'm I'm happy to uh + +00:21:06.799 --> 00:21:11.120 +happy to + +00:21:07.840 --> 00:21:14.039 +discuss either now or in office hours or + +00:21:11.120 --> 00:21:14.039 +anything like this + +00:21:14.200 --> 00:21:19.720 +but Okay + +00:21:17.080 --> 00:21:24.279 +cool so the next thing is a for + +00:21:19.720 --> 00:21:25.640 +hypothesis so uh once you have done you + +00:21:24.279 --> 00:21:28.600 +have a general idea of what you want to + +00:21:25.640 --> 00:21:31.240 +do um and you have done a survey related + +00:21:28.600 --> 00:21:32.480 +work you can devise a final research + +00:21:31.240 --> 00:21:34.159 +question or + +00:21:32.480 --> 00:21:37.760 +hypothesis + +00:21:34.159 --> 00:21:40.039 +and so a research question is one or + +00:21:37.760 --> 00:21:43.400 +several explicit questions regarding the + +00:21:40.039 --> 00:21:45.919 +thing that you want to know um + +00:21:43.400 --> 00:21:47.400 +and this is actually pretty hard for + +00:21:45.919 --> 00:21:49.080 +people like I ask people to write + +00:21:47.400 --> 00:21:50.880 +research questions and very often they + +00:21:49.080 --> 00:21:53.080 +don't write research questions in this + +00:21:50.880 --> 00:21:57.720 +format and I have to ask people to try + +00:21:53.080 --> 00:21:59.919 +to change them and what they what I + +00:21:57.720 --> 00:22:03.159 +think they in general should be are yes + +00:21:59.919 --> 00:22:08.120 +no 
questions so + +00:22:03.159 --> 00:22:10.400 +it um yes no questions and you have a + +00:22:08.120 --> 00:22:13.120 +hypothesis uh about what you think the + +00:22:10.400 --> 00:22:14.600 +answer to the question may be a priori + +00:22:13.120 --> 00:22:17.520 +and that hypothesis should be + +00:22:14.600 --> 00:22:19.919 +falsifiable so basically it's if you get + +00:22:17.520 --> 00:22:21.240 +a certain result you can demonstrate + +00:22:19.919 --> 00:22:23.120 +that the answer to this question is + +00:22:21.240 --> 00:22:24.679 +probably yes if you get a different + +00:22:23.120 --> 00:22:27.520 +result you can demonstrate that the + +00:22:24.679 --> 00:22:29.640 +answer to the question is probably no + +00:22:27.520 --> 00:22:32.400 +and just to make this a little bit more + +00:22:29.640 --> 00:22:34.360 +concrete I can give a few curiosity + +00:22:32.400 --> 00:22:36.880 +driven questions and + +00:22:34.360 --> 00:22:40.720 +hypothesis C the Curiosity driven + +00:22:36.880 --> 00:22:43.480 +questions are a little bit easier so um + +00:22:40.720 --> 00:22:45.600 +we have the Curiosity driven question of + +00:22:43.480 --> 00:22:49.679 +are all language models are all + +00:22:45.600 --> 00:22:53.559 +languages equally hard to language model + +00:22:49.679 --> 00:22:55.400 +and they say uh it is unlikely that all + +00:22:53.559 --> 00:22:56.760 +languages are equally easy or that + +00:22:55.400 --> 00:22:58.799 +methods are equally good at all + +00:22:56.760 --> 00:23:01.159 +languages um so so that's their + +00:22:58.799 --> 00:23:04.120 +hypothesis so they think a priori that + +00:23:01.159 --> 00:23:05.919 +that's the case um but that might be + +00:23:04.120 --> 00:23:08.400 +falsified by getting a very strong + +00:23:05.919 --> 00:23:10.679 +result that says like no matter which + +00:23:08.400 --> 00:23:13.760 +language you're modeling many models + +00:23:10.679 --> 00:23:18.120 +that we use get get similar results + +00:23:13.760 --> 00:23:20.400 +on um what makes a particular podcast + +00:23:18.120 --> 00:23:21.320 +broadly engaging so this was an analysis + +00:23:20.400 --> 00:23:24.400 +of + +00:23:21.320 --> 00:23:27.960 +podcasts uh where they compared popular + +00:23:24.400 --> 00:23:29.720 +podcasts and unpopular podcasts or + +00:23:27.960 --> 00:23:32.400 +engaging and unengaging + +00:23:29.720 --> 00:23:34.400 +podcasts and it says uh tips such as + +00:23:32.400 --> 00:23:37.039 +reducing filler words and disfluencies + +00:23:34.400 --> 00:23:38.840 +or incorporating emotion are things that + +00:23:37.039 --> 00:23:41.400 +people had anecdotally written on the + +00:23:38.840 --> 00:23:43.039 +internet as tips to make a good podcast + +00:23:41.400 --> 00:23:45.760 +but nobody had actually empirically + +00:23:43.039 --> 00:23:48.440 +valid validated that so they wanted to + +00:23:45.760 --> 00:23:50.000 +like actually go invalidate that so they + +00:23:48.440 --> 00:23:51.679 +came up with hypotheses and they could + +00:23:50.000 --> 00:23:55.720 +demonstrate that those had good or bad + +00:23:51.679 --> 00:23:55.720 +correlation podcast being judged as + +00:23:56.880 --> 00:24:03.600 +engaging application driven questions + +00:23:59.039 --> 00:24:03.600 +and hypotheses are a little bit harder + +00:24:04.520 --> 00:24:10.480 +so here is an + +00:24:07.640 --> 00:24:13.039 +example this is an example from a paper + +00:24:10.480 --> 00:24:18.720 +that I wrote previously which + +00:24:13.039 --> 00:24:22.080 +was where and why or how 
and why do + +00:24:18.720 --> 00:24:22.960 +pre-trained word embeddings help neural + +00:24:22.080 --> 00:24:25.080 +machine + +00:24:22.960 --> 00:24:26.760 +translation and this was back when + +00:24:25.080 --> 00:24:28.279 +pre-training was mostly like word + +00:24:26.760 --> 00:24:31.880 +embeddings we weren't preing the whole + +00:24:28.279 --> 00:24:34.480 +body of the neural net so + +00:24:31.880 --> 00:24:36.640 +now the answers to this question are a + +00:24:34.480 --> 00:24:37.919 +little bit different but basically the + +00:24:36.640 --> 00:24:40.080 +questions that we asked is is the + +00:24:37.919 --> 00:24:42.360 +behavior of pre-training affected by + +00:24:40.080 --> 00:24:45.960 +language families and other linguistic + +00:24:42.360 --> 00:24:49.520 +features of source and Target languages + +00:24:45.960 --> 00:24:51.360 +so uh we expected that the answer to + +00:24:49.520 --> 00:24:53.640 +this would be yes it would vary across + +00:24:51.360 --> 00:24:54.960 +them do pre-trained edings help more + +00:24:53.640 --> 00:24:57.760 +when the size of the training data is + +00:24:54.960 --> 00:24:59.039 +small we expected that this would be yes + +00:24:57.760 --> 00:25:00.640 +how much does the similarity of the + +00:24:59.039 --> 00:25:03.720 +source and Target languages affect the + +00:25:00.640 --> 00:25:06.200 +efficacy of using pre-trained edings uh + +00:25:03.720 --> 00:25:08.399 +we didn't have a hypothesis about + +00:25:06.200 --> 00:25:10.600 +whether it would or not and is it + +00:25:08.399 --> 00:25:12.320 +helpful to align the embedding spaces + +00:25:10.600 --> 00:25:14.520 +between the source and Target languages + +00:25:12.320 --> 00:25:16.039 +we assume this would be yes and do + +00:25:14.520 --> 00:25:17.640 +pre-trained edings help more in + +00:25:16.039 --> 00:25:19.360 +multilingual systems as compared to + +00:25:17.640 --> 00:25:22.679 +bilingual systems and we didn't have a + +00:25:19.360 --> 00:25:26.279 +good hypothesis about that + +00:25:22.679 --> 00:25:29.559 +I another one is although recent stud uh + +00:25:26.279 --> 00:25:32.760 +sorry the question of whether and how + +00:25:29.559 --> 00:25:35.039 +contextual information benefits endtoend + +00:25:32.760 --> 00:25:38.960 +speech translation has received little + +00:25:35.039 --> 00:25:42.480 +attention and so their guess was that it + +00:25:38.960 --> 00:25:44.880 +probably would help so application + +00:25:42.480 --> 00:25:47.120 +oriented questions are a little bit + +00:25:44.880 --> 00:25:49.200 +tricky because the obvious one is like + +00:25:47.120 --> 00:25:52.200 +does X make y + +00:25:49.200 --> 00:25:54.080 +better and so you you have a method you + +00:25:52.200 --> 00:25:55.559 +think it's going to make the output + +00:25:54.080 --> 00:25:58.120 +better and so that's kind of your + +00:25:55.559 --> 00:26:00.000 +obvious research question but the + +00:25:58.120 --> 00:26:02.080 +problem is the above question or + +00:26:00.000 --> 00:26:04.279 +hypothesis is natural but it's very + +00:26:02.080 --> 00:26:06.679 +indirect so normally you also have a + +00:26:04.279 --> 00:26:09.760 +hypothesis about like why it will help + +00:26:06.679 --> 00:26:13.279 +or something like this and so if the + +00:26:09.760 --> 00:26:15.440 +answer is no after your experiments why + +00:26:13.279 --> 00:26:18.080 +is the answer + +00:26:15.440 --> 00:26:20.640 +no it could be that your original + +00:26:18.080 --> 00:26:23.720 +assumption about why a particular 
method + +00:26:20.640 --> 00:26:25.039 +would help was wrong which is the worst + +00:26:23.720 --> 00:26:28.360 +case scenario but you also could just + +00:26:25.039 --> 00:26:30.559 +have a bug in your code or uh your + +00:26:28.360 --> 00:26:32.000 +data set your test set might not be + +00:26:30.559 --> 00:26:34.279 +large enough so you wouldn't be able to + +00:26:32.000 --> 00:26:35.840 +get a statistically significant result + +00:26:34.279 --> 00:26:40.039 +based on the amount that it helped you + +00:26:35.840 --> 00:26:42.960 +improve or other things like that so + +00:26:40.039 --> 00:26:44.960 +what I like to do in this case is try to + +00:26:42.960 --> 00:26:48.399 +come up with the intuition about why X + +00:26:44.960 --> 00:26:50.360 +will make y better and can you think of + +00:26:48.399 --> 00:26:52.080 +other research questions or hypotheses + +00:26:50.360 --> 00:26:54.240 +that confirm or falsified these + +00:26:52.080 --> 00:26:56.640 +assumptions + +00:26:54.240 --> 00:26:59.559 +so uh some things that you can do are + +00:26:56.640 --> 00:27:01.240 +come up with like toy data or come up + +00:26:59.559 --> 00:27:03.840 +with a subset of the data where you + +00:27:01.240 --> 00:27:06.600 +think this might be correct so just to + +00:27:03.840 --> 00:27:09.279 +give an example let's say we have a + +00:27:06.600 --> 00:27:12.159 +translation model and we have a + +00:27:09.279 --> 00:27:14.279 +hypothesis that improving entity + +00:27:12.159 --> 00:27:16.520 +translation and low resource languages + +00:27:14.279 --> 00:27:18.799 +will improve translation accuracy and we + +00:27:16.520 --> 00:27:21.399 +run an experiment or actually maybe this + +00:27:18.799 --> 00:27:23.760 +is an even better one we we have a + +00:27:21.399 --> 00:27:26.240 +hypothesis that incorporating contextual + +00:27:23.760 --> 00:27:28.799 +information in speech translation will + +00:27:26.240 --> 00:27:31.760 +help translation results + +00:27:28.799 --> 00:27:36.480 +so incorporating context in machine + +00:27:31.760 --> 00:27:37.600 +translation has been a very old topic + +00:27:36.480 --> 00:27:41.279 +like people have been trying to do this + +00:27:37.600 --> 00:27:43.559 +for a very long time but for a long time + +00:27:41.279 --> 00:27:45.200 +the conclusion was that it essentially + +00:27:43.559 --> 00:27:46.519 +wasn't helping translation people would + +00:27:45.200 --> 00:27:48.039 +incorporate contacts through neural + +00:27:46.519 --> 00:27:50.960 +networks or other things like that and + +00:27:48.039 --> 00:27:53.320 +it just wasn't improving the results + +00:27:50.960 --> 00:27:55.320 +significantly and in the end the reason + +00:27:53.320 --> 00:27:57.960 +why was because there just weren't + +00:27:55.320 --> 00:27:59.799 +enough examples where contextual + +00:27:57.960 --> 00:28:02.200 +information was useful in the data sets + +00:27:59.799 --> 00:28:06.360 +that everybody was using so people were + +00:28:02.200 --> 00:28:09.080 +using really long news sentences to try + +00:28:06.360 --> 00:28:10.880 +to figure out where uh whether context + +00:28:09.080 --> 00:28:12.440 +was helping but really long new + +00:28:10.880 --> 00:28:14.000 +sentences have so much information + +00:28:12.440 --> 00:28:16.080 +included in them that you can mostly + +00:28:14.000 --> 00:28:20.120 +translate sentence by sentence and get + +00:28:16.080 --> 00:28:21.880 +it right like 95% of the time so the + +00:28:20.120 --> 00:28:23.600 +problem wasn't that any of the 
methods + +00:28:21.880 --> 00:28:26.799 +that people were proposing were bad it + +00:28:23.600 --> 00:28:29.559 +was just that they weren't effective + +00:28:26.799 --> 00:28:31.440 +enough to see big enough uh results and + +00:28:29.559 --> 00:28:33.159 +so then people Chang the data set to + +00:28:31.440 --> 00:28:34.720 +like conversations or something like + +00:28:33.159 --> 00:28:37.399 +that and in conversations they're very + +00:28:34.720 --> 00:28:39.159 +contextual yeah very short utterances + +00:28:37.399 --> 00:28:41.440 +and once you started doing things like + +00:28:39.159 --> 00:28:45.840 +that then the same methods like exactly + +00:28:41.440 --> 00:28:48.640 +the same methods were um were helping + +00:28:45.840 --> 00:28:51.120 +when they weren't helping before and + +00:28:48.640 --> 00:28:52.720 +so the underlying assumption about + +00:28:51.120 --> 00:28:56.240 +incorporating context information is + +00:28:52.720 --> 00:28:58.159 +that context will be helpful and or + +00:28:56.240 --> 00:29:01.760 +context is necessary + +00:28:58.159 --> 00:29:03.880 +to you know do translation well so does + +00:29:01.760 --> 00:29:06.880 +anyone have an idea about how you could + +00:29:03.880 --> 00:29:06.880 +like actually verify that + +00:29:10.880 --> 00:29:16.519 +assumption any idea yeah simplest way + +00:29:14.000 --> 00:29:19.120 +would be just give an El way to set and + +00:29:16.519 --> 00:29:21.000 +then have a measure of okay if it in + +00:29:19.120 --> 00:29:23.679 +more than + +00:29:21.000 --> 00:29:25.519 +x% um and how would that verify the + +00:29:23.679 --> 00:29:28.480 +assumption that context is + +00:29:25.519 --> 00:29:30.720 +necessary so we're asking a question + +00:29:28.480 --> 00:29:33.480 +whether context is helpful in the proect + +00:29:30.720 --> 00:29:36.000 +you're doing that uh we're asking + +00:29:33.480 --> 00:29:39.240 +whether + +00:29:36.000 --> 00:29:40.840 +so we're asking kind of a a two-part the + +00:29:39.240 --> 00:29:44.080 +main question is whether context is + +00:29:40.840 --> 00:29:45.559 +helpful given a particular you know + +00:29:44.080 --> 00:29:47.240 +experimental setup right so like + +00:29:45.559 --> 00:29:50.440 +training data + +00:29:47.240 --> 00:29:52.039 +set modeling method and training + +00:29:50.440 --> 00:29:54.679 +algorithm and evaluation algorithm + +00:29:52.039 --> 00:29:56.480 +that's kind of the big final result that + +00:29:54.679 --> 00:29:58.840 +you want to get in your paper but + +00:29:56.480 --> 00:30:01.399 +there's kind of a the question which is + +00:29:58.840 --> 00:30:04.360 +is context even necessary to translate + +00:30:01.399 --> 00:30:06.559 +well you train a model with context and + +00:30:04.360 --> 00:30:08.200 +one without context you train a model + +00:30:06.559 --> 00:30:10.679 +with context and one without context but + +00:30:08.200 --> 00:30:14.080 +what if your model of context is really + +00:30:10.679 --> 00:30:15.399 +bad J the same model you have the same + +00:30:14.080 --> 00:30:16.840 +model architecture but let's say your + +00:30:15.399 --> 00:30:18.559 +model architecture is really bad at + +00:30:16.840 --> 00:30:19.919 +capturing context so then maybe it's a + +00:30:18.559 --> 00:30:22.399 +problem of your model architecture and + +00:30:19.919 --> 00:30:24.720 +context is necessary or helpful but your + +00:30:22.399 --> 00:30:27.399 +model just isn't very good at capture + +00:30:24.720 --> 00:30:29.720 +human yeah exactly so this is one 
thing + +00:30:27.399 --> 00:30:31.960 +that people can do so there was a + +00:30:29.720 --> 00:30:34.240 +interesting paper um let me see if I can + +00:30:31.960 --> 00:30:34.240 +find + +00:30:39.960 --> 00:30:49.080 +it so this is a paper from a long time + +00:30:45.760 --> 00:30:51.600 +ago where they did something like + +00:30:49.080 --> 00:30:53.360 +this um it's evaluating machine + +00:30:51.600 --> 00:30:54.480 +translation systems with second language + +00:30:53.360 --> 00:30:57.399 +proficiency + +00:30:54.480 --> 00:31:01.240 +tests and basically what they did is + +00:30:57.399 --> 00:31:03.519 +they had these English proficiency tests + +00:31:01.240 --> 00:31:05.320 +for uh I think it was like middle + +00:31:03.519 --> 00:31:07.480 +schoolers or high schoolers or something + +00:31:05.320 --> 00:31:09.600 +like this and then they used machine + +00:31:07.480 --> 00:31:11.240 +translation systems to translate them + +00:31:09.600 --> 00:31:13.600 +into Japanese and then they asked + +00:31:11.240 --> 00:31:19.720 +Japanese students to solve them in + +00:31:13.600 --> 00:31:19.720 +japanies and so what they did is they + +00:31:20.000 --> 00:31:26.159 +asked uh Anonymous system G and + +00:31:23.679 --> 00:31:28.200 +Anonymous system Y which are Google and + +00:31:26.159 --> 00:31:32.360 +Yahoo + +00:31:28.200 --> 00:31:34.720 +and uh and a human without context and a + +00:31:32.360 --> 00:31:36.279 +human with context to translate them so + +00:31:34.720 --> 00:31:38.720 +they ask humans to translate each + +00:31:36.279 --> 00:31:40.880 +sentence without giving any context and + +00:31:38.720 --> 00:31:44.320 +they ask humans to translate each uh + +00:31:40.880 --> 00:31:46.399 +sentence with giving context and what + +00:31:44.320 --> 00:31:48.960 +they were able to find was in this case + +00:31:46.399 --> 00:31:50.080 +humans with context the Japanese + +00:31:48.960 --> 00:31:53.080 +students were able to answer the + +00:31:50.080 --> 00:31:55.360 +questions most of the time um whereas if + +00:31:53.080 --> 00:31:57.559 +they translated without contexts like G + +00:31:55.360 --> 00:31:59.039 +and Y were doing at that time actually + +00:31:57.559 --> 00:32:01.320 +why was almost as good as human + +00:31:59.039 --> 00:32:04.080 +translators at you know achieving the + +00:32:01.320 --> 00:32:05.440 +the task so but basically like the + +00:32:04.080 --> 00:32:09.159 +important thing here is they were able + +00:32:05.440 --> 00:32:11.039 +to confirm their you know idea that in + +00:32:09.159 --> 00:32:12.519 +this case humans with context were much + +00:32:11.039 --> 00:32:13.799 +better than humans without context so + +00:32:12.519 --> 00:32:16.279 +that would verify your like sub + +00:32:13.799 --> 00:32:18.080 +assumption right and so this is just + +00:32:16.279 --> 00:32:20.279 +like one + +00:32:18.080 --> 00:32:22.240 +example this is just one example of + +00:32:20.279 --> 00:32:25.960 +something that you can + +00:32:22.240 --> 00:32:27.480 +do uh but the basic idea is like your + +00:32:25.960 --> 00:32:29.320 +final result is that you want build of + +00:32:27.480 --> 00:32:30.799 +system that does better on some + +00:32:29.320 --> 00:32:32.159 +Benchmark that you care about there's a + +00:32:30.799 --> 00:32:33.600 +bunch of things that go into whether it + +00:32:32.159 --> 00:32:36.159 +does better or not your evaluation + +00:32:33.600 --> 00:32:38.960 +system your model your training data + +00:32:36.159 --> 00:32:41.559 +your training 
your evaluation data set + +00:32:38.960 --> 00:32:43.080 +um and things like that so can you break + +00:32:41.559 --> 00:32:45.360 +that down into sub questions that you + +00:32:43.080 --> 00:32:48.039 +could ask where you could verify that + +00:32:45.360 --> 00:32:49.720 +it's working or not uh based on whether + +00:32:48.039 --> 00:32:51.600 +those things are happening another thing + +00:32:49.720 --> 00:32:53.159 +people do an ml oriented things is + +00:32:51.600 --> 00:32:54.919 +create a toy data set where they know + +00:32:53.159 --> 00:32:57.200 +the phenomenon they're interested in + +00:32:54.919 --> 00:32:59.679 +exists and train their models on there + +00:32:57.200 --> 00:33:02.919 +and make sure that they work there um so + +00:32:59.679 --> 00:33:02.919 +that's another thing that you can take + +00:33:03.120 --> 00:33:07.639 +that cool um any questions about + +00:33:08.080 --> 00:33:12.760 +this okay + +00:33:10.200 --> 00:33:16.519 +s so the next thing is running + +00:33:12.760 --> 00:33:19.000 +experiments um so in order to do this + +00:33:16.519 --> 00:33:21.399 +you'll find data that will answer your + +00:33:19.000 --> 00:33:23.639 +research question uh run experiments and + +00:33:21.399 --> 00:33:25.720 +calculate numbers uh calculate + +00:33:23.639 --> 00:33:28.279 +significant differences and analyze + +00:33:25.720 --> 00:33:31.080 +effects whoops + +00:33:28.279 --> 00:33:35.519 +and so this is a basic pipeline that we + +00:33:31.080 --> 00:33:37.760 +want to follow so obtaining test data so + +00:33:35.519 --> 00:33:41.200 +in order to obtain test data uh we would + +00:33:37.760 --> 00:33:42.799 +like to find data sets um so if you're + +00:33:41.200 --> 00:33:46.200 +building on previous work the safest + +00:33:42.799 --> 00:33:48.960 +thing that you can do um is start with + +00:33:46.200 --> 00:33:51.919 +the same data sets if you're answering a + +00:33:48.960 --> 00:33:53.799 +new question um you can think about can + +00:33:51.919 --> 00:33:55.399 +you repurpose other data sets to answer + +00:33:53.799 --> 00:33:57.679 +the question so very often there will be + +00:33:55.399 --> 00:34:00.080 +a data set that is uh appropriate for + +00:33:57.679 --> 00:34:03.360 +answer answering your question um and + +00:34:00.080 --> 00:34:05.760 +you can go and find that um actually our + +00:34:03.360 --> 00:34:06.919 +our wonderful TJ has created a system + +00:34:05.760 --> 00:34:08.800 +called datafinder that will + +00:34:06.919 --> 00:34:11.159 +automatically find it for you so if you + +00:34:08.800 --> 00:34:13.679 +want to uh search for data sets you can + +00:34:11.159 --> 00:34:16.760 +use his system or ask him about it but + +00:34:13.679 --> 00:34:20.359 +um uh but if no appropriate data set + +00:34:16.760 --> 00:34:24.359 +exists you can uh create your own and + +00:34:20.359 --> 00:34:25.879 +particularly for industry use cases it's + +00:34:24.359 --> 00:34:28.119 +very common that you need to go in and + +00:34:25.879 --> 00:34:30.040 +create your own or if you're planning on + +00:34:28.119 --> 00:34:31.639 +doing research in Academia afterwards + +00:34:30.040 --> 00:34:33.119 +very often you'll come up with a + +00:34:31.639 --> 00:34:34.639 +research question where no data set + +00:34:33.119 --> 00:34:36.679 +exists so you'll have to create your own + +00:34:34.639 --> 00:34:38.960 +anyway so this is something that's + +00:34:36.679 --> 00:34:41.639 +really important to be able to do well + +00:34:38.960 --> 00:34:44.639 +uh in 
most
+
+00:34:41.639 --> 00:34:49.240
+cases um so I'll be talking about how to
+
+00:34:44.639 --> 00:34:53.280
+do all of these so data set lists um the
+
+00:34:49.240 --> 00:34:55.159
+best one I think by far in uh natural
+
+00:34:53.280 --> 00:34:58.359
+language processing nowadays is Hugging
+
+00:34:55.159 --> 00:35:02.960
+Face Datasets um there's also other
+
+00:34:58.359 --> 00:35:05.359
+data resources like um ELRA is uh
+
+00:35:02.960 --> 00:35:07.240
+another one kind of by the more
+
+00:35:05.359 --> 00:35:09.800
+traditional natural language processing
+
+00:35:07.240 --> 00:35:12.960
+community there's also the LDC the
+
+00:35:09.800 --> 00:35:15.680
+Linguistic Data Consortium and there
+
+00:35:12.960 --> 00:35:17.119
+are some older heavily annotated data
+
+00:35:15.680 --> 00:35:20.040
+sets that are only available through
+
+00:35:17.119 --> 00:35:22.000
+those at CMU you have the ability to
+
+00:35:20.040 --> 00:35:24.520
+download things from LDC so if you find
+
+00:35:22.000 --> 00:35:26.960
+an LDC data set in any papers that
+
+00:35:24.520 --> 00:35:29.640
+you're reading or online um you need to
+
+00:35:26.960 --> 00:35:31.000
+register for that and I'm the person
+
+00:35:29.640 --> 00:35:33.280
+who's in charge of it so I'll give you
+
+00:35:31.000 --> 00:35:35.520
+access and then uh and then you can use
+
+00:35:33.280 --> 00:35:37.400
+it um there's also things like Papers
+
+00:35:35.520 --> 00:35:39.680
+with Code and Papers with Code basically
+
+00:35:37.400 --> 00:35:41.359
+automatically extracts uh kind of like
+
+00:35:39.680 --> 00:35:42.839
+the names of data sets so even some
+
+00:35:41.359 --> 00:35:45.599
+things that don't appear on Hugging
+
+00:35:42.839 --> 00:35:45.599
+Face will appear
+
+00:35:46.359 --> 00:35:52.440
+there so annotating data um when you
+
+00:35:50.640 --> 00:35:54.599
+annotate data you first need to decide
+
+00:35:52.440 --> 00:35:57.599
+how much to annotate sample appropriate
+
+00:35:54.599 --> 00:36:00.240
+data create annotation guidelines
+
+00:35:57.599 --> 00:36:03.160
+uh either annotate yourself or hire and
+
+00:36:00.240 --> 00:36:05.839
+supervise annotators and evaluate
+
+00:36:03.160 --> 00:36:07.720
+quality so a very common question that a
+
+00:36:05.839 --> 00:36:10.240
+lot of people ask me is how much test
+
+00:36:07.720 --> 00:36:12.800
+data do you need
+
+00:36:10.240 --> 00:36:14.800
+and I'm going to talk about uh
+
+00:36:12.800 --> 00:36:17.520
+statistical significance tests in a
+
+00:36:14.800 --> 00:36:19.520
+second but um basically you need to have
+
+00:36:17.520 --> 00:36:23.240
+enough to have a statistically
+
+00:36:19.520 --> 00:36:28.119
+significant difference um between
+
+00:36:23.240 --> 00:36:32.079
+methods and the way you do this actually
+
+00:36:28.119 --> 00:36:32.079
+sorry very quickly let me
+
+00:36:33.240 --> 00:36:37.599
+check I rearranged my slides and I want
+
+00:36:35.560 --> 00:36:40.359
+to make sure that I didn't accidentally
+
+00:36:37.599 --> 00:36:42.280
+um I didn't accidentally remove the
+
+00:36:40.359 --> 00:36:44.520
+slides on statistical significance which
+
+00:36:42.280 --> 00:36:44.520
+would be
+
+00:36:51.680 --> 00:36:57.880
+a problem okay
+
+00:36:55.240 --> 00:36:59.200
+um sorry hang on one second I just
+
+00:36:57.880 --> 00:37:02.240
+realized that I don't have the slides
+
+00:36:59.200 --> 00:37:03.839
+for uh statistical significance in this
+
+00:37:02.240 --> 00:37:05.280
+presentation so let me grab them from
+
+00:37:03.839 --> 00:37:09.440
+the last class
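+
+To make the data set search described above concrete, here is a minimal
+sketch of pulling a corpus with the Hugging Face datasets library; the
+dataset name "squad" is purely an illustrative choice, not one from the
+lecture:
+
+# pip install datasets
+from datasets import load_dataset
+
+dataset = load_dataset("squad")    # downloads and caches the corpus
+print(dataset)                     # shows the splits, features, and sizes
+print(dataset["train"][0])         # inspect examples before committing to it
+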
+00:37:05.280 --> 00:37:09.440
+uh this is pretty
+
+00:37:10.520 --> 00:37:14.640
+important
+
+00:37:25.599 --> 00:37:28.599
+okay
+
+00:37:33.160 --> 00:37:38.599
+so yeah let me explain statistical
+
+00:37:35.560 --> 00:37:40.319
+significance here um so basically when
+
+00:37:38.599 --> 00:37:43.319
+we're doing statistical
+
+00:37:40.319 --> 00:37:44.680
+testing um let's say we have two models
+
+00:37:43.319 --> 00:37:47.800
+with similar
+
+00:37:44.680 --> 00:37:50.160
+accuracies and these models with similar
+
+00:37:47.800 --> 00:37:52.240
+accuracies let's say model one is a
+
+00:37:50.160 --> 00:37:56.880
+generative model model two is a
+
+00:37:52.240 --> 00:37:58.520
+discriminative model and we say on data
+
+00:37:56.880 --> 00:38:00.200
+set one we have this result on data set
+
+00:37:58.520 --> 00:38:02.480
+two we have another result on data set
+
+00:38:00.200 --> 00:38:04.720
+three we have uh another
+
+00:38:02.480 --> 00:38:06.440
+result and so then the question is how
+
+00:38:04.720 --> 00:38:09.480
+can we tell if the differences are due
+
+00:38:06.440 --> 00:38:13.839
+to consistent trends that uh will hold
+
+00:38:09.480 --> 00:38:16.119
+on other data sets or um if they are
+
+00:38:13.839 --> 00:38:18.480
+kind of random noise due to the fact
+
+00:38:16.119 --> 00:38:21.000
+that we have one
+
+00:38:18.480 --> 00:38:24.200
+uh due to the fact that you know data
+
+00:38:21.000 --> 00:38:25.640
+sets vary models vary um and so the way
+
+00:38:24.200 --> 00:38:28.319
+we do this is through statistical
+
+00:38:25.640 --> 00:38:31.839
+significance testing
+
+00:38:28.319 --> 00:38:34.319
+um so I'm going to cover this briefly in
+
+00:38:31.839 --> 00:38:36.920
+this class but you can see Dror et al.
+
+00:38:34.319 --> 00:38:38.640
+for an overview and also we're going
+
+00:38:36.920 --> 00:38:41.520
+to have a recitation on how to actually
+
+00:38:38.640 --> 00:38:44.280
+run statistical significance tests so um
+
+00:38:41.520 --> 00:38:47.920
+you can take a look at that
+
+00:38:44.280 --> 00:38:51.680
+there and so the basic idea is given a
+
+00:38:47.920 --> 00:38:54.280
+quantity we test um certain values of
+
+00:38:51.680 --> 00:38:57.880
+uncertainty with respect to the quantity
+
+00:38:54.280 --> 00:38:59.960
+so number one is a p value and the p
+
+00:38:57.880 --> 00:39:02.240
+value is what is the probability that a
+
+00:38:59.960 --> 00:39:06.119
+difference with another quantity is by
+
+00:39:02.240 --> 00:39:08.359
+chance and so a lower uh p value means
+
+00:39:06.119 --> 00:39:11.839
+more likelihood of having a significant
+
+00:39:08.359 --> 00:39:13.200
+difference usually the threshold for
+
+00:39:11.839 --> 00:39:16.520
+saying that we have a significant
+
+00:39:13.200 --> 00:39:20.280
+difference is there's a 5% chance
+
+00:39:16.520 --> 00:39:22.160
+0.05 that this difference between the
+
+00:39:20.280 --> 00:39:25.760
+models was due to chance or like data
+
+00:39:22.160 --> 00:39:28.520
+sampling or things like that uh so p
+
+00:39:25.760 --> 00:39:30.880
+less than 0.05 is kind of a threshold
+
+00:39:28.520 --> 00:39:30.880
+for
+
+00:39:31.119 --> 00:39:35.680
+significance another thing that we can
+
+00:39:33.040 --> 00:39:38.720
+measure is confidence intervals and the
+
+00:39:35.680 --> 00:39:40.760
+confidence interval is um what is the
+
+00:39:38.720 --> 00:39:42.560
+range under which we could expect
+
+00:39:40.760 --> 00:39:44.760
+another trial to fall and I'll talk
+
+00:39:42.560 --> 00:39:47.359
+about both of these
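+
+To make the confidence interval idea concrete, here is a rough sketch (my
+own illustration, not code from the lecture) of the common normal
+approximation interval for a single model's accuracy; the counts are made
+up:
+
+import math
+
+def accuracy_ci(num_correct, n, z=1.96):
+    # normal approximation to the binomial; z=1.96 gives a ~95% interval
+    p = num_correct / n
+    half = z * math.sqrt(p * (1 - p) / n)
+    return p, max(0.0, p - half), min(1.0, p + half)
+
+print(accuracy_ci(450, 500))  # 450/500 correct -> 0.90, roughly (0.874, 0.926)
+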
+00:39:44.760 --> 00:39:49.280
+um there's another concept called
+
+00:39:47.359 --> 00:39:53.880
+paired versus unpaired
+
+00:39:49.280 --> 00:39:56.680
+tests and an unpaired test this
+
+00:39:53.880 --> 00:39:59.480
+means um we compare the means of a
+
+00:39:56.680 --> 00:40:02.359
+quantity on two unrelated
+
+00:39:59.480 --> 00:40:04.040
+groups so an example could be the test
+
+00:40:02.359 --> 00:40:07.040
+of the significance of a difference of
+
+00:40:04.040 --> 00:40:09.160
+accuracies of a model on two data sets
+
+00:40:07.040 --> 00:40:12.400
+so like let's say I have data set number
+
+00:40:09.160 --> 00:40:16.440
+one and data set number two what is the
+
+00:40:12.400 --> 00:40:18.000
+likelihood that um there's actually
+
+00:40:16.440 --> 00:40:20.839
+a real difference in the data sets as
+
+00:40:18.000 --> 00:40:23.400
+opposed to just random uh random
+
+00:40:20.839 --> 00:40:26.599
+sampling variation between
+
+00:40:23.400 --> 00:40:28.560
+them in contrast a paired test compares
+
+00:40:26.599 --> 00:40:31.400
+the means of a quantity on one data set
+
+00:40:28.560 --> 00:40:32.480
+under two conditions and so an example
+
+00:40:31.400 --> 00:40:33.760
+of this could be testing the
+
+00:40:32.480 --> 00:40:37.319
+significance of a difference of
+
+00:40:33.760 --> 00:40:39.640
+accuracies of two models on one data set
+
+00:40:37.319 --> 00:40:42.000
+so this is a really important difference
+
+00:40:39.640 --> 00:40:43.960
+and the reason why it's a really
+
+00:40:42.000 --> 00:40:45.520
+important difference well number one
+
+00:40:43.960 --> 00:40:49.119
+we're most commonly interested in the
+
+00:40:45.520 --> 00:40:51.839
+latter number two if we can make
+
+00:40:49.119 --> 00:40:54.280
+assumptions about
+
+00:40:51.839 --> 00:40:56.079
+the association of the points in the
+
+00:40:54.280 --> 00:40:58.680
+data set we're much much more likely to
+
+00:40:56.079 --> 00:41:00.440
+get a significant result because we can
+
+00:40:58.680 --> 00:41:02.240
+um we can look at the difference of the
+
+00:41:00.440 --> 00:41:06.000
+models on individual data points as
+
+00:41:02.240 --> 00:41:10.400
+opposed to um as opposed to looking
+
+00:41:06.000 --> 00:41:10.400
+at just the difference in the
+
+00:41:10.520 --> 00:41:16.839
+means so one example of a statistical
+
+00:41:13.760 --> 00:41:18.280
+significance test is a bootstrap test
+
+00:41:16.839 --> 00:41:19.760
+and the bootstrap test is really
+
+00:41:18.280 --> 00:41:21.680
+convenient because you can implement it
+
+00:41:19.760 --> 00:41:25.160
+for any evaluation metric that you want
+
+00:41:21.680 --> 00:41:26.880
+to be using and so in NLP we can use
+
+00:41:25.160 --> 00:41:29.560
+lots of different evaluation metrics we
+
+00:41:26.880 --> 00:41:31.119
+can use an evaluation metric like um
+
+00:41:29.560 --> 00:41:34.160
+accuracy but we can also use an
+
+00:41:31.119 --> 00:41:37.400
+evaluation metric like F-measure for
+
+00:41:34.160 --> 00:41:40.560
+classification or a BLEU score or
+
+00:41:37.400 --> 00:41:43.599
+character F-score or word error rate or
+
+00:41:40.560 --> 00:41:48.440
+something like that for um for various
+
+00:41:43.599 --> 00:41:50.720
+tasks and this is applicable to any
+
+00:41:48.440 --> 00:41:54.000
+metric you want to use uh any quantity
+
+00:41:50.720 --> 00:41:57.319
+you want to measure
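+
+As a small illustration of why pairing matters (my own sketch on synthetic
+per-example scores, not something shown in the lecture), compare an
+unpaired and a paired t-test on the same two systems:
+
+import numpy as np
+from scipy import stats
+
+rng = np.random.default_rng(0)
+scores_a = rng.normal(0.70, 0.10, size=200)             # system A per example
+scores_b = scores_a + rng.normal(0.02, 0.05, size=200)  # system B, SAME examples
+
+print(stats.ttest_ind(scores_a, scores_b))  # unpaired: two unrelated groups
+print(stats.ttest_rel(scores_a, scores_b))  # paired: uses the association
+
+The paired test detects the small 0.02 gap far more confidently than the
+unpaired one, which is exactly the point made above.
+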
+00:41:54.000 --> 00:41:59.079
+so the basic idea of a bootstrap test
+
+00:41:57.319 --> 00:42:02.520
+is a method that can measure p values
+
+00:41:59.079 --> 00:42:06.040
+and confidence intervals by resampling
+
+00:42:02.520 --> 00:42:08.480
+data and so the way you do this is you
+
+00:42:06.040 --> 00:42:11.960
+sample subsets from your dev or test set
+
+00:42:08.480 --> 00:42:14.720
+with replacement so you might sample
+
+00:42:11.960 --> 00:42:19.599
+10,000 times and you measure accuracy on
+
+00:42:14.720 --> 00:42:22.520
+these many subsets and then you
+
+00:42:19.599 --> 00:42:25.640
+look at all of the accuracies
+
+00:42:22.520 --> 00:42:27.680
+that you got on these subsampled data
+
+00:42:25.640 --> 00:42:31.079
+sets and then you take the middle
+
+00:42:27.680 --> 00:42:32.640
+percentile range like 2.5 to 97.5 and
+
+00:42:31.079 --> 00:42:34.960
+you can treat that as a confidence
+
+00:42:32.640 --> 00:42:37.640
+interval the 95% confidence interval
+
+00:42:34.960 --> 00:42:40.720
+about where you're like 95% certain that
+
+00:42:37.640 --> 00:42:40.720
+your results will fall in
+
+00:42:40.880 --> 00:42:48.240
+here another thing that you can do is
+
+00:42:45.119 --> 00:42:50.040
+you can do a paired test and what the
+
+00:42:48.240 --> 00:42:51.200
+paired test does is it measures the
+
+00:42:50.040 --> 00:42:53.359
+number of
+
+00:42:51.200 --> 00:42:55.839
+wins um
+
+00:42:53.359 --> 00:42:57.720
+and you measure the percentage of
+
+00:42:55.839 --> 00:43:00.920
+wins and this is the confidence that a
+
+00:42:57.720 --> 00:43:03.280
+gain in accuracy is not by chance um and
+
+00:43:00.920 --> 00:43:05.920
+so this could be one minus the p value
+
+00:43:03.280 --> 00:43:07.960
+of the paired test so this is easy to
+
+00:43:05.920 --> 00:43:09.960
+implement applicable to any evaluation
+
+00:43:07.960 --> 00:43:13.480
+measure but somewhat biased on small
+
+00:43:09.960 --> 00:43:17.240
+data sets um so that maybe I can give a
+
+00:43:13.480 --> 00:43:19.920
+more concrete example so let's say we
+
+00:43:17.240 --> 00:43:27.520
+have a classification data set what you
+
+00:43:19.920 --> 00:43:30.400
+can do is um let's say we have a b c d
+
+00:43:27.520 --> 00:43:36.960
+e or
+
+00:43:30.400 --> 00:43:39.559
+um X1 X2 X3 X4
+
+00:43:36.960 --> 00:43:44.520
+X5 so this is our classification
+
+00:43:39.559 --> 00:43:47.440
+data set and um we have system
+
+00:43:44.520 --> 00:43:52.000
+one and system
+
+00:43:47.440 --> 00:43:53.760
+two and system one gets right right
+
+00:43:52.000 --> 00:43:56.599
+wrong right wrong
+
+00:43:53.760 --> 00:44:00.440
+and system two gets right right right
+
+00:43:56.599 --> 00:44:03.040
+wrong right or something like this and so
+
+00:44:00.440 --> 00:44:07.079
+what we do is we randomly sample a sub
+
+00:44:03.040 --> 00:44:08.760
+data set um and let's say this is like
+
+00:44:07.079 --> 00:44:10.440
+X3
+
+00:44:08.760 --> 00:44:13.599
+X2
+
+00:44:10.440 --> 00:44:17.599
+X4 X1
+
+00:44:13.599 --> 00:44:20.440
+X2 and so this is our sub data set uh
+
+00:44:17.599 --> 00:44:20.440
+what we do
+
+00:44:20.640 --> 00:44:28.920
+is um so X3 would be
+
+00:44:23.520 --> 00:44:34.559
+zero one X2 would be one one X4 would be
+
+00:44:28.920 --> 00:44:39.079
+one zero X1 would be one one and
+
+00:44:34.559 --> 00:44:42.319
+then uh X2 would be one one and so the
+
+00:44:39.079 --> 00:44:45.319
+overall accuracy here
+
+00:44:42.319 --> 00:44:45.319
+is
+
+00:44:45.480 --> 00:44:50.240
+60% and
+
+00:44:47.440 --> 00:44:51.880
+80%
+so if we didn't do any statistical
+
+00:44:50.240 --> 00:44:55.400
+significance test we might say oh system
+
+00:44:51.880 --> 00:44:57.680
+two is better obviously um but if we do
+
+00:44:55.400 --> 00:45:01.079
+the significance test this is one sample
+
+00:44:57.680 --> 00:45:03.119
+from the bootstrap test in
+
+00:45:01.079 --> 00:45:07.040
+here
+
+00:45:03.119 --> 00:45:09.079
+now we get like 80% and 80% and it's
+
+00:45:07.040 --> 00:45:11.079
+like okay actually maybe in some cases
+
+00:45:09.079 --> 00:45:13.480
+these systems are equally good maybe
+
+00:45:11.079 --> 00:45:16.079
+there's a tie or if we sampled another
+
+00:45:13.480 --> 00:45:19.079
+one uh let's say we
+
+00:45:16.079 --> 00:45:19.079
+sampled
+
+00:45:19.359 --> 00:45:27.319
+uh
+
+00:45:20.960 --> 00:45:30.680
+X4 X1 X2 X4 X1
+
+00:45:27.319 --> 00:45:36.160
+um then we would get something like
+
+00:45:30.680 --> 00:45:37.559
+one zero one one one one one zero one
+
+00:45:36.160 --> 00:45:40.440
+one this would be
+
+00:45:37.559 --> 00:45:42.559
+100% and this would be
+
+00:45:40.440 --> 00:45:44.960
+60% and
+
+00:45:42.559 --> 00:45:47.000
+so in some cases depending on how we
+
+00:45:44.960 --> 00:45:48.440
+sample actually system one wins and so
+
+00:45:47.000 --> 00:45:51.440
+you count the number of times that
+
+00:45:48.440 --> 00:45:52.880
+system two wins based on um based on
+
+00:45:51.440 --> 00:45:54.280
+these subsamples you count the number
+
+00:45:52.880 --> 00:45:56.400
+of times that system one wins and you
+
+00:45:54.280 --> 00:45:59.000
+count the number of times you get a tie
+
+00:45:56.400 --> 00:46:00.920
+and only in the case where system two or
+
+00:45:59.000 --> 00:46:03.680
+like the better system wins more than
+
+00:46:00.920 --> 00:46:06.280
+95% of the time you say that there's a
+
+00:46:03.680 --> 00:46:08.599
+significant difference between these or
+
+00:46:06.280 --> 00:46:10.720
+alternatively you could also look at the
+
+00:46:08.599 --> 00:46:15.960
+confidence intervals by saying okay when
+
+00:46:10.720 --> 00:46:19.000
+I sampled um like 95% of the time uh
+
+00:46:15.960 --> 00:46:20.920
+the accuracy of system one is uh like
+
+00:46:19.000 --> 00:46:23.640
+80% or lower and so that would give you
+
+00:46:20.920 --> 00:46:23.640
+the upper limit
+
+00:46:23.760 --> 00:46:29.599
+calculation so yeah sorry this is a very
+
+00:46:27.480 --> 00:46:31.760
+uh very quick overview of this but the
+
+00:46:29.599 --> 00:46:34.240
+reason why this is useful is let's say
+
+00:46:31.760 --> 00:46:36.160
+you create a very small data set if you
+
+00:46:34.240 --> 00:46:38.400
+create a very small data set it's going
+
+00:46:36.160 --> 00:46:39.880
+to
+
+00:46:38.400 --> 00:46:41.319
+be very hard to get a statistically
+
+00:46:39.880 --> 00:46:44.319
+significant result on this data set
+
+00:46:41.319 --> 00:46:47.200
+because it's tiny right and you know
+
+00:46:44.319 --> 00:46:50.640
+quite frequently you're going to be
+
+00:46:47.200 --> 00:46:53.400
+sampling um you're going to be sampling
+
+00:46:50.640 --> 00:46:55.400
+data sets like this
+
+00:46:53.400 --> 00:46:56.640
+where model one wins quite frequently
+
+00:46:55.400 --> 00:46:58.520
+you're going to be sampling other data
+
+00:46:56.640 --> 00:47:00.359
+sets where model two wins and basically
+
+00:46:58.520 --> 00:47:02.920
+you're not going to be able to say with
+
+00:47:00.359 --> 00:47:04.480
+confidence which model is better because
+
+00:47:02.920 --> 00:47:06.359
+you just don't have enough data to say
+that
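+
+Here is a minimal sketch of the paired bootstrap just described, using the
+toy right/wrong vectors from the example above (my own code, not the
+recitation's):
+
+import numpy as np
+
+sys1 = np.array([1, 1, 0, 1, 0])   # system 1 on X1..X5, 60% accuracy
+sys2 = np.array([1, 1, 1, 0, 1])   # system 2 on X1..X5, 80% accuracy
+
+rng = np.random.default_rng(0)
+n, trials = len(sys1), 10_000
+wins1 = wins2 = ties = 0
+accs2 = []
+for _ in range(trials):
+    idx = rng.integers(0, n, size=n)   # resample examples with replacement
+    a1, a2 = sys1[idx].mean(), sys2[idx].mean()
+    accs2.append(a2)
+    if a1 > a2:
+        wins1 += 1
+    elif a2 > a1:
+        wins2 += 1
+    else:
+        ties += 1
+
+print(f"system 2 wins {wins2 / trials:.1%} of resamples")  # significant if > 95%
+print("95% interval for system 2:", np.percentile(accs2, [2.5, 97.5]))
+
+On a set this tiny, system 2 comes nowhere near the 95% win threshold,
+which is exactly the small-data point just made.
+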
+00:47:04.480 --> 00:47:07.880
+but as you make your data set
+
+00:47:06.359 --> 00:47:11.119
+bigger and bigger it becomes easier and
+
+00:47:07.880 --> 00:47:14.240
+easier to get a significant result
+
+00:47:11.119 --> 00:47:17.400
+uh because you're more sure that you
+
+00:47:14.240 --> 00:47:20.960
+didn't just randomly pick data that
+
+00:47:17.400 --> 00:47:25.400
+model two is better at
+
+00:47:20.960 --> 00:47:28.440
+uh so um there's also other varieties
+
+00:47:25.400 --> 00:47:31.240
+of tests there's things like t-tests for
+
+00:47:28.440 --> 00:47:34.720
+unpaired outputs and paired t-
+
+00:47:31.240 --> 00:47:38.079
+tests for paired outputs those work when
+
+00:47:34.720 --> 00:47:40.440
+your um outputs are additive so they work
+
+00:47:38.079 --> 00:47:43.599
+for accuracy because the accuracy is
+
+00:47:40.440 --> 00:47:46.440
+just you add all the ones
+
+00:47:43.599 --> 00:47:48.680
+and then divide by the um the number of
+
+00:47:46.440 --> 00:47:50.960
+instances and that gives you an accuracy
+
+00:47:48.680 --> 00:47:57.880
+that doesn't work for something like
+
+00:47:50.960 --> 00:48:03.599
+F-measure um because F-measure is 2 times
+
+00:47:57.880 --> 00:48:07.319
+precision times recall divided by
+
+00:48:03.599 --> 00:48:08.040
+precision plus recall um and precision
+
+00:48:07.319 --> 00:48:10.640
+and recall
+
+00:48:08.040 --> 00:48:12.920
+like a t-test works for those but
+
+00:48:10.640 --> 00:48:15.160
+there's a non-additive component of F-
+
+00:48:12.920 --> 00:48:16.680
+measure so you can't calculate
+
+00:48:15.160 --> 00:48:19.280
+statistically significant differences in
+
+00:48:16.680 --> 00:48:21.079
+F-measure using a t-test in that case
+
+00:48:19.280 --> 00:48:23.000
+you basically have to use a
+
+00:48:21.079 --> 00:48:24.920
+bootstrap method like this in order to
+
+00:48:23.000 --> 00:48:29.040
+get it to work or you need to do some
+
+00:48:24.920 --> 00:48:29.040
+really complex math but I just
+
+00:48:29.760 --> 00:48:33.920
+use the bootstrap cool um are there any
+
+00:48:32.680 --> 00:48:35.520
+questions about this I guess we'll have
+
+00:48:33.920 --> 00:48:37.680
+a code example in the recitation so you
+
+00:48:35.520 --> 00:48:39.599
+can go in and take a look at that
+
+00:48:37.680 --> 00:48:42.599
+there's also tons of code examples
+
+00:48:39.599 --> 00:48:42.599
+online
+
+00:48:42.960 --> 00:48:49.440
+um is that
+
+00:48:45.720 --> 00:48:52.400
+okay okay sounds good um so now let me
+
+00:48:49.440 --> 00:48:54.599
+uh let me go back to the actual slides
+
+00:48:52.400 --> 00:48:57.400
+for
+
+00:48:54.599 --> 00:49:00.559
+today and given those results about
+
+00:48:57.400 --> 00:49:04.119
+statistical significance um
+
+00:49:00.559 --> 00:49:06.040
+how can we estimate how much testing
+
+00:49:04.119 --> 00:49:07.920
+data is enough and there's a method
+
+00:49:06.040 --> 00:49:11.079
+called power analysis that allows you to
+
+00:49:07.920 --> 00:49:13.359
+do this and basically the idea of power
+
+00:49:11.079 --> 00:49:16.680
+analysis is that you make an assumption
+
+00:49:13.359 --> 00:49:18.880
+about the effect size between settings
+
+00:49:16.680 --> 00:49:20.680
+um for example the expected accuracy
+
+00:49:18.880 --> 00:49:23.480
+difference between tested
+
+00:49:20.680 --> 00:49:26.480
+models and given the effect size and a
+
+00:49:23.480 --> 00:49:28.880
+significance threshold
+00:48:29.760 --> 00:48:33.920
+cool um are there any questions
+
+00:48:32.680 --> 00:48:35.520
+about this I guess we'll have a code
+
+00:48:33.920 --> 00:48:37.680
+example in the recitation so you can go
+
+00:48:35.520 --> 00:48:39.599
+in and take a look at that there's also
+
+00:48:37.680 --> 00:48:42.599
+tons of code examples
+
+00:48:39.599 --> 00:48:42.599
+online
+
+00:48:42.960 --> 00:48:49.440
+um is that
+
+00:48:45.720 --> 00:48:52.400
+okay okay sounds good um so now let me
+
+00:48:49.440 --> 00:48:54.599
+uh let me go back to the actual slides
+
+00:48:52.400 --> 00:48:57.400
+for
+
+00:48:54.599 --> 00:49:00.559
+today and given those uh
+
+00:48:57.400 --> 00:49:04.119
+results about statistical significance um
+
+00:49:00.559 --> 00:49:06.040
+how can we estimate how much testing
+
+00:49:04.119 --> 00:49:07.920
+data is enough and there's a method
+
+00:49:06.040 --> 00:49:11.079
+called power analysis that allows you to
+
+00:49:07.920 --> 00:49:13.359
+do this and basically the idea of power
+
+00:49:11.079 --> 00:49:16.680
+analysis is that you make an assumption
+
+00:49:13.359 --> 00:49:18.880
+about the effect size between settings
+
+00:49:16.680 --> 00:49:20.680
+um for example the expected accuracy
+
+00:49:18.880 --> 00:49:23.480
+difference between tested
+
+00:49:20.680 --> 00:49:26.480
+models and given the effect size and a
+
+00:49:23.480 --> 00:49:28.880
+significance
+
+00:49:26.480 --> 00:49:30.839
+threshold you can determine how much
+
+00:49:28.880 --> 00:49:32.680
+data is necessary to get a significant
+
+00:49:30.839 --> 00:49:36.680
+effect in most
+
+00:49:32.680 --> 00:49:39.319
+cases and so to give an example
+
+00:49:36.680 --> 00:49:41.559
+again let's say we're talking about
+
+00:49:39.319 --> 00:49:45.880
+accuracy let's say we have a
+
+00:49:41.559 --> 00:49:49.079
+baseline model and then we also have our
+
+00:49:45.880 --> 00:49:52.280
+uh
+
+00:49:49.079 --> 00:49:54.000
+proposed model and we know kind of from
+
+00:49:52.280 --> 00:49:55.599
+experience that the baseline model is
+
+00:49:54.000 --> 00:49:58.400
+probably going to get around 90%
+
+00:49:55.599 --> 00:50:00.559
+accuracy we know by like eyeballing
+
+00:49:58.400 --> 00:50:06.240
+the data or something like
+
+00:50:00.559 --> 00:50:09.599
+that and then we think
+
+00:50:06.240 --> 00:50:13.799
+our model is going to get 93%
+
+00:50:09.599 --> 00:50:17.160
+accuracy uh and we want a
+
+00:50:13.799 --> 00:50:19.440
+significance threshold of p
+
+00:50:17.160 --> 00:50:22.319
+less than
+
+00:50:19.440 --> 00:50:26.000
+0.05 given these
+
+00:50:22.319 --> 00:50:30.559
+two quantities we can basically go in
+
+00:50:26.000 --> 00:50:33.720
+and say okay now we need uh
+
+00:50:30.559 --> 00:50:36.200
+500 test examples in order to say with
+
+00:50:33.720 --> 00:50:38.920
+confidence that we will be able
+
+00:50:36.200 --> 00:50:40.599
+to
+
+00:50:38.920 --> 00:50:42.640
+distinguish between two models with 90
+
+00:50:40.599 --> 00:50:44.400
+and 93%
+
+00:50:42.640 --> 00:50:48.240
+accuracy
+
+00:50:44.400 --> 00:50:51.079
+and I can show the algorithm
+
+00:50:48.240 --> 00:50:51.079
+that they have in this
+
+00:50:54.440 --> 00:50:57.440
+paper
+
+00:51:01.760 --> 00:51:04.960
+but basically the way this
+
+00:51:13.040 --> 00:51:19.720
+works um is you sample a data set um
+
+00:51:17.799 --> 00:51:22.960
+compute the effect of interest on the
+
+00:51:19.720 --> 00:51:25.880
+sample compute the p-value and then
+
+00:51:22.960 --> 00:51:29.319
+you can calculate the power uh
+
+00:51:25.880 --> 00:51:31.520
+by basically um checking the number of
+
+00:51:29.319 --> 00:51:34.480
+times that the p-value is less than your
+
+00:51:31.520 --> 00:51:36.319
+threshold um multiplied by uh the fact
+
+00:51:34.480 --> 00:51:38.920
+that the sign is in a particular
+
+00:51:36.319 --> 00:51:41.200
+direction and by doing this you can
+
+00:51:38.920 --> 00:51:43.280
+essentially
+
+00:51:41.200 --> 00:51:46.200
+calculate how much data you would need
+
+00:51:43.280 --> 00:51:48.319
+or sorry you can calculate the
+
+00:51:46.200 --> 00:51:50.319
+statistical power and then you can do
+
+00:51:48.319 --> 00:51:52.000
+this for various sizes of data set so
+
+00:51:50.319 --> 00:51:53.559
+you can gradually increase the size of
+
+00:51:52.000 --> 00:51:57.160
+the data set or decrease the size of the
+
+00:51:53.559 --> 00:51:59.040
+data set and that allows you to figure
+
+00:51:57.160 --> 00:52:02.200
+out how big your data set needs to be in
+
+00:51:59.040 --> 00:52:04.640
+order to get a statistically significant
+
+00:52:02.200 --> 00:52:08.839
+result on the data set
+
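A rough simulation-based sketch of that power analysis, under the simplifying (and optimistic) assumption that the two systems' errors are independent; the accuracies, test sizes, and function names are illustrative:

```python
import numpy as np
from scipy.stats import ttest_rel

def estimate_power(acc1=0.90, acc2=0.93, n_test=500,
                   alpha=0.05, n_simulations=1000, seed=0):
    """Simulate many test sets of size n_test and measure how often the
    assumed accuracy difference comes out significant (the power)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_simulations):
        s1 = (rng.random(n_test) < acc1).astype(float)  # simulated 0/1 scores
        s2 = (rng.random(n_test) < acc2).astype(float)
        _, p = ttest_rel(s1, s2)
        # count only significant results in the expected direction
        if p < alpha and s2.mean() > s1.mean():
            hits += 1
    return hits / n_simulations

# grow the test set until the power is acceptable (e.g. >= 0.8)
for n in (100, 250, 500, 1000):
    print(n, estimate_power(n_test=n))
```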
+00:52:04.640 --> 00:52:10.720
+and so like many people ask me
+
+00:52:08.839 --> 00:52:12.599
+the question like how big of a data set
+
+00:52:10.720 --> 00:52:14.440
+do we need to make and this is basically the
+
+00:52:12.599 --> 00:52:17.280
+statistically like quote unquote correct
+
+00:52:14.440 --> 00:52:19.520
+answer for how you can do this and also
+
+00:52:17.280 --> 00:52:20.440
+uh for assignment two we're going to ask
+
+00:52:19.520 --> 00:52:24.559
+you to
+
+00:52:20.440 --> 00:52:26.720
+justify uh your choice of creating a
+
+00:52:24.559 --> 00:52:30.359
+data set of a particular size for testing
+
+00:52:26.720 --> 00:52:31.799
+based on this so um pay attention
+
+00:52:30.359 --> 00:52:34.720
+and please look at the references here
+
+00:52:31.799 --> 00:52:38.760
+and you should be able to do
+
+00:52:34.720 --> 00:52:41.280
+that cool um any
+
+00:52:38.760 --> 00:52:43.119
+questions I didn't go like really
+
+00:52:41.280 --> 00:52:44.319
+deeply into the formulas here you'll
+
+00:52:43.119 --> 00:52:45.720
+probably have to look them up in
+
+00:52:44.319 --> 00:52:48.119
+the paper but hopefully that gives you
+
+00:52:45.720 --> 00:52:51.799
+the general
+
+00:52:48.119 --> 00:52:52.680
+idea okay next um how much training data
+
+00:52:51.799 --> 00:52:55.599
+do I
+
+00:52:52.680 --> 00:52:58.160
+need so in general more is usually
+
+00:52:55.599 --> 00:53:00.760
+better if you're fine-tuning a model um
+
+00:52:58.160 --> 00:53:02.880
+so I can't tell you like you don't need
+
+00:53:00.760 --> 00:53:05.480
+to make more data because
+
+00:53:02.880 --> 00:53:06.280
+probably you do if you're not happy with
+
+00:53:05.480 --> 00:53:10.799
+your
+
+00:53:06.280 --> 00:53:12.599
+performance um but recently you can get
+
+00:53:10.799 --> 00:53:14.680
+very reasonable performance with few-
+
+00:53:12.599 --> 00:53:17.319
+shot or zero-shot pre-trained models
+
+00:53:14.680 --> 00:53:19.760
+and prompting and because of this in
+
+00:53:17.319 --> 00:53:21.240
+some cases maybe the answer is zero
+
+00:53:19.760 --> 00:53:22.960
+maybe you don't need any training data
+
+00:53:21.240 --> 00:53:26.559
+and you could just use a zero-shot pre-trained
+
+00:53:22.960 --> 00:53:29.240
+model so um you need to choose like
+
+00:53:26.559 --> 00:53:31.319
+what your accuracy threshold is um you
+
+00:53:29.240 --> 00:53:32.720
+need to decide whether you want to be
+
+00:53:31.319 --> 00:53:34.480
+fine-tuning a model to improve
+
+00:53:32.720 --> 00:53:36.319
+performance or doing other things like
+
+00:53:34.480 --> 00:53:39.119
+prompt engineering or other stuff like
+
+00:53:36.319 --> 00:53:41.520
+that so basically there's no uh correct
+
+00:53:39.119 --> 00:53:45.440
+answer to this
+
+00:53:41.520 --> 00:53:47.359
+um one thing to be aware of is uh
+
+00:53:45.440 --> 00:53:51.440
+sometimes if you select data
+
+00:53:47.359 --> 00:53:52.880
+intelligently you can uh improve more
+
+00:53:51.440 --> 00:53:54.359
+quickly with something like active
+
+00:53:52.880 --> 00:53:56.520
+learning and active learning chooses
+
+00:53:54.359 --> 00:54:00.000
+representative and difficult data that
+
+00:53:56.520 --> 00:54:02.559
+you can um be
+
+00:54:00.000 --> 00:54:04.839
+using
+
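One common flavor of active learning is uncertainty sampling; a toy sketch, where `model.predict_proba` is a hypothetical stand-in for whatever scoring function your classifier exposes:

```python
def select_batch_by_uncertainty(model, unlabeled_pool, batch_size=100):
    """Pick the unlabeled examples the model is least sure about.

    `model.predict_proba(example)` is assumed to return a list of class
    probabilities; it is a placeholder, not a specific library API.
    """
    def margin(example):
        probs = sorted(model.predict_proba(example), reverse=True)
        return probs[0] - probs[1]  # small margin = model is unsure

    # send the smallest-margin (most difficult) examples to annotators first
    return sorted(unlabeled_pool, key=margin)[:batch_size]
```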
+00:54:02.559 --> 00:54:07.440
+so when you sample data for fine-
+
+00:54:04.839 --> 00:54:08.839
+tuning uh what you want to be doing is
+
+00:54:07.440 --> 00:54:10.040
+you want to be sampling data that has
+
+00:54:08.839 --> 00:54:12.760
+good coverage of the domains that you
+
+00:54:10.040 --> 00:54:15.079
+want to cover um you also want to be covering
+
+00:54:12.760 --> 00:54:18.599
+for example uh languages or
+
+00:54:15.079 --> 00:54:23.200
+language varieties or demographics of
+
+00:54:18.599 --> 00:54:25.520
+users um and another thing is uh when
+
+00:54:23.200 --> 00:54:29.440
+you're doing this it's often a good idea
+
+00:54:25.520 --> 00:54:31.400
+to document how you're creating data and
+
+00:54:29.440 --> 00:54:34.079
+uh there's this paper data statements
+
+00:54:31.400 --> 00:54:35.520
+for NLP by Bender and Friedman uh which
+
+00:54:34.079 --> 00:54:37.440
+suggests a bunch of different things
+
+00:54:35.520 --> 00:54:39.520
+that you can use to document your data
+
+00:54:37.440 --> 00:54:41.520
+collection and like why and how you
+
+00:54:39.520 --> 00:54:44.960
+collected the data and this gives you
+
+00:54:41.520 --> 00:54:47.200
+some pieces of information that uh could
+
+00:54:44.960 --> 00:54:49.359
+be useful this has been incorporated
+
+00:54:47.200 --> 00:54:51.880
+into the Hugging Face datasets dataset
+
+00:54:49.359 --> 00:54:53.520
+cards and now Hugging Face datasets
+
+00:54:51.880 --> 00:54:56.040
+actually has lots of metadata that's
+
+00:54:53.520 --> 00:54:58.359
+kind of inspired by uh this although
+
+00:54:56.040 --> 00:55:01.799
+it's been adjusted for more kind of like
+
+00:54:58.359 --> 00:55:01.799
+practical industry use cases
+
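A lightweight way to keep such documentation next to the data is a small structured record; the field names below paraphrase the categories in Bender and Friedman's "Data Statements for NLP," and all values are made-up placeholders:

```python
import json

# a minimal data statement; categories paraphrased from Bender & Friedman,
# values are hypothetical examples
data_statement = {
    "curation_rationale": "Why these texts were selected",
    "language_variety": "en-US, informal web text",
    "speaker_demographics": "Unknown; collected from public forums",
    "annotator_demographics": "3 hired annotators, native English speakers",
    "speech_situation": "Asynchronous written discussion, 2020-2023",
    "text_characteristics": "Short product reviews",
}

with open("data_statement.json", "w") as f:
    json.dump(data_statement, f, indent=2)
```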
+00:55:02.119 --> 00:55:06.480
+another thing is annotation
+
+00:55:04.400 --> 00:55:09.160
+guidelines so if you're asking humans to
+
+00:55:06.480 --> 00:55:11.319
+do anything um or for that matter if
+
+00:55:09.160 --> 00:55:16.119
+you're asking GPT-4 to generate data for
+
+00:55:11.319 --> 00:55:21.480
+you um you need to tell people or GPT-4 in
+
+00:55:16.119 --> 00:55:24.440
+um you know a clear manner how
+
+00:55:21.480 --> 00:55:28.119
+it should be creating data
+
+00:55:24.440 --> 00:55:29.920
+so the first thing
+
+00:55:28.119 --> 00:55:32.960
+that you can do is
+
+00:55:29.920 --> 00:55:34.240
+you can try to annotate yourself um and
+
+00:55:32.960 --> 00:55:37.039
+if you actually try to solve the
+
+00:55:34.240 --> 00:55:38.440
+annotation task yourself then you'll
+
+00:55:37.039 --> 00:55:41.160
+realize that there's lots of corner
+
+00:55:38.440 --> 00:55:43.799
+cases that are hard to decide on and
+
+00:55:41.160 --> 00:55:45.440
+other things like that so like if you're
+
+00:55:43.799 --> 00:55:47.520
+annotating sentiment what is the
+
+00:55:45.440 --> 00:55:49.799
+boundary between very positive and
+
+00:55:47.520 --> 00:55:50.880
+positive um if you're annotating
+
+00:55:49.799 --> 00:55:54.000
+question
+
+00:55:50.880 --> 00:55:56.280
+answering um like for
+
+00:55:54.000 --> 00:55:57.720
+example do you want to answer in a whole
+
+00:55:56.280 --> 00:56:01.119
+sentence or do you want to answer with
+
+00:55:57.720 --> 00:56:03.760
+only a short concise answer like these
+
+00:56:01.119 --> 00:56:05.400
+sorts of things you'll need to tell uh
+
+00:56:03.760 --> 00:56:07.839
+either an annotator or a model that
+
+00:56:05.400 --> 00:56:10.960
+you're asking to do annotation to give
+
+00:56:07.839 --> 00:56:12.760
+some examples from the Penn Treebank uh
+
+00:56:10.960 --> 00:56:15.440
+part-of-speech annotation guidelines
+
+00:56:12.760 --> 00:56:18.079
+this is very old it's from 1990 but
+
+00:56:15.440 --> 00:56:21.200
+basically they have uh like adverb this
+
+00:56:18.079 --> 00:56:25.559
+category includes most words that end in
+
+00:56:21.200 --> 00:56:30.680
+ly as well as degree words like
+
+00:56:25.559 --> 00:56:33.079
+quite etc etc it has other things for
+
+00:56:30.680 --> 00:56:36.200
+adverbs and then it has like confusing
+
+00:56:33.079 --> 00:56:38.039
+parts of speech with examples uh one
+
+00:56:36.200 --> 00:56:39.640
+thing that I found like really really
+
+00:56:38.039 --> 00:56:42.640
+interesting is like if you look at these
+
+00:56:39.640 --> 00:56:46.160
+annotation guidelines they're like uh
+
+00:56:42.640 --> 00:56:48.319
+prompts so if you look at this it's like
+
+00:56:46.160 --> 00:56:49.880
+these are your prompts your zero-
+
+00:56:48.319 --> 00:56:52.359
+shot prompts and these are few-shot
+
+00:56:49.880 --> 00:56:54.480
+examples so like even for humans we were
+
+00:56:52.359 --> 00:56:56.520
+doing few-shot prompting with examples
+
+00:56:54.480 --> 00:57:00.880
+when they were doing annotations so uh
+
+00:56:56.520 --> 00:57:03.119
+it's kind of kind of fun
+
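That analogy is easy to make concrete: a guideline plus worked examples maps directly onto a few-shot prompt. A hypothetical sketch (the guideline wording and examples are illustrative, not the actual Penn Treebank text):

```python
# an annotation guideline recast as a few-shot prompt; the instruction
# and examples are illustrative, not the real Penn Treebank guidelines
PROMPT = """Label the part of speech of the capitalized word.
Guideline: the adverb (RB) category includes most words ending in -ly,
as well as degree words such as "quite" and "too".

Word: QUICKLY in "she ran QUICKLY home"  -> RB
Word: QUITE in "it was QUITE good"       -> RB
Word: {word} in "{sentence}"             ->"""

print(PROMPT.format(word="VERY", sentence="that is VERY nice"))
```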
+00:57:00.880 --> 00:57:05.000
+um hiring annotators so like let's say you want to
+
+00:57:03.119 --> 00:57:08.319
+actually build a data set and pay
+
+00:57:05.000 --> 00:57:10.359
+people to do things um for smaller-scale
+
+00:57:08.319 --> 00:57:13.359
+projects uh very often you can just
+
+00:57:10.359 --> 00:57:15.240
+annotate yourself and that's fine um
+
+00:57:13.359 --> 00:57:16.720
+there's a fixed amount of overhead to get
+
+00:57:15.240 --> 00:57:19.480
+other people to do something and train
+
+00:57:16.720 --> 00:57:23.200
+them and stuff so you know I often just
+
+00:57:19.480 --> 00:57:25.079
+annotate things myself um you can also
+
+00:57:23.200 --> 00:57:26.520
+find friends or other students or
+
+00:57:25.079 --> 00:57:29.559
+co-workers who can help you out with
+
+00:57:26.520 --> 00:57:33.359
+things you can bribe them with uh
+
+00:57:29.559 --> 00:57:37.280
+pizza or whatever favorite uh food or
+
+00:57:33.359 --> 00:57:39.400
+beverage they like um then for
+
+00:57:37.280 --> 00:57:42.440
+finding people online there's a lot of
+
+00:57:39.400 --> 00:57:45.160
+things that you can do um I very often
+
+00:57:42.440 --> 00:57:46.000
+hire freelancers uh through platforms
+
+00:57:45.160 --> 00:57:50.400
+such as
+
+00:57:46.000 --> 00:57:51.799
+Upwork um this is good and bad the bad
+
+00:57:50.400 --> 00:57:53.760
+thing about it is that this is often
+
+00:57:51.799 --> 00:57:56.280
+more expensive the good thing about it
+
+00:57:53.760 --> 00:57:58.640
+is um you get people who have pride in
+
+00:57:56.280 --> 00:58:00.440
+their work and accountability and
+
+00:57:58.640 --> 00:58:02.440
+motivation because like if they get
+
+00:58:00.440 --> 00:58:04.480
+rated poorly it's going to be
+
+00:58:02.440 --> 00:58:06.720
+harder to get work and often they're
+
+00:58:04.480 --> 00:58:08.160
+professionals in their fields so like if
+
+00:58:06.720 --> 00:58:12.079
+you want to get a code generation data
+
+00:58:08.160 --> 00:58:15.880
+set you can hire good um freelancers
+
+00:58:12.079 --> 00:58:18.520
+I've actually heard rumors that uh
+
+00:58:15.880 --> 00:58:20.119
+people like OpenAI hire people and
+
+00:58:18.520 --> 00:58:21.599
+pay them $60 an hour to do the
+
+00:58:20.119 --> 00:58:23.599
+annotation because they really want
+
+00:58:21.599 --> 00:58:27.119
+people who are very professional and do
+
+00:58:23.599 --> 00:58:30.000
+a very good job um I don't pay that
+
+00:58:27.119 --> 00:58:34.240
+much but I do pay well more than minimum
+
+00:58:30.000 --> 00:58:35.880
+wage and uh you know I pay a
+
+00:58:34.240 --> 00:58:38.039
+competitive price on these freelancing
+
+00:58:35.880 --> 00:58:40.319
+sites when I get people to do
+
+00:58:38.039 --> 00:58:42.000
+that another thing you can do is crowd
+
+00:58:40.319 --> 00:58:44.400
+workers and this could be through
+
+00:58:42.000 --> 00:58:45.960
+sites like Mechanical Turk or Prolific
+
+00:58:44.400 --> 00:58:48.960
+or other things like this so that's
+
+00:58:45.960 --> 00:58:51.680
+another option um here quality control
+
+00:58:48.960 --> 00:58:55.240
+becomes very difficult and um we're
+
+00:58:51.680 --> 00:58:57.799
+getting to the point where number one
+
+00:58:55.240 --> 00:58:59.400
+um if you aren't very careful with
+
+00:58:57.799 --> 00:59:01.920
+quality control language models actually
+
+00:58:59.400 --> 00:59:03.400
+do a similarly good job as crowd workers
+
+00:59:01.920 --> 00:59:06.960
+and number two all the crowd workers are
+
+00:59:03.400 --> 00:59:10.000
+using GPT-4 anyway so um you do need to be
+
+00:59:06.960 --> 00:59:12.319
+careful about that um one thing that I
+
+00:59:10.000 --> 00:59:14.039
+often do is I hire for a small job first
+
+00:59:12.319 --> 00:59:16.880
+to gauge timeliness and accuracy and
+
+00:59:14.039 --> 00:59:18.920
+then hire for a bigger job so um just
+
+00:59:16.880 --> 00:59:21.720
+hire people to do you know 50 examples
+
+00:59:18.920 --> 00:59:23.319
+or 20 examples first and then uh you
+
+00:59:21.720 --> 00:59:26.240
+know if they do a good job with it then
+
+00:59:23.319 --> 00:59:27.960
+I hire them to do 2,000
+
+00:59:26.240 --> 00:59:30.799
+examples
+
+00:59:27.960 --> 00:59:34.720
+um one thing to note is that if you're
+
+00:59:30.799 --> 00:59:36.599
+doing research in a university um you
+
+00:59:34.720 --> 00:59:39.400
+might need to get approval from an
+
+00:59:36.599 --> 00:59:41.480
+institutional review board and this is
+
+00:59:39.400 --> 00:59:43.000
+in particular the case for subjective
+
+00:59:41.480 --> 00:59:45.880
+tasks so this is when you're asking
+
+00:59:43.000 --> 00:59:47.440
+people how do you feel about this output
+
+00:59:45.880 --> 00:59:50.039
+um do you think this output is
+
+00:59:47.440 --> 00:59:51.720
+representative of your beliefs or things
+
+00:59:50.039 --> 00:59:54.760
+like that where it doesn't have a
+
+00:59:51.720 --> 00:59:56.319
+correct answer a yes-and-no answer if
+
+00:59:54.760 --> 00:59:58.680
+it's something where it does have a
+
+00:59:56.319 --> 01:00:03.640
+yes-and-no answer which is like how many
+
+00:59:58.680 --> 01:00:05.640
+verbs are in this sentence or um how do
+
+01:00:03.640 --> 01:00:07.280
+you translate this sentence into another
+
+01:00:05.640 --> 01:00:09.880
+language or something like that then you
+
+01:00:07.280 --> 01:00:12.039
+don't need IRB approval um but if
+
+01:00:09.880 --> 01:00:15.000
+it's borderline you might want to check
+
+01:00:12.039 --> 01:00:17.280
+anyway um so that's something to be
+
+01:00:15.000 --> 01:00:17.280
+aware of
+
+01:00:18.640 --> 01:00:26.240
+next is assessing annotation quality
+
+01:00:22.640 --> 01:00:27.680
+so um one of my favorite ways to do this
+
+01:00:26.240 --> 01:00:30.039
+is to assess human
+
+01:00:27.680 --> 01:00:32.240
+performance and so the way we do this is
+
+01:00:30.039 --> 01:00:34.119
+you double-annotate some data and then
+
+01:00:32.240 --> 01:00:37.160
+you measure whatever metric you want to
+
+01:00:34.119 --> 01:00:39.200
+measure for machines just with respect
+
+01:00:37.160 --> 01:00:41.039
+to human agreement and so for
+
+01:00:39.200 --> 01:00:43.839
+translation if you're using BLEU score
+
+01:00:41.039 --> 01:00:45.440
+or chrF score or something like this then
+
+01:00:43.839 --> 01:00:47.079
+you would want to use this for
+
+01:00:45.440 --> 01:00:50.440
+assessment of the
+
+01:00:47.079 --> 01:00:56.039
+outputs um the advantage of doing this
+
+01:00:50.440 --> 01:00:58.760
+is that you get a human quality score
+
+01:00:56.039 --> 01:01:00.960
+and the human quality score is directly
+
+01:00:58.760 --> 01:01:02.480
+comparable to the machine quality score
+
+01:01:00.960 --> 01:01:04.599
+and so you can say well humans got the
+
+01:01:02.480 --> 01:01:07.280
+task right 90% of the time and GPT-4 got
+
+01:01:04.599 --> 01:01:11.280
+the task right 16% of the time so humans
+
+01:01:07.280 --> 01:01:13.760
+are way better than GPT-4 or um you know
+
+01:01:11.280 --> 01:01:16.559
+humans got it right 80% of the time and
+
+01:01:13.760 --> 01:01:19.599
+GPT-4 got it right 78% of the time so
+
+01:01:16.559 --> 01:01:21.000
+you know this task or maybe not
+
+01:01:19.599 --> 01:01:23.640
+necessarily the task but at least the
+
+01:01:21.000 --> 01:01:25.079
+data set has more or less been solved by
+
+01:01:23.640 --> 01:01:26.640
+the strongest language models so now we
+
+01:01:25.079 --> 01:01:28.920
+need to catch up with open-source models
+
+01:01:26.640 --> 01:01:31.680
+the smaller ones or something like
+
+01:01:28.920 --> 01:01:32.880
+that um there are things that you can
+
+01:01:31.680 --> 01:01:34.880
+measure you can measure things like
+
+01:01:32.880 --> 01:01:36.880
+kappa statistics this is particularly
+
+01:01:34.880 --> 01:01:39.799
+useful for um kind of just
+
+01:01:36.880 --> 01:01:41.799
+classification tasks and what this tells
+
+01:01:39.799 --> 01:01:43.880
+you is how much higher is
+
+01:01:41.799 --> 01:01:48.000
+the agreement that you would get than if
+
+01:01:43.880 --> 01:01:49.920
+you got it by chance and so for example
+
+01:01:48.000 --> 01:01:53.279
+let's say you're classifying
+
+01:01:49.920 --> 01:01:54.760
+spam uh or you're classifying you know
+
+01:01:53.279 --> 01:01:59.520
+toxic content or something
+
+01:01:54.760 --> 01:02:03.400
+like that and 99% of the
+
+01:01:59.520 --> 01:02:07.480
+time the content is not toxic and 1% of
+
+01:02:03.400 --> 01:02:11.799
+the time the content is toxic and then
+
+01:02:07.480 --> 01:02:14.079
+you hire some annotators and you get 98%
+
+01:02:11.799 --> 01:02:16.279
+accuracy that's kind of bad right you
+
+01:02:14.079 --> 01:02:19.200
+know if you just said not toxic all the
+
+01:02:16.279 --> 01:02:20.880
+time you would get 99% um what the kappa
+
+01:02:19.200 --> 01:02:24.599
+statistic does is it accounts for this
+
+01:02:20.880 --> 01:02:26.559
+basically it says um how much more
+
+01:02:24.599 --> 01:02:28.440
+agreement you have than chance and if you just had
+
+01:02:26.559 --> 01:02:30.720
+chance agreement you would get zero if
+
+01:02:28.440 --> 01:02:33.200
+you had perfect agreement you would get
+
+01:02:30.720 --> 01:02:34.920
+one and you normally get something in
+
+01:02:33.200 --> 01:02:37.359
+between
+
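For two annotators, Cohen's kappa implements exactly this chance correction: kappa = (p_observed - p_expected) / (1 - p_expected). A minimal sketch, using the skewed toxicity example above (labels are made up):

```python
from collections import Counter

def cohens_kappa(labels1, labels2):
    """Cohen's kappa between two annotators: how far observed agreement
    exceeds the agreement expected by chance."""
    n = len(labels1)
    observed = sum(a == b for a, b in zip(labels1, labels2)) / n
    counts1, counts2 = Counter(labels1), Counter(labels2)
    # chance agreement: probability both pick the same label independently
    expected = sum(counts1[c] / n * counts2[c] / n for c in counts1)
    return (observed - expected) / (1 - expected)

# skewed toxicity labels: 98% raw agreement, but kappa is near zero,
# because almost-always-"ok" agreement is what chance already predicts
a1 = ["ok"] * 97 + ["toxic", "ok", "ok"]
a2 = ["ok"] * 97 + ["ok", "toxic", "ok"]
print(cohens_kappa(a1, a2))
```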
+01:02:34.920 --> 01:02:39.200
+um so if it's low you may need to
+
+01:02:37.359 --> 01:02:41.319
+revisit guidelines hire better
+
+01:02:39.200 --> 01:02:44.480
+annotators or rethink whether the task
+
+01:02:41.319 --> 01:02:46.559
+is possible at all or not um and you
+
+01:02:44.480 --> 01:02:48.599
+know some tasks are just impossible like
+
+01:02:46.559 --> 01:02:51.599
+if um I'm
+
+01:02:48.599 --> 01:02:51.599
+asking
+
+01:02:52.240 --> 01:02:58.160
+uh well or they're very hard for
+
+01:02:55.960 --> 01:03:00.039
+annotators so like to give one example
+
+01:02:58.160 --> 01:03:04.039
+um annotators are really horrible at
+
+01:03:00.039 --> 01:03:06.200
+identifying fake reviews um and so like
+
+01:03:04.039 --> 01:03:07.640
+even if you hire annotators to
+
+01:03:06.200 --> 01:03:09.279
+identify fake reviews they're bad at
+
+01:03:07.640 --> 01:03:11.359
+doing that so you're not likely to get
+
+01:03:09.279 --> 01:03:14.680
+high
+
+01:03:11.359 --> 01:03:17.920
+agreement um cool I'm going to skip over
+
+01:03:14.680 --> 01:03:23.279
+this part because I already talked about
+
+01:03:17.920 --> 01:03:26.640
+it okay um any questions
+
+01:03:23.279 --> 01:03:29.079
+here okay sounds good uh next I'd like
+
+01:03:26.640 --> 01:03:30.640
+to get into running experiments so
+
+01:03:29.079 --> 01:03:34.359
+for running experiments one thing I find
+
+01:03:30.640 --> 01:03:37.200
+very helpful is workflow automation um
+
+01:03:34.359 --> 01:03:40.079
+and basically what I like to do is I
+
+01:03:37.200 --> 01:03:41.839
+like to modularize each step of an
+
+01:03:40.079 --> 01:03:44.119
+experiment into a
+
+01:03:41.839 --> 01:03:47.240
+directory
+
+01:03:44.119 --> 01:03:51.039
+um where uh you have like a directory as
+
+01:03:47.240 --> 01:03:53.279
+input and a directory as output
+
+01:03:51.039 --> 01:03:54.559
+um this is my personal way of doing
+
+01:03:53.279 --> 01:03:56.799
+things there are other ways of doing
+
+01:03:54.559 --> 01:03:58.640
+things that are also good but um very
+
+01:03:56.799 --> 01:04:00.760
+often like just to give an example
+
+01:03:58.640 --> 01:04:04.680
+you'll need to do
+
+01:04:00.760 --> 01:04:07.480
+data selection so you'll need to select
+
+01:04:04.680 --> 01:04:09.119
+which data sets you're training on
+
+01:04:07.480 --> 01:04:11.039
+you'll need to do pre-processing of them
+
+01:04:09.119 --> 01:04:13.520
+with a tokenization model and then you
+
+01:04:11.039 --> 01:04:16.160
+will need to run an
+
+01:04:13.520 --> 01:04:18.359
+experiment and then you'll need to do
+
+01:04:16.160 --> 01:04:20.000
+evaluation and those are all kind of
+
+01:04:18.359 --> 01:04:23.240
+like discrete steps where the data
+
+01:04:20.000 --> 01:04:25.079
+selection takes in your big pool of data
+
+01:04:23.240 --> 01:04:27.760
+and outputs a data set that's been
+
+01:04:25.079 --> 01:04:31.200
+selected the tokenization
+
+01:04:27.760 --> 01:04:33.680
+will uh take a tokenizer model maybe
+
+01:04:31.200 --> 01:04:35.480
+train a tokenizer model and split it
+
+01:04:33.680 --> 01:04:38.599
+up into different tokens um the training
+
+01:04:35.480 --> 01:04:40.400
+will train and might output a whole bunch
+
+01:04:38.599 --> 01:04:42.079
+of checkpoints and the evaluation will
+
+01:04:40.400 --> 01:04:44.720
+evaluate one checkpoint and so those are
+
+01:04:42.079 --> 01:04:47.039
+all kind of modular and you can actually
+
+01:04:44.720 --> 01:04:48.400
+think of each one of them as like a
+
+01:04:47.039 --> 01:04:50.039
+function in your Python
+
+01:04:48.400 --> 01:04:52.760
+program
+
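A skeleton of that directory-in, directory-out structure might look like the following; the step names, signatures, and paths are illustrative, not a prescribed layout:

```python
from pathlib import Path

# each experiment step reads one directory and writes another, so steps
# can be re-run (or skipped) independently; bodies are left as stubs
def select_data(pool_dir: Path, out_dir: Path): ...
def tokenize(data_dir: Path, out_dir: Path): ...
def train(tokenized_dir: Path, out_dir: Path): ...     # writes checkpoints
def evaluate(ckpt_dir: Path, out_dir: Path): ...       # writes metrics

root = Path("experiments")
select_data(Path("raw_pool"), root / "selected")
tokenize(root / "selected", root / "tokenized")
train(root / "tokenized", root / "train")
evaluate(root / "train", root / "train" / "eval")
```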
+01:04:52.760 --> 01:04:58.160
+and you kind of want to avoid rerunning
+
+01:04:56.400 --> 01:05:00.200
+data set selection and tokenization
+
+01:04:58.160 --> 01:05:01.720
+every time you do a new evaluation right
+
+01:05:00.200 --> 01:05:03.359
+like that would be kind of silly you
+
+01:05:01.720 --> 01:05:04.680
+definitely want to avoid rerunning
+
+01:05:03.359 --> 01:05:09.119
+training every time you evaluate a
+
+01:05:04.680 --> 01:05:11.200
+checkpoint so um what I do is I often
+
+01:05:09.119 --> 01:05:12.799
+name directories by parameters where
+
+01:05:11.200 --> 01:05:16.079
+it's like
+
+01:05:12.799 --> 01:05:18.640
+transformer-layer 8 node 512
+
+01:05:16.079 --> 01:05:21.279
+dropout 0.5 label-smoothing
+
+01:05:18.640 --> 01:05:25.880
+0.02 um and so I have all the parameters
+
+01:05:21.279 --> 01:05:26.880
+in there and then
+
+01:05:25.880 --> 01:05:29.680
+the
+
+01:05:26.880 --> 01:05:31.960
+training process will output a whole
+
+01:05:29.680 --> 01:05:33.960
+bunch of checkpoints in there and then
+
+01:05:31.960 --> 01:05:35.520
+for my evaluation I have evaluation
+
+01:05:33.960 --> 01:05:38.119
+metrics and I have the checkpoint I'm
+
+01:05:35.520 --> 01:05:41.680
+evaluating so uh when I do
+
+01:05:38.119 --> 01:05:45.119
+evaluation I will then append checkpoint
+
+01:05:41.680 --> 01:05:47.279
+6 uh metric F-measure or something like
+
+01:05:45.119 --> 01:05:49.079
+that and so I keep around all of the
+
+01:05:47.279 --> 01:05:52.520
+previous information and just append
+
+01:05:49.079 --> 01:05:54.599
+append append and so um this
+
+01:05:52.520 --> 01:05:56.680
+allows you to avoid rerunning things
+
+01:05:54.599 --> 01:05:58.359
+because you can uh just have your Python
+
+01:05:56.680 --> 01:06:00.520
+code check if the directory already
+
+01:05:58.359 --> 01:06:01.839
+exists and has already been completed
+
+01:06:00.520 --> 01:06:03.559
+and then read in the result if it
+
+01:06:01.839 --> 01:06:06.319
+has or run the experiment
+
+01:06:03.559 --> 01:06:08.079
+if it hasn't so um you can write
+
+01:06:06.319 --> 01:06:10.279
+this in pure Python by
+
+01:06:08.079 --> 01:06:11.599
+just adding like some if statements at
+
+01:06:10.279 --> 01:06:14.079
+the beginning of the function and
+
+01:06:11.599 --> 01:06:16.799
+some like output
+
+01:06:14.079 --> 01:06:19.440
+statements at the end of the function um
+
+01:06:16.799 --> 01:06:22.000
+there are more sophisticated
+
+01:06:19.440 --> 01:06:24.200
+methods so there's like a toolkit called
+
+01:06:22.000 --> 01:06:28.079
+ducttape that was originally created
+
+01:06:24.200 --> 01:06:31.760
+here at CMU and um my uh student Patrick
+
+01:06:28.079 --> 01:06:33.079
+is maintaining it now at this link um so you
+
+01:06:31.760 --> 01:06:34.960
+can either just roll something on your
+
+01:06:33.079 --> 01:06:36.880
+own or look into one of these more
+
+01:06:34.960 --> 01:06:39.359
+complex workflow automation things
+
+01:06:36.880 --> 01:06:39.359
+to save you time
+
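The skip-if-done idea described here really can be a few lines of pure Python; a minimal sketch, with made-up file names and a stand-in step function:

```python
import json
from pathlib import Path

def run_step(out_dir: Path, step_fn):
    """Run an experiment step only if its output directory isn't done yet.

    A 'DONE' marker file distinguishes completed runs from crashed ones.
    """
    done_marker = out_dir / "DONE"
    if done_marker.exists():
        return json.loads((out_dir / "result.json").read_text())
    out_dir.mkdir(parents=True, exist_ok=True)
    result = step_fn(out_dir)
    (out_dir / "result.json").write_text(json.dumps(result))
    done_marker.touch()
    return result

# directory name encodes the parameters, as in the example above
out = Path("exp/transformer-layer_8-node_512-dropout_0.5-labelsmooth_0.02")
result = run_step(out, lambda d: {"f_measure": 0.85})  # stand-in training step
```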
+01:06:39.400 --> 01:06:47.279
+okay evaluation um so I talked
+
+01:06:43.400 --> 01:06:49.000
+about this to some extent um so yeah
+
+01:06:47.279 --> 01:06:51.000
+I'll just skip over
+
+01:06:49.000 --> 01:06:54.559
+that
+
+01:06:51.000 --> 01:06:57.200
+and result reporting um
+
+01:06:54.559 --> 01:06:59.160
+for papers one thing that I really like
+
+01:06:57.200 --> 01:07:01.960
+to do is plan the result section in
+
+01:06:59.160 --> 01:07:07.039
+advance or at least imagine the result
+
+01:07:01.960 --> 01:07:07.039
+section in advance um
+
+01:07:07.200 --> 01:07:11.640
+so what I think of is like what
+
+01:07:09.559 --> 01:07:14.520
+experimental claims would I like to make
+
+01:07:11.640 --> 01:07:15.760
+and how am I going to support them with the
+
+01:07:14.520 --> 01:07:19.039
+experiments that I'm going to show in a
+
+01:07:15.760 --> 01:07:21.160
+result section um and this identifies
+
+01:07:19.039 --> 01:07:24.640
+unjustified experimental claims so
+
+01:07:21.160 --> 01:07:27.119
+let's say you're saying
+
+01:07:24.640 --> 01:07:29.000
+something like uh this method improves
+
+01:07:27.119 --> 01:07:30.440
+across a wide variety of languages and
+
+01:07:29.000 --> 01:07:32.520
+then you realize that you only have one
+
+01:07:30.440 --> 01:07:34.720
+language uh in your
+
+01:07:32.520 --> 01:07:37.960
+experiment section that's a problem
+
+01:07:34.720 --> 01:07:40.640
+obviously um also I really enjoy like
+
+01:07:37.960 --> 01:07:43.599
+assuming that all of my experiments are
+
+01:07:40.640 --> 01:07:46.520
+going really really well um and you know
+
+01:07:43.599 --> 01:07:49.440
+none of my runs crash with
+
+01:07:46.520 --> 01:07:52.000
+CUDA out-of-memory errors and you know
+
+01:07:49.440 --> 01:07:55.319
+all of the experiments appear as
+
+01:07:52.000 --> 01:07:57.960
+expected and if you do something like
+
+01:07:55.319 --> 01:07:59.960
+that you can be ambitious and say okay
+
+01:07:57.960 --> 01:08:03.119
+how can I make this research project
+
+01:07:59.960 --> 01:08:04.960
+really impactful um and another
+
+01:08:03.119 --> 01:08:08.240
+thing that I like to ask my students or
+
+01:08:04.960 --> 01:08:11.200
+people I'm working with recently is like
+
+01:08:08.240 --> 01:08:13.440
+who are like three people in the world
+
+01:08:11.200 --> 01:08:17.440
+who will be really excited by your paper
+
+01:08:13.440 --> 01:08:19.040
+like name actual people um and where do
+
+01:08:17.440 --> 01:08:20.839
+those people work what do they care
+
+01:08:19.040 --> 01:08:22.359
+about what sort of evidence would you
+
+01:08:20.839 --> 01:08:24.560
+need in your paper to make them really
+
+01:08:22.359 --> 01:08:26.560
+excited about it or something
+
+01:08:24.560 --> 01:08:29.679
+like that and very often people will
+
+01:08:26.560 --> 01:08:31.480
+reply to me like oh I think people
+
+01:08:29.679 --> 01:08:32.799
+at Google will be very excited about
+
+01:08:31.480 --> 01:08:34.440
+this and they're going to use it and I'm
+
+01:08:32.799 --> 01:08:38.719
+like well you're writing all your code
+
+01:08:34.440 --> 01:08:39.839
+in PyTorch and they don't use PyTorch so
+
+01:08:38.719 --> 01:08:41.000
+how are you going to convince them to
+
+01:08:39.839 --> 01:08:42.640
+use your method they're going to have to
+
+01:08:41.000 --> 01:08:46.120
+reimplement it in JAX and that's going
+
+01:08:42.640 --> 01:08:47.520
+to suck for them so like uh you know
+
+01:08:46.120 --> 01:08:49.040
+what are the barriers for them actually
+
+01:08:47.520 --> 01:08:50.799
+using it and then maybe the people are
+
+01:08:49.040 --> 01:08:52.159
+like oh well maybe actually I don't want
+
+01:08:50.799 --> 01:08:54.199
+people at Google to use this and I can
+
+01:08:52.159 --> 01:08:56.560
+think of somebody else and it's like
+
+01:08:54.199 --> 01:08:58.920
+well great so now release it open source
+
+01:08:56.560 --> 01:09:00.520 +and people will will have it open source + +01:08:58.920 --> 01:09:01.920 +so you can kind of think about like the + +01:09:00.520 --> 01:09:03.719 +types of evidence that you would need to + +01:09:01.920 --> 01:09:05.440 +convince people to use your work and + +01:09:03.719 --> 01:09:08.040 +that can result in your work being more + +01:09:05.440 --> 01:09:09.319 +impactful in the long run and if you + +01:09:08.040 --> 01:09:10.400 +think about it from the very beginning + +01:09:09.319 --> 01:09:11.839 +that also helps you plan your + +01:09:10.400 --> 01:09:13.520 +experiments like what sort of evidence + +01:09:11.839 --> 01:09:15.359 +is necessary for people to get excited + +01:09:13.520 --> 01:09:18.440 +about it in the this + +01:09:15.359 --> 01:09:20.120 +SPS um another thing that I like to do + +01:09:18.440 --> 01:09:24.000 +with result reporting is result + +01:09:20.120 --> 01:09:26.880 +generation scripts um so uh I often + +01:09:24.000 --> 01:09:29.159 +generate paper latex directly from log + +01:09:26.880 --> 01:09:31.799 +files uh there's two reasons why I do + +01:09:29.159 --> 01:09:34.480 +this um number one it's efficient and + +01:09:31.799 --> 01:09:36.719 +minimizes errors number two it allows + +01:09:34.480 --> 01:09:39.080 +you to preemptively plan experiments + +01:09:36.719 --> 01:09:41.120 +that you want to run so like for example + +01:09:39.080 --> 01:09:44.440 +if we go back to the dock um the + +01:09:41.120 --> 01:09:46.199 +directory that I talked about before um + +01:09:44.440 --> 01:09:50.359 +I can write + +01:09:46.199 --> 01:09:52.719 +a a script that reads in 20 evaluation + +01:09:50.359 --> 01:09:54.800 +results from 20 different directories + +01:09:52.719 --> 01:09:56.920 +and fills in a table and if that + +01:09:54.800 --> 01:09:58.600 +directory doesn't exist yet it will put + +01:09:56.920 --> 01:10:01.239 +like TVD or something like that in the + +01:09:58.600 --> 01:10:03.960 +table so I can very quickly see okay + +01:10:01.239 --> 01:10:05.880 +these things are TBD um oh this thing + +01:10:03.960 --> 01:10:07.480 +has been TBD for a very long time is my + +01:10:05.880 --> 01:10:09.400 +experiment crashed do I need to go back + +01:10:07.480 --> 01:10:12.239 +and like restart my experiment or + +01:10:09.400 --> 01:10:13.719 +something like that so um it's an + +01:10:12.239 --> 01:10:17.280 +efficient way and when you finish the + +01:10:13.719 --> 01:10:17.280 +last TBD it's a very good feeling + +01:10:18.280 --> 01:10:23.719 +also cool um next computational + +01:10:21.760 --> 01:10:26.159 +resources actually I kind of already + +01:10:23.719 --> 01:10:28.600 +talked about this a little bit um but on + +01:10:26.159 --> 01:10:30.280 +Amazon web services we have uh class + +01:10:28.600 --> 01:10:32.080 +credits that we're going to be issuing + +01:10:30.280 --> 01:10:34.880 +as soon as uh the assignment one + +01:10:32.080 --> 01:10:37.560 +deadline is over um there's also Google + +01:10:34.880 --> 01:10:39.440 +cloud and collab um you can get + +01:10:37.560 --> 01:10:44.000 +commodity gpus and other things like + +01:10:39.440 --> 01:10:47.800 +that so um you can also consider + +01:10:44.000 --> 01:10:53.159 +that okay let me get into Data analysis + +01:10:47.800 --> 01:10:55.440 +um so I'm going to cover this a lot more + +01:10:53.159 --> 01:10:58.480 +in an interpretation lecture and this is + +01:10:55.440 --> 01:10:59.520 +going to be in three classes so this is + +01:10:58.480 --> 
+01:10:18.280 --> 01:10:23.719
+cool um next computational
+
+01:10:21.760 --> 01:10:26.159
+resources actually I kind of already
+
+01:10:23.719 --> 01:10:28.600
+talked about this a little bit um but on
+
+01:10:26.159 --> 01:10:30.280
+Amazon Web Services we have uh class
+
+01:10:28.600 --> 01:10:32.080
+credits that we're going to be issuing
+
+01:10:30.280 --> 01:10:34.880
+as soon as uh the assignment one
+
+01:10:32.080 --> 01:10:37.560
+deadline is over um there's also Google
+
+01:10:34.880 --> 01:10:39.440
+Cloud and Colab um you can get
+
+01:10:37.560 --> 01:10:44.000
+commodity GPUs and other things like
+
+01:10:39.440 --> 01:10:47.800
+that so um you can also consider
+
+01:10:44.000 --> 01:10:53.159
+that okay let me get into data analysis
+
+01:10:47.800 --> 01:10:55.440
+um so I'm going to cover this a lot more
+
+01:10:53.159 --> 01:10:58.480
+in an interpretation lecture and this is
+
+01:10:55.440 --> 01:10:59.520
+going to be in three classes so this is
+
+01:10:58.480 --> 01:11:02.239
+going to
+
+01:10:59.520 --> 01:11:07.000
+be the
+
+01:11:02.239 --> 01:11:09.719
+Tuesday after next um so uh very
+
+01:11:07.000 --> 01:11:11.000
+important things though uh look at data
+
+01:11:09.719 --> 01:11:13.679
+um you'll want to do quantitative
+
+01:11:11.000 --> 01:11:16.239
+analysis and qualitative analysis um you
+
+01:11:13.679 --> 01:11:17.440
+can also look at model explanations so
+
+01:11:16.239 --> 01:11:18.719
+I'm going to cover how to do all of
+
+01:11:17.440 --> 01:11:21.520
+these things in that lecture I don't
+
+01:11:18.719 --> 01:11:24.440
+have enough time to do it
+
+01:11:21.520 --> 01:11:26.960
+today then the final thing is reporting
+
+01:11:24.440 --> 01:11:30.840
+conclusions um this is also too much for
+
+01:11:26.960 --> 01:11:34.000
+a single class but um I very highly
+
+01:11:30.840 --> 01:11:35.920
+recommend this lecture um sorry these
+
+01:11:34.000 --> 01:11:39.320
+lecture slides they don't take that long
+
+01:11:35.920 --> 01:11:40.880
+to look through they're maybe um 20
+
+01:11:39.320 --> 01:11:42.880
+minutes or so but they're very very
+
+01:11:40.880 --> 01:11:45.480
+helpful um they talk about how to
+
+01:11:42.880 --> 01:11:48.199
+structure a paper and other things like
+
+01:11:45.480 --> 01:11:51.440
+this and if you follow this advice for
+
+01:11:48.199 --> 01:11:53.239
+writing your reports for
+
+01:11:51.440 --> 01:11:54.960
+assignment three and assignment
+
+01:11:53.239 --> 01:11:57.800
+four even assignment two I think you
+
+01:11:54.960 --> 01:11:59.400
+can't really go wrong uh actually three
+
+01:11:57.800 --> 01:12:00.840
+and four is probably better uh than
+
+01:11:59.400 --> 01:12:03.320
+assignment two assignment two can be
+
+01:12:00.840 --> 01:12:05.360
+more descriptive so definitely take a
+
+01:12:03.320 --> 01:12:08.600
+look at that if
+
+01:12:05.360 --> 01:12:08.600
+you like cool